
‘Sound is the interface’: from interactive to ecosystemic signal processing

AGOSTINO DI SCIPIO
Scuola di Musica Elettronica, Conservatorio di Napoli, Italy
E-mail: [email protected]

This paper takes a systemic perspective on interactive signal processing and introduces the author’s Audible Eco-Systemic Interface (AESI) project. It starts with a discussion of the paradigm of ‘interaction’ in existing computer music and live electronics approaches, and develops following bio-cybernetic principles such as ‘system/ambience coupling’, ‘noise’ and ‘self-organisation’. Central to the paper is an understanding of ‘interaction’ as a network of interdependencies among system components, and as a means for dynamical behaviour to emerge upon the contact of an autonomous system (e.g. a DSP unit) with the external environment (the room or whatever else hosts the performance). The author describes the design philosophy in his current work with the AESI (whose DSP component was implemented as a signal patch in Kyma 5.2), touching on compositional implications (not only live electronics situations, but also sound installations).

1. INTRODUCTION

Talk of ‘interactivity’ in today’s Western world is common, ubiquitous, and often meaningless. The history of the ‘interactive arts’ and their paradigms is documented in an overwhelming body of literature – a survey is Dinkla (1994). ‘Interactive music’ is certainly an integral part of that picture. The goal of the present paper, however, is not to overview existing ‘interactive music systems’, but to discuss the paradigm of ‘interaction’ inherent to most efforts in this area, and particularly in the design of signal processing interfaces. The paper describes personal work that may represent an alternative strategy.

I start by asking: what kind of systems are ‘interactive’ computer music systems? What paradigm of interaction do they implement and make socially available? I try to answer by adopting a system-theory view, more precisely a radical constructivist view (von Glasersfeld 1999, Riegler 2000) as found in the cybernetics of living systems (Maturana and Varela 1980) as well as social systems and ecosystems (Morin 1977). I think it is technically possible and musically desirable to achieve a broader understanding, if not a reformulation, of what is meant by ‘interaction’. As a practical example, I will later illustrate the design philosophy in my own Audible Eco-Systemic Interface project.

2. WHAT KIND OF SYSTEMS ARE ‘INTERACTIVE MUSIC SYSTEMS’?

Interactive music systems are dedicated computational tools capable of reacting in real time (i.e. in a time shorter than it takes to perceive two events – command and execution, cause and effect – as subsequent, in any case of the order of milliseconds; see Fraisse 1967: 111–12) upon changes in their ‘external conditions’. Minimal external conditions typically include initial input data and run-time control data. In most existing approaches, such data are set, changed and adjusted by some agent – a performer, or group of performers (which could include the composer, too, either working in the studio or improvising on stage). Control devices, with their mechanical and/or visual interfaces, are operated to determine these data. In short, the agent’s operations, as reflected in the control data, implement the system’s external conditions and all changes therein.

The main purpose of control data is to determine a system’s changes of internal state. This is done indirectly, by updating the parameters in either digital signal processing techniques or program routines operating at a more abstract, symbolic level. Changes of internal state are heard as changes in the musical output.

By operating the available control devices, the agent in effect ‘plays’ the system as if it were a new kind of music instrument. A variety of known interactive performance situations is described in available publications, ranging from the ‘solo instrument’ set-up to the ‘duo’ and larger ‘ensembles’ where several performers and/or computer systems are interconnected and play together (Rowe 1993). However, in my opinion, whether the ‘instrument’ metaphor is entirely useful when discussing interactive music systems remains debatable. For example, the utilisation of interactive interfaces in studio work (in the production of ‘tape’ music, not performance-oriented) raises specific issues, and is of high compositional relevance, although it has been less often discussed and too often taken for granted (a preliminary, ground-breaking discussion was sketched, among others, in Truax 1978).

Organised Sound 8(3): 269–277 © 2003 Cambridge University Press. Printed in the United Kingdom. DOI: 10.1017/S1355771803000244
Of special interest for the present paper are real-time digital signal processing (DSP) interfaces designed and dealt with by composers in creative ways, because the interactions mediated by such interfaces often have a direct influence on the structure and the internal development of the output sound. The interface design then becomes the very object of composition (Hamman 1999), and the array of DSP algorithms, and the methods by which they communicate among themselves, should be seen as the material implementation of a compositional process or concept. This approach, by which one invents and works out interdependencies among real-time control variables, already reflects a paradigm shift from interactive composing (as in the pioneering work of Joel Chadabe and other composers in the 1970s) to composing interactions. In my view, the shift is especially relevant when composed interactions are audibly experienced as a music of sound (timbre composition), more than a music of notes (as is often the case with interactive music systems, especially when instrumentalists are involved).

Some interesting research work in physical modelling (e.g. Smith and Smith 2002) shows that even the design of mechanical control devices for computer-operated sound synthesis, far from being a mere question of exerting proper controls over a separate sound-generating process, can indeed become a direct determinant of the timbral quality of the output. This is especially the case when complex textural sonorities are considered (in Smith and Smith 2002, the sound of the cicadas).

3. WHERE AND WHEN IS ‘INTERACTION’?

Notwithstanding the sheer variety of devices and computer protocols currently available, most interactive music systems – including developments over the Internet – share a basic design, namely a linear communication flow: information supplied by an agent is sent to and processed by some computer algorithms, and that determines the output (see figure 1).

Figure 1. Basic design in interactive music systems.

This design implicitly assumes a recursive element, namely a loop between the output sound and the agent-performer: the agent determines the computer’s changes of internal state, and the latter, as heard by the agent, may affect his or her next action (which in turn may affect the computer’s internal state in some way, etc.; see figure 2). This recursive element is a source of creative developments, yet it remains purely optional (dashed arrow in figure 2), as the basic communication flow is a linear one. The performer is first the initiator agent of the computer’s reaction, and only secondly, and indeed optionally, might become the very locus of feedback, injecting some noise into the overall system loop.

Figure 2. Implicit feedback loop in interactive system design.

Here, ‘interaction’ means that the computer’s internal state depends on the performer’s action, and that the latter may itself be influenced by the computer output. One could introduce complex transfer functions in the mapping of information from one domain to another, from control data to signal processing parameters (similar to many musical instruments with their complex mapping of gesture into sound), but that is not essential to the underlying ontology: agent acts, computer re-acts. For the proper reaction to take place, in principle there should be no noise in the external conditions, no unwanted or unforeseen actions on the agent’s part.

On a closer look, the role of the agent-performer appears itself ambivalent (no criticism implied), in that it is the only signifier of the system’s external conditions and, at the same time, it represents an internal component of the overall meta-system including man, machine and environment. Indeed, in this common notion of interaction, the agent is the interface between the computer and the environment, and, at the same time, it is the only source of energy and change.

Today, most efforts in interactive computer music can be referred to this notion, which also returns in live electronics music (relevant exceptions are noted later; others exist that will go unmentioned here). Even in recent surveys (Battier 1999, Schnell and Battier 2002), a linear design ontology is taken as if it were the only one we may think of when speaking of such things as ‘live electronics’ and ‘interactive music’.

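The linear ‘agent acts, computer re-acts’ design just described – control data pushed through a transfer function onto a signal processing parameter, with feedback left entirely optional – can be caricatured in a few lines of code. This is a hedged illustration only: the names (transfer, LinearInteractiveSystem, grain_dur) are invented for the sketch and no such code appears in the paper.

```python
# Sketch of the linear "agent acts, computer re-acts" design (hypothetical
# names, not code from the paper). A control value supplied by the agent
# is pushed through a nonlinear transfer function onto one DSP parameter.

def transfer(control: float) -> float:
    """Map a normalised control value (0..1) to a grain duration in
    seconds (illustrative range: 5 ms to 100 ms, squared-law curve)."""
    control = min(max(control, 0.0), 1.0)
    return 0.005 + (0.100 - 0.005) * control ** 2

class LinearInteractiveSystem:
    """Internal state changes only when the agent supplies new control
    data; the system never adjusts its own external conditions."""
    def __init__(self) -> None:
        self.grain_dur = transfer(0.5)

    def agent_acts(self, control: float) -> None:
        self.grain_dur = transfer(control)  # computer re-acts, nothing more

system = LinearInteractiveSystem()
system.agent_acts(1.0)
print(round(system.grain_dur, 3))  # 0.1
```

Even with a complex transfer function, the recursive element of figure 2 exists only in the performer’s ears: nothing in such a design feeds the output sound back into the mapping itself.
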
In a broader perspective, in this standard approach, the sound-generating system is not itself able to directly cause any change or adjustment in the ‘external conditions’ set to its own process, i.e. it has no active part in determining the control data needed for its changes of internal state to take place. The only source of dynamical behaviour lies in the performer’s ears and mind.

Observe, too, that interaction is normally referred to the man/machine interrelationship, never to the mechanisms themselves implemented within the computer: the array of generative and transformative DSP methods in interactive music systems usually consists of separate processes and functions working independent of one another. User interfaces do not normally allow one to create communication between DSP processes, but only to independently handle their parameters in the form of separate run-time variables. The agent selects the particular function(s) and process(es) active at any given time, and the output sample streams of active processes are linearly summed together. No mutual influence is exerted among them, no interdependency among sonic processes is implemented.1 As an example, the sudden occurrence of, say, too dense a mass of notes – or sound grains or other atom units – would not automatically determine, say, a decrease in amplitude (a perceptual correlate dimension of density). In general, adjustments in the interference among sonically relevant parameters are left to the agent. I think these interrelationships may, instead, be the object of design, and hence worked out creatively as a substantial part of the compositional process.

1 Computer systems capable of ‘listening’ to instrumentalists, ‘making decisions’ based on what they listen to, are no exception, as the computer’s decision-making usually depends on a predetermined, sonically abstract knowledge-base representation. A survey on ‘machine listening’ is in Rowe (2001). The typical example is ‘score following’, where run-time control variables are updated based on the successful or unsuccessful matching of an instrumental performance against a stored event list (score representation).

4. FROM INTERACTIVE COMPOSING TO COMPOSING INTERACTIONS

The very process of ‘interaction’ is today rarely understood and implemented for what it seems to be in living organisms (either human or not, e.g. animal, or social), namely a by-product of lower-level interdependencies among system components.

In a different approach, a principal aim would be to create a dynamical system exhibiting an adaptive behaviour to the surrounding external conditions, and capable of interfering with the external conditions themselves. Not only would it be able to detect changes in the external world and ‘hear’ what happens out there (an ‘observing’ system, capable of tracking down relevant sonic features in the external world, not demanding this from a separate agent-performer),2 it would also be able to become a self-observing system, that is, to determine its own internal states based on the available information on the external conditions – including the traces of its own existence left in the surroundings. A kind of self-organisation is thus achieved (von Foerster 1960). Here, ‘interaction’ is a structural element for something like a ‘system’ to emerge (Greek sys-thema = a gathering of connected components, like a community of agents, or any other complex structure as a result of syn-thesis, i.e. com-position). System interactions, then, would be only indirectly implemented, the by-product of carefully planned-out interdependencies among system components (see also Lewis 1999), and would allow in their turn to establish the overall system dynamics, upon contact with the external conditions.

2 A similar approach was pioneered by Gordon Mumma in his 1967 composition, Hornpipe.

This is a substantial move from interactive music composing to composing musical interactions, and perhaps more precisely it should be described as a shift from creating wanted sounds via interactive means, towards creating wanted interactions having audible traces. In the latter case, one designs, implements and maintains a network of connected components whose emergent behaviour in sound one calls music.

5. AMBIENCE AND NOISE

When a system enters a non-destructive interaction with the surrounding environment (the system’s houseplace, literally its oikos), it is called an eco-system (oiko-sys-thema). In which case, though ‘external’, the environment is indeed an integral, uneliminable component. Eco-systems are systems whose structure and development cannot exist (let alone be observed or modelled) except in their permanent contact with a medium. They are autonomous (i.e., literally, self-regulating) as their process reflects their own peculiar internal structure. Yet they cannot be isolated from the external world, and cannot achieve their own autonomous function except in close conjunction with a source of information (or energy). To isolate them from the medium is to kill them.

The role of noise is crucial here. Noise is the medium itself where a sound-generating system is situated, strictly speaking, its ambience. In addition, noise is the energy supply by which a self-organising system can maintain itself and develop. Paradoxical as it may appear, no autonomous system exists if no direct access to the external is available to it (in other words: no context, no text). Indeed, a complex dialectic takes place here between ‘autonomy’ and ‘heteronomy’ in all living systems, either human or social (a dialectic
that may eventually bring the reader to issues of a socio-political nature).

In effect, as was made clear by Heinz von Foerster (1960), self-referential attributes – like ‘self-observing’ or ‘self-organising’ – are meaningless unless we also account for the relationship to the ambience, and to the noise that the ambience provides a system with. Noise is a necessary element, crucial for a coherent, but flexible and dynamical behaviour to emerge. (In the linear communication flow of most interactive systems, noise remains something to be filtered out in order to minimise odd reactions on the computer’s part – and still, therefore, it is, even here, the only source of creative behaviour.)

6. THE AUDIBLE ECO-SYSTEMIC INTERFACE PROJECT

To deal with these matters in actual compositional work, I think the agent-performer should firstly be dropped, and the DSP routines implemented in such a way as to function only based on purely acoustical information including, in particular, the ambience noise. The ambience is the real – not virtual! – space hosting the performance.

Accordingly, I will from now on refer to ‘interaction’ as not meaning the man/machine interrelationship, but the machine/ambience interrelationship, always keeping in mind the triangular, ecosystemic connection, man/ambience/machine, that can thus be established (figure 3). Direct man/machine interactions (via control devices) are optional to an ecosystemic design (dashed arrow in figure 3), as they are replaced with a permanent indirect interrelationship mediated by the ambience (dashed lines in figure 3).

Figure 3. Triangular recursive ecosystemic connection.

Also, I will from now on assume all information exchanges – from and to the ambience, from and to the computer, from and to a possibly included agent-performer – to be of a purely sonic nature. I am interested in the interdependencies, connections and disconnections that can be listened to across the micro-, meso- and macro-temporal unfolding of sound, as they are brought forth by micro-time processes (granular rate, or even sample rate). This comes from my own previous compositional efforts in what has been termed micro-sound, some of which are described in Di Scipio (1994a) and Roads (2001a: 322). In Di Scipio (1997), ‘interactive micro-time sonic design’ was discussed.

I started the Audible Eco-Systemic Interface (AESI) project three years ago in order to explore the actual implementation (not a formalised model, nor a purely sensuous-aesthetical illustration) of sonorous niches, either sounding natural or artificial to the ear. The task is not to evoke existing environmental phenomena, but to create small audible ecosystems that can be coherent in their internal structure and temporal unfolding, and that can develop in close relationship to the space hosting the music and the audience. So far, I have developed the project to the point where I could compose two short live electronics solos (mentioned later, but not described in their musical characteristics). A most appropriate public presentation of works thus composed will eventually be that of a large-scale sound installation.

The basic idea reflects a self-feeding loop design (figure 4). A chain of causes and effects is established, ideally without any human intervention but the practical instalment and set-up of everything needed for the performance to take place (loudspeakers, electret condenser microphones, a programmable DSP-based workstation, and a mixer console).

Figure 4. Basic design of the Audible Eco-Systemic Interface.

A compact description of the overall process is as follows. (i) The computer emits some initial sound (either synthetic or sampled), heard through the loudspeakers; (ii) this is also fed back to the computer by two or more microphones scattered around the room (their placement is crucial); (iii) the computer analyses the microphone signals and extracts information on relevant sonic features; (iv) the extracted data is used to generate low-rate control signals and drive the audio signal processing parameters (DSP modules I often use here include granulators and sample playback modules); submitted to audio signal processing is the computer-generated sound itself that was initially emitted; (v) meanwhile, the microphone signals are matched against the original synthetic or sampled signal, and the difference-signal is calculated (the
difference numerical values between original and ambience sound signal, reflecting the room resonances added); (vi) the difference signal is used to adapt a number of signal processing parameters to the room characteristics (the adaptation process takes a variable time-span to complete; see below for a discussion of the system sensitivity to external conditions).

All AESI run-time variables are therefore in a constant flux of change depending on the resonances in the room as they are elicited first by the sound initially emitted and, henceforth, by all of the sonorities that are created in the feedback loop process. The room resonances affect the parameters in the DSP methods implemented, and the DSP output affects the total sound in the room, generating new sonic material as a function of the resonances themselves. In the language of bio-cybernetics, this is a recursive coupling (von Foerster 1960). An ecosystem is created by the recursive coupling of an autonomous system with its ambience, i.e. with its medium of existence. In our case, the ‘autonomous system’ is the array of DSP methods (a complicated signal patch that I designed in Kyma), and the ‘medium’ or ‘ambience’ is the real performance space.

Observe that, due to the built-in recursion, all real-time functions operated in the AESI (either at the level of audio or control signals) become iterated functions. Were these functions to include nonlinear maps of data, the recursion would cause them to be nonlinear iterated functions, which happens to be a peculiar model of complex dynamical systems. Indeed, in principle, the overall AESI process could be modelled in mathematical terms, as found in the theory of chaotic systems (e.g. Collet and Eckmann 1980). But perhaps more significant to the present discussion is the notion that the recursion is structural to the overall design, not optional: once started, the process develops independently of human agents, nurturing itself with the noise provided by its permanent interaction with the ambience. (Clearly, performers could eventually enter the process cycle and contribute to the overall system dynamics.)

7. NETWORK OF TIME VARIABLES

In the AESI design, the main role of feedback is to create control signals, as the system loop is not in the audio domain, but in the sub-audio (low, inaudible frequency). This process is heard as patterns of variation in the sound texture, and eventually as the rhythm or pace of an overriding musical flow.

Each control signal in the computer DSP patch has its own sample rate (control rate), equal to the integration time of the feature-extraction method by which it is created. As an example, with amplitude followers, the time-window of the sample averaging process becomes de facto the sample rate of the generated control signals.

Control signals are also submitted to a variety of time maps, such as multiple time-dilations (delays of the order of seconds to minutes), time-reversal of control samples, etc. Indeed, the implementation of the DSP component of the AESI project was largely a matter of control-signal processing (as opposed to the more usual audio signal processing). I think this area – the real-time synthesis and processing of control signals – is rich in musical implications hitherto unexplored in the digital domain (perhaps more explored in the analogue, e.g. the voltage control technology of the 1970s). It raises technical and musical-psychoacoustical questions worthy of further investigation.

All these time-related variables sum up to constitute a network of temporal coordinates of different magnitudes (= several time scales). This allows the overall system loop to bring forth a variety of behaviours in sound, from more textural to gestural or rhythmical, from very dense to sparser. Time variables also affect the relative promptness with which the signal processing methods react to the ambience resonances. Their individual magnitude cannot be established in the abstract, and must be carefully and empirically tested on the particular sound material introduced into the system loop. This is because the micro-time characteristics of the input sound in the end contribute to the overall system dynamics (in this sense, all signal processing in the AESI is sound-specific, as discussed later). Overall, the network of time variables also contributes to a phenomenon peculiar to ecosystems, namely the fact that important long-term consequences could follow from events that at first may seem marginal.

8. ECOSYSTEMIC DYNAMICS

8.1. Run-time variables

In the AESI run-time process, variables are control signals created by processing sonic data extracted from the ambience. Psychoacoustically relevant features taken into account include:

• amplitude,
• density of events (rate of onset transients),
• brilliance (or other spectral properties), and
• transient delay between room microphones (to detect different ‘early reflections’ in different places of the room, and tracking other spatial cues, e.g. the occurrence of the perceptual precedence effect in the room).

Pitch and frequency-related data could be exploited, too (with frequency trackers), although I personally tend not to rely on pitched material too much. (In my electroacoustic music, pitch is more often a resultant or by-product of micro-compositional processes than a determinant factor in the musical flow.)

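Two of the mechanisms described above can be sketched in code: a feature extractor whose integration window fixes the control rate (section 7), and a compensation mapping in the spirit of the memWriteLevel formula of section 8.2. This is a hedged, illustrative sketch only: the actual AESI is a Kyma signal patch, and the function names here are invented.

```python
# Hedged sketch (invented names; the real AESI is a Kyma signal patch):
# a feature extractor whose averaging window sets the control rate, and a
# compensation mapping modelled on
#   memWriteLevel_t = 1 - [(dens1_t + dens2_t) / 2]^2.

def amplitude_follower(samples: list[float], window: int) -> list[float]:
    """Mean absolute amplitude per window: one control sample for every
    `window` audio samples, so the integration time becomes, de facto,
    the sample rate of the generated control signal."""
    return [sum(abs(s) for s in samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def mem_write_level(dens1: float, dens2: float) -> float:
    """Compensation: scale new input material down as the room response
    (local maxima on microphones 1 and 2) gets stronger."""
    return 1.0 - ((dens1 + dens2) / 2.0) ** 2

audio = [0.0, 0.5, -0.5, 1.0, -1.0, 0.25, -0.25, 0.5]
control = amplitude_follower(audio, 4)  # 8 audio samples -> 2 control samples
print(control)                          # [0.5, 0.5]
print(round(mem_write_level(0.8, 0.6), 2))  # 0.51
```

As the room sound grows louder (density values toward 1), mem_write_level falls toward zero, attenuating new input material and pushing the loop toward the equilibrium described below.
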
Based on empirical experience, I think it is a good idea to create several control signals out of a single feature-extraction process. Imagine, in a simplest example, the occurrence of comparatively ‘dark’ spectra in the ambience sound causing (i) an amplitude decrease in lower frequency bands and (ii), at the same time, by complementing the numerical value for that variable, a shortening of grain durations (shorter grain durations having the effect of a high-pass filter).

8.2. Functions

The DSP component of the AESI includes the following functions (or automated controls), described here in their systemic meaning:

• Compensation, e.g. decrease the amplitude of the input material when the sonic density in the ambience gets larger. Let’s call memWriteLevel the system variable that scales the amplitude of input samples before they are written into an internal memory buffer (granulators will later read samples off this buffer). At time t it is calculated as:

memWriteLevel_t = 1 - [(dens1_t + dens2_t) / 2]^2

where dens1 and dens2 are the current local maxima (calculated by some other process not described here) of the sound coming over microphones 1 and 2. Thus calculated, memWriteLevel is one minus the square of the averaged room-sound maximum amplitude. By scaling down the new input sound materials, in effect it counterbalances the amplitude of the room sounds when the latter increases. Thus, an equilibrium point is eventually reached, as in a short turn of time the attenuation of new input material will determine a softer total sound in the room.

Another example of compensation is: shorten the grain durations (or other musical atom unit) as the input signal gets louder. Shorter grains will less often overlap among them, resulting in a decrease of the total amplitude.

• Following, i.e. run after, and finally match, the value of a given variable (as set by some other process), with some delay. This is like the hysteresis found in many biological and electromechanical systems.

• Redundancy, i.e. support a given predominant feature, e.g. automatically increase the density of generated grains as the external amplitude gets larger. This will increase the perceived intensity (or ‘volume’) of the total sound, without necessarily boosting the actual signal level.

• Concurrency, i.e. support a sonic feature contrasting or even competing with the predominant one, e.g. boost comparatively high frequencies when low frequencies predominate in the room.

As should be clear, the purpose of system functions is to create a network of constraints among run-time variables, and to regulate such constraints depending on both external (room resonances) and internal conditions (features of the computer-generated sound itself, before it is output). In this way, the DSP routines interact among themselves, creating a feedback that is heard as patterns of variation in some perceptually relevant dimension of the output sound. These patterns, in turn, influence the continuing machine/ambience exchanges, thus affecting the system development in the long run.

8.3. Sensitivity to external conditions

All system functions implemented in the AESI computer component have a variable, automatically regulated sensitivity to external conditions. In other words, their response or reaction to the external conditions is not fixed, as it varies in direct proportion to the rate of events (density) and to the absolute local maximum amplitude in the external sound. This is implemented simply by boosting or scaling down all control signals in direct proportion to those external variables: the more responsive the room is to the particular computer-generated material, the quicker and quantitatively more substantial becomes the effect of control signals on DSP parameters.

Clearly, the ambience noise may happen to include random events completely independent of the musical process. The AESI process would not be much affected by, say, a separate cough in the audience. But it would certainly become sensitive to very frequent coughs. In an ecosystemic view, this is coherent and acceptable. A single cough doesn’t make for a significant characteristic of the external world, but many do! (We human listeners would behave in a very similar way.)

8.4. Competing orientation criteria

By influencing the machine/ambience interaction, functions contribute in the long run to orienting the overall system development, ultimately building up what could be heard as an overriding musical process. On this aspect, we should consider at least two very general criteria:

• Homeostasis, the centripetal tendency to keep to a stable or recurrent behaviour. This marks the system’s sonic ‘identity’ to an external ear.3

3 Listeners are a very special kind of external observer or hearer, because their mere physical presence in the room acts as an element of acoustical absorption. Hence they are rather an internal component of the ecosystemic dynamics. As is well known, audience-less rehearsals are far from replicating the real performance context, and even a relatively small audience can deeply modify the room response. In the AESI project, this is not considered a problem, nor an element irrelevant to the music: changes in the ambience will reveal peculiar changes in the overall ecosystemic dynamics, and therefore in the audible results themselves.

‘Sound is the interface’ 275

• Omeoresis, the opposite, centrifugal tendency to sonic information.4 As finally perceived by listeners,
meander, following a more varied, random path. sound bears traces of the structural coupling it is born
This marks the system’s ambiguity to an external of. Therefore, I think we could speak of audible inter-
ear. faces, meaning interfaces whose process – the continu-
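By way of illustration, the system functions and orientation tendencies just described can be caricatured in a few lines of code. The sketch below is not the AESI implementation (which runs in the Kyma environment); it is plain Python, and every function name, range and coefficient in it is invented purely for illustration.

```python
import random

def make_follower(lag=0.9):
    """'Following' a variable with some delay: a crude, hysteresis-like
    one-pole lag, so the output approaches the target only gradually."""
    state = {"y": 0.0}
    def follow(target):
        state["y"] += (1.0 - lag) * (target - state["y"])
        return state["y"]
    return follow

def redundancy(external_amplitude, base_density=20.0):
    """Redundancy: reinforce the predominant feature, e.g. raise grain
    density (grains/sec) as the sensed room amplitude grows."""
    return base_density * (1.0 + 4.0 * external_amplitude)

def concurrency(low_energy, high_energy):
    """Concurrency: support a contrasting feature, e.g. a treble boost
    (in dB) that grows when low frequencies predominate."""
    return max(0.0, 12.0 * (low_energy - high_energy))

def orientation(homeostatic_value, meander_amount=0.1):
    """Competition of the two criteria: a stable, recurrent pull
    (homeostasis) perturbed by a random meander (homeorhesis)."""
    return homeostatic_value + random.uniform(-meander_amount, meander_amount)

follow = make_follower(lag=0.9)
for amp in (0.2, 0.5, 0.8):          # rising amplitude sensed in the room
    lagged = follow(amp)             # delayed estimate of the amplitude
    density = redundancy(lagged)     # grain density follows loudness
    boost = concurrency(0.7, 0.3)    # +dB on highs while lows dominate
    drive = orientation(lagged)      # stable pull plus random drift
    print(f"amp={amp:.1f} lagged={lagged:.3f} density={density:.1f} boost={boost:.1f}")
```

Here redundancy and concurrency are static mappings; in the AESI they would be driven continuously by feature-extraction signals.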
A rich and varied system behaviour, making for a desirable performance, is a resultant of the competition of these two. And still, these orientation criteria are patterns arising from the lower-level process operated by the above-mentioned system functions, and not explicitly implemented as such.

9. SOME OBSERVATIONS

In the real-time AESI process, interactions between computer and ambience appear as emergent properties of a self-organisational dynamics. They emerge not only from (i) the mapping of the control signals onto the parameter space of the DSP routines, and (ii) the network of time-variables (integration time in feature-extraction processes, time maps of control signals), but they also emerge from (iii) the particular sound materials introduced into the system loop. The systemic meaning of the input raw sound material is to elicit the room resonances (which will in turn be used to create the control signals driving the audio DSP routines, to which the sound material is itself submitted). But this means, too, that the overall emergent interactions also depend upon (iv) the specific room acoustics, i.e. upon the very material and geometrical characteristics of the room or court, or other place hosting the performance. In the AESI approach, the overall musical performance stems from the meeting, or the clash, of a particular sound material with a particular venue (room or other space).

In this perspective, it makes little sense to say that a room has or does not have good acoustics. No such value judgement is pertinent when the aim is to meet and welcome, into the music, the particular space hosting performers and listeners, although it is clear that some rooms will contribute more, and others will contribute less, depending on their material and geometrical design. The AESI run-time process unfolds as it ‘learns’ about the environment where it is set to work. The structural coupling between the DSP system and the external takes place in the medium of sound. In a way not at all metaphorical, by way of doing something to the environment (sending sounds to it, thus having it resonate to them), the AESI determines its own internal state as a function of previous ones. ‘Learning by doing’ clearly echoes a constructivistic view of cognition (a Piagetian perspective).

Because all exchanges between system components take place in the medium of sound, in the approach taken here, sound is the interface. All processes or equipment involved, including microphones and loudspeakers, are uniquely vehicles or transformers of sonic information.4 As finally perceived by listeners, sound bears traces of the structural coupling it is born of. Therefore, I think we could speak of audible interfaces, meaning interfaces whose process – the continuing mediations between machine and environment, and eventually performers, too – is actually experienced as time-varying shapes of sound.

10. PRELIMINARY CONCLUSIONS

A thorough technical description of the AESI is out of the scope of the present paper. It would require details of the DSP methods, including at least the audio signal transformations (several granulation techniques, and automatically controlled sampling) and the feature-extraction algorithms (averaging processes, filter circuits, logical operations at signal level, etc.). And it would extend to the microphones’ technical characteristics, their placement (relative distance and orientation), size and geometry of the room, placement of loudspeakers, etc. The purpose of the present paper is instead limited to introducing an ecosystemic perspective on interactive signal processing, and sketching the design philosophy behind my Audible Eco-Systemic Interface project.

In the above-described approach, ‘interaction’ means ‘interdependency among system components’, and ‘structural coupling’ between an autonomous DSP component and the external world. The notion that a computer reacts to a performer’s action is replaced with a permanent contact, in sound, between computer and environment (room or else). The computer acts upon the environment, observes the latter’s response, and adapts itself, re-orienting the sequence of its internal states based on the data collected. At all times except the very beginning, the data that constitutes the ambience to the system (noise in external conditions) is a result of previous interactions. Therefore, not in a merely metaphorical sense, the overall process develops based on its own history, i.e. on the sequence of past interactions. That is the way by which ecosystems are cognisant of their past and do exhibit a kind of memory.

Reflecting a radical constructivistic epistemology (as in the work of philosopher Ernst von Glasersfeld and bio-cyberneticians Humberto Maturana and Francisco Varela), the Audible Eco-Systemic Interface implements a ‘structurally closed’ yet ‘organisationally open’ process, to use terms borrowed from what was once called ‘system theory’ (von Bertalanffy 1968).

4 In a similar vein, Lewis (1999: 104) writes: ‘There is no built-in hierarchy of human leader/computer follower: no “veto” buttons, pedals or cues. All communication between the system [he means the computer, which in the AESI approach is just a system component, though] and the improviser takes place sonically. [Such] a performance ... is in a very real sense the result of a negotiation ...’ While in Lewis’ approach a human ‘improviser’ is the only ‘ambience’ to the computer, in the AESI project that would represent just another element inhabiting the shared ambience.
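The act–observe–adapt cycle described in this section can be caricatured numerically. The fragment below is a deliberately crude sketch, not the AESI’s Kyma implementation: the one-pole ‘room’, the RMS feature and the inverse amplitude mapping are all hypothetical stand-ins chosen only to make the loop structure visible.

```python
import math

def room_response(samples, resonance=0.95):
    """Toy stand-in for the room: a decaying one-pole resonance (the
    real case is the impulse response of the actual performance space)."""
    y, out = 0.0, []
    for x in samples:
        y = x + 0.5 * resonance * y
        out.append(y)
    return out

def rms(samples):
    """Feature extraction: average energy over the integration window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

amplitude = 0.5          # an internal state variable (say, grain amplitude)
history = []
for step in range(10):
    # act upon the environment: emit a short test tone at current amplitude
    emitted = [amplitude * math.sin(0.3 * n) for n in range(256)]
    # observe the environment's response
    sensed = rms(room_response(emitted))
    # adapt: the louder the sensed ambience, the quieter the system
    amplitude = max(0.05, min(1.0, 1.0 - 0.6 * sensed))
    history.append(round(amplitude, 3))

# Each new state depends, via the room, on all previous ones: the
# sequence is the system's 'memory' of its past interactions.
print(history)
```

Because each sensed value reflects sounds the system itself emitted earlier, the state sequence depends on its own history; in this particular toy mapping the loop also tends toward a recurrent regime, a rough analogue of the homeostatic tendency discussed earlier.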
‘Closed’ because no component can be removed or altered without causing a collapse of the overall system, or a substantial change of its behaviour. In this sense, ‘closure’ preserves the system identity, heard as the timbre, the specific sonic presence of the implemented ecosystem. ‘Open’ because the random configuration of system variables and the sequence of internal states are not pre-determined, and rather depend on a close and permanent contact with the ambience and all events happening therein. ‘Openness’ reflects the system’s ability to creatively adapt to, and itself act upon, the ambience. This is heard as variations of the system’s timbral identity.

10.1. Addendum

A major source of inspiration behind these efforts was for me the observation of processes by which natural and social phenomena emerge from, and nurture themselves with, noise. Another was the hand-made algorithmic process by which Iannis Xenakis composed Analogique A et B (for 9 string instruments and tape, 1958–1959), an often-reputed marginal work of his, yet one that raised all-important general issues in composing. Last but not least, I should mention my own idiosyncratic conviction that, sound being the primary domain of experience in all electroacoustic musics, algorithmic composition (formalised approaches) and timbre composition (sonic design usually pursued in more qualitative, intuitive approaches) can merge and fuse together, yielding a sonic art that both transcends and collapses the traditional dichotomy of sound materials and musical form – allowing timbre to be truly experienced as form (Di Scipio 1994b). That the latter notion – timbre as form – could materialise in live electronics situations, with compositional implications impossible in non-real-time media, has always been of profound interest to me (my first efforts in this regard include the 1993 work, Texture-Multiple, for small chamber ensemble and interactive signal processing, and the 1998 work, 5 difference-sensitive circular interactions, for string quartet and interactive signal processing).

As we have seen, ecosystemic interactions are context-specific not only in the sense that they depend on the particular performance space, but also in the sense that they depend on the specific sound material being used. The micro-time characteristics of the latter may interact in constructive and destructive ways with the built-in network of time-variables and with the ambience. Ecosystemic interactions are sound-specific.

In real-time computer music applications, an example of sound-specific musical signal processing is the granular time-shift operated in Barry Truax’s GSAMX program (Truax 1994). There, the duration of an input sound is stretched according to a real:stretched ratio that is directly proportional to the current amplitude of the input material. When the latter gets softer, the ratio gets smaller, and eventually silent segments (pauses) in the input material, detected as amplitude levels below some threshold, are not stretched at all, or they are made much shorter than real. The dependency thus created between amplitude and duration, at the level of signal processing, has direct compositional implications. Truax calls this an automated control (Truax 1994: 42).

The idea of the Audible Eco-Systemic Interface can be seen as a systemic generalisation of that notion of automated, sound-specific control, and materialises in DSP methods using an array of mutually connected sound variables of psychoacoustical relevance. Ultimately, the Audible Eco-Systemic Interface is the same as a computer program operating at a micro-time scale, with output data structured in a purely audible format (rather than printed, graphical, or anything else). In this sense, it represents a real-time algorithmic-composition approach, but one whose by-products are mainly heard as a timbral construction. The merging of the formalised (algorithmic composition) and the sonic (timbre composition) is made by means of ecosystemic principles.

ACKNOWLEDGEMENTS

As already mentioned above, in passing, the current implementation of the DSP component of the AESI was made with SymbolicSound’s KYMA5.2 (running on the CAPYBARA320 DSP engine). The first work I composed with it, the live electronics solo Audible EcoSystemics n.1 (Impulse Response Study), also requires two condenser microphones, and an audio track of short to shortest sound pulses as the raw sound material introduced into the system loop. I created the pulse material with the PULSARGENERATOR program (Roads 2001b), during a compositional residency at CCMIX (Centre de Création Musicale Iannis Xenakis, Paris, April 2002). Audible EcoSystemics n.1 was premiered in Stoke-on-Trent (Keele University, October 2002), and was soon played again in Leicester (City Gallery & De Montfort University, October 2002). In these UK concerts, Kurt Hebel took care of the set-up and supervised the overall performance. On the occasion of the Italian premiere, in Florence (Centro Tempo Reale, May 2003), I could more thoroughly work out the overall ecosystemic dynamics, with the assistance of Alvise Vidolin. Another performance took place in Coimbra, Portugal (Musica Viva festival, September 2003), with technical support by Carlos Alberto Augusto.

In a second live electronics solo, Audible Eco-Systemics n.2 (Feedback Study), the only raw sound material is Larsen tones created live by carefully handling the gain of the two condenser microphones, using a (possibly analogue!) mixer console.
At the time of writing, this work is scheduled for premiere in Ghent (IPEM, October 2003).

REFERENCES

Collet, P., and Eckmann, J.-P. 1980. Iterated Maps on the Interval as Dynamical Systems. Boston: Birkhäuser.
Dinkla, S. 1994. The history of interfaces in interactive art. In Proc. of the 1994 Int. Symp. on the Electronic Arts (ISEA).
Di Scipio, A. 1994a. Micro-time sonic design and the formation of timbre. Contemporary Music Review 10(2).
Di Scipio, A. 1994b. Formal processes of timbre composition challenging the dualistic paradigm of computer music. In Proc. of the 1994 Int. Computer Music Conf. (ICMC).
Di Scipio, A. 1997. Interactive micro-time sonic design. Two compositional examples. Journal of Electroacoustic Music 10.
Hamman, M. 1999. From symbol to semiotic: representation, signification and the composition of music interaction. Journal of New Music Research 28(2).
Lewis, G. 1999. Interacting with the latter-day musical automaton. Contemporary Music Review 18(3).
Maturana, H., and Varela, F. 1980. Autopoiesis. The Realization of the Living. Dordrecht: D. Reidel.
Morin, E. 1977. La méthode. La nature de la nature. Paris: Seuil.
Riegler, A. 2000. Web documentation, available from <www.univie.ac.at/constructivism/>.
Roads, C. 2001a. Microsound. Cambridge, MA: MIT Press.
Roads, C. 2001b. Sound composition with pulsars. Journal of the Audio Engineering Society 49(3).
Rowe, R. 1993. Interactive Music Systems. Cambridge, MA: MIT Press.
Rowe, R. 2001. Machine Musicianship. Cambridge, MA: MIT Press.
Schnell, N., and Battier, M. 2002. Introducing composed instruments. Technical and musicological implications. In Proc. of the 2002 Conf. on New Interfaces for Musical Expression (NIME).
Smith, T., and Smith, J. O. 2002. Creating sustained tones with the cicada’s rapid sequential buckling mechanism. In Proc. of the 2002 NIME.
Truax, B. 1978. Computer music composition: the polyphonic POD system. IEEE Computer, August 1978.
Truax, B. 1994. Discovering inner complexity: time-shifting and transposition with a real-time granulation technique. Computer Music Journal 18(2).
von Bertalanffy, L. 1968. General System Theory. New York.
von Foerster, H. 1960. On self-organizing systems and their environments. In C. Yovits (ed.) Self-Organizing Systems. New York.
von Glasersfeld, E. 1999. The Roots of Constructivism. Unpublished lecture text presented at the Scientific Reasoning Research Institute, 1999. Available from <www.oikos.org/>.