
Data-Driven Live Coding with DataToMusic API

Takahiko Tsuchiya & Jason Freeman
Georgia Institute of Technology
Center for Music Technology
840 McMillan St., Atlanta, GA 30318
[email protected]
[email protected]

Lee W. Lerner
Georgia Institute of Technology
Georgia Tech Research Institute
250 14th St. NW, Atlanta, GA 30318
[email protected]

ABSTRACT

Creating interactive audio applications for web browsers often involves challenges such as time synchronization between non-audio and audio events within thread constraints, and format-dependent mapping of data to synthesis parameters. In this paper, we describe a unique approach to these issues with a data-driven symbolic music application programming interface (API) for rapid and interactive development. We introduce the DataToMusic (DTM) API, a data-sonification tool set for web browsers that utilizes the Web Audio API^1 as the primary means of audio rendering. The paper demonstrates the possibility of processing and sequencing audio events at the audio-sample level by combining various features of the Web Audio API, without relying on the ScriptProcessorNode, which is currently under redesign. We implemented an audio event system in the clock and synthesizer classes of the DTM API, in addition to a modular audio effect structure and a flexible data-to-parameter mapping interface. For complex real-time configuration and sequencing, we also present a model system for creating reusable functions with a data-agnostic interface and symbolic musical transformations. Using these tools, we aim to create a seamless connection between high-level (musical structure) and low-level (sample-rate) processing in the context of real-time data sonification.

Keywords

Web Audio API, Data Sonification, Sample-Level Modulation, Real-Time Clock, Live Coding

^1 http://www.w3.org/TR/webaudio/
^2 For example, http://www.nytimes.com/interactive/2015/us/year-in-interactive-storytelling.html

Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: owner/author(s).
Web Audio Conference WAC-2016, April 4–6, 2016, Atlanta, USA.
© 2016 Copyright held by the owner/author(s).

1. INTRODUCTION

In recent years, web browsers have become a versatile platform for interactive multimedia applications. Many web pages, for example, integrate various data sources and real-time visual rendering to create interactive data visualizations^2. Similar to visualization, data sonification, the use of non-speech audio to represent information [14], is a widely explored area for analytics, communication, and other purposes. Data sonification is used in both practical and artistic applications. In the latter, musicians have created algorithmic compositions and live performances with non-musical information such as weather and sensor input [7, 8]. In recent years, more and more applications of data sonification have been deployed online for web browsers.

In a previous paper, we introduced the DataToMusic (DTM) application programming interface (API), a JavaScript tool set for data sonification in web browsers [13]. In that paper, we discussed the effectiveness of common musical structures and expressions for representing multi-dimensional data. Using the DTM API, we explored possibilities in data-agnostic models that can flexibly translate unknown data input to musical output. We created such algorithms by combining various analysis and transformation tools of the API as well as rendering methods, including real-time notation with Guido [5] and audio synthesis and playback using the Web Audio API. We also developed a live coding [1] capability in the DTM API, allowing us to use it in musical performance in addition to interactive development within a web browser.

Live coding benefits performers and developers in data sonification in many ways. For example, it lets us experiment with the design of musical algorithms as we process a large data set or a data stream in real time. With immediate feedback on design changes, it creates a "continuity between the old and the new behavior" [10] that enables fine tuning of designs without interrupting the musical flow in time and rhythm. Live coding also requires a robust modular framework in which we can safely connect and disconnect modules, making the reconfiguration of a complex application easier.

Developing a real-time system capable of live coding, however, involves technical challenges. For instance, creating a precise and fail-safe clock in single-threaded JavaScript is very difficult when we want to transform and map data to audio, automate synthesis parameters, and sequence audio events all in a synchronized manner. In addition, creating an interface for simple data-to-parameter mapping can be challenging when we work with high-level sequencing as well as the low-level audio processing that the Web Audio API is capable of. In this paper,
we discuss a runtime data-driven approach for audio synthesis and performance that takes advantage of the functionalities of the Web Audio API and addresses some of its limitations. The paper describes the design problems and proposes solutions for them in the context of the implementation of our API's modules. In the following section, we review the literature and compare prior approaches to ours. Section 3 introduces the DataToMusic API, focusing on the implementations of the synthesizer and clock modules and on the adaptive musical model, which may be applied to symbolic as well as timbre-level transformations.

2. RELATED WORK

In the last few decades, interactive and real-time coding in native environments has become increasingly popular among multimedia artists and developers, with popular tool sets such as Max/MSP^3, SuperCollider^4, ChucK^5, and many more. Compared to native tool sets, web tool sets often offer less comprehensive but more specialized or characteristic functionalities, and bring higher accessibility for general users and developers. For example, Gibber is an audio-visual live-coding environment [9] with high flexibility for parameter mapping and automation. It takes an "everything is a sequence" stance, which enables easy sequencing of any property or method of any Gibber object^6. Using Gibber's audio engine, the Gibberish API^7, BRAID allows us to interactively construct musical instruments with graphical interfaces and to quickly configure the synthesizer using an in-line code editor [12]. Another audio live-coding application is Wavepot^8, which automatically evaluates changes to the code at a musical interval to provide real-time, incremental feedback on design changes. Lissajous^9 allows multi-track musical sequencing with rapid chainable methods, utilizing the browser JavaScript console for read-eval-print-loop (REPL) based live coding. Another web API capable of live coding is EarSketch, an online programming education system based on music remixing [6]. In EarSketch, while the audio graph is constructed at user-script compilation, live coding is supported in the form of quick re-compilation of audio tracks during playback. This live-coding approach is effective for the real-time manipulation of audio events with minimal downtime between the events.

These applications use or combine runtime development paradigms such as just-in-time (JIT) compilation [10], REPL, selective line evaluation, and functional programming that allows the dynamic creation and handling of functionalities and algorithms as well as the automatic update of the audio graph. Collins suggests the dynamic restructuring of the audio graph as the main principle of audio live coding, as found in Max/MSP and SuperCollider [1]. The DTM API extends this idea with real-time data processing and mapping for creating complex audio expressions using JIT compilation as well as intervallic evaluation of code.

^3 https://cycling74.com/
^4 http://supercollider.github.io/
^5 http://chuck.cs.princeton.edu/
^6 https://www.gitbook.com/book/bigbadotis/gibber-user-manual/details
^7 http://www.charlie-roberts.com/gibberish/
^8 http://wavepot.com/
^9 http://lissajousjs.com/

3. DATATOMUSIC API

3.1 Overview

The DataToMusic API is a JavaScript library for data-agnostic sonification in web browsers. We originally developed this tool set to experiment with symbolic musical structures and create reusable models for varying data formats. When creating a sonification application, our data set or data stream usually has a unique dimensionality, cardinality, set of types, and value ranges. Integrating a specific data format can lead to the design of an audio- and data-mapping scheme that is not easily reusable, and its audio or musical expressivity may also depend on the particular data.

To address such problems, researchers in sonification have proposed reusable design frameworks such as model-based sonification (MBS) [4] and parameter mapping sonification (PMSon) [3]. MBS offers high interactivity and generalizability with acoustic modeling. Nonetheless, it can be computationally demanding for web-browser-based implementations, and it is also mainly specialized in timbral rather than musical expressivity. PMSon recommends strategies for generalized data preprocessing, analysis, and mapping procedures for data-to-sound synthesis. While this technique is widely accepted, the mapping of a PMSon system may not be compatible with various data sources.

Although DTM was inspired by PMSon in the audio synthesis domain, it focuses on creating a model structure that adaptively maps data input to the parameters of a musical structure, providing a uniform mapping interface similar to the "flowboxes" of UrSound proposed by Essl [2]. The following code example shows the default adaptive mapping models of DTM, which take a single-dimensional array of any type, convert the type (e.g., a character array may be encoded into a numerical representation such as a bag-of-words vector, ordered by frequency), and normalize it, while preserving the domain range of the input and re-sampling into a target length if specified (Code Example 1).

// Default mapping models that convert an input
// array (of any type) into a normalized numeric array.
uni = dtm.model('unipolar'); // 0 to 1
bi = dtm.model('bipolar');   // -1 to 1

// Create a synthesizer object
s = dtm.synth().freq(440).play()

// Synth amp modulated with 'hello'
s.amp(uni('hello').fit(16, 'linear'))

// Pan modulated with a linear envelope
s.pan(bi([1,3,2,5]).fit(1000))

// Random values with the length of 1024.
random = dtm.gen('random').size(1024);
s.wavetable(bi(random));

Code Example 1: Adaptive Mapping Models
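To make the behavior of these default models more concrete, the following is a minimal sketch in plain JavaScript of the kind of type conversion and normalization a unipolar model performs. It is not the DTM implementation; the function name and the frequency-based encoding of non-numeric items are illustrative simplifications of the bag-of-words idea described above.

// Sketch only: normalize an arbitrary array to the 0-1 range.
function toUnipolar(input) {
  var values;
  if (typeof input[0] === 'number') {
    values = input.slice();
  } else {
    // Encode non-numeric items by their frequency of occurrence
    // (a simplified stand-in for the bag-of-words encoding).
    var counts = {};
    input.forEach(function (v) { counts[v] = (counts[v] || 0) + 1; });
    values = input.map(function (v) { return counts[v]; });
  }
  var min = Math.min.apply(null, values);
  var max = Math.max.apply(null, values);
  var span = (max - min) || 1; // avoid dividing by zero for constant input
  return values.map(function (v) { return (v - min) / span; });
}

toUnipolar([3, 9, 6]);         // [0, 1, 0.5]
toUnipolar('hello'.split('')); // 'l' occurs most often and maps to 1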
Table 1: The Main Modules in DTM

Data Structure:        dtm.data, dtm.array, dtm.gen
Real-time Operations:  dtm.master, dtm.clock
Model Abstracts:       dtm.model, dtm.instr
Output:                dtm.synth, dtm.osc, dtm.guido
For creating an adaptive mapping interface, transformation functions, and other real-time processing features, the DTM API includes various modules categorized as follows: data structures, helper functions, real-time event handlers, model abstracts, and outputs and renderers (Table 1). In this paper, we mainly focus on the dtm.synth (output) and dtm.clock (event handler) modules, which together integrate the Web Audio API in a novel approach.

3.2 Synthesizer Implementation

Figure 1: Audio Event Overview

In designing and developing the synthesizer class, we examined a few unconventional approaches to achieve a balance among the ease of data mapping, the modularity of audio effects, and sample-level operation on the audio event with high-resolution automation and sequencing.
The dtm.synth module is essentially an interface to the Web Audio API that offers real-time audio synthesis and a flexible audio graph environment. Writing out instantiations and connections of nodes directly in Web Audio, however, can become quite verbose and is not suitable for rapid development or live coding scenarios. The dtm.synth instead provides simple chainable methods for constructing as well as reordering audio nodes (which may or may not consist of the default Web Audio nodes such as a DelayNode) by simply moving the insertion point of a method call (see Code Example 2). Similar techniques for the dynamic construction of an audio effect chain are found in other Web Audio applications such as EarSketch.

// Create a note
var s = dtm.synth().play()
// Set the wavetable (a square wave)
s.wavetable([-1,1])
// Pre-rendering delay effect
s.delay(0.3)
// Another delay for comb-filtering
s.delay(0.9, 0.001, 0.8)
// A low-pass filter
s.lpf(2000)
// Post-rendering sample-level effect
s.bitquantize(8)
// Post-rendering LPF effect
s.lpf.post(5000, 1)
// Panning, only applied at the post-rendering stage
s.pan(-0.2)

Code Example 2: Chaining Audio Effects

In the dtm.synth, we can apply custom effects to an audio event at the sample level, without the ScriptProcessorNode, while operating with data in "real time". This is done by utilizing automation methods such as setValueCurveAtTime and multiple offline audio contexts of the Web Audio API. The basic framework of audio synthesis and parameter mapping in the dtm.synth consists of two distinct phases: an off-line rendering of audio events followed by real-time playback and processing of the rendered clip (see Figure 1). In the first phase, the basic parameters such as the wavetable, amplitude, and frequency are created with default values in an instance of the OfflineAudioContext. The off-line events, including the pre-rendering effect chain, are processed with parameter automations and then passed to a new BufferSourceNode for real-time playback. The pre-rendering effects can be, for example, a delay, a filter, or a ring modulator and a frequency modulator that utilize audio-rate (or control-rate, depending on the parameter of an AudioNode) modulation via the setValueCurveAtTime function.

In the second stage, post-rendering audio effects are applied to the rendered buffer. The post-rendering effects may include similar types to the pre-rendering effects, but also add sample-level operations such as a bit quantizer and wave shapers that directly modify the rendered buffer. This structure, therefore, allows us to apply custom audio effects either in real time or in an instantaneous manner. The two-fold rendering is especially effective with wavetable synthesis, in which one may want to apply sample-level effects to the wavetable itself (which is typically very short) as well as to the resulting audio from parameter automations with a longer duration.
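As a point of reference, the following is a stripped-down sketch of this two-phase idea in plain Web Audio code; it is not the DTM source, and the node names and parameter values are illustrative (in a real page, context creation may also need to follow a user gesture). An automated note is rendered in an OfflineAudioContext, a sample-level effect is then applied directly to the rendered buffer, and the result is played back through an AudioBufferSourceNode.

// Sketch only: phase one renders an automated note off-line.
var sampleRate = 44100;
var offline = new OfflineAudioContext(1, sampleRate, sampleRate); // 1 second
var osc = offline.createOscillator();
var gain = offline.createGain();
osc.frequency.value = 440;
// Audio-rate automation from a data-driven amplitude curve.
gain.gain.setValueCurveAtTime(new Float32Array([0, 1, 0.2, 0]), 0, 1);
osc.connect(gain);
gain.connect(offline.destination);
osc.start(0);

offline.startRendering().then(function (rendered) {
  // Phase two: a post-rendering, sample-level effect (a crude bit quantizer).
  var samples = rendered.getChannelData(0);
  var steps = 16;
  for (var i = 0; i < samples.length; i++) {
    samples[i] = Math.round(samples[i] * steps) / steps;
  }
  // Play back the processed clip in a real-time context.
  var ctx = new AudioContext();
  var source = ctx.createBufferSource();
  source.buffer = rendered;
  source.connect(ctx.destination);
  source.start();
});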
Another design challenge for the dtm.synth was interfacing the data input to the Web Audio synthesis and parameter curves. In our previous implementation of the dtm.synth, the audio synthesis parameters, such as an oscillator's frequency (a "number" type), a wavetable (a PeriodicWave generator using a numerical array), and an amplitude envelope ([A, D, S, R]), all accepted non-uniform data structures. Many of the parameters had a single-value interface, and they were automated with the setTargetAtTime method triggered by the real-time clock. With this interface, one could take some values from a data source to modulate various parameters, but it lacked the flexibility to map a sequence of any length to a parameter, or to synthesize and modulate at a higher rate and with complex curves. In addition, the timing of the real-time clock for updating parameters was not reliable enough for precise sequencing. To address these issues, we took a completely new approach to parameter mapping and automation. In the new version, the dtm.synth uses a variable-length Float32Array for every modulatable parameter, including the wavetable for the oscillator. Compared to the previous single-value mapping interface (which can still be emulated by using a single-value array), this allows a more direct mapping of data points to time-ordered events, close to a linear value-to-value mapping. One can process complex curves with a large number of data points (e.g., 10,000 or more) using the dtm.array, or generate simple shapes as an LFO with a few data points, and map them to any parameter.

// Create a synth object
var s = dtm.synth().play();

// Create a wavetable with an array generator
var someSteps = dtm.gen('noise').size(10);
// Stretch the wavetable into the length of 3000, using cubic interpolation
s.wavetable(someSteps.fit(3000, 'cubic'));

// Generate a pattern, rescale and quantize
var melody = dtm.gen('fibonacci')
  .size(10).range(60, 90).round();

// Assign to the MIDI pitch with some transformation
s.notenum(melody.repeat(2).mirror());
// Set the base amplitude
s.amp(0.5);

// Modulate the base amplitude with repeating ramps
s.amp.mult(dtm.gen('decay')
  .repeat(melody.len))

Code Example 3: Mapping Arrays to the dtm.synth

For automating parameters in real time, we tested and implemented two approaches: one with a single call of setValueCurveAtTime, and the other with setValueAtTime called for every data point. The setValueCurveAtTime method is beneficial in several ways; for example, it automatically fits an array to the target length, and it can modulate a parameter at a higher rate, as described above. It has, however, limitations in the current browser implementations in terms of the interpolation method and the synchronization of array data points to time. For the value interpolation, as opposed to the linear interpolation specified in the API documentation, it only applies a step interpolation to the array. The resulting stepped value curve is also shifted in time when applied, causing an unwanted rhythmic offset.^10

^10 In addition to these limitations, the audio-rate modulation of setValueCurveAtTime does not work well in Firefox. These issues were present in Chrome, Firefox, and Safari in late 2015. As of February 2016, Chrome has implemented the linear interpolation behavior.

The setValueAtTime method, in contrast, works more reliably for time synchronization. Therefore, by default, the dtm.synth first tries to use setValueAtTime. As neither automation method provides linear interpolation in all browsers, the user of the DTM API is expected to utilize the fit and stretch functions of the dtm.array when mapping it to a parameter of the dtm.synth. These functions re-sample the input array into the target length with interpolation methods such as linear, step, cubic, fill-zeros, and many others. With a relatively low number of data points (up to a few thousand per second), the setValueAtTime method works reliably and precisely. However, when the number of data points is larger, or even exceeds the duration of the audio event in samples, the large number of setValueAtTime calls being scheduled starts to cause delays in the main and audio threads. In such cases, the dtm.synth automatically switches, at a certain threshold, to the setValueCurveAtTime method for less computation but with less timing accuracy.
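A minimal sketch of this scheduling strategy in plain Web Audio code might look like the following; it is not the DTM source, and the helper name and the threshold value are illustrative assumptions.

// Sketch only: schedule a data array onto an AudioParam.
function scheduleCurve(param, values, startTime, duration, threshold) {
  threshold = threshold || 5000;
  if (values.length <= threshold) {
    // One call per data point: accurate timing, stepped values.
    for (var i = 0; i < values.length; i++) {
      param.setValueAtTime(values[i], startTime + duration * i / values.length);
    }
  } else {
    // A single call for the whole curve: cheaper, but interpolation and
    // timing depend on the browser implementation.
    param.setValueCurveAtTime(new Float32Array(values), startTime, duration);
  }
}

// e.g., scheduleCurve(gainNode.gain, dataPoints, ctx.currentTime, 1.0);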
3.3 Clock Implementation

When programming real-time musical algorithms and applications, a clock generator is essential for playing musical notes and processing other events in a synchronized manner. The DTM API, in fact, heavily relies on clocks for audio synthesis, creating rhythmic structures, processing data (such as streaming and block-wise querying), and live-coding operations. Despite many attempts to implement a precise and robust clock in browser JavaScript, implementation has always been difficult because of the limitation of single-threaded operation, which may randomly delay a clock callback because of other heavy computations such as the rendering of visual elements. We try to implement a clock system that minimizes such artifacts on the audio synthesis and the rhythmic performance of audio events by utilizing the Web Audio schedulers and error compensation with a lookahead time for the dtm.synth.

In our earlier implementation of the DTM API, we experimented with the behaviors of callback clocks based on setInterval, the ScriptProcessorNode, the onended EventHandler of audio source nodes, and the requestAnimationFrame method. As discussed by Wilson [15], the delay caused by main-thread operations is highly unpredictable, directly affecting the callback timing. Even using the audio-thread timer and callbacks with Web Audio functions such as the BufferSourceNode, OscillatorNode, and ScriptProcessorNode, the main-thread delays cannot be isolated, and this adds the complication of correcting the timing gap caused by buffer-based audio processing. For implementing a re-schedulable clock with the Web Audio API, the most common approach may be to combine a lower-rate main-thread clock for short-segmented scheduling with a higher-rate, higher-precision audio-thread scheduler for Web Audio events [15, 11], with optional overlaps for overcoming unpredictable delays on the main thread. This technique is effective in managing tempo changes in real time, but its application is basically limited to Web Audio events, as we cannot precisely synchronize main-thread functions with the Web Audio events.
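For reference, a minimal sketch of this common two-clock lookahead pattern (after Wilson [15]) is shown below; it is not the dtm.clock implementation, and the names and timing constants are illustrative. A low-rate main-thread timer repeatedly looks ahead and schedules the next few events on the precise AudioContext timeline.

// Sketch only: a low-rate main-thread loop schedules precisely timed
// Web Audio events a short lookahead window into the future.
var ctx = new AudioContext();
var tempo = 120;               // beats per minute
var beatDuration = 60 / tempo; // seconds per beat
var lookahead = 0.1;           // scheduling window in seconds
var nextNoteTime = ctx.currentTime;

function playNoteAt(time) {
  var osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start(time);
  osc.stop(time + 0.1);
}

function scheduler() {
  // Schedule every event that falls within the lookahead window.
  while (nextNoteTime < ctx.currentTime + lookahead) {
    playNoteAt(nextNoteTime);
    nextNoteTime += beatDuration;
  }
  // The timer only needs to wake up often enough to refill the window;
  // its jitter does not affect the already-scheduled audio times.
  setTimeout(scheduler, 25);
}
scheduler();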
In order to synchronize the audio events, the rhythmic structure generated from data, and constant array processing, we employed a similar approach to the above-mentioned twofold clocks, with additional functions such as process deferring and a lookahead delay for timing-error compensation, master-slave synchronization, and callback management for live coding. For clock synchronization, the real-time master clock runs at the highest resolution for a given tempo, and the tick of a slave clock is triggered at a specified lower rate. This allows multiple instances of dtm.clock with different rates to be synchronized. In a musical context, a synchronized clock is typically used at a fixed rate between a quarter note and a few measures with the dtm.synth audio events. In this rather large interval, a precisely timed sequence of Web Audio events can be generated using a single note event or a set of notes with specified delays. This callback clock is also used for other output formats, such as real-time musical notation and note lists, as noted in Section 3.1.

As mentioned above, the dtm.synth utilizes the lookahead value of the dtm.clock for error compensation. The API connects these objects in an automatic and context-aware manner. That is, once a dtm.synth object is instantiated within a dtm.clock's callback function, the synth object locates the parent clock in the context^11 and retrieves its lookahead value as well as the tick interval. From these, the dtm.synth calculates the starting time of the audio processes and the event duration for parameter modulations (see Code Example 4). In particular, the lookahead period is used to defer the first, off-line buffer rendering until all the array operations (and other heavy computations) are resolved; the second, on-line rendering is then played at a delayed timing using the specified lookahead value. Besides this automatic time adjustment, it is also possible to separate the clock and synth and assign an external clock used by another synth object in order to synchronize the audio events together.

^11 This is done by using the Function.caller.arguments property.

// Generate a decaying envelope with the length of 1000.
var env = dtm.gen('decay').size(1000);

dtm.clock(function () {
  // Note duration set to 0.25 seconds.
  dtm.synth().play()
    .notenum(60).amp(env);

  // Duration may also be manually specified.
  dtm.synth().play().dur(2.0)
    .notenum(67).amp(env);

// Set the clock behavior
}).lookahead(0.1).bpm(120).time(1/8);

Code Example 4: Automatic Duration and Lookahead

In the context of live coding, the clock may be used for periodically (re-)evaluating the entire script, or a selected part of it, as well as for managing the registered callback functions. It keeps track of named and anonymous callback functions using either the function name or the whole (stringified) structure of the function, detects live modifications in them, and selectively retains, updates, or clears them. This helps prevent registering the old and new versions of a callback function separately.
3.4 Live Coding and Mapping Complex Sequences

The dtm.synth and dtm.clock modules, therefore, allow a complex sequence of audio events to be constructed in (almost) real time with sample-rate parameter modulation by data. A parameter curve may contain anywhere from a single data point to thousands of data points, which can be time-scaled dynamically with the dtm.array transformation functions (i.e., using up- or down-sampling with various interpolation methods) and then fit into the total sample length of the audio event (Code Example 5).

// Load an offline data set.
dtm.data('sample.csv', function (d) {
  // Get a column by the index.
  var data = d.col(0);

  // Create an exponentially decaying curve from 1 to 0
  var env = dtm.array([1,0])
    .fit(1000, 'linear')
    .expcurve(100);

  // Random jitter between 0 and 0.3, of length 100
  var sus = dtm.gen('random', 0.3)
    .size(8).fit(100, 'cubic');

  env.concat(sus);

  var s = dtm.synth().play().amp(env)

  // clone() allows multiple edits from the same source
  s.wavetable(data.clone().range(-1,1))
  s.freq(data.clone().range(1000, 8000)
    .logcurve(200).fit(16))
  // Downsample into the length of 16 (a typical musical beat length).
  s.bitquantize(data.clone().range(16,2))

  var sin = dtm.gen('sine').size(32);
  s.lpf(sin.range(200,2000).logcurve(30));
});

Code Example 5: Mapping Data to a dtm.synth

One concern, however, is that the time scale of the parameter curve is always relative to the duration of an audio event (set by the clock interval or the duration parameter of the dtm.synth), which may require the user's attention to the resulting speed of modulation for temporal or rhythmic alignment within a musical structure. Another potential inconvenience is that the dtm.synth expects a certain range of numerical values for each parameter. A data input therefore needs to be converted according to the input data type, range, and distribution, as well as the synth parameter ranges. Such requirements for adapting the data format to various parameters are sometimes not ideal in live-coding situations, as they slow down the design process and data (re)mapping, and may also cause semantic errors. We can automate the mapping process by using the previously mentioned model system as a simple scaler and type converter (Code Example 6).
// Create a model object
var freqModel = dtm.model('array')
  // Specify the type conversion method
  .toNumeric('histogram')
  // Modify the preset behavior
  .domain(function (a) {
    freqModel.params.domain = a.get('extent');
  })
  // Default behavior: freqModel(data)
  .output(function (a, c) {
    return a.range(20, 200)
      .logcurve(30)
      .fit(c.get('div'));
  });

// Create a self-repeating note
dtm.synth().play().repeat()
  .freq(freqModel(data.block(100)));

Code Example 6: Creating a Model for a Synth Parameter

Lastly, although a single audio event with the dtm.synth may be able to create a rich musical expression using data-driven parameter automation, one can create even more dynamic expressions with rhythmic, melodic, or harmonic sequences of audio events. Code Example 7 shows a simple sequencing model that delays specific beats to create a swing effect.

var swing = dtm.model()
  // Modify the default method-call behavior
  .output(function (clock) {
    if (clock.get('beat') % 2 === 0) {
      // No delay for the down beats
      return 0;
    } else {
      // Delay the up beats by 15%
      return clock.get('interval') * 0.15;
    }
  });

dtm.clock(function (c) {
  var delay = swing(c);
  // Delay the playback timing
  dtm.synth().play().offset(delay);
});

Code Example 7: Creating a Swing-Rhythm Model

Another approach to rhythmic sequencing is to modulate the interval and duration of self-repeating synth notes (Code Example 8).

// Create a self-repeating note
dtm.synth().play().rep(Infinity)
  .amp(dtm.gen('decay').expcurve(10))

  // Each note's duration is randomized
  .dur(dtm.gen('random',0,0.5).size(8))

  // The onset interval alternates between these values
  .interval([0.5, 0.3])

  // Each note can contain a complex pitch modulation
  .notenum(data.fit(16))

Code Example 8: Rhythmic Sequencing Without a Clock

4. CONCLUSION

We presented our approaches for implementing a data-driven interface for a Web-Audio-based synthesizer, as well as a real-time clock system for controlling audio and non-audio events with fewer timing errors. In addition, using the sample-level mapping and the musical structure models, we described possibilities for complex musical expressions at both symbolic and timbre-level time scales. We experimented with these features in the DataToMusic API, a data sonification library for web browsers capable of live coding. The DataToMusic API is publicly available as a GitHub repository^12 and as a demo application for on-line live coding^13.

^12 https://github.com/GTCMT/DataToMusicAPI
^13 http://dtmdemo.herokuapp.com/

5. REFERENCES

[1] N. Collins. Generative music and laptop performance. Contemporary Music Review, 22(4):67–79, 2003.
[2] G. Essl. UrSound: Live Patching of Audio and Multimedia Using a Multi-Rate Normed Single-Stream Data-Flow Engine. Ann Arbor, MI: MPublishing, University of Michigan Library, 2010.
[3] F. Grond and J. Berger. Parameter mapping sonification. The Sonification Handbook, pages 363–397, 2011.
[4] T. Hermann. Model-based sonification. The Sonification Handbook, pages 399–427, 2011.
[5] H. H. Hoos, K. A. Hamel, K. Renz, and J. Kilian. The GUIDO Notation Format: A Novel Approach for Adequately Representing Score-Level Music. 1998.
[6] A. Mahadevan, J. Freeman, B. Magerko, and J. C. Martinez. EarSketch: Teaching computational music remixing in an online Web Audio based learning environment. 2015.
[7] A. Polli. Atmospherics/Weatherworks: A Multi-Channel Storm Sonification Project. In ICAD, 2004.
[8] T. Riley and Kronos Quartet. Sun Rings, 2002.
[9] C. Roberts and J. Kuchera-Morin. Gibber: Live coding audio in the browser. Ann Arbor, MI: MPublishing, University of Michigan Library, 2012.
[10] J. Rohrhuber, A. de Campo, and R. Wieser. Algorithms today - Notes on language design for just in time programming. context, 1:291, 2005.
[11] N. Schnell, V. Saiz, K. Barkati, and S. Goldszmidt. Of Time Engines and Masters. 2015.
[12] B. Taylor and J. Allison. BRAID: A Web Audio Instrument Builder with Embedded Code Blocks. 2015.
[13] T. Tsuchiya, J. Freeman, and L. W. Lerner. Data-to-Music API: Real-Time Data-Agnostic Sonification with Musical Structure Models. In Proc. of the 21st Int. Conf. on Auditory Display, 2015.
[14] B. N. Walker and M. A. Nees. Theory of Sonification. In The Sonification Handbook. Berlin, 2011.
[15] C. Wilson. A Tale of Two Clocks: Scheduling Web Audio with Precision. 2013.
