Csound
1. PREFACE
Csound is one of the most widely acknowledged and long-standing programs in the field of audio programming. It was developed in the mid-1980s at the Massachusetts Institute of Technology (MIT) by Barry Vercoe. Csound's history lies deep in the roots of computer music, however, as it is a direct descendant of the oldest computer programs for sound synthesis, the MUSIC N family by Max Mathews. Csound is free, distributed under the LGPL licence, and it is tended and expanded by a core of developers with support from a wider community.

Csound has been growing for more than 25 years. There are few things related to audio that you cannot do with Csound. You can work by rendering offline, or in real-time by processing live audio and synthesizing sound on the fly. You can control Csound via MIDI, OSC, or via the Csound API (Application Programming Interface). In Csound you will find the widest collection of tools for sound synthesis and sound modification, including special filters and tools for spectral processing.

Is Csound difficult to learn? Generally, graphical audio programming languages like Pd, Max or Reaktor are easier to learn than text-based audio programming languages like Csound, SuperCollider or ChucK. In a graphical language you cannot make a typo which produces an error you do not understand; you program without being aware that you are programming. It feels like patching together different units in a studio. This is a fantastic approach. But when you deal with more complex projects, a text-based programming language is often easier to use and debug, and many people prefer programming by typing words and sentences rather than by wiring symbols together using the mouse.

Note: Thanks to the work of Victor Lazzarini and Davis Pyon, it is also very easy to use Csound as a kind of audio engine inside Pd or Max. See the chapter "Csound in other applications" for further information.

Amongst text-based audio programming languages, Csound is arguably the simplest. You do not need to know anything about objects or functions. The basics of the Csound language are a straightforward transfer of the signal flow paradigm to text. For example, to make a 400 Hz sine oscillator with an amplitude of 0.2, a Pd patch consists of little more than an oscillator object, scaled to an amplitude of 0.2 and connected to the audio output.
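The same instrument, which the next paragraph describes line by line, might be written in Csound like this (a sketch; the exact listing in the original example may differ slightly):

instr 1
aSig oscils 0.2, 400, 0
     out    aSig
endin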
One line for the oscillator, with amplitude, frequency and phase input; one line for the output. The connection between them is an audio variable (aSig). The first and last lines encase these connections inside an instrument. That's it.
But it is often difficult to find out how you can do all the things in Csound that are actually possible. Documentation and tutorials produced by many experienced users are scattered across many different locations. This was one of the main motivations for producing this manual: to facilitate a flow between these users and those willing to learn more about Csound, offering both the beginner and the advanced user all the necessary information about how they can work with Csound in any way they choose for creating their music. Ten years after the milestone of Richard Boulanger's Csound Book, the Csound FLOSS Manual is intended to be a platform for keeping the information about Csound up to date and to offer an easy-to-understand introduction and an explanation of different topics - not as detailed and in-depth as the Csound Book, but including new information and sharing this knowledge with the wider Csound community.

Throughout this manual we will attempt a difficult balancing act. We want to provide users with nearly everything important there is to know about Csound, but we also want to keep things simple and concise to save you from drowning under the thousands of things that we could say about Csound. At many points, this manual will link to other more detailed resources like the Canonical Csound Reference Manual (the primary documentation provided by the Csound developers and associated community over the years) and the Csound Journal (edited by Steven Yi and James Hearon), which is a great collection of articles on many different aspects of Csound. Good luck and happy Csounding!
3. ON THIS RELEASE
In spring 2010 a group of Csounders decided to start this project. The outline was suggested by Joachim Heintz and has been discussed and improved by Richard Boulanger, Oeyvind Brandtsegg, Andrés Cabrera, Alex Hofmann, Jacob Joaquin, Iain McCurdy, Rory Walsh and others. Rory also pointed us to the FLOSS Manuals platform as a possible environment for writing and publishing. Stefano Bonetti, François Pinot, Davis Pyon and Steven Yi joined later and wrote chapters.

For a volunteer project like this, it is not easy to "hold the line". So we decided to meet for some days for a "book sprint" to finish what we could, and publish a first release. We are happy and proud to do it now, with smoking heads and square eyes ... But we also know that this is just a first release, with a lot of potential for further improvements. A few chapters are simply empty. Others are not as complete as we wished them to be. Individual differences between the authors are perhaps larger than they should be. This is, hopefully, a beginning.

Everyone is invited to improve this book. You can write a still-empty chapter or contribute to an existing one. You can insert new examples. You just need to create an account at http://booki.flossmanuals.net. Or let us know your suggestions.

We had fun writing this book and hope you have fun using it. Enjoy!
Berlin, March 31, 2011
Joachim Heintz, Alex Hofmann, Iain McCurdy
jh at joachimheintz.de
alex at boomclicks.de
i_mccurdy at hotmail.com
4. LICENSE
All chapters copyright of the authors (see below). Unless otherwise stated, all chapters in this manual are licensed under the GNU General Public License version 2. This documentation is free documentation; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this documentation; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
AUTHORS
INTRODUCTION
PREFACE: Alex Hofmann 2010; Andres Cabrera 2010; Iain McCurdy 2010; Joachim Heintz 2010
HOW TO USE THIS MANUAL: Joachim Heintz 2010; Andres Cabrera 2010; Iain McCurdy 2011
CREDITS: adam hyde 2006, 2007; Joachim Heintz 2011
01 BASICS
A. DIGITAL AUDIO: Alex Hofmann 2010; Iain McCurdy 2010; Rory Walsh 2010; Joachim Heintz 2010
B. PITCH AND FREQUENCY: Iain McCurdy 2010; Rory Walsh 2010; Joachim Heintz 2010
C. INTENSITIES: Joachim Heintz 2010
02 QUICK START
A. MAKE CSOUND RUN: Alex Hofmann 2010; Joachim Heintz 2010; Andres Cabrera 2010; Iain McCurdy 2010
B. CSOUND SYNTAX: Alex Hofmann 2010; Joachim Heintz 2010; Andres Cabrera 2010; Iain McCurdy 2010
C. CONFIGURING MIDI: Andres Cabrera 2010; Joachim Heintz 2010; Iain McCurdy 2010
D. LIVE AUDIO: Alex Hofmann 2010; Andres Cabrera 2010; Iain McCurdy 2010; Joachim Heintz 2010
E. RENDERING TO FILE: Joachim Heintz 2010; Alex Hofmann 2010; Andres Cabrera 2010; Iain McCurdy 2010
03 CSOUND LANGUAGE
A. INITIALIZATION AND PERFORMANCE PASS: Joachim Heintz 2010
B. LOCAL AND GLOBAL VARIABLES: Joachim Heintz 2010; Andres Cabrera 2010; Iain McCurdy 2010
C. CONTROL STRUCTURES: Joachim Heintz 2010
D. FUNCTION TABLES: Joachim Heintz 2010; Iain McCurdy 2010
E. TRIGGERING INSTRUMENT EVENTS: Joachim Heintz 2010; Iain McCurdy 2010
F. USER DEFINED OPCODES: Joachim Heintz 2010
04 SOUND SYNTHESIS
A. ADDITIVE SYNTHESIS: Andres Cabrera 2010; Joachim Heintz 2011
B. SUBTRACTIVE SYNTHESIS: Iain McCurdy 2011
C. AMPLITUDE AND RING MODULATION: Alex Hofmann 2011
D. FREQUENCY MODULATION: Alex Hofmann 2011
E. WAVESHAPING
F. GRANULAR SYNTHESIS: Iain McCurdy 2010
G. PHYSICAL MODELLING
05 SOUND MODIFICATION
A. ENVELOPES: Iain McCurdy 2010
B. PANNING AND SPATIALIZATION: Iain McCurdy 2010
C. FILTERS: Iain McCurdy 2010
D. DELAY AND FEEDBACK: Iain McCurdy 2010
E. REVERBERATION: Iain McCurdy 2010
F. AM / RM / WAVESHAPING: Alex Hofmann 2011
G. GRANULAR SYNTHESIS: Iain McCurdy 2011
H. CONVOLUTION
I. FOURIER ANALYSIS / SPECTRAL PROCESSING: Joachim Heintz 2011
06 SAMPLES
A. RECORD AND PLAY SOUNDFILES: Joachim Heintz 2010; Iain McCurdy 2010
B. RECORD AND PLAY BUFFERS: Joachim Heintz 2010; Andres Cabrera 2010
07 MIDI
A. RECEIVING EVENTS BY MIDIIN: Iain McCurdy 2010
B. TRIGGERING INSTRUMENT INSTANCES: Joachim Heintz 2010; Iain McCurdy 2010
C. WORKING WITH CONTROLLERS: Iain McCurdy 2010
D. READING MIDI FILES: Iain McCurdy 2010
E. MIDI OUTPUT: Iain McCurdy 2010
11 CSOUND FRONTENDS
QUTECSOUND: Andrés Cabrera 2011
WINXOUND: Stefano Bonetti 2010
BLUE: Steven Yi 2011
12 CSOUND UTILITIES
CSOUND UTILITIES
14 EXTENDING CSOUND
EXTENDING CSOUND
OPCODE GUIDE
OVERVIEW: Joachim Heintz 2010
SIGNAL PROCESSING I: Joachim Heintz 2010
SIGNAL PROCESSING II: Joachim Heintz 2010
DATA: Joachim Heintz 2010
REALTIME INTERACTION: Joachim Heintz 2010
INSTRUMENT CONTROL: Joachim Heintz 2010
MATH, PYTHON/SYSTEM, PLUGINS: Joachim Heintz 2010
APPENDIX
GLOSSARY: Joachim Heintz 2010
LINKS: Joachim Heintz 2010; Stefano Bonetti 2010
V.1 - Final Editing Team in March 2011: Joachim Heintz, Alex Hofmann, Iain McCurdy
0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royaltyfree redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

BASICS
5. DIGITAL AUDIO
6. FREQUENCIES
7. INTENSITIES
5. DIGITAL AUDIO
At a purely physical level, sound is simply a mechanical disturbance of a medium. The medium in question may be air, solid, liquid, gas or a mixture of several of these. This disturbance in the medium causes molecules to move to and fro in a spring-like manner. As one molecule hits the next, the disturbance moves through the medium, causing sound to travel. These so-called compressions and rarefactions in the medium can be described as sound waves. The simplest type of waveform, describing what is referred to as 'simple harmonic motion', is a sine wave.
Each time the waveform signal goes above 0 the molecules are in a state of compression meaning they are pushing towards each other. Every time the waveform signal drops below 0 the molecules are in a state of rarefaction meaning they are pulling away from each other. When a waveform shows a clear repeating pattern, as in the case above, it is said to be periodic. Periodic sounds give rise to the sensation of pitch.
The time it takes to complete one cycle of a waveform is called the period, and the number of cycles per second is the frequency. Therefore the frequency is the inverse of the period, so a wave with a frequency of 100 Hz has a period of 1/100 or 0.01 seconds; likewise a frequency of 256 Hz has a period of 1/256, or about 0.004 seconds. To calculate the wavelength of a sound in any given medium we can use the following equation:
Wavelength = Velocity / Frequency
Humans can hear in the region between 20 Hz and 20000 Hz, although this can differ dramatically between individuals. You can read more about frequency in the next section of this chapter.
Phase: This is the starting point of our waveform. The starting point along the y-axis of our plotted waveform is not always 0. This can be expressed in degrees or in radians. A complete cycle of a waveform will cover 360 degrees or 2π radians.

Amplitude: Amplitude is represented by the y-axis of a plotted pressure wave. The strength at which the molecules pull or push away from each other will determine how far above and below 0 the wave fluctuates. The greater the y-values, the greater the amplitude of our wave. The greater the compressions and rarefactions, the greater the amplitude.
TRANSDUCTION
The analogue sound waves we hear in the world around us need to be converted into an electrical signal in order to be amplified or sent to a soundcard for recording. The process of converting acoustical energy in the form of pressure waves into an electrical signal is carried out by a device known as a transducer. A transducer, which is usually found in microphones, produces an electrical pressure, i.e. voltage, that changes constantly in sympathy with the vibrations of the sound wave in the air. The continuous variation of pressure is therefore 'transduced' into a continuous variation of voltage. The greater the variation of pressure, the greater the variation of voltage that is sent down the cable of the recording device to the computer. Ideally, the transduction process should be as transparent and clean as possible: whatever goes in should come out as a perfect voltage representation. In real-world situations, however, this is never the case. Noise and distortion are always incorporated into the signal. Every time sound passes through a transducer or is transmitted electrically, a change in signal quality will result. When we talk of noise we are talking specifically about any unwanted signal captured during the transduction process. This normally manifests itself as an unwanted hiss.
SAMPLING
The analogue voltage that corresponds to an acoustic signal changes continuously, so that at each instant in time it will have a different value. It is not possible for a computer to receive the value of the voltage for every instant because of the physical limitations of both the computer and the data converters (remember also that there are an infinite number of instants between every two instants!). What the soundcard can do, however, is to measure the level of the analogue voltage at intervals of equal duration. This is how all digital recording works and is known as 'sampling'. The result of this sampling process is a discrete, or digital, signal which is no more than a sequence of numbers corresponding to the voltage at each successive sample time. Below left is a diagram showing a sinusoidal waveform. The vertical lines that run through the diagram represent the points in time when a snapshot is taken of the signal. After the sampling has taken place we are left with what is known as a discrete signal, consisting of a collection of audio samples, as illustrated in the diagram on the right hand side below. If one is recording using a typical audio editor, the incoming samples will be stored in the computer's RAM (Random Access Memory). In Csound one can process the incoming audio samples in real time and output a new stream of samples, or write them to disk in the form of a sound file.
It is important to remember that each sample represents the amount of voltage, positive or negative, that was present in the signal at the point in time the sample or snapshot was taken. The same principle applies to recording of live video. A video camera takes a sequence of pictures of something in motion for example. Most video cameras will take between 30 and 60 still pictures a second. Each picture is called a frame. When these frames are played we no longer perceive them as individual pictures. We perceive them instead as a continuous moving image.
According to the sampling theorem, a soundcard or any other digital recording device will not be able to represent any frequency above half the sampling rate. Half the sampling rate is also referred to as the Nyquist frequency, after the Swedish-born engineer Harry Nyquist, who formalized the theory in the 1920s. What this means is that any signal with frequencies above the Nyquist frequency will be misrepresented: it will produce a frequency lower than the one being sampled. When this happens it results in what is known as aliasing or foldover.
ALIASING
Here is a graphical representation of aliasing.
The sinusoidal waveform in blue is being sampled at each arrow. The line that joins the red circles together is the captured waveform. As you can see, the captured waveform and the original waveform have different frequencies. Here is another example:
We can see that if the sample rate is 40,000 Hz there is no problem sampling a signal of 10 kHz. On the other hand, in the second example it can be seen that a 30 kHz waveform is not going to be correctly sampled: in fact we end up with a waveform of 10 kHz rather than 30 kHz. The following Csound instrument plays a 1000 Hz tone first directly, and then a 43100 Hz tone, which sounds like 1000 Hz again because it lies 1000 Hz below the sample rate of 44100 Hz: EXAMPLE 01A01.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
asig oscils .2, p4, 0
     outs   asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 2 1000  ;1000 Hz tone
i 1 3 2 43100 ;43100 Hz tone sounds like 1000 Hz because of aliasing
</CsScore>
</CsoundSynthesizer>
The same phenomenon takes place in film and video too. You may recall having seen wagon wheels apparently move backwards in old Westerns. Let us say, for example, that a camera is taking 60 frames per second of a wheel in motion. If the wheel completes one rotation in exactly 1/60th of a second, then every picture looks the same and, as a result, the wheel appears to stand still. If the wheel speeds up, i.e. increases its rotational frequency, it can appear as if the wheel is slowly turning backwards; this happens whenever the wheel completes slightly less than a whole number of rotations between snapshots, so that each new frame shows it a little 'behind' the previous one. This is the ugliest side-effect of aliasing - wrong information. As an aside, it is worth observing that a lot of modern 'glitch' music intentionally makes a feature of the spectral distortion that aliasing induces in digital audio. Audio CD quality uses a sample rate of 44100 Hz (44.1 kHz). This means that CD quality can only represent frequencies up to 22050 Hz. Humans typically have an absolute upper limit of hearing of about 20 kHz, thus making 44.1 kHz a reasonable standard sampling rate.
BIT-DEPTH RESOLUTION
Apart from the sample rate, another important parameter which can affect the fidelity of a digital signal is the accuracy with which each sample is known, in other words how precisely the strength of each voltage is measured. Every sample obtained is set to a specific amplitude (the measure of strength for each voltage) level. The number of levels depends on the precision of the measurement in bits, i.e. how many binary digits are used to store the samples. The number of bits that a system can use is normally referred to as the bit-depth resolution. If the bit-depth resolution is 3 then there are 8 possible levels of amplitude that we can use for each sample. We can see this in the diagram below. At each sampling period the soundcard plots an amplitude. As we are only using a 3-bit system, the resolution is not good enough to plot the correct amplitude for each sample. We can see in the diagram that some vertical lines stop above or below the real signal. This is because our bit-depth is not high enough to plot the amplitude levels with sufficient accuracy at each sampling period.
(An example with 4, 6, 8, 12 and 16 bit versions of a sine signal is planned for the next release.)
The standard resolution for CDs is 16 bit, which allows for 65536 different possible amplitude levels, roughly 32767 either side of the zero axis. Using bit depths lower than 16 is not a good idea as it will result in noise being added to the signal. This is referred to as quantization noise and is a result of amplitude values being excessively rounded up or down when being digitized. Quantization noise becomes most apparent when trying to represent low amplitude (quiet) sounds. Frequently a tiny amount of noise, known as a dither signal, will be added to digital audio before conversion back into an analogue signal. Adding this dither signal will actually reduce the more noticeable noise created by quantization. As higher bit-depth resolutions are employed in the digitizing process, the need for dithering is reduced. A general rule is to use the highest bit depth available. Many electronic musicians make use of deliberately low bit-depth quantization in order to add noise to a signal. The effect is commonly known as 'bit-crushing' and is relatively easy to do in Csound.
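A rough sketch of the idea (not taken from the original text; the parameter values are arbitrary) is to scale each sample up by the number of available quantization steps, truncate it to an integer and scale it back down:

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ;sine table

instr 1 ;simple bit-depth reduction sketch
ibits   =      p4                        ;desired bit depth, e.g. 4
ilevels =      2^(ibits-1)               ;quantization steps on each side of zero
asig    poscil 0.5, 220, giSine          ;source: 220 Hz sine
acrush  =      int(asig*ilevels)/ilevels ;truncate each sample to the available levels
        outs   acrush, acrush
endin
</CsInstruments>
<CsScore>
i 1 0 3 4 ;the 220 Hz tone at 4-bit resolution
i 1 4 3 8 ;the same tone at 8-bit resolution
</CsScore>
</CsoundSynthesizer>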
ADC / DAC
The entire process described above, of taking an analogue signal and converting it into a digital signal, is referred to as analogue to digital conversion, or ADC. Of course digital to analogue conversion, DAC, is also possible. This is how we get to hear our music through our PC's headphones or speakers. For example, if one plays a sound from Media Player or iTunes, the software will send a series of numbers to the computer soundcard - most likely 44100 numbers a second. If the audio that is playing is 16 bit, then these numbers will range from -32768 to +32767. When the soundcard receives these numbers from the audio stream it will output corresponding voltages to a loudspeaker. When the voltages reach the loudspeaker they cause the loudspeaker's magnet to move inwards and outwards. This causes a disturbance in the air around the speaker, resulting in what we perceive as sound.
6. FREQUENCIES
As mentioned in the previous section frequency is defined as the number of cycles or periods per second. Frequency is measured in Hertz. If a tone has a frequency of 440Hz it completes 440 cycles every second. Given a tone's frequency, one can easily calculate the period of any sound. Mathematically, the period is the reciprocal of the frequency and vice versa. In equation form, this is expressed as follows.
Frequency = 1 / Period
Period = 1 / Frequency
Therefore the frequency is the inverse of the period, so a wave of 100 Hz frequency has a period of 1/100 or 0.01 sec, likewise a frequency of 256Hz has a period of 1/256, or 0.004 seconds. To calculate the wavelength of a sound in any given medium we can use the following equation:
Wavelength = Velocity / Frequency
For instance, a wave of 1000 Hz in air (velocity of diffusion about 340 m/s) has a length of approximately 340/1000 m = 34 cm.
A lot of basic maths is about simplification of complex equations. Shortcuts are taken all the time to make things easier to read and equate. Multiplication can be seen as a shorthand for addition, for example 5 x 10 = 5+5+5+5+5+5+5+5+5+5. Exponents are shorthand for multiplication: 3^5 = 3x3x3x3x3. Logarithms are shorthand for exponents and are used in many areas of science and engineering in which quantities vary over a large range. Examples of logarithmic scales include the decibel scale, the Richter scale for measuring earthquake magnitudes and the astronomical scale of stellar brightnesses. Musical frequencies also work on a logarithmic scale; more on this later.

Intervals in music describe the distance between two notes. When dealing with standard musical notation it is easy to determine an interval between two adjacent notes. For example a perfect 5th is always made up of 7 semitones. When dealing with Hz values things are different. A difference of, say, 100 Hz does not always equate to the same musical interval. This is because musical intervals as we hear them are represented in Hz as frequency ratios. An octave, for example, is always 2:1. That is to say, every time you double a Hz value you will jump up by a musical interval of an octave. Consider the following: a flute can play the note A at 440 Hz. If the player plays another A an octave above it at 880 Hz, the difference in Hz is 440. Now consider the piccolo, the highest pitched instrument of the orchestra. It can play a frequency of 2000 Hz, but it can also play an octave above this at 4000 Hz (2 x 2000 Hz). While the difference in Hertz between the two notes on the flute is only 440 Hz, the difference between the two high-pitched notes on the piccolo is 2000 Hz, yet both pairs are only one octave apart. What all this demonstrates is that the higher two pitches become, the greater the difference in Hertz needs to be for us to recognize the difference as the same musical interval. The most common ratios found in the equal temperament scale are the unison (1:1), the octave (2:1), the perfect fifth (3:2), the perfect fourth (4:3), the major third (5:4) and the minor third (6:5).

The following example shows the difference between adding a certain frequency and applying a ratio. First, the frequencies of 100, 400 and 800 Hz all get an addition of 100 Hz. This sounds very different in each case, though the added frequency is the same. Second, the ratio 3/2 (a perfect fifth) is applied to the same frequencies. This always sounds the same, though the frequency displacement is different each time. EXAMPLE 01B02.csd
<CsoundSynthesizer>
<CsOptions>
-odac -m0
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
     prints "Playing %d Hertz!\n", p4
asig oscils .2, p4, 0
     outs   asig, asig
endin

instr 2
     prints "Adding %d Hertz to %d Hertz!\n", p5, p4
asig oscils .2, p4+p5, 0
     outs   asig, asig
endin

instr 3
     prints "Applying the ratio of %f (adding %d Hertz) to %d Hertz!\n", p5, p4*p5, p4
asig oscils .2, p4*p5, 0
     outs   asig, asig
endin
</CsInstruments>
<CsScore>
;adding a certain frequency (instr 2)
i 1 0 1 100
i 2 1 1 100 100
i 1 3 1 400
i 2 4 1 400 100
i 1 6 1 800
i 2 7 1 800 100
;applying a certain ratio (instr 3)
i 1 10 1 100
i 3 11 1 100 [3/2]
i 1 13 1 400
i 3 14 1 400 [3/2]
i 1 16 1 800
i 3 17 1 800 [3/2]
</CsScore>
</CsoundSynthesizer>
So what about the ratios mentioned above? As some readers will know, the current preferred method of tuning western instruments is based on equal temperament. Essentially this means that all octaves are split into 12 equal intervals. Therefore a semitone has a ratio of 2^(1/12), which is approximately 1.059463. So what about the reference to logarithms in the heading above? As stated previously, logarithms are shorthand for exponents. 2^(1/12) = 1.059463 can also be written as log2(1.059463) = 1/12. Therefore musical frequency works on a logarithmic scale.
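As a quick worked example: starting from A4 = 440 Hz, one equal-tempered semitone up is 440 x 2^(1/12) ≈ 466.16 Hz (A#4), and seven semitones up is 440 x 2^(7/12) ≈ 659.26 Hz, very close to the pure fifth of 440 x 3/2 = 660 Hz.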
MIDI NOTES
Csound can easily deal with MIDI notes and comes with functions that will convert MIDI notes to Hertz values and back again. In MIDI speak, A440 is equal to A4. You can think of A4 as being the fourth A from the lowest A we can hear (well, almost hear). Caution: like many 'standards' there is occasional disagreement about the mapping between frequency and octave number. You may occasionally encounter A440 being described as A3.
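As a small sketch of such a conversion (this listing is an illustration, not one of the manual's numbered examples), the cpsmidinn opcode turns a MIDI note number into a frequency in Hertz:

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ;sine table

instr 1
inote =         p4              ;MIDI note number from the score
icps  cpsmidinn inote           ;convert MIDI note number to Hertz
      print     icps            ;57 -> 220, 69 -> 440, 81 -> 880
asig  poscil    0.2, icps, giSine
      outs      asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0 1 57 ;A3
i 1 1 1 69 ;A4 = 440 Hz
i 1 2 1 81 ;A5
</CsScore>
</CsoundSynthesizer>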
7. INTENSITIES
REAL WORLD INTENSITIES AND AMPLITUDES
There are many ways to describe a sound physically. One of the most common is the Sound Intensity Level (SIL). It describes the amount of power passing through a certain surface, so its unit is Watt per square meter (W/m²). The range of human hearing is about 10^-12 W/m² at the threshold of hearing to 1 W/m² at the threshold of pain. For ordering this immense range, and to facilitate the measurement of one sound intensity based upon its ratio with another, a logarithmic scale is used. The unit Bel describes the relation of one intensity I to a reference intensity I0 as follows:

Sound Intensity Level in Bel: log10(I/I0)

If, for instance, the ratio I/I0 is 10, this is 1 Bel. If the ratio is 100, this is 2 Bel.
For real world sounds, it makes sense to set the reference value I0 to the threshold of hearing, which has been fixed as 10^-12 W/m² at 1000 Hertz. So the range of hearing covers about 12 Bel. Usually 1 Bel is divided into 10 decibel, so the common formula for measuring a sound intensity level is:

L = 10 * log10(I/I0) dB
While the Sound Intensity Level is useful to describe the way in which human hearing works, the measurement of sound is more closely related to the sound pressure deviations. Sound waves compress and expand the air particles and by this they increase and decrease the localized air pressure. These deviations are measured and transformed by a microphone. So the question arises: what is the relationship between the sound pressure deviations and the sound intensity? The answer is: sound intensity changes are proportional to the square of the sound pressure changes. As a formula:

Relation between Sound Intensity and Sound Pressure: I/I0 = (P/P0)^2

Let us take an example to see what this means. The sound pressure at the threshold of hearing can be fixed at 2*10^-5 Pascal. This value is the reference value of the Sound Pressure Level (SPL). If we now have a pressure of 2*10^-4 Pascal, i.e. ten times the reference, the corresponding sound intensity relation can be calculated as:

I/I0 = (2*10^-4 / 2*10^-5)^2 = 10^2 = 100
So, a factor of 10 in the pressure relation yields a factor of 100 in the intensity relation. In general, the dB scale for a pressure P related to the reference pressure P0 is:

Sound Pressure Level (SPL) in Decibel: L = 20 * log10(P/P0) dB
Working with digital audio basically means working with amplitudes. What we receive from microphones are amplitudes. Any audio file is a sequence of amplitudes. What you generate in Csound and write either to the DAC in realtime or to a sound file is, again, nothing but a sequence of amplitudes. As amplitudes are directly related to the sound pressure deviations, all the relations between sound intensity and sound pressure can be transferred to relations between sound intensity and amplitudes:
Relation between Intensity and Amplitudes: I/I0 = (A/A0)^2

Decibel (dB) Scale of Amplitudes: L = 20 * log10(A/A0) dB, for any amplitude A related to another amplitude A0.
If you drive an oscillator with an amplitude of 1, and another oscillator with an amplitude of 0.5, and you want to know the difference in dB, you calculate:

20 * log10(0.5/1) = 20 * (-0.301) ≈ -6 dB
So, the most useful thing to keep in mind is: When you double the amplitude, you get +6 dB; when you have half of the amplitude as before, you get -6 dB.
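Csound's value converters ampdb and dbamp perform exactly these calculations; a minimal sketch (it only prints values, so no audio output is needed):

<CsoundSynthesizer>
<CsOptions>
-n
</CsOptions>
<CsInstruments>
instr 1
idiff = dbamp(0.5) - dbamp(1) ;20*log10(0.5) - 20*log10(1), about -6.02
iback = ampdb(-6)             ;from dB back to an amplitude, about 0.5
      print idiff, iback
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>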
WHAT IS 0 DB?
As described in the last section, any dB scale - for intensities, pressures or amplitudes - is just a way to describe a relationship. To have any sort of quantitative measurement you will need to know the reference value referred to as "0 dB". For real world sounds, it makes sense to set this level to the threshold of hearing. This is done, as we saw, by setting the SIL to 10^-12 W/m² and the SPL to 2*10^-5 Pascal. But for working with digital sound in the computer, this does not make any sense. What you will hear from the sound you produce in the computer just depends on the amplification, the speakers, and so on. It has nothing, per se, to do with the level in your audio editor or in Csound. Nevertheless, there is a rational reference level for the amplitudes. In a digital system, there is a strict limit for the maximum number you can store as an amplitude. This maximum possible level is called 0 dB. Each program connects this maximum possible amplitude with a number. Usually it is '1', which is a good choice, because you know that everything above 1 is clipping, and you have a handy relation for lower values. But actually this value is nothing but a setting, and in Csound you are free to set it to any value you like via the 0dbfs statement. Usually you should use this statement in the orchestra header:
0dbfs = 1
This means: "Set the level for zero dB as full scale to 1 as reference value." Note that for historical reasons the default value in Csound is not 1 but 32768. So you must have this 0dbfs = 1 statement in your header if you want Csound to use the convention that virtually all other audio applications follow.
Let's see some practical consequences now of what we have discussed so far. One major point is: for getting smooth transitions between intensity levels you must not use a simple linear transition of the amplitudes, but a linear transition of the dB equivalent. The following example shows a linear rise of the amplitudes from 0 to 1, and then a linear rise of the dB's from -80 to 0 dB, both over 10 seconds. EXAMPLE 01C01.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ;linear amplitude rise
kamp line   0, p3, 1     ;amp rise 0->1
asig oscils 1, 1000, 0   ;1000 Hz sine
aout =      asig * kamp
     outs   aout, aout
endin

instr 2 ;linear rise of dB
kdb  line   -80, p3, 0   ;dB rise -80 -> 0
asig oscils 1, 1000, 0   ;1000 Hz sine
kamp =      ampdb(kdb)   ;transformation dB -> amp
aout =      asig * kamp
     outs   aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 10
i 2 11 10
</CsScore>
</CsoundSynthesizer>
You will hear how fast the sound intensity increases at the first note with direct amplitude rise, and then stays nearly constant. At the second note you should hear a very smooth and constant increment of intensity.
RMS MEASUREMENT
Sound intensity depends on many factors. One of the most important is the effective mean of the amplitudes in a certain time span. This is called the Root Mean Square (RMS) value. To calculate it, you (1) calculate the squared amplitudes of N samples, (2) divide the result by N to obtain the mean, and (3) take the square root of that mean. Let's look at a simple example and then see how the rms value can be obtained in Csound. Assuming we have a sine wave which consists of 16 samples, we get these amplitudes:
Squaring these amplitudes and taking the mean gives: (0+0.146+0.5+0.854+1+0.854+0.5+0.146+0+0.146+0.5+0.854+1+0.854+0.5+0.146)/16 = 8/16 = 0.5 (the values in this sum are the squared amplitudes). The resulting RMS value is the square root of this mean: √0.5 ≈ 0.707. The rms opcode in Csound calculates the RMS power in a certain time span and smoothes the values in time according to the ihp parameter: the higher this value (the default is 10 Hz), the snappier the measurement, and vice versa. This opcode can be used to implement a self-regulating system, in which the rms opcode prevents the system from exploding: each time the rms value exceeds a certain value, the amount of feedback is reduced. This is an example1: EXAMPLE 01C02.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by Martin Neukom, adapted by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 0, 0, 2^10, 10, 1 ;table with a sine wave

instr 1
a3    init    0
kamp  linseg  0, 1.5, 0.2, 1.5, 0 ;envelope for initial input
asnd  poscil  kamp, 440, giSine   ;initial input
 if p4 == 1 then ;choose between two sines ...
adel1 poscil  0.0523, 0.023, giSine
adel2 poscil  0.073, 0.023, giSine, .5
 else ;or a random movement for the delay lines
adel1 randi   0.05, 0.1, 2
adel2 randi   0.08, 0.2, 2
 endif
a0    delayr  1                       ;delay line of 1 second
a1    deltapi adel1 + 0.1             ;first reading
a2    deltapi adel2 + 0.1             ;second reading
krms  rms     a3                      ;rms measurement
      delayw  asnd + exp(-krms) * a3  ;feedback depending on rms
a3    reson   -(a1+a2), 3000, 7000, 2 ;calculate a3
aout  linen   a1/3, 1, p3, 1          ;apply fade in and fade out
      outs    aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 60 1  ;two sine movements of delay with feedback
i 1 61 . 2  ;two random movements of delay with feedback
</CsScore>
</CsoundSynthesizer>
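As a simpler illustration of the rms opcode on its own (a sketch, not one of the manual's numbered examples), the following instrument prints the RMS of a 440 Hz sine with amplitude 0.5; the printed value settles near 0.5 * 0.707 ≈ 0.35:

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1 ;sine table

instr 1
asig poscil 0.5, 440, giSine ;sine with amplitude 0.5
krms rms    asig             ;running RMS measurement
     printk 0.5, krms        ;print the RMS value twice per second
     outs   asig*0.2, asig*0.2
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>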
FLETCHER-MUNSON CURVES
Human hearing covers a range roughly between 20 and 20000 Hz. But inside this range, our hearing is not equally sensitive. The most sensitive region is around 3000 Hz. If you come to the upper or lower border of the range, you need more intensity to perceive a sound as "equally loud". These curves of equal loudness are usually called "Fletcher-Munson curves" after the 1933 paper by H. Fletcher and W. A. Munson. They look like this:
Try the following test. In the first 5 seconds you will hear a tone of 3000 Hz. Adjust the level of your amplifier to the lowest possible point at which you can still hear the tone. Then you will hear a tone whose frequency starts at 20 Hertz and ends at 20000 Hertz, over 20 seconds. Try to move the fader or knob of your amplifier in exactly such a way that you can still hear something, but as softly as possible. The movement of your fader should roughly resemble the lowest Fletcher-Munson curve: starting relatively high, going down and down until 3000 Hertz, and then up again. (As always, this test depends on your speaker hardware. If your speakers do not reproduce the lower frequencies properly, you will not hear anything in the bass region.) EXAMPLE 01C03.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 0, 0, 2^10, 10, 1 ;table with a sine wave

instr 1
kfreq expseg p4, p3, p5
      printk 1, kfreq           ;prints the frequencies once a second
asin  poscil .2, kfreq, giSine
aout  linen  asin, .01, p3, .01
      outs   aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 5 3000 3000   ;5 seconds at 3000 Hz
i 1 6 20 20 20000   ;20 seconds gliding from 20 Hz to 20000 Hz
</CsScore>
</CsoundSynthesizer>
It is very important to bear in mind that the perceived loudness depends much on the frequencies. You must know that putting out a sine of 30 Hz with a certain amplitude is totally different from a sine of 3000 Hz with the same amplitude - the latter will sound much louder.
1. cf. Martin Neukom, Signale Systeme Klangsynthese, Zürich 2003, p. 383

QUICK START
8. MAKE CSOUND RUN
9. CSOUND SYNTAX
10. CONFIGURING MIDI
11. LIVE AUDIO
12. RENDERING TO FILE
8. MAKE CSOUND RUN

Windows
Windows installers are the ones ending in .exe. Look for the latest version of Csound and find a file which should be called something like Csound5.11.1-gnu-win32-f.exe. The important thing to note is the final letter of the installer name, which can be "d" or "f". This specifies the computation precision of the Csound engine. Float precision (32-bit float) is marked with "f" and double precision (64-bit float) is marked with "d". This is important to bear in mind, as a frontend which works with the "floats" version will not run if you have the "doubles" version installed. You should usually install the "floats" version as that is the one most frontends are currently using. (Note: more recent versions of the pre-built Windows installer have only been released in the 'doubles' version.) After you have downloaded the installer, just run it and follow the instructions. When you are finished, you will find a Csound folder in your start menu containing Csound utilities and the QuteCsound frontend.
Mac OS X
The Mac OS X installers are the files ending in .dmg. Look for the latest version of Csound for your particular system; for example, a Universal binary for 10.5 will be called something like csound5.12.4-OSX10.5-Universal.dmg. When you double-click the downloaded file, you will have a disk image on your desktop, with the Csound installer, QuteCsound and a readme file. Double-click the installer and follow the instructions. Csound and the basic Csound utilities will be installed. To install the QuteCsound frontend, you only need to move it to your Applications folder.
Linux

Csound is available from the official package repositories for many distributions like Debian, Ubuntu, Fedora, Archlinux and Gentoo. If there are no binary packages for your platform, or you need a more recent version, you can get the source package from the SourceForge page and build from source. You can find detailed information in the Building Csound Manual Page.
INSTALL PROBLEMS?
If, for any reason, you can't find the QuteCsound frontend on your system after install, or if you want to install the most recent version of QuteCsound, or if you prefer another frontend altogether: see the CSOUND FRONTENDS section of this manual for further information. If you have any install problems, consider joining the Csound Mailing List to report your issues, or write a mail to one of the maintainers (see ON THIS RELEASE).
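Running Csound from the command line requires a .csd file to play. A minimal HelloWorld.csd producing a 440 Hz sine tone could look like this (a sketch; any similar file will do, and the file name is simply the one used in the steps below):

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aSin oscils 0dbfs/4, 440, 0 ;440 Hz sine at a quarter of full scale
     outs   aSin, aSin
endin
</CsInstruments>
<CsScore>
i 1 0 2
</CsScore>
</CsoundSynthesizer>

1. Save this code to a file called HelloWorld.csd.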
2. Open the Terminal / Prompt / Console.
3. Type: csound /full/path/HelloWorld.csd, where /full/path/HelloWorld.csd is the complete path to your file. You can also execute this file by just typing csound, then dragging the file into the terminal window, and then hitting return.

You should hear a 440 Hz tone.
9. CSOUND SYNTAX
ORCHESTRA AND SCORE
In Csound, you must define "instruments", which are units which "do things", for instance playing a sine wave. These instruments must be called or "turned on" by a "score". The Csound "score" is a list of events which describe how the instruments are to be played in time. It can be thought of as a timeline in text. A Csound instrument is contained within an Instrument Block, which starts with the keyword instr and ends with the keyword endin. All instruments are given a number (or a name) to identify them.
instr 1
... instrument instructions come here...
endin
Score events in Csound are individual text lines, which can turn on instruments for a certain time. For example, to turn on instrument 1, at time 0, for 2 seconds you will use:
i 1 0 2
Comments, which are lines of text that Csound will ignore, are started with the ";" character. Multi-line comments can be made by encasing them between "/*" and "*/".
OPCODES
"Opcodes" or "Unit generators" are the basic building blocks of Csound. Opcodes can do many things like produce oscillating signals, filter signals, perform mathematical functions or even turn on and off instruments. Opcodes, depending on their function, will take inputs and outputs. Each input or output is called, in programming terms, an "argument". Opcodes always take input arguments on the right and output their results on the left, like this:
output OPCODE input1, input2, input3, .., inputN
For example the oscils opcode has three inputs: amplitude, frequency and phase, and produces a sine wave signal:
aSin oscils 0dbfs/4, 440, 0
In this case, a 440 Hertz oscillation starting at phase 0 radians, with an amplitude of 0dbfs/4 (a quarter of 0 dB as full scale) will be created and its output will be stored in a container called aSin. The order of the arguments is important: the first input to oscils will always be amplitude, the second, frequency and the third, phase. Many opcodes include optional input arguments and occasionally optional output arguments. These will always be placed after the essential arguments. In the Csound Manual documentation they are indicated using square brackets "[]". If optional input arguments are omitted they are replaced with the default values indicated in the Csound Manual. The addition of optional output arguments normally initiates a different mode of that opcode: for example, a stereo as opposed to mono version of the opcode.
VARIABLES
A "variable" is a named container. It is a place to store things like signals or values from where they can be recalled by using their name. In Csound there are various types of variables. The easiest way to deal with variables when getting to know Csound is to imagine them as cables. If you want to patch this together: Oscillator->Filter->Output, you need two cables, one going out from the oscillator into the filter and one from the filter to the output. The cables carry audio signals, which are variables beginning with the letter "a".
aSource   buzz       0.8, 200, 10, 1
aFiltered moogladder aSource, 400, 0.8
          out        aFiltered
In the example above, the buzz opcode produces a complex waveform as signal aSource. This signal is fed into the moogladder opcode, which in turn produces the signal aFiltered. The out opcode takes this signal, and sends it to the output whether that be to the speakers or to a rendered file. Other common variable types are "k" variables which store control signals, which are updated less frequently than audio signals, and "i" variables which are constants within each instrument note. You can find more information about variable types here in this manual.
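As a small sketch of how the three variable types typically appear together in one instrument (the names and values here are chosen freely for illustration):

instr 1
iAtt =      0.1                    ;i-variable: a constant, set once per note
kEnv linseg 0, iAtt, 1, p3-iAtt, 0 ;k-variable: control signal (an envelope)
aSig oscils 0.3, 440, 0            ;a-variable: audio signal (440 Hz sine)
aOut =      aSig * kEnv            ;apply the envelope to the audio signal
     out    aOut
endin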
QuteCsound
In QuteCsound you can find the Csound Manual in the Help Menu. You can quickly go to a particular opcode entry in the manual by putting the cursor on the opcode and pressing Shift+F1.
10. CONFIGURING MIDI

After selecting the RT MIDI module from a frontend or the command line, you need to select the MIDI devices for input and output. These are set using the flags -M and -Q respectively, followed by the number of the interface. You can usually use:
-M999
to get a performance error with a listing of the available interfaces. For the PortMidi module (and others like ALSA), you can specify no number to use the default MIDI interface, or the 'a' character to use all devices. This will even work when no MIDI devices are present.
-Ma
So if you want MIDI input using the portmidi module, using device 2 for input and device 1 for output, your <CsOptions> section should contain:
-+rtmidi=portmidi -M2 -Q1
There is a special "virtual" RT MIDI module which enables MIDI input from a virtual keyboard. To enable it, you can use:
-+rtmidi=virtual -M0
Linux
On Linux systems, you might also have an "alsa" module to use the ALSA raw MIDI interface. This is different from the more common ALSA sequencer interface and will typically require the snd-virmidi kernel module to be loaded.
OS X
On OS X you may have a "coremidi" module available.
Windows
On Windows, you may have a "winmme" MIDI module.
As with Audio I/O, you can set the MIDI preferences in the configuration dialog. In it you will find a selection box for the RT MIDI module, and text boxes for MIDI input and output devices.
In the following example, a simple instrument which plays a sine wave is defined as instrument 1. There are no score note events, so no sound will be produced unless a MIDI note is received on channel 1. EXAMPLE 02C01.csd
<CsoundSynthesizer>
<CsOptions>
-+rtmidi=portmidi -Ma -odac
</CsOptions>
<CsInstruments>
;Example by Andrés Cabrera
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

       massign 0, 1              ;assign all MIDI channels to instrument 1
giSine ftgen   0, 0, 2^10, 10, 1 ;a function table with a sine wave

instr 1
iCps cpsmidi                    ;get the frequency from the key pressed
iAmp ampmidi 0dbfs * 0.3        ;get the amplitude
aOut poscil  iAmp, iCps, giSine ;generate a sine tone
     outs    aOut, aOut         ;write it to the output
endin
</CsInstruments>
<CsScore>
f 0 3600 ;keep Csound running for an hour to receive MIDI
</CsScore>
</CsoundSynthesizer>
Note that Csound has unlimited polyphony in this way: each key pressed starts a new instance of instrument 1, and you can have any number of instrument instances at the same time.
The input and output devices will be listed separately. Specify your input device with the -iadc flag and the number of your input device, and your output device with the -odac flag and the number of your output device. For instance, if you select the "XYZ" device from the list above for both input and output, you include:
-iadc2 -odac3
in the <CsOptions> section of your .csd file. The RT output module can be set with the -+rtaudio flag. If you don't use this flag, the PortAudio driver will be used. Other possible drivers are jack and alsa (Linux), mme (Windows) or CoreAudio (Mac). So, this sets your audio driver to mme instead of PortAudio:
-+rtaudio=mme
The other factor which affects Csound's live performance is the ksmps value which is set in the header of the <CsInstruments> section. By this value, you define how many samples are processed every Csound control cycle. Try your realtime performance with -B512, -b128 and ksmps=32. With a software buffer of 128 samples, a hardware buffer of 512 and a sample rate of 44100 you will have around 12ms latency, which is usable for live keyboard playing. If you have problems with either the latency or the performance, tweak the values as described here.
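As a rough sketch of how these settings fit together (the driver name and device numbers are only placeholders; use the ones your own system lists), the relevant lines of a .csd could look like this:

<CsOptions>
-+rtaudio=alsa -iadc2 -odac3 -B512 -b128
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 32 ;32 samples per control cycle: low latency at moderate CPU cost
nchnls = 2
0dbfs  = 1
</CsInstruments>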
QuteCsound
To define the audio hardware used for realtime performance, open the configuration dialog. In the "Run" Tab, you can choose your audio interface, and the preferred driver. You can select input and output devices from a list if you press the buttons to the right of the text boxes for input and output names. Software and hardware buffer sizes can be set at the top of this dialogue box.
kInLev    downsamp  aIn ;convert audio input to control signal
          printk    .2, abs(kInLev)
;make modulator frequency oscillate 200 to 1000 Hz
kModFreq  poscil    400, 1/2, giSine
kModFreq  =         kModFreq+600
aMod      poscil    1, kModFreq, giSine ;modulator signal
aRM       =         aIn * aMod ;ring modulation
          outch     1, aRM, 2, aRM ;output to channel 1 and 2
endin
</CsInstruments>
<CsScore>
i 1 0 3600
</CsScore>
</CsoundSynthesizer>
Live Audio is frequently used with live devices like widgets or MIDI. In QuteCsound, you can find several examples in Examples -> Getting Started -> Realtime Interaction.
RENDERING TO FILE
Save the following code as Render.csd: EXAMPLE 02E01.csd
<CsoundSynthesizer>
<CsOptions>
-o Render.wav
</CsOptions>
<CsInstruments>
;Example by Alex Hofmann
instr 1
aSin      oscils    0dbfs/4, 440, 0
          out       aSin
endin
</CsInstruments>
<CsScore>
i 1 0 1
e
</CsScore>
</CsoundSynthesizer>
Now, because you changed the -o flag in the <CsOptions> from "-o dac" to "-o filename", the audio output is no longer written in realtime to your audio device, but instead to a file. The file will be rendered to the default directory (usually the user home directory). This file can be opened and played in any audio player or editor, e.g. Audacity. (By default, Csound is a non-realtime program. So if no command line options are given, it will always render the csd to a file called test.wav, and you will hear nothing in realtime.) The -o flag can also be used to write the output file to a certain directory. Something like this for Windows ...
<CsOptions>
-o c:/music/samples/Render.wav
</CsOptions>
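A corresponding sketch for Linux or OS X would simply use a path in that system's format (the folder shown here is just an illustration; any writable directory will do):

<CsOptions>
-o /home/me/samples/Render.wav
</CsOptions>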
Rendering Options
The internal rendering of audio data in Csound is done with 32-bit floating point numbers (or even with 64-bit numbers for the "double" version). Depending on your needs, you should decide the precision of your rendered output file:
If you want to render 32-bit floats, use the option flag -f. If you want to render 24-bit, use the flag -3. If you want to render 16-bit, use the flag -s (or nothing, because this is also the default in Csound). To make sure that the header of your soundfile will be written correctly, you should use the -W flag for a WAV file, or the -A flag for an AIFF file. So these options will render the file "Wow.wav" as a WAV file with 24-bit accuracy:
<CsOptions> -o Wow.wav -W -3 </CsOptions>
instr 1
kFreq     randomi
aSig      poscil
kPan      randomi
aL, aR    pan2
          outs
          fout
endin
</CsInstruments>
<CsScore>
i 1 0 10
e
</CsScore>
</CsoundSynthesizer>
QuteCsound
All the options which are described in this chapter can be handled very easily in QuteCsound: Rendering to file is done simply by clicking the "Render" button, or choosing "Control -> Render to File" in the menu. To set the file destination and file type, you can make your own settings in "QuteCsound Configuration" under the tab "Run -> File (offline render)". The default is a 16-bit .wav file. To record a live performance, just click the "Record" button. You will find a file with the same name as your .csd file, and a number appended for each record task, in the same folder as your .csd file.

CSOUND LANGUAGE
13. INITIALIZATION AND PERFORMANCE PASS
14. LOCAL AND GLOBAL VARIABLES
15. CONTROL STRUCTURES
16. FUNCTION TABLES
A score line such as "i 1 0 3" calls instrument 1, starting at time 0, for 3 seconds. It is very important to understand that such a call consists of two different stages: the initialization and the performance pass. At first, Csound initializes all the variables which begin with an i or a gi. This initialization pass is done just once. After this, the actual performance begins. During this performance, Csound calculates all the time-varying values in the orchestra again and again. This is called the performance pass, and each of these calculations is called a control cycle (also abbreviated as k-cycle or k-loop). The time for each control cycle depends on the ksmps constant in the orchestra header. If ksmps=10 (which is the default), the performance pass consists of 10 samples. If your sample rate is 44100, with ksmps=10 you will have 4410 control cycles per second (kr=4410), and each of them has a duration of 1/4410 = 0.000227 seconds. On each control cycle, all the variables starting with k, gk, a and ga are updated (see the next chapter about variables for more explanations). This is an example instrument, containing i-, k- and a-variables: EXAMPLE 03A01.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1

instr 1
iAmp      =         p4 ;amplitude taken from the 4th parameter of the score line
iFreq     =         p5 ;frequency taken from the 5th parameter
kPan      line      0, p3, 1 ;move from 0 to 1 in the duration of this instrument call (p3)
aNote     oscils    iAmp, iFreq, 0 ;create an audio signal
aL, aR    pan2      aNote, kPan ;let the signal move from left to right
          outs      aL, aR ;write it to the output
endin
</CsInstruments>
<CsScore>
i 1 0 3 0.2 443
</CsScore>
</CsoundSynthesizer>
As ksmps=441, each control cycle is 0.01 seconds long (441/44100). So this happens when the instrument call is performed:
Here is another simple example which shows the internal loop at each k-cycle. As we print out the value at each control cycle, ksmps is very high here, so that each k-pass takes 0.1 seconds. The init opcode can be used to set a k-variable to a certain value first (at the init-pass), otherwise it will have the default value of zero until it is assigned something else during the first k-cycle. EXAMPLE 03A02.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
kcount    init      0; set kcount to 0 first
kcount    =         kcount + 1; increase at each k-pass
          printk    0, kcount; print the value
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
Your output should contain the lines:
 i   1 time     0.10000:     1.00000
 i   1 time     0.20000:     2.00000
 i   1 time     0.30000:     3.00000
 i   1 time     0.40000:     4.00000
 i   1 time     0.50000:     5.00000
 i   1 time     0.60000:     6.00000
 i   1 time     0.70000:     7.00000
 i   1 time     0.80000:     8.00000
 i   1 time     0.90000:     9.00000
 i   1 time     1.00000:    10.00000
Try changing the ksmps value from 4410 to 44100 and to 2205 and observe the difference.
REINITIALIZATION
If you try the example above with i-variables, you will have no success, because the i-variable is calculated just once:
EXAMPLE 03A03.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
icount    init      0; set icount to 0 first
icount    =         icount + 1; increase
          print     icount; print the value
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
The printout is:
instr 1:  icount = 1.000
Nevertheless it is possible to refresh even an i-rate variable in Csound. This is done with the reinit opcode. You must mark a section by a label (any name followed by a colon). Then the reinit statement will cause the i-variable to refresh. Use rireturn to end the reinit section. EXAMPLE 03A04.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
icount    init      0; set icount to 0 first
new:
icount    =         icount + 1; increase
          print     icount; print the value
          reinit    new; reinit the section each k-pass
          rireturn
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
This now prints:
instr 1:  icount = 1.000
instr 1:  icount = 2.000
instr 1:  icount = 3.000
instr 1:  icount = 4.000
instr 1:  icount = 5.000
instr 1:  icount = 6.000
instr 1:  icount = 7.000
instr 1:  icount = 8.000
instr 1:  icount = 9.000
instr 1:  icount = 10.000
instr 1:  icount = 11.000
ORDER OF CALCULATION
Sometimes it is very important to observe the order in which the instruments of a Csound orchestra are evaluated. This order is given by the instrument numbers. So, if you want to use, during the same performance pass, a value in instrument 10 which is generated by another instrument, that other instrument must not have the number 11 or higher. In the following example, instrument 10 first uses a value generated by instrument 1, then a value generated by instrument 100. EXAMPLE 03A05.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410

instr 1
gkcount   init      0 ;set gkcount to 0 first
gkcount   =         gkcount + 1 ;increase
endin

instr 10
          printk    0, gkcount ;print the value
endin

instr 100
gkcount   init      0 ;set gkcount to 0 first
gkcount   =         gkcount + 1 ;increase
endin
</CsInstruments>
<CsScore>
;first i1 and i10
i 1 0 1
i 10 0 1
;then i100 and i10
i 100 1 1
i 10 1 1
</CsScore>
</CsoundSynthesizer>
The output shows the difference:
new alloc for instr 1:
new alloc for instr 10:
 i  10 time     0.10000:     1.00000
 i  10 time     0.20000:     2.00000
 i  10 time     0.30000:     3.00000
 i  10 time     0.40000:     4.00000
 i  10 time     0.50000:     5.00000
 i  10 time     0.60000:     6.00000
 i  10 time     0.70000:     7.00000
 i  10 time     0.80000:     8.00000
 i  10 time     0.90000:     9.00000
 i  10 time     1.00000:    10.00000
B  0.000 ..  1.000 T  1.000 TT  1.000 M:      0.0
new alloc for instr 100:
 i  10 time     1.10000:     0.00000
 i  10 time     1.20000:     1.00000
 i  10 time     1.30000:     2.00000
 i  10 time     1.40000:     3.00000
 i  10 time     1.50000:     4.00000
 i  10 time     1.60000:     5.00000
 i  10 time     1.70000:     6.00000
 i  10 time     1.80000:     7.00000
 i  10 time     1.90000:     8.00000
 i  10 time     2.00000:     9.00000
B  1.000 ..  2.000 T  2.000 TT  2.000 M:      0.0
For instance, the print opcode just prints variables which are updated at each initialization pass ("i-time" or "i-rate"). If you want to print a variable which is updated at each control cycle ("k-rate" or "k-time"), you need its counterpart printk. (As the performance pass is usually updated some thousands of times per second, you have an additional parameter in printk, telling Csound how often you want to print out the k-values.) So, some opcodes are just for i-rate variables, like filelen or ftgen. Others are just for k-rate variables like metro or max_k. Many opcodes have variants for either i-rate variables or k-rate variables, like printf_i and printf, sprintf and sprintfk, strindex and strindexk. Most of the Csound opcodes are able to work either at i-time or at k-time or at audio-rate, but you have to think carefully about what you need, as the behaviour will be very different if you choose the i-, k- or a-variant of an opcode. For example, the random opcode can work at all three rates:
ires      random    imin, imax   : works at "i-time"
kres      random    kmin, kmax   : works at "k-rate"
ares      random    kmin, kmax   : works at "audio-rate"
If you use the i-rate random generator, you will get one value for each note. For instance, if you want to have a different pitch for each note you are generating, you will use this one. If you use the k-rate random generator, you will get one new value on every control cycle. If your sample rate is 44100 and your ksmps=10, you will get 4410 new values per second! If you take this as the pitch value for a note, you will hear nothing but noisy jumping. If you want to have a moving pitch, you can use the randomi variant of the k-rate random generator, which can reduce the number of new values per second, and interpolate between them. If you use the a-rate random generator, you will get as many new values per second as your sample rate. If you use it in the range of your 0 dB amplitude, you produce white noise. EXAMPLE 03A06.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
0dbfs = 1
nchnls = 2
          seed      0 ;each time different seed
giSine    ftgen     0, 0, 2^10, 10, 1 ;sine table

instr 1 ;i-rate random
iPch      random    300, 600
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, iPch, giSine
          outs      aSine, aSine
endin

instr 2 ;k-rate random: noisy
kPch      random    300, 600
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, kPch, giSine
          outs      aSine, aSine
endin

instr 3 ;k-rate random with interpolation: sliding pitch
kPch      randomi   300, 600, 3
aAmp      linseg    .5, p3, 0
aSine     poscil    aAmp, kPch, giSine
          outs      aSine, aSine
endin

instr 4 ;a-rate random: white noise
aNoise    random    -.1, .1
          outs      aNoise, aNoise
endin
</CsInstruments>
<CsScore>
i 1 0 .5
Csound's clock is the control cycle. The number of samples in one control cycle - given by the ksmps value - is the smallest possible "tick" in Csound at k-rate. If your sample rate is 44100, and you have 4410 samples in one control cycle (ksmps=4410), you will not be able to start a k-event faster than every 1/10 second, because there is no k-time for Csound "between" two control cycles. Try the following example with larger and smaller ksmps values: EXAMPLE 03A08.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; try 44100 or 2205 instead

instr 1; prints the time once in each control cycle
kTimek    timek
kTimes    times
          printks   "Number of control cycles = %d%n", 0, kTimek
          printks   "Time = %f%n%n", 0, kTimes
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
Consider a typical ksmps value of 32. At a sample rate of 44100, a single tick will be less than a millisecond, which should be sufficient in most situations. If you need a finer time resolution, just decrease the ksmps value. The cost of this smaller tick size is reduced computational efficiency. So your choice depends on the situation, and usually a ksmps of 32 represents a good tradeoff. Of course the precision of writing samples (at a-rate) is in no way affected by the size of the internal k-ticks. Samples are indeed written "in between" control cycles, because they are vectors. So it can be necessary to use a-time variables instead of k-time variables in certain situations. In the following example, the ksmps value is rather high (128). If you use a k-rate variable for a fast moving envelope, you will hear a certain roughness (instrument 1), sometimes referred to as 'zipper' noise. If you use an a-rate variable instead, you will have a much cleaner sound (instr 2). EXAMPLE 03A09.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 128 ;increase or decrease to hear the difference more or less evident
nchnls = 2
0dbfs = 1

instr 1 ;envelope at k-time
aSine     oscils    .5, 800, 0
kEnv      transeg   0, .1, 5, 1, .1, -5, 0
aOut      =         aSine * kEnv
          outs      aOut, aOut
endin

instr 2 ;envelope at a-time
aSine     oscils    .5, 800, 0
aEnv      transeg   0, .1, 5, 1, .1, -5, 0
aOut      =         aSine * aEnv
          outs      aOut, aOut
endin
</CsInstruments>
<CsScore>
r 5 ;repeat the following line 5 times
i 1 0 1
s ;end of section
r 5
i 2 0 1
e
</CsScore>
</CsoundSynthesizer>
instr 1; i-time variables
iVar1     =         p2; second parameter in the score
iVar2     random    0, 10; random value between 0 and 10
iVar      =         iVar1 + iVar2; do any math at i-rate
          print     iVar1, iVar2, iVar
endin

instr 2; k-time variables
kVar1     line      0, p3, 10; moves from 0 to 10 in p3
kVar2     random    0, 10; new random value each control-cycle
kVar      =         kVar1 + kVar2; do any math at k-rate
          printks   "kVar1 = %.3f, kVar2 = %.3f, kVar = %.3f%n", 0.1, kVar1, kVar2, kVar ;print each 0.1 seconds
endin

instr 3; a-variables
aVar1     oscils    .2, 400, 0; first audio signal: sine
aVar2     rand      1; second audio signal: noise
aVar3     butbp     aVar2, 1200, 12; third audio signal: noise filtered
aVar      =         aVar1 + aVar3; audio variables can also be added
          outs      aVar, aVar; write to sound card
endin

instr 4; S-variables
iMyVar    random    0, 10; one random value per note
kMyVar    random    0, 10; one random value per each control-cycle
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", iMyVar
          printf_i  "%s", 1, SMyVar1
;S-variable updates at each control-cycle
          printks   "This string is updated at k-time: kMyVar = %.3f\n", .1, kMyVar
endin

instr 5; f-variables
aSig      rand      .2; audio signal (noise)
; f-signal by FFT-analyzing the audio-signal
fSig1     pvsanal   aSig, 1024, 256, 1024, 1
; second f-signal (spectral bandpass filter)
fSig2     pvsbandp  fSig1, 350, 400, 400, 450
aOut      pvsynth   fSig2; change back to audio signal
          outs      aOut*20, aOut*20
endin
</CsInstruments>
<CsScore>
; p1 p2   p3
i 1  0    0.1
i 1  0.1  0.1
i 2  1    1
i 3  2    1
i 4  3    1
i 5  4    1
</CsScore>
</CsoundSynthesizer>
You can think of variables as named connectors between opcodes. You can connect the output from an opcode to the input of another. The type of connector (audio, control, etc.) is indicated by the first letter of its name. For a more detailed discussion, see the article An Overview Of Csound Variable Types by Andrés Cabrera in the Csound Journal, and the page about Types, Constants and Variables in the Canonical Csound Manual.
LOCAL SCOPE
The scope of these variables is usually the instrument in which they are defined. They are local variables. In the following example, the variables in instrument 1 and instrument 2 have the same names, but different values. EXAMPLE 03B02.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

instr 1
;i-variable
iMyVar    init      0
iMyVar    =         iMyVar + 1
          print     iMyVar
;k-variable
kMyVar    init      0
kMyVar    =         kMyVar + 1
          printk    0, kMyVar
;a-variable
aMyVar    oscils    .2, 400, 0
          outs      aMyVar, aMyVar
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", i(kMyVar)
          printf    "%s", kMyVar, SMyVar1
;S-variable updated at each control-cycle
SMyVar2   sprintfk  "This string is updated at k-time: kMyVar = %d\n", kMyVar
          printf    "%s", kMyVar, SMyVar2
endin

instr 2
;i-variable
iMyVar    init      100
iMyVar    =         iMyVar + 1
          print     iMyVar
;k-variable
kMyVar    init      100
kMyVar    =         kMyVar + 1
          printk    0, kMyVar
;a-variable
aMyVar    oscils    .3, 600, 0
          outs      aMyVar, aMyVar
;S-variable updated just at init-time
SMyVar1   sprintf   "This string is updated just at init-time: kMyVar = %d\n", i(kMyVar)
          printf    "%s", kMyVar, SMyVar1
;S-variable updated at each control-cycle
SMyVar2   sprintfk  "This string is updated at k-time: kMyVar = %d\n", kMyVar
          printf    "%s", kMyVar, SMyVar2
endin
</CsInstruments>
<CsScore>
i 1 0 .3
i 2 1 .3
</CsScore>
</CsoundSynthesizer>
This is the output (first the output at init-time by the print opcode, then at each k-cycle the output of printk and the two printf opcodes):
new alloc for instr 1:
instr 1:  iMyVar = 1.000
 i   1 time     0.10000:     1.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 1
 i   1 time     0.20000:     2.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 2
 i   1 time     0.30000:     3.00000
This string is updated just at init-time: kMyVar = 0
This string is updated at k-time: kMyVar = 3
B  0.000 ..  1.000 T  1.000 TT  1.000 M:    0.20000    0.20000
new alloc for instr 2:
instr 2:  iMyVar = 101.000
 i   2 time     1.10000:   101.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 101
 i   2 time     1.20000:   102.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 102
 i   2 time     1.30000:   103.00000
This string is updated just at init-time: kMyVar = 100
This string is updated at k-time: kMyVar = 103
B  1.000 ..  1.300 T  1.300 TT  1.300 M:    0.29998    0.29998
GLOBAL SCOPE
If you need variables which are recognized beyond the scope of an instrument, you must define them as global. This is done by prefixing the character g before the types i, k, a or S. See the following example: EXAMPLE 03B03.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
;global scalar variables can now be initialized in the header
giMyVar   init      0
gkMyVar   init      0
instr 1
;global i-variable
giMyVar   =         giMyVar + 1
          print     giMyVar
;global k-variable
gkMyVar   =         gkMyVar + 1
          printk    0, gkMyVar
;global S-variable updated just at init-time
gSMyVar1  sprintf   "This string is updated just at init-time: gkMyVar = %d\n", i(gkMyVar)
          printf    "%s", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle
gSMyVar2  sprintfk  "This string is updated at k-time: gkMyVar = %d\n", gkMyVar
          printf    "%s", gkMyVar, gSMyVar2
endin

instr 2
;global i-variable, gets value from instr 1
giMyVar   =         giMyVar + 1
          print     giMyVar
;global k-variable, gets value from instr 1
gkMyVar   =         gkMyVar + 1
          printk    0, gkMyVar
;global S-variable updated just at init-time, gets value from instr 1
          printf    "Instr 1 tells: '%s'\n", gkMyVar, gSMyVar1
;global S-variable updated at each control-cycle, gets value from instr 1
          printf    "Instr 1 tells: '%s'\n\n", gkMyVar, gSMyVar2
endin
</CsInstruments>
<CsScore>
i 1 0 .3
i 2 0 .3
</CsScore>
</CsoundSynthesizer>
The output shows the global scope, as instrument 2 uses the values which have been changed by instrument 1 in the same control cycle:
new alloc for instr 1:
instr 1:  giMyVar = 1.000
new alloc for instr 2:
instr 2:  giMyVar = 2.000
 i   1 time     0.10000:     1.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 1
 i   2 time     0.10000:     2.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 1'
 i   1 time     0.20000:     3.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 3
 i   2 time     0.20000:     4.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 3'
 i   1 time     0.30000:     5.00000
This string is updated just at init-time: gkMyVar = 0
This string is updated at k-time: gkMyVar = 5
 i   2 time     0.30000:     6.00000
Instr 1 tells: 'This string is updated just at init-time: gkMyVar = 0'
Instr 1 tells: 'This string is updated at k-time: gkMyVar = 5'
The next few examples go into a bit more detail. If you just want to see the result (= a global audio variable usually must be cleared), you can skip them and go straight to the last example of this section. First, it should be understood that Csound treats a global audio variable just like a local audio signal if it is used in the same way: EXAMPLE 03B04.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; produces a 400 Hz sine
gaSig     oscils    .1, 400, 0
endin

instr 2; outputs gaSig
          outs      gaSig, gaSig
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
</CsScore>
</CsoundSynthesizer>
Of course, there is absolutely no need to use a global variable in this case. If you do, you risk your audio being overwritten by an instrument with a higher number that uses the same variable name. In the following example, you will just hear a 600 Hz sine tone, because the 400 Hz sine of instrument 1 is overwritten by the 600 Hz sine of instrument 2: EXAMPLE 03B05.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1; produces a 400 Hz sine
gaSig     oscils    .1, 400, 0
endin

instr 2; overwrites gaSig with 600 Hz sine
gaSig     oscils    .1, 600, 0
endin

instr 3; outputs gaSig
          outs      gaSig, gaSig
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 2 0 3
i 3 0 3
</CsScore>
</CsoundSynthesizer>
In general, you will use a global audio variable like a bus to which several local audio signals can be added. It is this addition of a global audio signal to its previous state which can cause some trouble. Let's first see a simple example of a control signal to understand what is happening: EXAMPLE 03B06.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1

instr 1
kSum      init      0; sum is zero at init pass
kAdd      =         1; control signal to add
kSum      =         kSum + kAdd; new sum in each k-cycle
          printk    0, kSum; print the sum
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
In this case, the "sum bus" kSum increases at each control cycle by 1, because it adds the kAdd signal (which is always 1) in each k-pass to its previous state. It is no different if this is done by a local k-signal, like here, or by a global k-signal, like in the next example: EXAMPLE 03B07.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing
nchnls = 2
0dbfs = 1
gkSum     init      0; sum is zero at init

instr 1
gkAdd     =         1; control signal to add
endin

instr 2
gkSum     =         gkSum + gkAdd; new sum in each k-cycle
          printk    0, gkSum; print the sum
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
What happens now when we work with audio signals instead of control signals in this way, repeatedly adding a signal to its previous state? Audio signals in Csound are a collection of numbers (a vector). The size of this vector is given by the ksmps constant. If your sample rate is 44100 and ksmps=100, you will calculate 441 such vectors per second, each consisting of 100 numbers indicating the amplitude of each sample. So, if you add an audio signal to its previous state, different things can happen, depending on the present state of the vector and its previous state. If the previous state (with ksmps=9) was [0 0.1 0.2 0.1 0 -0.1 -0.2 -0.1 0], and the present state is the same, you will get a signal which is twice as strong: [0 0.2 0.4 0.2 0 -0.2 -0.4 -0.2 0]. But if the present state is [0 -0.1 -0.2 -0.1 0 0.1 0.2 0.1 0], you will just get zeros if you add them. This is shown in the next example with a local audio variable, and then in the following example with a global audio variable. EXAMPLE 03B08.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing (change to 441 to see the difference)
nchnls = 2
0dbfs = 1

instr 1
;initialize a general audio variable
aSum      init      0
;produce a sine signal (change frequency to 401 to see the difference)
aAdd      oscils    .1, 400, 0
;add it to the general audio (= the previous vector)
aSum      =         aSum + aAdd
kmax      max_k     aSum, 1, 1; calculate maximum
          printk    0, kmax; print it out
          outs      aSum, aSum
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
EXAMPLE 03B09.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 4410; very high because of printing (change to 441 to see the difference)
nchnls = 2
0dbfs = 1
;initialize a general audio variable
gaSum     init      0

instr 1
;produce a sine signal (change frequency to 401 to see the difference)
aAdd      oscils    .1, 400, 0
;add it to the general audio (= the previous vector)
gaSum     =         gaSum + aAdd
endin

instr 2
kmax      max_k     gaSum, 1, 1; calculate maximum
          printk    0, kmax; print it out
          outs      gaSum, gaSum
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 2 0 1
</CsScore>
</CsoundSynthesizer>
In both cases, you get a signal which increases every 1/10 second, because you have 10 control cycles per second (ksmps=4410), and the frequency of 400 Hz can be evenly divided by this. If you change the ksmps value to 441, you will get a signal which increases much faster and is out of range after 1/10 second. If you change the frequency to 401 Hz, you will get a signal which increases first and then decreases, because each audio vector contains 40.1 cycles of the sine wave. So the phases are shifting; the signal first gets stronger and then weaker. If you change the frequency to 10 Hz, and then to 15 Hz (at ksmps=44100), you cannot hear anything, but if you render to file, you can see the whole process of either reinforcing or erasing quite clearly:

Self-reinforcing global audio signal on account of its state in one control cycle being the same as in the previous one
Partly self-erasing global audio signal because of phase inversions in two subsequent control cycles
So the result of all this is: if you work with global audio variables in such a way that you add several local audio signals to a global audio variable (which works like a bus), you must clear this global bus at each control cycle. As all the instruments in Csound are calculated in ascending order, this should be done either at the beginning of the first instrument or at the end of the last. Perhaps the best idea is to declare all global audio variables in the orchestra header first, and then clear them in an "always on" instrument with the highest number of all the instruments used. This is an example of a typical situation: EXAMPLE 03B10.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;initialize the global audio variables
gaBusL    init      0
gaBusR    init      0
;make the seed for random values each time different
          seed      0

instr 1; produces short signals
loop:
iDur      random    .3, 1.5
          timout    0, iDur, makenote
          reinit    loop
makenote:
iFreq     random    300, 1000
iVol      random    -12, -3; dB
iPan      random    0, 1; random panning for each signal
aSin      oscil3    ampdb(iVol), iFreq, 1
aEnv      transeg   1, iDur, -10, 0; env in a-rate is cleaner
aAdd      =         aSin * aEnv
aL, aR    pan2      aAdd, iPan
gaBusL    =         gaBusL + aL; add to the global audio signals
gaBusR    =         gaBusR + aR
endin

instr 2; produces short filtered noise signals (4 partials)
loop:
iDur      random    .1, .7
          timout    0, iDur, makenote
          reinit    loop
makenote:
iFreq     random    100, 500
iVol      random    -24, -12; dB
iPan      random    0, 1
aNois     rand      ampdb(iVol)
aFilt     reson     aNois, iFreq, iFreq/10
aRes      balance   aFilt, aNois
aEnv      transeg   1, iDur, -10, 0
aAdd      =         aRes * aEnv
aL, aR    pan2      aAdd, iPan
gaBusL    =         gaBusL + aL; add to the global audio signals
gaBusR    =         gaBusR + aR
endin

instr 3; reverb of gaBus and output
aL, aR    freeverb  gaBusL, gaBusR, .8, .5
          outs      aL, aR
endin

instr 100; clear global audios at the end
          clear     gaBusL, gaBusR
endin
</CsInstruments>
<CsScore>
f 1 0 1024 10 1 .5 .3 .1
i 1 0 20
i 2 0 20
i 3 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>
          timout    0, idur, do
          reinit    loop
do:
ifreq     random    400, 1200
iamp      random    .1, .3
asig      oscils    iamp, ifreq, 0
aenv      transeg   1, idur, -10, 0
asine     =         asig * aenv
          chnset    asine, "sine"
endin

instr 11; receive some chn values and send again
ival1     chnget    "sio"
ival2     chnget    "non"
          print     ival1, ival2
kcntfreq  chnget    "cntrfreq"
kbandw    chnget    "bandw"
anoise    chnget    "noise"
afilt     reson     anoise, kcntfreq, kbandw
afilt     balance   afilt, anoise
          chnset    afilt, "filtered"
endin

instr 12; mix the two audio signals
amix1     chnget    "sine"
amix2     chnget    "filtered"
          chnmix    amix1, "mix"
          chnmix    amix2, "mix"
endin

instr 20; receive and reverb
amix      chnget    "mix"
aL, aR    freeverb  amix, amix, .8, .5
          outs      aL, aR
endin

instr 100; clear
          chnclear  "mix"
endin
</CsInstruments>
<CsScore>
i 1 0 20
i 2 0 20
i 3 0 20
i 11 0 20
i 12 0 20
i 20 0 20
i 100 0 20
</CsScore>
</CsoundSynthesizer>
If statements can also be nested. Each level must be closed with an "endif". This is an example with three levels:
if <condition1> then; first condition opened
 if <condition2> then; second condition opened
  if <condition3> then; third condition opened
  ...
  else
  ...
  endif; third condition closed
 elseif <condition2a> then
 ...
 endif; second condition closed
else
...
endif; first condition closed
i-Rate Examples
A typical problem in Csound: You have either mono or stereo files, and want to read both with a stereo output. For the real stereo ones that means: use soundin (diskin / diskin2) with two output arguments. For the mono ones it means: use soundin / diskin / diskin2 with one output argument, and throw it to both output channels: EXAMPLE 03C01.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     =          "/Joachim/Materialien/SamplesKlangbearbeitung/Kontrabass.aif" ;your soundfile path here
ifilchnls filenchnls Sfile
 if ifilchnls == 1 then ;mono
aL        soundin    Sfile
aR        =          aL
 else ;stereo
aL, aR    soundin    Sfile
 endif
          outs       aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
If you use QuteCsound, you can browse in the widget panel for the soundfile. See the corresponding example in the QuteCsound Example menu.
k-Rate Examples
The following example establishes a moving gate between 0 and 1. If the gate is above 0.5, the gate opens and you hear a tone. If the gate is equal to or below 0.5, the gate closes, and you hear nothing. EXAMPLE 03C02.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
          seed      0; random values each time different
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz (1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB (5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
 if kGate > 0.5 then; if kGate is larger than 0.5
kVol      =         ampdb(kdB); open gate
 else
kVol      =         0; otherwise close gate
 endif
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
          outs      aOut, aOut
endin
Short Form: (a v b ? x : y)
If you need an if-statement to give a value to an (i- or k-) variable, you can also use a traditional short form in parentheses: (a v b ? x : y). Here "a v b" stands for any comparison, for instance kGate > 0.5: if the comparison is true, the variable is set to x, otherwise to y. For instance, the last example could be written in this way: EXAMPLE 03C03.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
          seed      0
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1

instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz (1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB (5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
kVol      =         (kGate > 0.5 ? ampdb(kdB) : 0); short form of condition
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
          outs      aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>
IF - GOTO
An older way of performing a conditional branch - but still useful in certain cases - is an "if" statement which is not followed by a "then", but by a label name. The "else" construction follows (or doesn't follow) in the next line. Like the if-then-else statement, the if-goto works either at i-time or at k-time. You should declare the type by using either igoto or kgoto. Usually you need an additional igoto/kgoto statement to skip the "else" block if the first condition is true. This is the general syntax: i-time
if <condition> igoto this; same as if-then
 igoto that; same as else
this: ;the label "this" ...
...
 igoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
k-time
if <condition> kgoto this; same as if-then
 kgoto that; same as else
this: ;the label "this" ...
...
 kgoto continue ;skip the "that" block
that: ; ... and the label "that" must be found
...
continue: ;go on after the conditional branch
...
i-Rate Examples
This is the same example as above in the if-then-else syntax for a branch depending on a mono or stereo file. If you just want to know whether a file is mono or stereo, you can use the "pure" if-igoto statement: EXAMPLE 03C04.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     =          "/Joachim/Materialien/SamplesKlangbearbeitung/Kontrabass.aif"
ifilchnls filenchnls Sfile
 if ifilchnls == 1 igoto mono; condition if true
 igoto stereo; else condition
mono:
          prints     "The file is mono!%n"
          igoto      continue
stereo:
          prints     "The file is stereo!%n"
continue:
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
But if you want to play the file, you must also use a k-rate if-kgoto, because you have not just an action at i-time (initializing the soundin opcode) but also at k-time (producing an audio signal). So the code in this case is much more cumbersome than with the if-then-else facility shown previously. EXAMPLE 03C05.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
Sfile     =          "/Joachim/Materialien/SamplesKlangbearbeitung/Kontrabass.aif"
ifilchnls filenchnls Sfile
 if ifilchnls == 1 kgoto mono
 kgoto stereo
 if ifilchnls == 1 igoto mono; condition if true
 igoto stereo; else condition
mono:
aL        soundin    Sfile
aR        =          aL
          igoto      continue
          kgoto      continue
stereo:
aL, aR    soundin    Sfile
continue:
          outs       aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>
k-Rate Examples
This is the same example as above in the if-then-else syntax for a moving gate between 0 and 1: EXAMPLE 03C06.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
          seed      0
giTone    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1
instr 1
kGate     randomi   0, 1, 3; moves between 0 and 1 (3 new values per second)
kFreq     randomi   300, 800, 1; moves between 300 and 800 hz (1 new value per sec)
kdB       randomi   -12, 0, 5; moves between -12 and 0 dB (5 new values per sec)
aSig      oscil3    1, kFreq, giTone
kVol      init      0
 if kGate > 0.5 kgoto open; if condition is true
 kgoto close; "else" condition
open:
kVol      =         ampdb(kdB)
          kgoto     continue
close:
kVol      =         0
continue:
kVol      port      kVol, .02; smooth volume curve to avoid clicks
aOut      =         aSig * kVol
          outs      aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
LOOPS
Loops can be built either at i-time or at k-time just with the "if" facility. The following example shows an i-rate and a k-rate loop created using the if-i/kgoto facility: EXAMPLE 03C07.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1 ;i-time loop: counts from 1 until 10 has been reached
icount    =         1
count:
          print     icount
icount    =         icount + 1
 if icount < 11 igoto count
          prints    "i-END!%n"
endin

instr 2 ;k-rate loop: counts in the 100th k-cycle from 1 to 11
kcount    init      0
ktimek    timeinstk ;counts k-cycle from the start of this instrument
 if ktimek == 100 kgoto loop
 kgoto noloop
loop:
          printks   "k-cycle %d reached!%n", 0, ktimek
kcount    =         kcount + 1
          printk2   kcount
 if kcount < 11 kgoto loop
          printks   "k-END!%n", 0
noloop:
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
</CsScore>
</CsoundSynthesizer>
But Csound offers a slightly simpler syntax for this kind of i-rate or k-rate loop. There are four variants of the loop opcode. All four refer to a label as the starting point of the loop, an index variable as a counter, an increment or decrement, and finally a reference value (maximum or minimum) as comparison: loop_lt counts upwards and checks whether the index variable is lower than the reference value; loop_le also counts upwards and checks whether the index is lower than or equal to the reference value; loop_gt counts downwards and checks whether the index is greater than the reference value; loop_ge also counts downwards and checks whether the index is greater than or equal to the reference value. As always, all four opcodes can be applied either at i-time or at k-time. Here are some examples, first for i-time loops, and then for k-time loops.
i-Rate Examples
The following .csd provides a simple example for all four loop opcodes: EXAMPLE 03C08.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1 ;loop_lt: counts from 1 upwards and checks if < 10
icount    =         1
loop:
          print     icount
          loop_lt   icount, 1, 10, loop
          prints    "Instr 1 terminated!%n"
endin

instr 2 ;loop_le: counts from 1 upwards and checks if <= 10
icount    =         1
loop:
          print     icount
          loop_le   icount, 1, 10, loop
          prints    "Instr 2 terminated!%n"
endin

instr 3 ;loop_gt: counts from 10 downwards and checks if > 0
icount    =         10
loop:
          print     icount
          loop_gt   icount, 1, 0, loop
          prints    "Instr 3 terminated!%n"
endin

instr 4 ;loop_ge: counts from 10 downwards and checks if >= 0
icount    =         10
loop:
          print     icount
          loop_ge   icount, 1, 0, loop
          prints    "Instr 4 terminated!%n"
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
i 3 0 0
i 4 0 0
</CsScore>
</CsoundSynthesizer>
The next example produces a random string of 10 characters and prints it out: EXAMPLE 03C09.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1
icount    =         0
Sname     =         ""; starts with an empty string
loop:
ichar     random    65, 90.999
Schar     sprintf   "%c", int(ichar); new character
Sname     strcat    Sname, Schar; append to Sname
          loop_lt   icount, 1, 10, loop; loop construction
          printf_i  "My name is '%s'!\n", 1, Sname; print result
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
You can also use an i-rate loop to fill a function table (= buffer) with any kind of values. In the next example, a function table with 20 positions (indices) is filled with random integers between 0 and 10 by instrument 1. Nearly the same loop construction is used afterwards to read these values by instrument 2. EXAMPLE 03C10.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
giTable   ftgen     0, 0, -20, -2, 0; empty function table with 20 points
          seed      0; each time different seed

instr 1 ; writes in the table
icount    =         0
loop:
ival      random    0, 10.999 ;random value
          tableiw   int(ival), icount, giTable ;writes in giTable at first, second, third ... position
          loop_lt   icount, 1, 20, loop; loop construction
endin

instr 2; reads from the table
icount    =         0
loop:
ival      tablei    icount, giTable ;reads from giTable at first, second, third ... position
          print     ival; prints the content
          loop_lt   icount, 1, 20, loop; loop construction
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
k-Rate Examples
The next example performs a loop at k-time. Once per second, every value of an existing function table is changed by a random deviation of 10%. Though there are special opcodes for this task, it can also be done by a k-rate loop like the one shown here: EXAMPLE 03C11.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 441
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 256, 10, 1; sine wave
          seed      0; each time different seed

instr 1
ktiminstk timeinstk ;time in control-cycles
kcount    init      1
 if ktiminstk == kcount * kr then; once per second table values manipulation:
kndx      =         0
loop:
krand     random    -.1, .1;random factor for deviations
kval      table     kndx, giSine; read old value
knewval   =         kval + (kval * krand); calculate new value
          tablew    knewval, kndx, giSine; write new value
          loop_lt   kndx, 1, 256, loop; loop construction
kcount    =         kcount + 1; increase counter
 endif
asig      poscil    .2, 400, giSine
          outs      asig, asig
endin
TIME LOOPS
Until now, we have just discussed loops which are executed "as fast as possible", either at i-time or at k-time. But, in an audio programming language, time loops are of particular interest and importance. A time loop means repeating an action after a certain amount of time. This amount of time can be equal to or different from the previous time loop. The action can be, for instance: playing a tone, triggering an instrument, or calculating a new value for the movement of an envelope. In Csound, the usual way of performing time loops is the timout facility. The use of timout is a bit intricate, so some examples are given, starting from very simple ones and moving to more complex ones. Another way of performing time loops is by using a measurement of time or k-cycles. This method is also discussed, and similar examples to those used for the timout opcode are given so that both methods can be compared.
timout Basics
The timout opcode refers to the fact that in the traditional way of working with Csound, each "note" (an "i" score event) has its own time. This is the duration of the note, given in the score by the duration parameter, abbreviated as "p3". A timout statement says: "I am now jumping out of this p3 duration and establishing my own time." This time will be repeated as long as the duration of the note allows it. Let's see an example. This is a sine tone with a moving frequency, starting at 400 Hz and ending at 600 Hz. The duration of this movement is 3 seconds for the first note, and 5 seconds for the second note: EXAMPLE 03C12.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
kFreq     expseg    400, p3, 600
aTone     poscil    .2, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
Now we perform a time loop with timout which is 1 second long. So, for the first note, it will be repeated three times, and for the second note five times: EXAMPLE 03C13.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
          timout    0, 1, play
          reinit    loop
play:
kFreq     expseg    400, 1, 600
aTone     poscil    .2, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
The first_label is an arbitrary word (followed by a colon) for marking the beginning of the time loop section. The istart argument for timout tells Csound when the second_label section is to be executed. Usually istart is zero, telling Csound: execute the second_label section immediately, without any delay. The idur argument for timout defines for how many seconds the second_label section is to be executed before the time loop begins again. Note that the "reinit first_label" is necessary to start the second loop after idur seconds with a resetting of all the values. (See the explanations about reinitialization in the chapter Initialization And Performance Pass.) As usual when you work with the reinit opcode, you can use a rireturn statement to constrain the reinit-pass. In this way you can have both the time loop section and the non-time-loop section in the body of an instrument: EXAMPLE 03C14.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
          timout    0, 1, play
          reinit    loop
play:
kFreq1    expseg    400, 1, 600
aTone1    oscil3    .2, kFreq1, giSine
          rireturn  ;end of the time loop
kFreq2    expseg    400, p3, 600
aTone2    poscil    .2, kFreq2, giSine
          outs      aTone1+aTone2, aTone1+aTone2
endin
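To sum up the pattern before looking at some applications, here is a bare-bones skeleton of the timout construction (the label names and the one-second duration are placeholders, not taken from a particular example):

instr 1
loop:                            ;first_label
          timout    0, 1, play   ;istart = 0, idur = 1 second
          reinit    loop         ;start the time loop again after idur seconds
play:                            ;second_label
          ;whatever should be repeated once per second goes here
          rireturn               ;optional: end of the reinitialized section
endin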
timout Applications
In a time loop, it is very important to change the duration of the loop. This can be done either by referring to the duration of this note (p3) ... EXAMPLE 03C15.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
          timout    0, p3/5, play
          reinit    loop
play:
kFreq     expseg    400, p3/5, 600
aTone     poscil    .2, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 1 4 5
</CsScore>
</CsoundSynthesizer>
... or by calculating new values for the loop duration on each reinit pass, for instance by random values: EXAMPLE 03C16.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
idur      random    .5, 3 ;new value between 0.5 and 3 seconds each time
          timout    0, idur, play
          reinit    loop
play:
kFreq     expseg    400, idur, 600
aTone     poscil    .2, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 20
</CsScore>
</CsoundSynthesizer>
The applications discussed so far have the disadvantage that all the signals inside the time loop must be finished or interrupted when the next loop begins. In this way it is not possible to have any overlapping of events. To achieve this, the time loop can be used just to trigger an event. This can be done with event_i or scoreline_i. In the following example, the time loop in instrument 1 triggers, every 0.5 to 2 seconds, an instance of instrument 2 with a duration of 1 to 5 seconds. So usually the previous instance of instrument 2 will still be playing when the new instance is triggered. In instrument 2, some random calculations are executed to make each note different, though all have a descending pitch (glissando): EXAMPLE 03C17.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
loop:
idurloop  random    .5, 2 ;duration of each loop
          timout    0, idurloop, play
          reinit    loop
play:
idurins   random    1, 5 ;duration of the triggered instrument
          event_i   "i", 2, 0, idurins ;triggers instrument 2
endin

instr 2
ifreq1    random    600, 1000 ;starting frequency
idiff     random    100, 300 ;difference to final frequency
ifreq2    =         ifreq1 - idiff ;final frequency
kFreq     expseg    ifreq1, p3, ifreq2 ;glissando
iMaxdb    random    -12, 0 ;peak randomly between -12 and 0 dB
kAmp      transeg   ampdb(iMaxdb), p3, -10, 0 ;envelope
aTone     poscil    kAmp, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
The last application of a time loop with the timout opcode which is shown here, is a randomly moving envelope. If you want to create an envelope in Csound which moves between a lower and an upper limit, and has one new random value in a certain time span (for instance, once a second), the time loop with timout is one way to achieve it. A line movement must be performed in each time loop, from a given starting value to a new evaluated final value. Then, in the next loop, the previous final value must be set as the new starting value, and so on. This is a possible solution: EXAMPLE 03C18.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

instr 1
iupper    =         0; upper and ...
ilower    =         -24; ... lower limit in dB
ival1     random    ilower, iupper; starting value
loop:
idurloop  random    .5, 2; duration of each loop
          timout    0, idurloop, play
          reinit    loop
play:
ival2     random    ilower, iupper; final value
kdb       linseg    ival1, idurloop, ival2
ival1     =         ival2; let ival2 be ival1 for next loop
          rireturn  ;end reinit section
aTone     poscil    ampdb(kdb), 400, giSine
          outs      aTone, aTone
endin
Note that in this case the oscillator has been put after the time loop section (which is terminated by the rireturn statement). Otherwise the oscillator would start afresh with zero phase in each time loop, thus producing clicks.
The example which is given above (03C17.csd) as a flexible time loop by timout can be done with the metro opcode in this way: EXAMPLE 03C20.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

instr 1
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
 if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 2
          event     "i", 2, 0, kdur; call instr 2
kfreq     random    .5, 2; set new value for trigger frequency
 endif
endin

instr 2
ifreq1    random    600, 1000; starting frequency
idiff     random    100, 300; difference to final frequency
ifreq2    =         ifreq1 - idiff; final frequency
kFreq     expseg    ifreq1, p3, ifreq2; glissando
iMaxdb    random    -12, 0; peak randomly between -12 and 0 dB
kAmp      transeg   ampdb(iMaxdb), p3, -10, 0; envelope
aTone     poscil    kAmp, kFreq, giSine
          outs      aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
Note the differences in working with the metro opcode compared to the timout feature: As metro works at k-time, you must use the k-variants of event or scoreline to call the sub-instrument; with timout you must use the i-variants (event_i and scoreline_i), because it uses reinitialization to perform the time loops. With metro, you must select the one k-cycle in which the opcode sends a "1"; this is done with an if-statement, and the rest of the instrument is not affected. If you use timout, you usually must separate the reinitialized from the non-reinitialized section by a rireturn statement.
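Reduced to the bare trigger mechanism, the two approaches can be sketched like this (idur, idurins, kfreq and kdur are assumed to be set elsewhere; the surrounding instrument code is omitted):

;time loop with timout: i-rate, reinit-based
loop:
          timout    0, idur, play
          reinit    loop
play:
          event_i   "i", 2, 0, idurins
          rireturn

;time loop with metro: k-rate trigger, selected by an if-statement
kTrig     metro     kfreq
 if kTrig == 1 then
          event     "i", 2, 0, kdur
 endif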
LINKS
Steven Yi: Control Flow (Part I = Csound Journal Spring 2006, Part 2 = Csound Journal Summer 2006)
So, if you want to retrieve the value 13.13, you must point to the value stored under index 5. The uses of function tables are manifold. A function table can contain pitch values to which you may refer using the input of a MIDI keyboard. A function table can contain a model of a waveform which is read periodically by an oscillator. You can record live audio input in a function table, and then play it back. There are many more applications, all of them taking advantage of the fast access (a function table resides in RAM) and the flexibility of function tables.
This is the traditional way of creating a function table by an "f statement" or an "f score event" (in comparison, for instance, to "i score events" which call instrument instances). The input parameters after the "f" are the following:
#: a number (as positive integer) for this function table;
time: the time at which the function table becomes available (usually 0 = from the beginning);
size: the size of the function table. This is a bit tricky, because in the early days of Csound just power-of-two sizes for function tables were possible (2, 4, 8, 16, ...). Nowadays nearly every GEN Routine accepts other sizes, but these non-power-of-two sizes must be declared as a negative number!
2: the number of the GEN Routine which is used to generate the table. And here is another important point which must be regarded: by default, Csound normalizes the table values. This means that the maximum is scaled to +1 if positive, and to -1 if negative. To prevent Csound from normalizing, a negative number must be given as GEN number (here -2 instead of 2).
v1 v2 v3 ...: the values which are written into the function table.
So this is the way to put the values [1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89] in a function table with the number 1: EXAMPLE 03D01.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

instr 1 ;prints the values of table 1 or 2
          prints    "%nFunction Table %d:%n", p4
indx      init      0
loop:
ival      table     indx, p4
          prints    "Index %d = %f%n", indx, ival
          loop_lt   indx, 1, 10, loop
endin
</CsInstruments>
<CsScore>
f 1 0 -10 -2 1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89; not normalized
f 2 0 -10 2 1.1 2.2 3.3 5.5 8.8 13.13 21.21 34.34 55.55 89.89; normalized
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
Instrument 1 just serves to print the values of the table (the tablei opcode will be explained later). See the difference whether the table is normalized (positive GEN number) or not normalized (negative GEN number). Using the ftgen opcode is a more modern way of creating a function table, which is in some ways preferable to the old way of writing an f-statement in the score. The syntax is explained below:
giVar ftgen ifn, itime, isize, igen, iarg1 [, iarg2 [, ...]]
giVar: a variable name. Each function table is stored in an i-variable. Usually you want to have access to it from every instrument, so a gi-variable (global initialization variable) is given.
ifn: a number for the function table. If you type in 0, you let Csound choose a number, which is mostly preferable.
The other parameters (size, GEN number, individual arguments) are the same as in the f statement in the score. As this GEN call is now part of the orchestra, each argument is separated from the next by a comma (not by a space or tab like in the score). So this is the same example as above, but now with the function tables being generated in the orchestra header: EXAMPLE 03D02.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
giFt1     ftgen     1, 0, -10, -2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21, 34.34, 55.55, 89.89
giFt2     ftgen     2, 0, -10, 2, 1.1, 2.2, 3.3, 5.5, 8.8, 13.13, 21.21, 34.34, 55.55, 89.89

instr 1; prints the values of table 1 or 2
          prints    "%nFunction Table %d:%n", p4
indx      init      0
loop:
ival      table     indx, p4
          prints    "Index %d = %f%n", indx, ival
          loop_lt   indx, 1, 10, loop
endin
</CsInstruments>
<CsScore>
i 1 0 0 1; prints function table 1
i 1 0 0 2; prints function table 2
</CsScore>
</CsoundSynthesizer>
varname   ftgen     ifn, itime, isize, igen, Sfilnam, iskip, iformat, ichn
varname, ifn, itime: These arguments have the same meaning as explained above in reference to GEN02.
isize: Usually you won't know the length of your soundfile in samples, and want to have a table length which includes exactly all the samples. This is done by setting isize=0. (Note that some opcodes may need a power-of-two table. In this case you cannot use this option, but must calculate the next larger power-of-two value as size for the function table.)
igen: As explained in the previous subchapter, this is always the place for indicating the number of the GEN Routine which must be used. As always, a positive number means normalizing, which is usually convenient for audio samples.
Sfilnam: The name of the soundfile in double quotes. Similar to other audio programming languages, Csound recognizes just the name if your .csd and the soundfile are in the same folder. Otherwise, give the full path. (You can also include the folder via the "SSDIR" variable, or add the folder via the "--env:NAME+=VALUE" option.)
iskip: The time in seconds you want to skip at the beginning of the soundfile. 0 means reading from the beginning of the file.
iformat: Usually 0, which means: read the sample format from the soundfile header.
ichn: 1 = read the first channel of the soundfile into the table, 2 = read the second channel, etc. 0 means that all channels are read.
The next example plays a short sample. You can download it here. Copy the text below, save it to the same location as the "fox.wav" soundfile, and it should work. Reading the function table is done here with the poscil3 opcode, which can deal with non-power-of-two tables. EXAMPLE 03D03.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSample  ftgen     0, 0, 0, 1, "fox.wav", 0, 0, 1

instr 1
itablen   =         ftlen(giSample) ;length of the table
idur      =         itablen / sr ;duration
aSamp     poscil3   .5, 1/idur, giSample
          outs      aSamp, aSamp
endin
</CsInstruments>
<CsScore>
i 1 0 2.757
</CsScore>
</CsoundSynthesizer>
EXAMPLE 03D04.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
giSaw     ftgen     0, 0, 2^10, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9
giSquare  ftgen     0, 0, 2^10, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9
giTri     ftgen     0, 0, 2^10, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81
giImp     ftgen     0, 0, 2^10, 10, 1, 1, 1, 1, 1, 1, 1, 1

instr 1 ;plays the sine wavetable
aSine     poscil    .2, 400, giSine
aEnv      linen     aSine, .01, p3, .05
          outs      aEnv, aEnv
endin

instr 2 ;plays the saw wavetable
aSaw      poscil    .2, 400, giSaw
aEnv      linen     aSaw, .01, p3, .05
          outs      aEnv, aEnv
endin

instr 3 ;plays the square wavetable
aSqu      poscil    .2, 400, giSquare
aEnv      linen     aSqu, .01, p3, .05
          outs      aEnv, aEnv
endin

instr 4 ;plays the triangular wavetable
aTri      poscil    .2, 400, giTri
aEnv      linen     aTri, .01, p3, .05
          outs      aEnv, aEnv
endin

instr 5 ;plays the impulse wavetable
aImp      poscil    .2, 400, giImp
aEnv      linen     aImp, .01, p3, .05
          outs      aEnv, aEnv
endin

instr 6 ;plays a sine and uses the first half of its shape as envelope
aEnv      poscil    .2, 1/6, giSine
aSine     poscil    aEnv, 400, giSine
          outs      aSine, aSine
endin
</CsInstruments>
<CsScore>
i 1 0 3
i 2 4 3
i 3 8 3
i 4 12 3
i 5 16 3
i 6 20 3
</CsScore>
</CsoundSynthesizer>
The basic opcode which writes values to existing function tables is tablew, together with its i-time descendant tableiw. Note that you may have problems with some features if your table is not of power-of-two size. In this case, you can also use tabw / tabw_i, but they do not have the offset- and wraparound-features. As usual, you must differentiate whether your signal (variable) is i-rate, k-rate or a-rate. The usage is simple and differs just in the class of values you want to write to the table (i-, k- or a-variables):
          tableiw   isig, indx, ifn [, ixmode] [, ixoff] [, iwgmode]
          tablew    ksig, kndx, ifn [, ixmode] [, ixoff] [, iwgmode]
          tablew    asig, andx, ifn [, ixmode] [, ixoff] [, iwgmode]
isig, ksig, asig is the value (variable) you want to write into specified locations of the table;
indx, kndx, andx is the location (index) where you write the value;
ifn is the function table you want to write to;
ixmode gives the choice to write by raw indices (counting from 0 to size-1), or by a normalized writing mode in which the start and end of each table are always referred to as 0 and 1 (not depending on the length of the table). The default is ixmode=0, which means the raw index mode. A value not equal to zero for ixmode changes to the normalized index mode.
ixoff (default=0) gives an index offset. So, if indx=0 and ixoff=5, you will write at index 5.
iwgmode tells what you want to do if your index is larger than the size of the table. If iwgmode=0 (default), any index larger than possible is written at the last possible index. If iwgmode=1, the indices are wrapped around. For instance, if your table size is 8 and your index is 10, in the wraparound mode the value will be written at index 2.
Here are some examples for i-, k- and a-rate values.
i-Rate Example
The following example calculates the first 12 values of a Fibonacci series and writes them to a table. This table has been created first in the header (filled with zeros). Then instrument 1 calculates the values in an i-time loop and writes them to the table with tableiw. Instrument 2 just serves to print the values. EXAMPLE 03D05.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
giFt      ftgen     0, 0, -12, -2, 0
instr 1; calculates first 12 fibonacci values and writes them to giFt
istart    =         1
inext     =         2
indx      =         0
loop:
          tableiw   istart, indx, giFt ;writes istart to table
istartold =         istart ;keep previous value of istart
istart    =         inext ;reset istart for next loop
inext     =         istartold + inext ;reset inext for next loop
          loop_lt   indx, 1, 12, loop
endin

instr 2; prints the values of the table
          prints    "%nContent of Function Table:%n"
indx      init      0
loop:
ival      table     indx, giFt
          prints    "Index %d = %f%n", indx, ival
          loop_lt   indx, 1, ftlen(giFt), loop
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
k-Rate Example
The next example writes a k-signal continuously into a table. This can be used to record any kind of user input, for instance by MIDI or widgets. It can also be used to record random movements of k-signals, like here: EXAMPLE 03D06.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giFt      ftgen     0, 0, -5*kr, 2, 0; size for 5 seconds of recording
giWave    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1; waveform for oscillator
          seed      0
instr 1 ;recording of a random frequency movement for 5 seconds, and playing it
kFreq     randomi   400, 1000, 1 ;random frequency
aSnd      poscil    .2, kFreq, giWave ;play it
          outs      aSnd, aSnd
;;record the k-signal
          prints    "RECORDING!%n"
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
kindx     linseg    0, 5, ftlen(giFt)
;write the k-signal
          tablew    kFreq, kindx, giFt
endin

instr 2; read the values of the table and play it again
;;read the k-signal
          prints    "PLAYING!%n"
;create a reading pointer in the table, moving in 5 seconds from index 0 to the end
kindx     linseg    0, 5, ftlen(giFt)
;read the k-signal
kFreq     table     kindx, giFt
aSnd      oscil3    .2, kFreq, giWave; play it
          outs      aSnd, aSnd
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
</CsScore>
</CsoundSynthesizer>
As you can see, in this typical case of writing k-values to a table you need a moving signal for the index. This can be done using the line or linseg opcode like here, or by using a phasor. The phasor always moves from 0 to 1 at a certain frequency. So, if you want the phasor to move from 0 to 1 in 5 seconds, you must set its frequency to 1/5. By setting the ixmode argument of tablew to 1, you can use the phasor output directly as writing pointer. This is an alternative version of instrument 1 taken from the previous example:
instr 1; recording of a random frequency movement for 5 seconds, and playing it
kFreq     randomi   400, 1000, 1; random frequency
aSnd      oscil3    .2, kFreq, giWave; play it
          outs      aSnd, aSnd
;;record the k-signal with a phasor as index
          prints    "RECORDING!%n"
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
kindx     phasor    1/5
;write the k-signal
          tablew    kFreq, kindx, giFt, 1
endin
a-Rate Example
Recording an audio signal is quite similar to recording a control signal. You just need an a-signal as input and also as index. The next example first records a randomly generated audio signal and plays it back; if you have live audio input, you can then record your own input for 5 seconds. EXAMPLE 03D07.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giFt      ftgen     0, 0, -5*sr, 2, 0; size for 5 seconds of recording audio
          seed      0
instr 1 ;generating a band filtered noise for 5 seconds, and recording it
aNois     rand      .2
kCfreq    randomi   200, 2000, 3; random center frequency
aFilt     butbp     aNois, kCfreq, kCfreq/10; filtered noise
aBal      balance   aFilt, aNois, 1; balance amplitude
          outs      aBal, aBal
;;record the audiosignal with a phasor as index
          prints    "RECORDING FILTERED NOISE!%n"
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the audio signal
          tablew    aBal, aindx, giFt, 1
endin

instr 2 ;read the values of the table and play it
          prints    "PLAYING FILTERED NOISE!%n"
aindx     phasor    1/5
aSnd      table3    aindx, giFt, 1
          outs      aSnd, aSnd
endin

instr 3 ;record live input
ktim      timeinsts ; playing time of the instrument in seconds
          prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
          outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
          printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the audio signal
          tablew    ain, aindx, giFt, 1
endif
endin

instr 4 ;read the values from the table and play it
          prints    "PLAYING LIVE INPUT!%n"
aindx     phasor    1/5
aSnd      table3    aindx, giFt, 1
          outs      aSnd, aSnd
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
i 3 12 7
i 4 20 5
</CsScore>
</CsoundSynthesizer>
kres      table     kndx, ifn [, ixmode] [, ixoff] [, iwrap]
ares      table     andx, ifn [, ixmode] [, ixoff] [, iwrap]
As table reading often requires interpolation between the table values - for instance if you read k- or a-values faster or slower than they have been written in the table - Csound offers two descendants of table for interpolation: tablei interpolates linearly, whilst table3 performs cubic interpolation (which is generally preferable but computationally slightly more expensive). Another variant is the tab_i / tab opcode, which lacks some features but may be preferable in some situations. If you have any problems in reading non-power-of-two tables, give them a try. They should also be faster than the table opcode, but you must take care: they include fewer built-in protection measures than table, tablei and table3, and if they are given index values that exceed the table size, Csound will stop and report a performance error. Examples of the use of the table opcodes can be found in the earlier examples in the How-To-Write-Values section.
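As a minimal sketch of the different read opcodes (the table giFt here just stands for any table you have filled before, for instance by one of the recording examples above; its size is an arbitrary assumption):

giFt      ftgen     0, 0, -1000, 2, 0 ;placeholder for a table filled elsewhere

instr 1
kindx     line      0, p3, ftlen(giFt) ;moving read pointer over the whole table
kRaw      table     kindx, giFt ;no interpolation
kLin      tablei    kindx, giFt ;linear interpolation
kCub      table3    kindx, giFt ;cubic interpolation
          printks   "raw = %f, linear = %f, cubic = %f%n", .5, kRaw, kLin, kCub
endin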
Oscillators
Reading table values using an oscillator is standard if you read tables which contain one cycle of a waveform at audio rate. But you can actually read any table using an oscillator, either at a- or at k-rate. The advantage is that you do not need to create an index signal; you simply specify the frequency of the oscillator. You should bear in mind that many of the oscillators in Csound will work only with power-of-two table sizes. The poscil/poscil3 opcodes do not have this restriction and offer high precision, because they work with floating point indices, so in general it is recommended to use them. Below is an example that demonstrates reading both a k-rate and an a-rate signal from a buffer with poscil3 (an oscillator with cubic interpolation): EXAMPLE 03D08.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giControl ftgen     0, 0, -5*kr, 2, 0; size for 5 seconds of recording control data
giAudio   ftgen     0, 0, -5*sr, 2, 0; size for 5 seconds of recording audio data
giWave    ftgen     0, 0, 2^10, 10, 1, .5, .3, .1; waveform for oscillator
          seed      0

instr 1 ;recording of a random frequency movement for 5 seconds, and playing it
kFreq     randomi   400, 1000, 1; random frequency
aSnd      poscil    .2, kFreq, giWave; play it
          outs      aSnd, aSnd
;;record the k-signal with a phasor as index
          prints    "RECORDING RANDOM CONTROL SIGNAL!%n"
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
kindx     phasor    1/5
;write the k-signal
          tablew    kFreq, kindx, giControl, 1
endin

instr 2; read the values of the table and play it with poscil
          prints    "PLAYING CONTROL SIGNAL!%n"
kFreq     poscil    1, 1/5, giControl
aSnd      poscil    .2, kFreq, giWave; play it
          outs      aSnd, aSnd
endin

instr 3; record live input
ktim      timeinsts ; playing time of the instrument in seconds
          prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
          outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
          printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the audio signal
          tablew    ain, aindx, giAudio, 1
endif
endin

instr 4; read the values from the table and play it with poscil
          prints    "PLAYING LIVE INPUT!%n"
aSnd      poscil    .5, 1/5, giAudio
          outs      aSnd, aSnd
endin
</CsInstruments>
<CsScore>
i 1 0 5
i 2 6 5
i 3 12 7
i 4 20 5
</CsScore>
</CsoundSynthesizer>
instr 1; saving giWave at i-time
          ftsave    "i-time_save.txt", 1, 1
endin

instr 2; recording of a line transition between 0 and 1 for one second
kline     linseg    0, 1, 1
          tabw      kline, kline, giControl, 1
endin

instr 3; saving giWave at k-time
          ftsave    "k-time_save.txt", 1, 2
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 1
i 3 1 .1
</CsScore>
</CsoundSynthesizer>
The counterparts to ftsave/ftsavek are the opcodes ftload/ftloadk. Using them, you can load the saved files back into function tables.
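A small sketch of loading a saved table back (assuming the text file "k-time_save.txt" written by the ftsave example above, and an empty table of the same size):

giFt      ftgen     0, 0, -5*kr, 2, 0 ;empty table, same size as the saved one

instr 1
          ftload    "k-time_save.txt", 1, giFt ;load the saved content at i-time (1 = text format)
endin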
If you have recorded your live-input to a buffer, you may want to save your buffer as a soundfile. There is no opcode in Csound which does that, but it can be done by using a k-rate loop and the fout opcode. This is shown in the next example, in instrument 2. First instrument 1 records your live input. Then instrument 2 writes the file "testwrite.wav" into the same folder as your .csd. This is done at the first k-cycle of instrument 2, by reading again and again the table values and writing them as an audio signal to disk. After this is done, the instrument is turned off by executing the turnoff statement. EXAMPLE 03D10.csd
<CsoundSynthesizer>
<CsOptions>
-i adc
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giAudio   ftgen     0, 0, -5*sr, 2, 0; size for 5 seconds of recording audio data
instr 1 ;record live input
ktim      timeinsts ; playing time of the instrument in seconds
          prints    "PLEASE GIVE YOUR LIVE INPUT AFTER THE BEEP!%n"
kBeepEnv  linseg    0, 1, 0, .01, 1, .5, 1, .01, 0
aBeep     oscils    .2, 600, 0
          outs      aBeep*kBeepEnv, aBeep*kBeepEnv
;;record the audiosignal after 2 seconds
if ktim > 2 then
ain       inch      1
          printks   "RECORDING LIVE INPUT!%n", 10
;create a writing pointer in the table, moving in 5 seconds from index 0 to the end
aindx     phasor    1/5
;write the audio signal
          tablew    ain, aindx, giAudio, 1
endif
endin

instr 2; write the giAudio table to a soundfile
Soutname  =         "testwrite.wav"; name of the output file
iformat   =         14; write as 16 bit wav file
itablen   =         ftlen(giAudio); length of the table in samples
kcnt      init      0; set the counter to 0 at start
loop:
kcnt      =         kcnt+ksmps; next value (e.g. 10 if ksmps=10)
andx      interp    kcnt-1; calculate audio index (e.g. from 0 to 9)
asig      tab       andx, giAudio; read the table values as audio signal
          fout      Soutname, iformat, asig; write asig to a file
if kcnt <= itablen-ksmps kgoto loop; go back as long there is something to do
          turnoff   ; terminate the instrument
endin
This code can also be transformed into a User Defined Opcode. It can be found here.
tab_i / tab: Read values from a function table at i-rate (tab_i), k-rate or a-rate (tab). They offer no interpolation and fewer options than the table opcodes, but they also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast but gives the user the responsibility of not reading any value beyond the table boundaries.
tableiw / tablew: Write values to a function table at i-rate (tableiw), k-rate and a-rate (tablew). These opcodes provide many options and are safe because of the boundary check, but you may have problems with non-power-of-two tables.
tabw_i / tabw: Write values to a function table at i-rate (tabw_i), k-rate or a-rate (tabw). They offer fewer options than the tableiw/tablew opcodes, but also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast but gives the user the responsibility of not writing any value beyond the table boundaries.
poscil / poscil3: Precise oscillators for reading function tables at k- or a-rate, with linear (poscil) or cubic (poscil3) interpolation. They also support non-power-of-two tables, so it is usually recommended to use them instead of the older oscili/oscil3 opcodes. poscil also has a-rate input for amplitude and frequency, while poscil3 has just k-rate input.
oscili / oscil3: The standard oscillators in Csound for reading function tables at k- or a-rate, with linear (oscili) or cubic (oscil3) interpolation. They support all rates for the amplitude and frequency input, but are restricted to power-of-two tables. Particularly for long tables and low frequencies they are not as precise as the poscil/poscil3 oscillators.
ftsave / ftsavek: Save a function table as a file, at i-time (ftsave) or k-time (ftsavek). This can be a text file or a binary file, but not a soundfile. If you want to save a soundfile, use the User Defined Opcode TableToSF.
ftload / ftloadk: Load a function table which has been written by ftsave/ftsavek.
line / linseg / phasor: Can be used to create index values which are needed to read/write k- or a-signals with the table/tablew or tab/tabw opcodes.
ORDER OF EXECUTION
Whatever you do in Csound with instrument events, you must bear in mind the order of execution that has been explained in chapter 03 under the initialization and performance pass: instruments are executed one by one, both in the initialization pass and in each control cycle, and the order is determined by the instrument number. So if you have an instrument which triggers another instrument, it should usually have the lower number. If, for instance, instrument 10 calls instrument 20 in a certain control cycle, instrument 20 will execute the event in the same control cycle. But if instrument 20 calls instrument 10, then instrument 10 will execute the event only in the next control cycle.
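The following minimal sketch (instrument numbers and values are chosen just for illustration) shows the usual arrangement: the triggering instrument has the lower number, so the triggered instrument can start in the same control cycle:

instr 1 ;master instrument with the lower number
kTrig     metro     1 ;one trigger per second
if kTrig == 1 then
          event     "i", 2, 0, 1 ;instr 2 has a higher number: starts in this control cycle
endif
endin

instr 2 ;triggered instrument
aSig      oscils    .2, 400, 0
aEnv      transeg   1, p3, -3, 0
          outs      aSig*aEnv, aSig*aEnv
endin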
instr 1
kFadout   init      1
krel      release   ;returns "1" if last k-cycle
if krel == 1 && p3 < 0 then ;if so, and negative p3:
          xtratim   .5 ;give 0.5 extra seconds
kFadout   linseg    1, .5, 0 ;and make fade out
endif
kEnv      linseg    0, .01, p4, abs(p3)-.1, p4, .09, 0; normal fade out
aSig      poscil    kEnv*kFadout, p5, giWav
          outs      aSig, aSig
endin
</CsInstruments>
<CsScore>
t 0 120            ;set tempo to 120 beats per minute
i 1 0 1 .2 400     ;play instr 1 for one second
i 1 2 -10 .5 500   ;play instr 1 indefinitely (negative p3)
i -1 5 0           ;turn it off (negative p1)
i 1.1 ^+1 -10 .2 600 ;turn on instance 1 of instr 1 one sec after the previous start
i 1.2 ^+2 -10 .2 700 ;another instance of instr 1
i -1.2 ^+2 0       ;turn off 1.2
i -1.1 ^+1 .       ;turn off 1.1 (dot = same as the same p-field above)
s                  ;end of a section, so time begins from new at zero
i 1 1 1 .2 800
r 5                ;repeats the following line (until the next "s")
i 1 .25 .25 .2 900
s
v 2                ;lets time be double as long
i 1 0 2 .2 1000
i 1 1 1 .2 1100
s
v 0.5              ;lets time be half as long
i 1 0 2 .2 1200
i 1 1 1 .2 1300
s                  ;time is normal now again
i 1 0 2 .2 1000
i 1 1 1 .2 900
s
{4 LOOP            ;make a score loop (4 times) with the variable "LOOP"
i 1 [0 + 4 * $LOOP.] 3 .2 [1200 - $LOOP. * 100]
i 1 [1 + 4 * $LOOP.] 2 . [1200 - $LOOP. * 200]
i 1 [2 + 4 * $LOOP.] 1 . [1200 - $LOOP. * 300]
}
e
</CsScore>
</CsoundSynthesizer>
Triggering an instrument with an indefinite duration by setting p3 to any negative value, and stopping it by a negative p1 value, can be an important feature for live events. If you turn instruments off in this way you may have to add a fade out segment. One method of doing this is shown in the instrument above with a combination of the release and the xtratim opcodes. Also note that you can start and stop certain instances of an instrument with a floating point number as p1.
instr 1
iFreq     cpsmidi   ;gets frequency of a pressed key
iAmp      ampmidi   8 ;gets amplitude and scales 0-8
iRatio    random    .9, 1.1 ;ratio randomly between 0.9 and 1.1
aTone     foscili   .1, iFreq, 1, iRatio/5, iAmp+1, giSine ;fm
aEnv      linenr    aTone, 0, .01, .01 ;for avoiding clicks at the end of the note
          outs      aEnv, aEnv
endin
USING WIDGETS
If you want to trigger an instrument event in realtime with a Graphical User Interface, it is usually a "Button" widget which will do this job. We will see here a simple example; first implemented using Csound's FLTK widgets, and then using QuteCsound's widgets.
FLTK Button
This is a very simple example demonstrating how to trigger an instrument using an FLTK button. A more extended example can be found here. EXAMPLE 03E03.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

          FLpanel   "Trigger By FLTK Button", 300, 100, 100, 100; creates an FLTK panel
k1, ih1   FLbutton  "Push me!", 0, 0, 1, 150, 40, 10, 25, 0, 1, 0, 1; trigger instr 1 (equivalent to the score line "i 1 0 1")
k2, ih2   FLbutton  "Quit", 0, 0, 1, 80, 40, 200, 25, 0, 2, 0, 1; trigger instr 2
          FLpanelEnd; end of the FLTK panel section
          FLrun     ; run FLTK
          seed      0; random seed different each time

instr 1
idur      random    .5, 3; recalculate instrument duration
p3        =         idur; reset instrument duration
ioct      random    8, 11; random values between 8th and 11th octave
idb       random    -18, -6; random values between -6 and -18 dB
aSig      oscils    ampdb(idb), cpsoct(ioct), 0
aEnv      transeg   1, p3, -10, 0
          outs      aSig*aEnv, aSig*aEnv
endin

instr 2
          exitnow
endin
</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
Note that in this example the duration of an instrument event is recalculated when the instrument is initialized. This is done using the statement "p3 = i...". This can be a useful technique if you want the duration that an instrument plays for to be different each time it is called. In this example the duration is the result of a random function. The duration defined by the FLTK button will be overwritten by any other calculation within the instrument itself at i-time.
QuteCsound Button
In QuteCsound, a button can be created easily from the submenu in a widget panel:
In the Properties Dialog of the button widget, make sure you have selected "event" as Type. Insert a Channel name, and at the bottom type in the event you want to trigger - as you would if writing a line in the score.
In your Csound code, you need nothing more than the instrument you want to trigger:
For more information about QuteCsound, read chapter 11 (QuteCsound) in this manual.
          seed      0; random seed different each time

instr 1
idur      random    .5, 3; calculate instrument duration
p3        =         idur; reset instrument duration
ioct      random    8, 11; random values between 8th and 11th octave
idb       random    -18, -6; random values between -6 and -18 dB
aSig      oscils    ampdb(idb), cpsoct(ioct), 0
aEnv      transeg   1, p3, -10, 0
          outs      aSig*aEnv, aSig*aEnv
endin
</CsInstruments>
<CsScore>
f 0 36000
e
</CsScore>
</CsoundSynthesizer>
... you should get a prompt at the end of the Csound messages:
If you now type the line "i 1 0 1" and press return, you should hear that instrument 1 has been executed. After three times your messages may look like this:
Have a look in the QuteCsound frontend to see more of the possibilities of "firing" live instrument events using the Live Event Sheet.
BY CONDITIONS
We have first discussed the classical method of triggering instrument events from the score section of a .csd file; then we went on to look at different methods of triggering real time events using MIDI, widgets, and score lines inserted live. We will now look at the Csound orchestra itself and at some methods by which an instrument can internally trigger another instrument. The pattern of triggering could be governed by conditionals, or by different kinds of loops. As this "master" instrument can itself be triggered by a realtime event, you have unlimited options available for combining the different methods. Let's start with conditionals. If we have a realtime input, we may want to define a threshold, and trigger an event
1. if we cross the threshold from below to above;
2. if we cross the threshold from above to below.
In Csound, this could be implemented using an orchestra of three instruments. The first instrument is the master instrument. It receives the input signal and investigates whether that signal is crossing the threshold and, if it does, whether it is crossing from low to high or from high to low. If it crosses the threshold from low to high, the second instrument is triggered; if it crosses from high to low, the third instrument is triggered. EXAMPLE 03E05.csd
<CsoundSynthesizer>
<CsOptions>
-iadc -odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
          seed      0; random seed different each time

instr 1; master instrument
ichoose   =         p4; 1 = real time audio, 2 = random amplitude movement
ithresh   =         -12; threshold in dB
kstat     init      1; 1 = under the threshold, 2 = over the threshold
;;CHOOSE INPUT SIGNAL
if ichoose == 1 then
ain       inch      1
else
kdB       randomi   -18, -6, 1
ain       pinkish   ampdb(kdB)
endif
;;MEASURE AMPLITUDE AND TRIGGER SUBINSTRUMENTS IF THRESHOLD IS CROSSED
afoll     follow    ain, .1; measure mean amplitude each 1/10 second
kfoll     downsamp  afoll
if kstat == 1 && dbamp(kfoll) > ithresh then; transition down->up
          event     "i", 2, 0, 1; call instr 2
          printks   "Amplitude = %.3f dB%n", 0, dbamp(kfoll)
kstat     =         2; change status to "up"
elseif kstat == 2 && dbamp(kfoll) < ithresh then; transition up->down
          event     "i", 3, 0, 1; call instr 3
          printks   "Amplitude = %.3f dB%n", 0, dbamp(kfoll)
kstat     =         1; change status to "down"
endif
endin

instr 2; triggered if threshold has been crossed from down to up
asig      oscils    .2, 500, 0
aenv      transeg   1, p3, -10, 0
          outs      asig*aenv, asig*aenv
endin

instr 3; triggered if threshold has been crossed from up to down
asig      oscils    .2, 400, 0
aenv      transeg   1, p3, -10, 0
          outs      asig*aenv, asig*aenv
endin
</CsInstruments>
<CsScore>
i 1 0 1000 2 ;change p4 to "1" for live input
e
</CsScore>
</CsoundSynthesizer>
Let's look at a simple example for executing score events from an instrument using the scoreline opcode: EXAMPLE 03E06.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
          seed      0; random seed different each time

instr 1 ;master instrument with event pool
          scoreline_i {{i 2 0 2 7.09
                        i 2 2 2 8.04
                        i 2 4 2 8.03
                        i 2 6 1 8.04}}
endin

instr 2 ;plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
          outs      asig*aenv, asig*aenv
endin
</CsInstruments>
<CsScore>
i 1 0 7
e
</CsScore>
</CsoundSynthesizer>
You might rightly say: "OK, that's nice, but I can also write score lines in the score itself!" That's true, but the advantage of the scoreline_i method is that you can render the score events in an instrument and then send them out to one or more instruments to execute them. This can be done with the sprintf opcode, which produces the string for scoreline in an i-time loop (see the chapter about control structures). EXAMPLE 03E07.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giPch     ftgen     0, 0, 4, -2, 7.09, 8.04, 8.03, 8.04
          seed      0; random seed different each time

instr 1 ; master instrument with event pool
itimes    =         7 ;number of events to produce
icnt      =         0 ;counter
istart    =         0
Slines    =         ""
loop:               ;start of the i-time loop
idur      random    1, 2.9999 ;duration of each note:
idur      =         int(idur) ;either 1 or 2
itabndx   random    0, 3.9999 ;index for the giPch table:
itabndx   =         int(itabndx) ;0-3
ipch      table     itabndx, giPch ;random pitch value from the table
Sline     sprintf   "i 2 %d %d %.2f\n", istart, idur, ipch ;new scoreline
Slines    strcat    Slines, Sline ;append to previous scorelines
istart    =         istart + idur ;recalculate start for next scoreline
          loop_lt   icnt, 1, itimes, loop ;end of the i-time loop
          puts      Slines, 1 ;print the scorelines
          scoreline_i Slines ;execute them
iend      =         istart + idur ;calculate the total duration
p3        =         iend ;set p3 to the sum of all durations
          print     p3 ;print it
endin

instr 2 ;plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
          outs      asig*aenv, asig*aenv
endin
</CsInstruments>
<CsScore>
i 1 0 1 ;p3 is automatically set to the total duration
e
</CsScore>
</CsoundSynthesizer>
In this example, seven events have been rendered in an i-time loop in instrument 1. The result is stored in the string variable Slines. This string is given at i-time to scoreline_i, which then executes the events one by one according to their starting times (p2), durations (p3) and other parameters. If you have many score lines which are added in this way, you may run into Csound's maximum string length. By default, it is 255 characters; it can be extended to 9999 characters by adding the option "-+max_str_len=10000" to the CsOptions tag. Instead of collecting all score lines in a single string, you can also execute them inside the i-time loop. In this way, too, all the single score lines are added to Csound's event pool. The next example shows an alternative version of the previous one by adding the instrument events one by one in the i-time loop, either with event_i (instr 1) or with scoreline_i (instr 2): EXAMPLE 03E08.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giPch     ftgen     0, 0, 4, -2, 7.09, 8.04, 8.03, 8.04
          seed      0; random seed different each time

instr 1; master instrument with event_i
itimes    =         7; number of events to produce
icnt      =         0; counter
istart    =         0
loop:               ;start of the i-time loop
idur      random    1, 2.9999; duration of each note:
idur      =         int(idur); either 1 or 2
itabndx   random    0, 3.9999; index for the giPch table:
itabndx   =         int(itabndx); 0-3
ipch      table     itabndx, giPch; random pitch value from the table
          event_i   "i", 3, istart, idur, ipch; new instrument event
istart    =         istart + idur; recalculate start for next scoreline
          loop_lt   icnt, 1, itimes, loop; end of the i-time loop
iend      =         istart + idur; calculate the total duration
p3        =         iend; set p3 to the sum of all durations
          print     p3; print it
endin

instr 2; master instrument with scoreline_i
itimes    =         7; number of events to produce
icnt      =         0; counter
istart    =         0
loop:               ;start of the i-time loop
idur      random    1, 2.9999; duration of each note:
idur      =         int(idur); either 1 or 2
itabndx   random    0, 3.9999; index for the giPch table:
itabndx   =         int(itabndx); 0-3
ipch      table     itabndx, giPch; random pitch value from the table
Sline     sprintf   "i 3 %d %d %.2f", istart, idur, ipch; new scoreline
          scoreline_i Sline; execute it
          puts      Sline, 1; print it
istart    =         istart + idur; recalculate start for next scoreline
          loop_lt   icnt, 1, itimes, loop; end of the i-time loop
iend      =         istart + idur; calculate the total duration
p3        =         iend; set p3 to the sum of all durations
          print     p3; print it
endin

instr 3; plays the notes
asig      pluck     .2, cpspch(p4), cpspch(p4), 0, 1
aenv      transeg   1, p3, 0, 0
          outs      asig*aenv, asig*aenv
endin
instr 1; time loop with timout. events are triggered by event_i (i-rate)
loop:
idurloop  random    1, 4; duration of each loop
          timout    0, idurloop, play
          reinit    loop
play:
idurins   random    1, 5; duration of the triggered instrument
          event_i   "i", 10, 0, idurins; triggers instrument 10
endin

instr 2; time loop with metro. events are triggered by event (k-rate)
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 10
          event     "i", 10, 0, kdur; call instr 10
kfreq     random    .25, 1; set new value for trigger frequency
endif
endin

instr 10; triggers 8-13 partials
inumparts random    8, 14
inumparts =         int(inumparts); 8-13 as integer
ibasoct   random    5, 10; base pitch in octave values
ibasfreq  =         cpsoct(ibasoct)
ipan      random    .2, .8; random panning between left (0) and right (1)
icnt      =         0; counter
loop:
          event_i   "i", 100, 0, p3, ibasfreq, icnt+1, inumparts, ipan
          loop_lt   icnt, 1, inumparts, loop
endin

instr 100; plays one partial
ibasfreq  =         p4; base frequency of sound mixture
ipartnum  =         p5; which partial is this (1 - N)
inumparts =         p6; total number of partials
ipan      =         p7; panning
ifreqgen  =         ibasfreq * ipartnum; general frequency of this partial
ifreqdev  random    -10, 10; frequency deviation between -10% and +10%
ifreq     =         ifreqgen + (ifreqdev*ifreqgen)/100; real frequency regarding deviation
ixtratim  random    0, p3; calculate additional time for this partial
p3        =         p3 + ixtratim; new duration of this partial
imaxamp   =         1/inumparts; maximum amplitude
idbdev    random    -6, 0; random deviation in dB for this partial
iamp      =         imaxamp * ampdb(idbdev-ipartnum); higher partials are softer
ipandev   random    -.1, .1; panning deviation
ipan      =         ipan + ipandev
aEnv      transeg   0, .005, 0, iamp, p3-.005, -10, 0
aSine     poscil    aEnv, ifreq, giSine
aL, aR    pan2      aSine, ipan
          outs      aL, aR
          prints    "ibasfreq = %d, ipartial = %d, ifreq = %d%n", ibasfreq, ipartnum, ifreq
endin
</CsInstruments>
<CsScore>
i 1 0 300 ;try this, or the next line (or both)
;i 2 0 300
</CsScore>
</CsoundSynthesizer>
Related Opcodes
event_i / event: Generate an instrument event at i-time (event_i) or at k-time (event). Easy to use, but you cannot send a string to the subinstrument.
scoreline_i / scoreline: Generate an instrument event at i-time (scoreline_i) or at k-time (scoreline). Like event_i/event, but you can address more than one instrument with a single call and, unlike event_i/event, you can send strings. On the other hand, you must usually preformat your scoreline string using sprintf.
sprintf / sprintfk: Generate a formatted string at i-time (sprintf) or k-time (sprintfk), and store it as a string variable.
-+max_str_len=10000: Option in the "CsOptions" tag of a .csd file which extends the maximum string length to 9999 characters.
massign: Assigns the incoming MIDI events to a particular instrument. It is also possible to prevent any assignment by this opcode.
cpsmidi / ampmidi: Return the frequency / velocity of a pressed MIDI key.
release: Returns "1" if the last k-cycle of an instrument has begun.
xtratim: Adds additional time to the duration (p3) of an instrument.
turnoff / turnoff2: Turn an instrument off; either by the instrument itself (turnoff), or from another instrument and with several options (turnoff2).
-p3 / -p1: A negative duration (p3) turns an instrument on "indefinitely"; a negative instrument number (p1) turns this instrument off. See the examples at the beginning of this chapter.
-L stdin: Option in the "CsOptions" tag of a .csd file which lets you type in realtime score events.
timout: Allows you to perform time loops at i-time with reinitialization passes.
metro: Outputs momentary 1s with a definable (and variable) frequency. Can be used to perform a time loop at k-rate.
follow: Envelope follower.
instr 1
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
aSnd      rand      .2; white noise
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson     aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
          outs      aOut, aOut
endin
This is a filtered noise and its delay, which is fed back into the delay line at a certain ratio iFb. The filter centre frequency kFiltFq moves randomly between 100 and 1000 Hz. The volume of the filtered noise, kdB, moves randomly between -18 dB and -6 dB. The delay time moves between 0.1 and 0.8 seconds, and finally both parts are mixed in varying volume proportions.
Basic Example
If this signal processing unit is to be transformed into a User Defined Opcode, the main question concerns its boundaries: which portion of the code should be turned into a new function of its own? A first, "radical" (and bad) solution would be to transform the whole instrument into a UDO. EXAMPLE 03F02.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

  opcode FiltFb, 0, 0
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
aSnd      rand      .2; white noise
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson     aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
          outs      aOut, aOut
  endop

instr 1
          FiltFb
endin
</CsInstruments>
<CsScore>
i 1 0 60
</CsScore>
</CsoundSynthesizer>
Before we continue the discussion about the quality of this transformation, we should first have a look at the syntax. The general syntax for a User Defined Opcode is:
opcode name, outtypes, intypes
...
endop
Here, the name of the UDO is FiltFb. You are free to use any name, but it is strongly recommended to begin the name with a capital letter. By doing this, you avoid clashes with the usual opcodes, which always start with a lower-case letter. As we have no input arguments and no output arguments for this first version of FiltFb, both outtypes and intypes are set to zero. Similar to the instr ... endin block of a usual instrument definition, the opcode ... endop keywords begin and end the definition block of a UDO. In the instrument, the UDO is called like a usual opcode by its name, with the input arguments on the right side and the output arguments on the left side of the same line. As in this case FiltFb has no input and no output arguments, it is simply called by its name:
instr 1
          FiltFb
endin
Now - why is this UDO more or less pointless? It gains nothing compared to the usual instrument definition, and loses some of the benefits of the instrument definition. First, it is not advisable to include this line in the UDO:
outs aOut, aOut
This statement writes the audio signal aOut from inside the UDO to the output device. Imagine you want to change the output channels, or you want to add any signal modifier after the opcode. This would be impossible with this statement. So instead of including the outs opcode, we give the FiltFb UDO an audio output:
xout aOut
The xout statement of a UDO definition works like the "outlets" in PD or Max, giving the result of an opcode to the "outer world". Now let's go to the input side, and find out what should be done inside the FiltFb unit, and what should be made flexible and controllable from outside. First, the aSnd parameter should not be restricted to a white noise with amplitude 0.2, but should be an input (like a "signal inlet" in PD/Max). This is done with the line:
aSnd xin
Both the output type and the input type must be declared in the first line of the UDO definition, whether they are i-, k- or a-variables. So instead of "opcode FiltFb, 0, 0" the statement has now changed to "opcode FiltFb, a, a", because we have both input and output as a-variables. The UDO is now much more flexible and logical: it takes any audio input, it performs the filtered delay and feedback processing, and returns the result as another audio signal. In the next example, instrument 1 does exactly the same as before. Instrument 2 uses live input instead. EXAMPLE 03F03.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

  opcode FiltFb, a, a
aSnd      xin
aDel      init      0; initialize delay signal
iFb       =         .7; feedback multiplier
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
aFilt     reson     aSnd, kFiltFq, kFiltFq/5; applied as filter center frequency
aFilt     balance   aFilt, aSnd; bring aFilt to the volume of aSnd
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aDel      vdelayx   aFilt + iFb*aDel, aDelTm, 1, 128; variable delay
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the filtered and the delayed signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
          xout      aOut
  endop
instr 1; white noise input
aSnd      rand      .2
aOut      FiltFb    aSnd
          outs      aOut, aOut
endin

instr 2; live audio input
aSnd      inch      1; input from channel 1
aOut      FiltFb    aSnd
          outs      aOut, aOut
endin
</CsInstruments>
<CsScore>
i 1 0 60 ;change to i 2 for live audio input
</CsScore>
</CsoundSynthesizer>
  opcode FiltFb, aa, akkkia
;;DELAY AND FEEDBACK OF A BAND FILTERED INPUT SIGNAL
;input: aSnd = input sound; kFb = feedback multiplier (0-1); kFiltFq: center frequency for the reson band filter (Hz);
;       kQ = band width of reson filter as kFiltFq/kQ; iMaxDel = maximum delay time in seconds; aDelTm = delay time
;output: aFilt = filtered and balanced aSnd; aDel = delay and feedback of aFilt
aSnd, kFb, kFiltFq, kQ, iMaxDel, aDelTm xin
aDel      init      0
aFilt     reson     aSnd, kFiltFq, kFiltFq/kQ
aFilt     balance   aFilt, aSnd
aDel      vdelayx   aFilt + kFb*aDel, aDelTm, iMaxDel, 128; variable delay
          xout      aFilt, aDel
  endop

  opcode Opus123_FiltFb, a, a
;;the udo FiltFb here in my opus 123 :)
;input = aSnd
;output = filtered and delayed aSnd in different mixtures
aSnd      xin
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
iQ        =         5
iFb       =         .7; feedback multiplier
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aFilt, aDel FiltFb  aSnd, iFb, kFiltFq, iQ, 1, aDelTm
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the noise and the delay signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
          xout      aOut
  endop
instr 1; well known context as instrument
aSnd      rand      .2
kdB       randomi   -18, -6, .4; random movement between -18 and -6
aSnd      =         aSnd * ampdb(kdB); applied as dB to noise
kFiltFq   randomi   100, 1000, 1; random movement between 100 and 1000
iQ        =         5
iFb       =         .7; feedback multiplier
aDelTm    randomi   .1, .8, .2; random movement between .1 and .8 as delay time
aFilt, aDel FiltFb  aSnd, iFb, kFiltFq, iQ, 1, aDelTm
kdbFilt   randomi   -12, 0, 1; two random movements between -12 and 0 (dB) ...
kdbDel    randomi   -12, 0, 1; ... for the noise and the delay signal
aOut      =         aFilt*ampdb(kdbFilt) + aDel*ampdb(kdbDel); mix it
aOut      linen     aOut, .1, p3, 3
          outs      aOut, aOut
endin

instr 2; well known context as UDO which embeds another UDO
aSnd      rand      .2
aOut      Opus123_FiltFb aSnd
aOut      linen     aOut, .1, p3, 3
          outs      aOut, aOut
endin

instr 3; other context: two delay lines with buzz
kFreq     randomh   200, 400, .08; frequency for buzzer
aSnd      buzz      .2, kFreq, 100, giSine; buzzer as aSnd
kFiltFq   randomi   100, 1000, .2; center frequency
aDelTm1   randomi   .1, .8, .2; time for first delay line
aDelTm2   randomi   .1, .8, .2; time for second delay line
kFb1      randomi   .8, 1, .1; feedback for first delay line
kFb2      randomi   .8, 1, .1; feedback for second delay line
a0, aDel1 FiltFb    aSnd, kFb1, kFiltFq, 1, 1, aDelTm1; delay signal 1
a0, aDel2 FiltFb    aSnd, kFb2, kFiltFq, 1, 1, aDelTm2; delay signal 2
aDel1     linen     aDel1, .1, p3, 3
aDel2     linen     aDel2, .1, p3, 3
          outs      aDel1, aDel2
endin
</CsInstruments>
<CsScore>
i 1 0 30
i 2 31 30
i 3 62 120
</CsScore>
</CsoundSynthesizer>
The good thing about these different possibilities of writing a more specific UDO, or a more generalized one: you need not decide at the beginning of your work. Just start with whatever formulation you find useful in a certain situation. If you continue and see that you should have some more parameters accessible, it should be easy to rewrite the UDO. Just be careful not to confuse the different versions. Give names like Faulty1, Faulty2 etc. instead of overwriting Faulty. And be generous to yourself in commenting: What is this UDO supposed to do? What are the inputs (including the measurement units like Hertz or seconds)? What are the outputs exactly? How you do this is up to you and depends on your style and preference, but you should do it in some way if you do not want to get a headache later when you try to understand what on earth this UDO actually does ...
As can be seen from the examples above, User Defined Opcodes must be defined in the orchestra header (which is sometimes called "instrument 0"). Note that your opcode definitions must be the last part of all your orchestra header statements. Though you are probably right to call Csound intolerant in this case, the following code gives an error: EXAMPLE 03F05.csd
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

  opcode FiltFb, aa, akkkia
;;DELAY AND FEEDBACK OF A BAND FILTERED INPUT SIGNAL
;input: aSnd = input sound; kFb = feedback multiplier (0-1); kFiltFq: center frequency for the reson band filter (Hz);
;       kQ = band width of reson filter as kFiltFq/kQ; iMaxDel = maximum delay time in seconds; aDelTm = delay time
;output: aFilt = filtered and balanced aSnd; aDel = delay and feedback of aFilt
aSnd, kFb, kFiltFq, kQ, iMaxDel, aDelTm xin
aDel      init      0
aFilt     reson     aSnd, kFiltFq, kFiltFq/kQ
aFilt     balance   aFilt, aSnd
aDel      vdelayx   aFilt + kFb*aDel, aDelTm, iMaxDel, 128; variable delay
          xout      aFilt, aDel
  endop

giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

instr 1
...
Csound will complain about "misplaced opcodes", which means that the ftgen and the seed statement must be before the opcode definitions. You should not try to discuss with Csound in this case ...
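A minimal sketch of a working order (the tiny UDO "MyOsc" is invented here just to show the placement; any UDO definition goes in the same position):

<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
;first all other header statements ...
giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0
;... then the opcode definitions
  opcode MyOsc, a, kk
kamp, kfreq xin
aSig      poscil    kamp, kfreq, giSine
          xout      aSig
  endop
;... and then the instruments
instr 1
aSig      MyOsc     .2, 400
          outs      aSig, aSig
endin
</CsInstruments>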
3. If "MyOpcodes.txt" is in a different directory, you must call it by the full path name, for instance:
#include "/Users/me/Documents/Csound/UDO/MyOpcodes.txt"
As always, make sure that the "#include" statement is the last one in the orchestra header, and that the logical order is respected if one opcode depends on another one. If you work a lot with User Defined Opcodes and collect, step by step, a number of UDOs you need for your work, the #include feature lets you easily import them into your .csd file, like a personal library.
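For instance, assuming you have collected your UDOs in a file called "MyOpcodes.txt" (containing, say, the FiltFb definition shown above), the orchestra header could look like this sketch:

<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0
#include "MyOpcodes.txt" ;all UDO definitions, as the last part of the header

instr 1
aSnd      rand      .2
aOut      FiltFb    aSnd ;defined in MyOpcodes.txt
          outs      aOut, aOut
endin
</CsInstruments>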
The ksmps assignment in the orchestra header cannot be changed during the performance of a .csd file. But in a User Defined Opcode you have the unique possibility of changing this value by a local assignment. If you use a setksmps statement in your UDO, you can have a locally smaller value for the number of samples per control cycle in the UDO. In the following example, the print statement in the UDO prints ten times compared to one time in the instrument, because the ksmps in the UDO is 10 times smaller: EXAMPLE 03F06.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 44100 ;very high because of printing

  opcode Faster, 0, 0
          setksmps  4410 ;local ksmps is 1/10 of global ksmps
          printks   "UDO print!%n", 0
  endop

instr 1
          printks   "Instr print!%n", 0 ;print each control period (once per second)
          Faster    ;print 10 times per second because of local ksmps
endin
</CsInstruments>
<CsScore>
i 1 0 2
</CsScore>
</CsoundSynthesizer>
Default Arguments
For i-time arguments, you can use a simple feature to set default values:
"o" (instead of "i") defaults to 0
"p" (instead of "i") defaults to 1
"j" (instead of "i") defaults to -1
So you can omit these arguments - in this case the default values will be used. If you give an input argument instead, the default value will be overwritten: EXAMPLE 03F07.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz

  opcode Defaults, iii, opj
ia, ib, ic xin
          xout      ia, ib, ic
  endop

instr 1
ia, ib, ic Defaults
          print     ia, ib, ic
ia, ib, ic Defaults 10
          print     ia, ib, ic
ia, ib, ic Defaults 10, 100
          print     ia, ib, ic
ia, ib, ic Defaults 10, 100, 1000
          print     ia, ib, ic
endin
</CsInstruments>
<CsScore>
i 1 0 0
</CsScore>
</CsoundSynthesizer>
Recursion means that a function can call itself. This is a feature which can be fruitful in many situations. User Defined Opcodes can also be recursive. You can do many things with a recursive UDO which you cannot do in any other way, at least not in a similarly simple way. This is an example of generating eight partials by a recursive UDO. See the last example in the next section for a more musical application of a recursive UDO. EXAMPLE 03F08.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

  opcode Recursion, a, iip
;input: frequency, number of partials, first partial (default=1)
ifreq, inparts, istart xin
iamp      =         1/inparts/istart ;decreasing amplitudes for higher partials
if istart < inparts then ;if inparts have not yet been reached
acall     Recursion ifreq, inparts, istart+1 ;call another instance of this UDO
endif
aout      oscils    iamp, ifreq*istart, 0 ;execute this partial
aout      =         aout + acall ;add the audio signals
          xout      aout
  endop

instr 1
amix      Recursion 400, 8 ;8 partials with a base frequency of 400 Hz
aout      linen     amix, .01, p3, .1
          outs      aout, aout
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>
EXAMPLES
We will focus here on some examples which will hopefully show the wide range of User Defined Opcodes. Some of them are adaptations of examples from previous chapters about the Csound syntax. Many more examples can be found in the User-Defined Opcode Database, edited by Steven Yi.
This is a good job for a UDO. We want to have an opcode which works for both mono and stereo files as input. Two versions are possible: FilePlay1 always returns one audio signal (if the file is stereo it uses just the first channel), while FilePlay2 always returns two audio signals (if the file is mono it duplicates it to both channels). We can use the default arguments to make this opcode behave exactly like diskin2: EXAMPLE 03F09.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

  opcode FilePlay1, a, Skoooooo
;gives mono output regardless of whether your soundfile is mono or stereo
;(if stereo, just the first channel is used)
;see the diskin2 page of the csound manual for information about the input arguments
Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit xin
ichn      filenchnls Sfil
if ichn == 1 then
aout      diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
else
aout, a0  diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
endif
          xout      aout
  endop

  opcode FilePlay2, aa, Skoooooo
;gives stereo output regardless of whether your soundfile is mono or stereo
;see the diskin2 page of the csound manual for information about the input arguments
Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit xin
ichn      filenchnls Sfil
if ichn == 1 then
aL        diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
aR        =         aL
else
aL, aR    diskin2   Sfil, kspeed, iskip, iloop, iformat, iwsize, ibufsize, iskipinit
endif
          xout      aL, aR
  endop

instr 1
aMono     FilePlay1 "fox.wav", 1
          outs      aMono, aMono
endin

instr 2
aL, aR    FilePlay2 "fox.wav", 1
          outs      aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 4
i 2 4 4
</CsScore>
</CsoundSynthesizer>
  opcode TabDirtk, 0, ikk
;"dirties" a function table by applying random deviations at a k-rate trigger
;input: function table, trigger (1 = perform manipulation), deviation as percentage
ift, ktrig, kperc xin
if ktrig == 1 then ;just work if you get a trigger signal
kndx      =         0
loop:
krand     random    -kperc/100, kperc/100
kval      table     kndx, ift; read old value
knewval   =         kval + (kval * krand); calculate new value
          tablew    knewval, kndx, giSine; write new value
          loop_lt   kndx, 1, ftlen(ift), loop; loop construction
endif
  endop

instr 1
kTrig     metro     1, .00001 ;trigger signal once per second
          TabDirtk  giSine, kTrig, 10
aSig      poscil    .2, 400, giSine
          outs      aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
Of course you can also change the content of a function table at init-time. The next example permutes a series of numbers randomly each time it is called. For this purpose, the input function table iTabin is first copied as iCopy. This is necessary because we do not want to change iTabin in any way. Then a random index in iCopy is calculated, and the value there is written at the beginning of iTabout, which will contain the permuted result. At the end of this cycle, each value in iCopy which has a larger index than the one which has just been read is shifted one position to the left. So now iCopy has become one position smaller - not in table size but in the number of values to read. This procedure is continued until all values from iCopy are again in iTabout: EXAMPLE 03F11.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Joachim Heintz
giVals    ftgen     0, 0, -12, -2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
          seed      0; each time different seed

  opcode TabPermRand_i, i, i
;permuts randomly the values of the input table and creates an output table for the result
iTabin    xin
itablen   =         ftlen(iTabin)
iTabout   ftgen     0, 0, -itablen, 2, 0 ;create empty output table
iCopy     ftgen     0, 0, -itablen, 2, 0 ;create empty copy of input table
          tableicopy iCopy, iTabin ;write values of iTabin into iCopy
icplen    init      itablen ;number of values in iCopy
indxwt    init      0 ;index of writing in iTabout
loop:
indxrd    random    0, icplen - .0001; random read index in iCopy
indxrd    =         int(indxrd)
ival      tab_i     indxrd, iCopy; read the value
          tabw_i    ival, indxwt, iTabout; write it to iTabout
shift:    ;shift values in iCopy larger than indxrd one position to the left
if indxrd < icplen-1 then ;if indxrd has not been the last table value
ivalshft  tab_i     indxrd+1, iCopy ;take the value to the right ...
          tabw_i    ivalshft, indxrd, iCopy ;... and write it to indxrd position
indxrd    =         indxrd + 1 ;then go to the next position
          igoto     shift ;return to shift and see if there is anything left to do
endif
indxwt    =         indxwt + 1 ;increase the index of writing in iTabout
          loop_gt   icplen, 1, 0, loop ;loop as long as there is a value in iCopy
          ftfree    iCopy, 0 ;delete the copy table
          xout      iTabout ;return the number of iTabout
  endop

instr 1
iPerm     TabPermRand_i giVals ;perform permutation
;print the result
indx      =         0
Sres      =         "Result:"
print:
ival      tab_i     indx, iPerm
Sprint    sprintf   "%s %d", Sres, ival
Sres      =         Sprint
          loop_lt   indx, 1, 12, print
          puts      Sres, 1
endin

instr 2; the same but performed ten times
icnt      =         0
loop:
iPerm     TabPermRand_i giVals ;perform permutation
;print the result
indx      =         0
Sres      =         "Result:"
print:
ival      tab_i     indx, iPerm
Sprint    sprintf   "%s %d", Sres, ival
Sres      =         Sprint
          loop_lt   indx, 1, 12, print
          puts      Sres, 1
          loop_lt   icnt, 1, 10, loop
endin
</CsInstruments>
<CsScore>
i 1 0 0
i 2 0 0
</CsScore>
</CsoundSynthesizer>
  opcode TableDumpSimp, 0, ijo
;prints the content of a table in a simple way
;input: function table, float precision while printing (default = 3), parameters per row (default = 10, maximum = 32)
ifn, iprec, ippr xin
iprec     =         (iprec == -1 ? 3 : iprec)
ippr      =         (ippr == 0 ? 10 : ippr)
iend      =         ftlen(ifn)
indx      =         0
Sformat   sprintf   "%%.%df\t", iprec
Sdump     =         ""
loop:
ival      tab_i     indx, ifn
Snew      sprintf   Sformat, ival
Sdump     strcat    Sdump, Snew
indx      =         indx + 1
imod      =         indx % ippr
if imod == 0 then
          puts      Sdump, 1
Sdump     =         ""
endif
if indx < iend igoto loop
          puts      Sdump, 1
  endop

instr 1
          TableDumpSimp p4, p5, p6
          prints    "%n"
endin
</CsInstruments>
<CsScore>
;i1 st dur ftab prec ppr
i1  0  0   1    -1
i1  .  .   1    0    10
i1  .  .   2    3    32
i1  .  .   2    6
</CsScore>
</CsoundSynthesizer>
In the last example of the chapter about Triggering Instrument Events, a number of partials were synthesized, each with a random frequency deviation of up to 10% compared to the harmonic spectrum and with its own duration. This can also be written as a recursive UDO. Each UDO generates one partial and calls the next instance, until the last partial is generated. Now the code can be reduced to two instruments: instrument 1 performs the time loop, calculates the basic values for one note, and triggers the event. Then instrument 11 is called, which feeds the UDO with the values and sends the audio signals to the output. EXAMPLE 03F13.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
          seed      0

  opcode PlayPartials, aa, iiipo
;plays inumparts partials with frequency deviation and own envelopes and durations for each partial
ibasfreq, inumparts, ipan, ipartnum, ixtratim xin
; ibasfreq: base frequency of sound mixture
; inumparts: total number of partials
; ipan: panning
; ipartnum: which partial is this (1 - N, default=1)
; ixtratim: extra time in addition to p3 needed for this partial (default=0)
ifreqgen  =         ibasfreq * ipartnum; general frequency of this partial
ifreqdev  random    -10, 10; frequency deviation between -10% and +10%
ifreq     =         ifreqgen + (ifreqdev*ifreqgen)/100; real frequency
ixtratim1 random    0, p3; calculate additional time for this partial
imaxamp   =         1/inumparts; maximum amplitude
idbdev    random    -6, 0; random deviation in dB for this partial
iamp      =         imaxamp * ampdb(idbdev-ipartnum); higher partials are softer
ipandev   random    -.1, .1; panning deviation
ipan      =         ipan + ipandev
aEnv      transeg   0, .005, 0, iamp, p3+ixtratim1-.005, -10, 0; envelope
aSine     poscil    aEnv, ifreq, giSine
aL1, aR1  pan2      aSine, ipan
if ixtratim1 > ixtratim then
ixtratim  =         ixtratim1 ;set ixtratim to ixtratim1 if the latter is larger
endif
if ipartnum < inumparts then ;if this is not the last partial
aL2, aR2  PlayPartials ibasfreq, inumparts, ipan, ipartnum+1, ixtratim ;call the next one
else ;if this is the last partial
p3        =         p3 + ixtratim; reset p3 to the longest ixtratim value
endif
          xout      aL1+aL2, aR1+aR2
  endop

instr 1; time loop with metro
kfreq     init      1; give a start value for the trigger frequency
kTrig     metro     kfreq
if kTrig == 1 then ;if trigger impulse:
kdur      random    1, 5; random duration for instr 11
knumparts random    8, 14
knumparts =         int(knumparts); 8-13 partials
kbasoct   random    5, 10; base pitch in octave values
kbasfreq  =         cpsoct(kbasoct) ;base frequency
kpan      random    .2, .8; random panning between left (0) and right (1)
          event     "i", 11, 0, kdur, kbasfreq, knumparts, kpan; call instr 11
kfreq     random    .25, 1; set new value for trigger frequency
endif
endin

instr 11; plays one mixture with 8-13 partials
aL, aR    PlayPartials p4, p5, p6
          outs      aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 300
</CsScore>
</CsoundSynthesizer>
Related Opcodes
opcode: The keyword used to define a User Defined Opcode. #include: Useful for including any loadable Csound code, in this case definitions of User Defined Opcodes. setksmps: Lets you set a smaller ksmps value locally inside a User Defined Opcode.
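As a minimal sketch of the last point (the opcode name and its parameters are made up here for illustration, not taken from the examples above), a UDO can run at a higher internal control rate than the surrounding orchestra by setting ksmps to 1 locally:

opcode EnvGliss, a, iii
 setksmps 1                            ;run this UDO with ksmps=1 (k-rate equals audio rate)
iStart, iEnd, iDur xin
kLine     line      iStart, iDur, iEnd ;this k-rate line is now calculated once per sample
aLine     upsamp    kLine              ;convert to audio rate for the output
          xout      aLine
endop

Inside the UDO the control period is then a single sample, regardless of the global ksmps of the orchestra; as stated above, setksmps can only make the local ksmps smaller, not larger.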
SOUND SYNTHESIS
19. ADDITIVE SYNTHESIS
20. SUBTRACTIVE SYNTHESIS
21. AMPLITUDE AND RING MODULATION
22. FREQUENCY MODULATION
23. WAVESHAPING
24. GRANULAR SYNTHESIS
25. PHYSICAL MODELLING
Each partial has its own movement and duration; we may or may not be able to reproduce this fully in additive synthesis. Let us begin with some simple sounds and consider ways of programming them in Csound; later we will look at some more complex sounds and more advanced ways of programming them.
instr 1 ;harmonic additive synthesis ;receive general pitch and volume from the score ibasefrq = cpspch(p4) ;convert pitch values to frequency ibaseamp = ampdbfs(p5) ;convert dB to amplitude ;create 8 harmonic partials aOsc1 poscil ibaseamp, ibasefrq, giSine aOsc2 poscil ibaseamp/2, ibasefrq*2, giSine aOsc3 poscil ibaseamp/3, ibasefrq*3, giSine aOsc4 poscil ibaseamp/4, ibasefrq*4, giSine aOsc5 poscil ibaseamp/5, ibasefrq*5, giSine aOsc6 poscil ibaseamp/6, ibasefrq*6, giSine aOsc7 poscil ibaseamp/7, ibasefrq*7, giSine aOsc8 poscil ibaseamp/8, ibasefrq*8, giSine ;apply simple envelope kenv linen 1, p3/4, p3, p3/4 ;add partials and write to output aOut = aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8 outs aOut*kenv, aOut*kenv endin instr 2 ;inharmonic additive synthesis ibasefrq = cpspch(p4)
ibaseamp  =         ampdbfs(p5)
 ;create 8 inharmonic partials
aOsc1     poscil    ibaseamp, ibasefrq, giSine
aOsc2     poscil    ibaseamp/2, ibasefrq*1.02, giSine
aOsc3     poscil    ibaseamp/3, ibasefrq*1.1, giSine
aOsc4     poscil    ibaseamp/4, ibasefrq*1.23, giSine
aOsc5     poscil    ibaseamp/5, ibasefrq*1.26, giSine
aOsc6     poscil    ibaseamp/6, ibasefrq*1.31, giSine
aOsc7     poscil    ibaseamp/7, ibasefrq*1.39, giSine
aOsc8     poscil    ibaseamp/8, ibasefrq*1.41, giSine
kenv      linen     1, p3/4, p3, p3/4
aOut      =         aOsc1 + aOsc2 + aOsc3 + aOsc4 + aOsc5 + aOsc6 + aOsc7 + aOsc8
          outs      aOut*kenv, aOut*kenv
endin
</CsInstruments>
<CsScore>
;         pch   amp
i 1 0 5   8.00  -10
i 1 3 5   9.00  -14
i 1 5 8   9.02  -12
i 1 6 9   7.01  -12
i 1 7 10  6.00  -10
s
i 2 0 5   8.00  -10
i 2 3 5   9.00  -14
i 2 5 8   9.02  -12
i 2 6 9   7.01  -12
i 2 7 10  6.00  -10
</CsScore>
</CsoundSynthesizer>
with the parameters iampfactor (the relative amplitude of a partial) and ifreqfactor (the frequency multiplier) transferred to the score. The next version simplifies the instrument code and defines the variable values as score parameters: EXAMPLE 04A02.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1

instr 1
iBaseFreq =         cpspch(p4)
iFreqMult =         p5 ;frequency multiplier
iBaseAmp  =         ampdbfs(p6)
iAmpMult  =         p7 ;amplitude multiplier
iFreq     =         iBaseFreq * iFreqMult
iAmp      =         iBaseAmp * iAmpMult
kEnv      linen     iAmp, p3/4, p3, p3/4
aOsc      poscil    kEnv, iFreq, giSine
          outs      aOsc, aOsc
endin
</CsInstruments>
<CsScore>
;the score calls instr 1 once per partial, with these ampmult (p7) values:
;1  [1/2]  [1/3]  [1/4]  [1/5]  [1/6]  [1/7]
;1  [1/3]  [1/6]  [1/9]  [1/12] [1/15]
</CsScore>
</CsoundSynthesizer>
You might say: okay, where is the simplification? There are even more lines than before! This is true, and it is certainly just a step on the way to better code. The main benefit now is flexibility: our code can realize any number of partials, with any amplitude, frequency and duration ratios. Using the Csound score abbreviations (for instance a dot for repeating the previous value in the same p-field), you can do a lot of copy-and-paste and focus on what changes from line to line. Note also that you are now calling multiple instances of one instrument at the same time to perform the additive synthesis: each instance of the instrument contributes just one partial. Calling multiple simultaneous instances of one instrument is a typical procedure for situations like this and for writing clean and efficient Csound code. We will discuss later how this can be done in a more elegant way than in the last example.
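As a small illustration (the p-field values here are made up, not taken from the original score), a score for this instrument might call four instances at once, with p4 = pitch, p5 = frequency multiplier, p6 = amplitude in dB and p7 = amplitude multiplier; dots repeat the previous value and bracketed expressions are evaluated by the score parser:

;         pch   freqmult  amp  ampmult
i 1 0 7   8.00  1         -10  1
i . . 6   .     2         .    [1/2]
i . . 5   .     3         .    [1/3]
i . . 4   .     4         .    [1/4]

Each line is one instance of instrument 1, i.e. one partial of the same note.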
You see four sine generators, each with fixed frequency and amplitude relations, and mixed together. At the bottom of the illustration you see the composite waveform which repeats itself at each period. So - why not just calculate this composite waveform first, and then read it with just one oscillator? This is what some Csound GEN routines do. They compose the resulting shape of the periodic wave, and store the values in a function table. GEN10 can be used for creating a waveform consisting of harmonically related partials. After the common GEN routine p-fields
<table number>, <creation time>, <size in points>, <GEN number>
you just have to determine the relative strength of the harmonics. GEN09 is more complex and also allows you to control the frequency multiplier and the phase (0-360) of each partial. We are able to reproduce the first example in a shorter (and computationally faster) form: EXAMPLE 04A03.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Andrés Cabrera and Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine    ftgen     0, 0, 2^10, 10, 1
giHarm    ftgen     1, 0, 2^12, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8
giNois    ftgen     2, 0, 2^12, 9, 100,1,0, 102,1/2,0, 110,1/3,0, 123,1/4,0, 126,1/5,0, 131,1/6,0, 139,1/7,0, 141,1/8,0

instr 1
iBasFreq  =         cpspch(p4)
iTabFreq  =         p7 ;base frequency of the table
iBasFreq  =         iBasFreq / iTabFreq
iBaseAmp  =         ampdb(p5)
iFtNum    =         p6
aOsc      poscil    iBaseAmp, iBasFreq, iFtNum
aEnv      linen     aOsc, p3/4, p3, p3/4
          outs      aEnv, aEnv
endin
</CsInstruments>
<CsScore>
;         pch   amp  table  tabfreq
i 1 0 5   8.00  -10  1      1
i . 3 5   9.00  -14  .      .
i . 5 8   9.02  -12  .      .
i . 6 9   7.01  -12  .      .
i . 7 10  6.00  -10  .      .
s
i 1 0 5   8.00  -10  2      100
i . 3 5   9.00  -14  .      .
i . 5 8   9.02  -12  .      .
i . 6 9   7.01  -12  .      .
i . 7 10  6.00  -10  .      .
</CsScore>
</CsoundSynthesizer>
As you can see, for non-harmonically related partials the construction of the table must be done with special care. If the frequency multipliers start with 1 and 1.02, as in our first example, the resulting period is actually very long: for a base frequency of 100 Hz, you will have frequencies of 100 Hz and 102 Hz overlapping each other, so you need 100 cycles of the 1.00 multiplier and 102 cycles of the 1.02 multiplier before both start again together from zero. In other words, the table has to contain 100 and 102 periods, instead of 1 and 1.02. The table values are then related not to 1 - as usual - but to 100. That is the reason we had to introduce the new parameter iTabFreq for this purpose.
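As a reduced sketch of this idea (the table and instrument numbers are chosen only for illustration), a GEN09 table holding 100 and 102 cycles is read with the requested pitch divided by the table's reference frequency of 100:

giDetuned ftgen     0, 0, 2^12, 9, 100,1,0, 102,1,0  ;100 and 102 cycles in one table period
instr 1
iTabFreq  =         100                    ;the table represents 100 cycles of the fundamental
iBasFreq  =         cpspch(p4) / iTabFreq  ;scale the reading frequency accordingly
aOsc      poscil    0.3, iBasFreq, giDetuned
          outs      aOsc, aOsc
endin

A requested pitch of 8.00 (about 261.6 Hz) thus reads the table at roughly 2.6 Hz, so that the 100th and 102nd cycles of the table sound at about 261.6 Hz and 266.8 Hz.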
This method of composing waveforms can also be used to generate the four classic waveform shapes used in synthesizers. An impulse wave can be created by adding a number of harmonics of equal strength. A sawtooth has the amplitude multipliers 1, 1/2, 1/3, ... for the harmonics. A square has the same multipliers, but only for the odd harmonics. A triangle can be calculated as 1 divided by the square of the odd partials, with alternating positive and negative values. The next example creates function tables with just ten partials for each standard shape. EXAMPLE 04A04.csd
<CsoundSynthesizer>
<CsOptions>
-o dac
</CsOptions>
<CsInstruments>
;example by Joachim Heintz
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giImp     ftgen     1, 0, 4096, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
giSaw     ftgen     2, 0, 4096, 10, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9, 1/10
giSqu     ftgen     3, 0, 4096, 10, 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, 1/9, 0
giTri     ftgen     4, 0, 4096, 10, 1, 0, -1/9, 0, 1/25, 0, -1/49, 0, 1/81, 0

instr 1
asig      poscil    .2, 457, p4
          outs      asig, asig
endin
</CsInstruments>
<CsScore>
i 1 0  3 1
i 1 4  3 2
i 1 8  3 3
i 1 12 3 4
</CsScore>
</CsoundSynthesizer>
instr 1 ;master instrument
inumparts =         p4 ;number of partials
ibasfreq  =         200 ;base frequency
ipart     =         1 ;count variable for loop
;loop for inumparts over the ipart variable
;and trigger inumparts instances of the subinstrument
loop:
ifreq     =         ibasfreq * ipart
iamp      =         1/ipart/inumparts
          event_i   "i", 10, 0, p3, ifreq, iamp
          loop_le   ipart, 1, inumparts, loop
endin
instr 10 ;subinstrument for playing one partial ifreq = p4 ;frequency of this partial iamp = p5 ;amplitude of this partial aenv transeg 0, .01, 0, iamp, p3-0.1, -10, 0 apart poscil aenv, ifreq, giSine outs apart, apart endin </CsInstruments> <CsScore> ; number of partials i 1 0 3 10 i 1 3 3 20 i 1 6 3 2 </CsScore> </CsoundSynthesizer>
This instrument can easily be transformed to be played via a midi keyboard. The next example maps the number of synthesized partials to the midi velocity: if you play softly, the sound will have fewer partials than if a key is struck with force. EXAMPLE 04A06.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> ;Example by Joachim Heintz sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 giSine ftgen massign 0, 0, 2^10, 10, 1 0, 1 ;all midi channels to instr 1
instr 1 ;master instrument ibasfreq cpsmidi ;base frequency iampmid ampmidi 20 ;receive midi-velocity and scale 0-20 inparts = int(iampmid)+1 ;exclude zero ipart = 1 ;count variable for loop ;loop for inumparts over the ipart variable ;and trigger inumpartss instanes of the subinstrument loop: ifreq = ibasfreq * ipart iamp = 1/ipart/inparts event_i "i", 10, 0, 1, ifreq, iamp loop_le ipart, 1, inparts, loop endin instr 10 ;subinstrument for playing one partial ifreq = p4 ;frequency of this partial iamp = p5 ;amplitude of this partial aenv transeg 0, .01, 0, iamp, p3-.01, -3, 0 apart poscil aenv, ifreq, giSine outs apart/3, apart/3 endin </CsInstruments> <CsScore> f 0 3600 </CsScore> </CsoundSynthesizer>
Although this instrument is rather primitive, it is useful to be able to control the timbre in this way using key velocity. Let us continue to explore other methods of creating parameter variations in additive synthesis.
In natural sounds there is movement and change all the time. Even the best player or singer will not be able to play a note in exactly the same way twice, and within a tone the partials fluctuate constantly: slight variations of the amplitudes, uneven durations, slight frequency movements. In an audio programming language like Csound, we can achieve these movements with random deviations. What matters is not so much whether we use randomness, but how we use it: the boundaries of the random deviations must be adjusted as carefully as any other parameter in electronic composition. If sounds using random deviations begin to sound like mistakes, it probably has less to do with using random functions as such than with poorly chosen boundaries. Let us start with some random deviations in our subinstrument. These parameters can be affected: The frequency of each partial can be slightly detuned; the range of this maximum detuning can be set in cents (100 cents = 1 semitone). The amplitude of each partial can be altered relative to its standard value; the alteration can be measured in decibels (dB). The duration of each partial can be shorter or longer than the standard value; let us define this deviation as a percentage, so that for an expected duration of five seconds, a maximum deviation of 100% means getting a value between half the duration (2.5 sec) and double the duration (10 sec). The following example shows the effect of these variations. As a base - and as a reference to its author - we take the "bell-like sound" which Jean-Claude Risset created in his Sound Catalogue.1 EXAMPLE 04A07.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> ;Example by Joachim Heintz sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 ;frequency and amplitude multipliers for 11 partials of Risset's bell giFqs ftgen 0, 0, -11, -2, .56,.563,.92,.923,1.19,1.7,2,2.74,3,3.74,4.07 giAmps ftgen 0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1, 4/3 giSine ftgen 0, 0, 2^10, 10, 1 seed 0 instr 1 ;master instrument ibasfreq = 400 ifqdev = p4 ;maximum freq deviation in cents iampdev = p5 ;maximum amp deviation in dB idurdev = p6 ;maximum duration deviation in % indx = 0 ;count variable for loop loop: ifqmult tab_i indx, giFqs ;get frequency multiplier from table ifreq = ibasfreq * ifqmult iampmult tab_i indx, giAmps ;get amp multiplier iamp = iampmult / 20 ;scale event_i "i", 10, 0, p3, ifreq, iamp, ifqdev, iampdev, idurdev loop_lt indx, 1, 11, loop endin instr 10 ;subinstrument for playing one partial ;receive the parameters from the master instrument ifreqnorm = p4 ;standard frequency of this partial iampnorm = p5 ;standard amplitude of this partial ifqdev = p6 ;maximum freq deviation in cents iampdev = p7 ;maximum amp deviation in dB idurdev = p8 ;maximum duration deviation in % ;calculate frequency icent random -ifqdev, ifqdev ;cent deviation ifreq = ifreqnorm * cent(icent) ;calculate amplitude idb random -iampdev, iampdev ;dB deviation iamp = iampnorm * ampdb(idb) ;calculate duration idurperc random -idurdev, idurdev ;duration deviation (%) iptdur = p3 * 2^(idurperc/100) p3 = iptdur ;set p3 to the calculated value ;play partial aenv transeg 0, .01, 0, iamp, p3-.01, -10, 0
apart     poscil    aenv, ifreq, giSine
          outs      apart, apart
endin
</CsInstruments>
<CsScore>
;         frequency  amplitude  duration
;         deviation  deviation  deviation
;         in cent    in dB      in %
;;unchanged sound (twice)
r 2
i 1 0 5   0          0          0
s
;;slight variations in frequency
r 4
i 1 0 5   25         0          0
;;slight variations in amplitude
r 4
i 1 0 5   0          6          0
;;slight variations in duration
r 4
i 1 0 5   0          0          30
;;slight variations combined
r 6
i 1 0 5   25         6          30
;;heavy variations
r 6
i 1 0 5   50         9          100
</CsScore>
</CsoundSynthesizer>
For a midi-triggered descendant of the instrument we can - as one of many possible choices - scale the amount of possible random variation with the key velocity: a key pressed softly plays the bell-like sound as described by Risset, but as a key is struck with increasing force, the sound produced will be increasingly altered. EXAMPLE 04A08.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> ;Example by Joachim Heintz sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 ;frequency and amplitude multipliers for 11 partials of Risset's bell giFqs ftgen 0, 0, -11, -2, .56,.563,.92,.923,1.19,1.7,2,2.74,3,3.74,4.07 giAmps ftgen 0, 0, -11, -2, 1, 2/3, 1, 1.8, 8/3, 1.46, 4/3, 4/3, 1, 4/3 giSine ftgen 0, 0, 2^10, 10, 1 seed 0 massign 0, 1 ;all midi channels to instr 1 instr 1 ;master instrument ;;scale desired deviations for maximum velocity ;frequency (cent) imxfqdv = 100 ;amplitude (dB) imxampdv = 12 ;duration (%) imxdurdv = 100 ;;get midi values ibasfreq cpsmidi ;base frequency iampmid ampmidi 1 ;receive midi-velocity and scale 0-1 ;;calculate maximum deviations depending on midi-velocity ifqdev = imxfqdv * iampmid iampdev = imxampdv * iampmid idurdev = imxdurdv * iampmid ;;trigger subinstruments indx = 0 ;count variable for loop loop: ifqmult tab_i indx, giFqs ;get frequency multiplier from table ifreq = ibasfreq * ifqmult iampmult tab_i indx, giAmps ;get amp multiplier iamp = iampmult / 20 ;scale event_i "i", 10, 0, 3, ifreq, iamp, ifqdev, iampdev, idurdev loop_lt indx, 1, 11, loop endin instr 10 ;subinstrument for playing one partial ;receive the parameters from the master instrument ifreqnorm = p4 ;standard frequency of this partial
iampnorm = p5 ;standard amplitude of this partial ifqdev = p6 ;maximum freq deviation in cents iampdev = p7 ;maximum amp deviation in dB idurdev = p8 ;maximum duration deviation in % ;calculate frequency icent random -ifqdev, ifqdev ;cent deviation ifreq = ifreqnorm * cent(icent) ;calculate amplitude idb random -iampdev, iampdev ;dB deviation iamp = iampnorm * ampdb(idb) ;calculate duration idurperc random -idurdev, idurdev ;duration deviation (%) iptdur = p3 * 2^(idurperc/100) p3 = iptdur ;set p3 to the calculated value ;play partial aenv transeg 0, .01, 0, iamp, p3-.01, -10, 0 apart poscil aenv, ifreq, giSine outs apart, apart endin </CsInstruments> <CsScore> f 0 3600 </CsScore> </CsoundSynthesizer>
Whether you can play examples like this in real time will depend on the power of your computer. Have a look at chapter 2D (Live Audio) for tips on getting the best possible performance from your Csound orchestra. Additive synthesis can still be an exciting way of producing sounds. Today's computational power and programming structures open the way to new discoveries and ideas; the later examples were intended to show some of this potential of additive synthesis in Csound. 1. Jean-Claude Risset, Introductory Catalogue of Computer Synthesized Sounds (1969), cited after Dodge/Jerse, Computer Music, New York / London 1985, p. 94
EXAMPLE 04B01.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b512 -Ma </CsOptions> <CsInstruments> sr = 44100 ksmps = 4 nchnls = 2 0dbfs = 1 initc7 1,1,0.8 prealloc 1, 10 instr 1 iNum notnum iCF ctrl7 ;read in midi note number 1,1,0.1,14 ;read in midi controller 1 ;set initial controller position
; set up default p-field values for midi activated notes mididefault iNum, p4 ;pitch (note number) mididefault 0.3, p5 ;amplitude 1 mididefault 2, p6 ;type 1 mididefault 0.5, p7 ;pulse width 1 mididefault 0, p8 ;octave disp. 1 mididefault 0, p9 ;tuning disp. 1 mididefault 0.3, p10 ;amplitude 2 mididefault 1, p11 ;type 2 mididefault 0.5, p12 ;pulse width 2 mididefault -1, p13 ;octave displacement 2 mididefault 20, p14 ;tuning disp. 2 mididefault iCF, p15 ;filter cutoff freq mididefault 0.01, p16 ;filter env. attack time mididefault 1, p17 ;filter env. decay time mididefault 0.01, p18 ;filter env. sustain level mididefault 0.1, p19 ;filter release time mididefault 0.3, p20 ;filter resonance mididefault 0.01, p21 ;amp. env. attack mididefault 0.1, p22 ;amp. env. decay. mididefault 1, p23 ;amp. env. sustain mididefault 0.01, p24 ;amp. env. release ; asign p-fields to variables iCPS = cpsmidinn(p4) ;convert from note number to cps kAmp1 = p5 iType1 = p6 kPW1 = p7 kOct1 = octave(p8) ;convert from octave displacement to multiplier kTune1 = cent(p9) ;convert from cents displacement to multiplier kAmp2 = p10 iType2 = p11 kPW2 = p12 kOct2 = octave(p13) kTune2 = cent(p14) iCF = p15 iFAtt = p16 iFDec = p17 iFSus = p18 iFRel = p19 kRes = p20 iAAtt = p21 iADec = p22 iASus = p23 iARel = p24 ;oscillator 1 if iType1==1||iType1==2 then ;if type is sawtooth or square... iMode1 = (iType1=1?0:2) ;...derive vco2 'mode' from waveform type aSig1 vco2 kAmp1,iCPS*kOct1*kTune1,iMode1,kPW1;VCO audio oscillator else ;otherwise... aSig1 noise kAmp1, 0.5 ;...generate white noise endif ;oscillator 2 - identical to oscillator 1 if iType2==1||iType2==2 then iMode2 = (iType2=1?0:2) aSig2 vco2 kAmp2,iCPS*kOct2*kTune2,iMode2,kPW2 else aSig2 noise kAmp2,0.5 endif ;mix oscillators aMix sum ;lowpass filter kFiltEnv expsegr aSig1,aSig2 0.0001,iFAtt,iCPS*iCF,iFDec,iCPS*iCF*iFSus,iFRel,0.0001
aOut      moogladder aMix, kFiltEnv, kRes
<CsScore> ;p4 = oscillator frequency ;oscillator 1 ;p5 = amplitude ;p6 = type (1=sawtooth,2=square-PWM,3=noise) ;p7 = PWM (square wave only) ;p8 = octave displacement ;p9 = tuning displacement (cents) ;oscillator 2 ;p10 = amplitude ;p11 = type (1=sawtooth,2=square-PWM,3=noise) ;p12 = pwm (square wave only) ;p13 = octave displacement ;p14 = tuning displacement (cents) ;global filter envelope ;p15 = cutoff ;p16 = attack time ;p17 = decay time ;p18 = sustain level (fraction of cutoff) ;p19 = release time ;p20 = resonance ;global amplitude envelope ;p21 = attack time ;p22 = decay time ;p23 = sustain level ;p24 = release time ; p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12 p24 i 1 0 1 50 0 2 .5 0 -5 0 2 0.5 .05 i 1 + 1 50 .2 2 .5 0 -5 .2 2 0.5 .05 i 1 + 1 50 .2 2 .5 0 -8 .2 2 0.5 .05 i 1 + 1 50 .2 2 .5 0 -8 .2 2 0.5 .05 i 1 + 3 50 .2 1 .5 0 -10 .2 1 0.5 .05 i 1 + 10 50 1 2 .01 -2 0 .2 3 0.5 .05 f 0 3600 e </CsScore> </CsoundSynthesizer
p17 2 1 1 1 3 5
p18 .01 .1 .1 .1
p22 p23
.005 .01 1 .005 .01 1 .005 .01 1 .005 .01 1 .005 .01 1 .005 .01 1
.001 .1
.001 1.5 .1
A lowpass and a highpass filter are inserted in series before the parallel bandpass filters to shape the frequency spectrum of the source sound. Csound's Butterworth filters butlp and buthp are chosen for this task on account of their steep cutoff slopes and lack of ripple at the cutoff point. The outputs of the reson filters are sent alternately to the left and right outputs in order to create a broad stereo effect. This example makes extensive use of the 'rspline' opcode, a generator of random spline functions, to slowly undulate the many input parameters. The orchestra is self-generative in that instrument 1 repeatedly triggers note events in instrument 2, and the extensive use of random functions means that the results will continually evolve as the orchestra is allowed to perform. A flow diagram for this instrument is shown below:
EXAMPLE 04B02.csd
<CsoundSynthesizer>
<CsOptions>
-odevaudio -b512 -dm0
</CsOptions>
<CsInstruments>
;Example written by Iain McCurdy
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1

instr 1 ; triggers notes in instrument 2 with randomised p-fields
krate     randomi   0.2,0.4,0.1            ;rate of note generation
ktrig     metro     krate                  ;triggers used by schedkwhen
koct      random    5,12                   ;fundemental pitch of synth note
kdur      random    15,30                  ;duration of note
          schedkwhen ktrig,0,0,2,0,kdur,cpsoct(koct) ;trigger a note in instrument 2
endin

instr 2 ; subtractive synthesis instrument
aNoise    pinkish   1                      ;a noise source sound: pink noise
kGap      rspline   0.3,0.05,0.2,2         ;time gap between impulses
aPulse    mpulse    15, kGap               ;a train of impulses
kCFade    rspline   0,1,0.1,1              ;crossfade point between noise and impulses
aInput    ntrpol    aPulse,aNoise,kCFade   ;implement crossfade
; cutoff frequencies for low and highpass filters
kLPF_CF   rspline   13,8,0.1,0.4
kHPF_CF   rspline   5,10,0.1,0.4
; filter input sound with low and highpass filters in series -
; - done twice per filter in order to sharpen cutoff slopes
aInput    butlp     aInput, cpsoct(kLPF_CF)
aInput    butlp     aInput, cpsoct(kLPF_CF)
aInput    buthp     aInput, cpsoct(kHPF_CF)
aInput    buthp     aInput, cpsoct(kHPF_CF)
kcf rspline p4*1.05,p4*0.95,0.01,0.1 ; fundemental ; bandwidth for each filter is created individually as a random spline function kbw1 rspline 0.00001,10,0.2,1 kbw2 rspline 0.00001,10,0.2,1 kbw3 rspline 0.00001,10,0.2,1 kbw4 rspline 0.00001,10,0.2,1 kbw5 rspline 0.00001,10,0.2,1 kbw6 rspline 0.00001,10,0.2,1 kbw7 rspline 0.00001,10,0.2,1 kbw8 rspline 0.00001,10,0.2,1
kbw9 kbw10 kbw11 kbw12 kbw13 kbw14 kbw15 kbw16 kbw17 kbw18 kbw19 kbw20 kbw21 kbw22 imode a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15 a16 a17 a18 a19 a20 a21 a22
rspline rspline rspline rspline rspline rspline rspline rspline rspline rspline rspline rspline rspline rspline = reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson reson
0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0.00001,10,0.2,1 0 ; amplitude balancing method used by the reson filters aInput, kcf*1, kbw1, imode aInput, kcf*1.0019054878049, kbw2, imode aInput, kcf*1.7936737804878, kbw3, imode aInput, kcf*1.8009908536585, kbw4, imode aInput, kcf*2.5201981707317, kbw5, imode aInput, kcf*2.5224085365854, kbw6, imode aInput, kcf*2.9907012195122, kbw7, imode aInput, kcf*2.9940548780488, kbw8, imode aInput, kcf*3.7855182926829, kbw9, imode aInput, kcf*3.8061737804878, kbw10,imode aInput, kcf*4.5689024390244, kbw11,imode aInput, kcf*4.5754573170732, kbw12,imode aInput, kcf*5.0296493902439, kbw13,imode aInput, kcf*5.0455030487805, kbw14,imode aInput, kcf*6.0759908536585, kbw15,imode aInput, kcf*5.9094512195122, kbw16,imode aInput, kcf*6.4124237804878, kbw17,imode aInput, kcf*6.4430640243902, kbw18,imode aInput, kcf*7.0826219512195, kbw19,imode aInput, kcf*7.0923780487805, kbw20,imode aInput, kcf*7.3188262195122, kbw21,imode aInput, kcf*7.5551829268293, kbw22,imode each 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, filter output 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
; amplitude control for kAmp1 rspline 0, 1, kAmp2 rspline 0, 1, kAmp3 rspline 0, 1, kAmp4 rspline 0, 1, kAmp5 rspline 0, 1, kAmp6 rspline 0, 1, kAmp7 rspline 0, 1, kAmp8 rspline 0, 1, kAmp9 rspline 0, 1, kAmp10 rspline 0, 1, kAmp11 rspline 0, 1, kAmp12 rspline 0, 1, kAmp13 rspline 0, 1, kAmp14 rspline 0, 1, kAmp15 rspline 0, 1, kAmp16 rspline 0, 1, kAmp17 rspline 0, 1, kAmp18 rspline 0, 1, kAmp19 rspline 0, 1, kAmp20 rspline 0, 1, kAmp21 rspline 0, 1, kAmp22 rspline 0, 1,
; left and right channel mixes are created using alternate filter outputs. ; This shall create a stereo effect. aMixL sum a1*kAmp1,a3*kAmp3,a5*kAmp5,a7*kAmp7,a9*kAmp9,a11*kAmp11,\ a13*kAmp13,a15*kAmp15,a17*kAmp17,a19*kAmp19,a21*kAmp21, aMixR sum a2*kAmp2,a4*kAmp4,a6*kAmp6,a8*kAmp8,a10*kAmp10,a12*kAmp12,\ a14*kAmp14,a16*kAmp16,a18*kAmp18,a20*kAmp20,a22*kAmp22 kEnv linseg 0, p3*0.5, 1,p3*0.5,0,1,0 ; global amplitude envelope outs (aMixL*kEnv*0.00002), (aMixR*kEnv*0.00002) ; audio sent to outputs endin </CsInstruments> <CsScore> i 1 0 3600 e </CsScore> ; instrument 1 (note generator) plays for 1 hour
</CsoundSynthesizer>
The final example in this section uses precisely tuned bandpass filters to simulate the sound of the human voice expressing vowel sounds. Spectral resonances in this context are often referred to as 'formants'. Five formants are used to simulate the effect of the human mouth and head as a resonating (and therefore filtering) body. The filter data for simulating the vowel sounds A, E, I, O and U as expressed by a bass, tenor, counter-tenor, alto and soprano voice can be found in the appendix of the Canonical Csound Reference Manual. Bandwidth and intensity (dB) information is also needed to accurately simulate the various vowel sounds. reson filters are again used, but butbp and others would be equally valid choices. The data is stored in GEN07 linear break-point function tables; as this data is read by k-rate line functions, we can interpolate and therefore morph between different vowel sounds during a note. The source sound for the filters comes from either a pink noise generator or a pulse waveform. The pink noise source can be used if the emulation is to be that of just the breath, whereas the pulse waveform provides a decent approximation of the human vocal cords buzzing. This instrument can, however, morph continuously between these two sources. A flow diagram for this instrument is shown below:
EXAMPLE 04B03.csd
<CsoundSynthesizer>
<CsOptions>
-odevaudio -b512 -dm0
</CsOptions>
<CsInstruments>
;example by Iain McCurdy
sr = 44100
ksmps = 16
nchnls = 2
0dbfs = 1

instr 1
kFund     expon     p4,p3,p5       ; fundemental
kVow      line      p6,p3,p7       ; vowel select
kBW       line      p8,p3,p9       ; bandwidth factor
iVoice    =         p10            ; voice select
kSrc      line      p11,p3,p12     ; source mix
aNoise    pinkish   3
aVCO      vco2      1.2,kFund,2,0.02
aInput    ntrpol    aVCO,aNoise,kSrc
; read formant cutoff frequenies from tables kCF1 table kVow,1+(iVoice*15),1 kCF2 table kVow,2+(iVoice*15),1 kCF3 table kVow,3+(iVoice*15),1 kCF4 table kVow,4+(iVoice*15),1 kCF5 table kVow,5+(iVoice*15),1 ; read formant intensity values from tables
kDB1 table kVow,6+(iVoice*15),1 kDB2 table kVow,7+(iVoice*15),1 kDB3 table kVow,8+(iVoice*15),1 kDB4 table kVow,9+(iVoice*15),1 kDB5 table kVow,10+(iVoice*15),1 ; read formant bandwidths from tables kBW1 table kVow,11+(iVoice*15),1 kBW2 table kVow,12+(iVoice*15),1 kBW3 table kVow,13+(iVoice*15),1 kBW4 table kVow,14+(iVoice*15),1 kBW5 table kVow,15+(iVoice*15),1 ; create resonant formants byt filtering source sound aForm1 reson aInput, kCF1, kBW1*kBW, 1 ; formant aForm2 reson aInput, kCF2, kBW2*kBW, 1 ; formant aForm3 reson aInput, kCF3, kBW3*kBW, 1 ; formant aForm4 reson aInput, kCF4, kBW4*kBW, 1 ; formant aForm5 reson aInput, kCF5, kBW5*kBW, 1 ; formant
; formants are mixed and multiplied both by intensity values derived from tables and by the on-screen gain controls for each formant aMix sum aForm1*ampdbfs(kDB1),aForm2*ampdbfs(kDB2),aForm3*ampdbfs(kDB3),aForm4*ampdbfs(kDB4),aForm5*ampdbfs(kDB5) kEnv endin </CsInstruments> <CsScore> f 0 3600 ;DUMMY SCORE EVENT - PERMITS REALTIME PERFORMANCE FOR UP TO 1 HOUR ;FUNCTION TABLES STORING FORMANT DATA FOR EACH OF THE FIVE VOICE TYPES REPRESENTED ;BASS f 1 0 32768 -7 600 10922 400 10922 250 10924 350 ;FREQ f 2 0 32768 -7 1040 10922 1620 10922 1750 10924 600 ;FREQ f 3 0 32768 -7 2250 10922 2400 10922 2600 10924 2400 ;FREQ f 4 0 32768 -7 2450 10922 2800 10922 3050 10924 2675 ;FREQ f 5 0 32768 -7 2750 10922 3100 10922 3340 10924 2950 ;FREQ f 6 0 32768 -7 0 10922 0 10922 0 10924 0 ;dB f 7 0 32768 -7 -7 10922 -12 10922 -30 10924 -20 ;dB f 8 0 32768 -7 -9 10922 -9 10922 -16 10924 -32 ;dB f 9 0 32768 -7 -9 10922 -12 10922 -22 10924 -28 ;dB f 10 0 32768 -7 -20 10922 -18 10922 -28 10924 -36 ;dB f 11 0 32768 -7 60 10922 40 10922 60 10924 40 ;BAND WIDTH f 12 0 32768 -7 70 10922 80 10922 90 10924 80 ;BAND WIDTH f 13 0 32768 -7 110 10922 100 10922 100 10924 100 ;BAND WIDTH f 14 0 32768 -7 120 10922 120 10922 120 10924 120 ;BAND WIDTH f 15 0 32768 -7 130 10922 120 10922 120 10924 120 ;BAND WIDTH ;TENOR f 16 0 32768 -7 650 8192 400 8192 290 8192 400 8192 350 ;FREQ f 17 0 32768 -7 1080 8192 1700 8192 1870 8192 800 8192 600 ;FREQ f 18 0 32768 -7 2650 8192 2600 8192 2800 8192 2600 8192 2700 ;FREQ f 19 0 32768 -7 2900 8192 3200 8192 3250 8192 2800 8192 2900 ;FREQ f 20 0 32768 -7 3250 8192 3580 8192 3540 8192 3000 8192 3300 ;FREQ f 21 0 32768 -7 0 8192 0 8192 0 8192 0 8192 0 ;dB f 22 0 32768 -7 -6 8192 -14 8192 -15 8192 -10 8192 -20 ;dB f 23 0 32768 -7 -7 8192 -12 8192 -18 8192 -12 8192 -17 ;dB f 24 0 32768 -7 -8 8192 -14 8192 -20 8192 -12 8192 -14 ;dB f 25 0 32768 -7 -22 8192 -20 8192 -30 8192 -26 8192 -26 ;dB f 26 0 32768 -7 80 8192 70 8192 40 8192 40 8192 40 ;BAND WIDTH f 27 0 32768 -7 90 8192 80 8192 90 8192 80 8192 60 ;BAND WIDTH f 28 0 32768 -7 120 8192 100 8192 100 8192 100 8192 100 ;BAND WIDTH f 29 0 32768 -7 130 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH f 30 0 32768 -7 140 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH ;COUNTER TENOR f 31 0 32768 -7 660 8192 440 8192 270 8192 430 8192 370 ;FREQ f 32 0 32768 -7 1120 8192 1800 8192 1850 8192 820 8192 630 ;FREQ f 33 0 32768 -7 2750 8192 2700 8192 2900 8192 2700 8192 2750 ;FREQ f 34 0 32768 -7 3000 8192 3000 8192 3350 8192 3000 8192 3000 ;FREQ f 35 0 32768 -7 3350 8192 3300 8192 3590 8192 3300 8192 3400 ;FREQ f 36 0 32768 -7 0 8192 0 8192 0 8192 0 8192 0 ;dB f 37 0 32768 -7 -6 8192 -14 8192 -24 8192 -10 8192 -20 ;dB f 38 0 32768 -7 -23 8192 -18 8192 -24 8192 -26 8192 -23 ;dB f 39 0 32768 -7 -24 8192 -20 8192 -36 8192 -22 8192 -30 ;dB f 40 0 32768 -7 -38 8192 -20 8192 -36 8192 -34 8192 -30 ;dB f 41 0 32768 -7 80 8192 70 8192 40 8192 40 8192 40 ;BAND WIDTH f 42 0 32768 -7 90 8192 80 8192 90 8192 80 8192 60 ;BAND WIDTH f 43 0 32768 -7 120 8192 100 8192 100 8192 100 8192 100 ;BAND WIDTH f 44 0 32768 -7 130 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH f 45 0 32768 -7 140 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH ;ALTO f 46 0 32768 -7 800 8192 400 8192 350 8192 450 8192 325 ;FREQ f 47 0 32768 -7 1150 8192 1600 8192 1700 8192 800 8192 700 ;FREQ f 48 0 32768 -7 2800 8192 2700 8192 2700 8192 2830 8192 2530 ;FREQ f 49 0 32768 -7 3500 8192 3300 8192 3700 8192 3500 8192 2500 ;FREQ f 50 0 32768 -7 4950 8192 4950 8192 4950 8192 4950 8192 4950 ;FREQ f 51 0 
32768 -7 0 8192 0 8192 0 8192 0 8192 0 ;dB linseg outs 0,3,1,p3-6,1,3,0 ; an amplitude envelope aMix*kEnv, aMix*kEnv ; send audio to outputs
f 52 0 32768 f 53 0 32768 f 54 0 32768 f 55 0 32768 f 56 0 32768 f 57 0 32768 f 58 0 32768 f 59 0 32768 f 60 0 32768 ;SOPRANO f 61 0 32768 f 62 0 32768 f 63 0 32768 f 64 0 32768 f 65 0 32768 f 66 0 32768 f 67 0 32768 f 68 0 32768 f 69 0 32768 f 70 0 32768 f 71 0 32768 f 72 0 32768 f 73 0 32768 f 74 0 32768 f 75 0 32768 ; ; ; ; ; ; ; ; ;
-7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7
-4 8192 -24 8192 -20 8192 -9 8192 -12 ;dB -20 8192 -30 8192 -30 8192 -16 8192 -30 ;dB -36 8192 -35 8192 -36 8192 -28 8192 -40 ;dB -60 8192 -60 8192 -60 8192 -55 8192 -64 ;dB 50 8192 60 8192 50 8192 70 8192 50 ;BAND WIDTH 60 8192 80 8192 100 8192 80 8192 60 ;BAND WIDTH 170 8192 120 8192 120 8192 100 8192 170 ;BAND WIDTH 180 8192 150 8192 150 8192 130 8192 180 ;BAND WIDTH 200 8192 200 8192 200 8192 135 8192 200 ;BAND WIDTH 800 8192 350 8192 270 8192 450 8192 325 ;FREQ 1150 8192 2000 8192 2140 8192 800 8192 700 ;FREQ 2900 8192 2800 8192 2950 8192 2830 8192 2700 ;FREQ 3900 8192 3600 8192 3900 8192 3800 8192 3800 ;FREQ 4950 8192 4950 8192 4950 8192 4950 8192 4950 ;FREQ 0 8192 0 8192 0 8192 0 8192 0 ;dB -6 8192 -20 8192 -12 8192 -11 8192 -16 ;dB -32 8192 -15 8192 -26 8192 -22 8192 -35 ;dB -20 8192 -40 8192 -26 8192 -22 8192 -40 ;dB -50 8192 -56 8192 -44 8192 -50 8192 -60 ;dB 80 8192 60 8192 60 8192 70 8192 50 ;BAND WIDTH 90 8192 90 8192 90 8192 80 8192 60 ;BAND WIDTH 120 8192 100 8192 100 8192 100 8192 170 ;BAND WIDTH 130 8192 150 8192 120 8192 130 8192 180 ;BAND WIDTH 140 8192 200 8192 120 8192 135 8192 200 ;BAND WIDTH
p4 = fundemental begin value (c.p.s.) p5 = fundemental end value p6 = vowel begin value (0 - 1 : a e i o u) p7 = vowel end value p8 = bandwidth factor begin (suggested range 0 - 2) p9 = bandwidth factor end p10 = voice (0=bass; 1=tenor; 2=counter_tenor; 3=alto; 4=soprano) p11 = input source begin (0 - 1 : VCO - noise) p12 = input source end p7 1 0 1 0 1 p8 p9 p10 p11 2 0 0 0 1 0 1 0 1 0 2 1 0.2 0 3 1 0.2 0 4 0 p12 0 0 1 0 1
CONCLUSION
Hopefully these examples have demonstrated the strengths of subtractive synthesis: its simplicity, its intuitive operation and its ability to create organic-sounding timbres. Further research could explore Csound's other filter opcodes, including vcomb, wguide1, wguide2 and the more esoteric phaser1, phaser2 and resony.
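As a hedged starting point for such exploration (the argument values here are plausible choices, not taken from this manual), pink noise sent through the waveguide filter wguide1 already gives a strongly resonant colouring at the chosen frequency:

instr 1 ;a simple waveguide-filter colouring of pink noise
aNoise    pinkish   0.3
aWG       wguide1   aNoise, 220, 3000, 0.97  ;frequency, cutoff, feedback
          outs      aWG*0.5, aWG*0.5
endin

Raising the feedback value towards 1 makes the resonance at 220 Hz (and its harmonics) ring longer.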
Ring modulation (RM) is the special case of AM without DC offset (DC offset = 0). That means the modulator varies between -1 and +1, just like the carrier. If the modulator is unipolar (oscillates between 0 and +1), the effect is called AM. The audible difference is that the AM spectrum contains the carrier frequency, whereas the RM spectrum does not.
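As a minimal sketch of this difference (not one of the numbered examples of this chapter; it assumes a sine wave in function table 1, as used throughout the chapter), multiplying two bipolar sines gives RM, while adding a constant offset of 1 to the modulator turns it into AM and brings the carrier frequency back into the spectrum:

instr 2 ;ring modulation sketch
aMod      poscil    1, 400, 1          ;bipolar modulator (-1 to +1)
aCar      poscil    0.3, 600, 1        ;carrier
          out       aMod*aCar          ;RM: sidebands at 200 and 1000 Hz, no 600 Hz
;         out       (aMod+1)*aCar*0.5  ;AM instead: the 600 Hz carrier reappears
endin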
Using an inharmonic modulator frequency also makes the result sound inharmonic, and varying the DC offset makes the sound spectrum evolve over time. Modulator freqs: [230, 460, 690] Resulting freqs: [ (-)90, 140, 370, <-600->, 830, 1060, 1290] (negative frequencies are mirrored at zero, but with inverted phase) Example 04C04.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 1 0dbfs = 1 instr 1 ; Amplitude-Modulation aOffset linseg 0, 1, 0, 5, 1, 3, 0 aSine1 poscil 0.3, 230, 2 ; -> [230, 460, 690] Hz aSine2 poscil 0.3, 600, 1 out (aSine1+aOffset)*aSine2 endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ; sine
When the modulation width is increased, it becomes harder to perceive the base frequency, but it is still a vibrato. Example 04D02.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 aMod poscil 90, 5 , 1 ; modulate 90Hz ->vibrato from 350 to 530 hz aCar poscil 0.3, 440+aMod, 1 outs aCar, aCar endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 2 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
<CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 aRaise linseg 2, 10, 100 ;increase modulation from 2Hz to 100Hz aMod poscil 10, aRaise , 1 aCar poscil 0.3, 440+aMod, 1 outs aCar, aCar endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 12 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011
Here the main oscillator is called the carrier, and the oscillator changing the carrier's frequency is the modulator. The modulation index is I = mod-amp/mod-freq. Changing the modulation index changes the number of audible overtones, but not the overall volume. This makes it possible to produce drastic timbre changes without the risk of distortion. When carrier and modulator frequency have integer ratios like 1:1, 2:1, 3:2 or 5:4, the sidebands build a harmonic series, which leads to a sound with a clear fundamental pitch. Example 04D04.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 kCarFreq = 660 ; 660:440 = 3:2 -> harmonic spectrum kModFreq = 440 kIndex = 15 ; high Index.. try lower values like 1, 2, 3.. kIndexM = 0 kMaxDev = kIndex*kModFreq kMinDev = kIndexM * kModFreq kVarDev = kMaxDev-kMinDev kModAmp = kMinDev+kVarDev aModulator poscil kModAmp, kModFreq, 1 aCarrier poscil 0.3, kCarFreq+aModulator, 1 outs aCarrier, aCarrier endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 15 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
Otherwise the spectrum of the sound is inharmonic, which makes it sound metallic or noisy. Raising the modulation index shifts energy from the carrier into the sidebands. The sidebands appear at: carrierFreq ± (k * modFreq) | k = {1, 2, 3, 4 ...} This calculation can result in negative frequencies. These are reflected at zero, but with inverted phase, so negative frequencies can cancel existing components. Frequencies above the Nyquist frequency (half of the sampling rate) "fold over" (aliasing).
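As a worked example (the numbers are chosen only for illustration and do not appear in the original examples):
Carrier freq: 400 Hz, Modulator freq: 300 Hz
Sidebands for k = 1, 2, 3: [100, 700], [(-)200, 1000], [(-)500, 1300] Hz
The components at -200 Hz and -500 Hz are reflected to 200 Hz and 500 Hz with inverted phase. With a modulator of 400 Hz instead, the reflected components would land exactly on existing sidebands and could cancel part of their energy.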
Using envelopes to control the modulation index and the overall amplitude gives you the possibility to create evolving sounds with enormous spectral variations. Chowning showed these possibilities in his pieces, where he lets the sounds transform: in the piece Sabelithe, a drum sound morphs over time into a trumpet tone. Example 04D05.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 ; simple way to generate a trumpet-like sound kCarFreq = 440 kModFreq = 440 kIndex = 5 kIndexM = 0 kMaxDev = kIndex*kModFreq kMinDev = kIndexM * kModFreq kVarDev = kMaxDev-kMinDev aEnv expseg .001, 0.2, 1, p3-0.3, 1, 0.2, 0.001 aModAmp = kMinDev+kVarDev*aEnv aModulator poscil aModAmp, kModFreq, 1 aCarrier poscil 0.3*aEnv, kCarFreq+aModulator, 1 outs aCarrier, aCarrier endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 2 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
The following example uses the same instrument, with different settings to generate a bell-like sound: Example 04D06.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 ; bell-like sound kCarFreq = 200 ; 200/280 = 5:7 -> inharmonic spectrum kModFreq = 280 kIndex = 12 kIndexM = 0 kMaxDev = kIndex*kModFreq kMinDev = kIndexM * kModFreq kVarDev = kMaxDev-kMinDev aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001 aModAmp = kMinDev+kVarDev*aEnv aModulator poscil aModAmp, kModFreq, 1 aCarrier poscil 0.3*aEnv, kCarFreq+aModulator, 1 outs aCarrier, aCarrier endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 9 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
Combining more than two oscillators (operators) is called complex FM synthesis. Operators can be connected in different combinations; often 4-6 operators are used. The carrier is always the last operator in the row: changing its pitch shifts the whole sound. All other operators are modulators; changing their pitch alters the sound spectrum. Two into One: M1+M2 -> C The principle here is that (M1:C) and (M2:C) are separate modulations which are then added together. Example 04D07.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 aMod1 poscil 200, 700, 1 aMod2 poscil 1800, 290, 1 aSig poscil 0.3, 440+aMod1+aMod2, 1 outs aSig, aSig endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 3 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
In series: M1->M2->C This is much more complicated to calculate, and the timbre becomes harder to predict, because M1:M2 produces a complex spectrum (W), which then modulates the carrier (W:C). Example 04D08.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 aMod1 poscil 200, 700, 1 aMod2 poscil 1800, 290+aMod1, 1 aSig poscil 0.3, 440+aMod2, 1 outs aSig, aSig endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 3 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
If you build an FM system with feedback, it can happen that the self-modulation reaches a zero point, which stops the oscillator forever. To avoid this, it is more practical to modulate the carrier's table-lookup phase instead of its pitch. Even the most famous FM synthesizer, the Yamaha DX7, is based on this phase modulation (PM) technique, because it allows stable feedback. The DX7 provides 6 operators and offers 32 routing combinations of these. (http://yala.freeservers.com/t2synths.htm#DX7) To build a PM synth in Csound, the tablei opcode needs to be used as the oscillator; in order to step through the f-table, a phasor outputs the necessary phase values. Example 04D09.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 ; simple PM-Synth kCarFreq = 200 kModFreq = 280 kModFactor = kCarFreq/kModFreq kIndex = 12/6.28 ; 12/2pi to convert from radians to norm. table index aEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001 aModulator poscil kIndex*aEnv, kModFreq, 1 aPhase phasor kCarFreq aCarrier tablei aPhase+aModulator, 1, 1, 0, 1 outs (aCarrier*aEnv), (aCarrier*aEnv) endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 9 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
Let's use the possibilities of self-modulation (feedback-modulation) of the oscillator. So in the following example, the oscillator is both modulator and carrier. To control the amount of modulation, an envelope scales the feedback. Example 04D10.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 ; feedback PM kCarFreq = 200 kFeedbackAmountEnv linseg 0, 2, 0.2, 0.1, 0.3, 0.8, 0.2, 1.5, 0 aAmpEnv expseg .001, 0.001, 1, 0.3, 0.5, 8.5, .001 aPhase phasor kCarFreq aCarrier init 0 ; init for feedback aCarrier tablei aPhase+(aCarrier*kFeedbackAmountEnv), 1, 1, 0, 1 outs aCarrier*aAmpEnv, aCarrier*aAmpEnv endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ;Sine wave for table 1 i 1 0 9 </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
23. WAVESHAPING
coming in the next release ...
instr 1 kRate expon p4,p3,p5 ; rate of grain generation created as an exponential function from p-field values kTrig metro kRate ; a trigger to generate grains kDur expon p6,p3,p7 ; grain duration is created as a exponential funcion from p-field values kForm expon p8,p3,p9 ; formant is created as an exponential function from p-field values ; p1 p2 p3 p4 schedkwhen kTrig,0,0,2, 0, kDur,kForm ;trigger a note(grain) in instr 2 ;print data to terminal every 1/2 second printks "Rate:%5.2F Dur:%5.2F Formant:%5.2F%n", 0.5, kRate , kDur, kForm endin instr 2 iForm = aEnv linseg aSig poscil out endin </CsInstruments> <CsScore> ;p4 = rate begin ;p5 = rate end ;p6 = duration begin ;p7 = duration end ;p8 = formant begin ;p9 = formant end ; p1 p2 p3 p4 p5 p6 i 1 0 30 1 100 0.02 i 1 31 10 10 10 0.4 i 1 42 20 50 50 0.02 e </CsScore> </CsoundSynthesizer>
p9 400 ;demo of grain generation rate 400 ;demo of grain size 5000 ;demo of changing formant
nchnls = 2
0dbfs = 1

instr 1
kFund     expon     p4,p3,p5       ; fundemental
kVow      line      p6,p3,p7       ; vowel select
kBW       line      p8,p3,p9       ; bandwidth factor
iVoice    =         p10            ; voice select
; read formant cutoff frequenies from tables kForm1 table kVow,1+(iVoice*15),1 kForm2 table kVow,2+(iVoice*15),1 kForm3 table kVow,3+(iVoice*15),1 kForm4 table kVow,4+(iVoice*15),1 kForm5 table kVow,5+(iVoice*15),1 ; read formant intensity values from tables kDB1 table kVow,6+(iVoice*15),1 kDB2 table kVow,7+(iVoice*15),1 kDB3 table kVow,8+(iVoice*15),1 kDB4 table kVow,9+(iVoice*15),1 kDB5 table kVow,10+(iVoice*15),1 ; read formant bandwidths from tables kBW1 table kVow,11+(iVoice*15),1 kBW2 table kVow,12+(iVoice*15),1 kBW3 table kVow,13+(iVoice*15),1 kBW4 table kVow,14+(iVoice*15),1 kBW5 table kVow,15+(iVoice*15),1 ; create resonant formants byt filtering source sound koct = 1 aForm1 fof ampdb(kDB1), kFund, kForm1, 0, kBW1, 3600 aForm2 fof ampdb(kDB2), kFund, kForm2, 0, kBW2, 3600 aForm3 fof ampdb(kDB3), kFund, kForm3, 0, kBW3, 3600 aForm4 fof ampdb(kDB4), kFund, kForm4, 0, kBW4, 3600 aForm5 fof ampdb(kDB5), kFund, kForm5, 0, kBW5, 3600
0.003, 0.02, 0.007, 1000, 101, 102, 0.003, 0.02, 0.007, 1000, 101, 102, 0.003, 0.02, 0.007, 1000, 101, 102, 0.003, 0.02, 0.007, 1000, 101, 102, 0.003, 0.02, 0.007, 1000, 101, 102,
; formants are mixed and multiplied both by intensity values derived from tables and by the on-screen gain controls for each formant aMix sum aForm1,aForm2,aForm3,aForm4,aForm5 kEnv linseg 0,3,1,p3-6,1,3,0 ; an amplitude envelope outs aMix*kEnv, aMix*kEnv ; send audio to outputs endin </CsInstruments> <CsScore> f 0 3600 ;DUMMY SCORE EVENT - PERMITS REALTIME PERFORMANCE FOR UP TO 1 HOUR ;FUNCTION TABLES STORING FORMANT DATA FOR EACH OF THE FIVE VOICE TYPES REPRESENTED ;BASS f 1 0 32768 -7 600 10922 400 10922 250 10924 350 ;FREQ f 2 0 32768 -7 1040 10922 1620 10922 1750 10924 600 ;FREQ f 3 0 32768 -7 2250 10922 2400 10922 2600 10924 2400 ;FREQ f 4 0 32768 -7 2450 10922 2800 10922 3050 10924 2675 ;FREQ f 5 0 32768 -7 2750 10922 3100 10922 3340 10924 2950 ;FREQ f 6 0 32768 -7 0 10922 0 10922 0 10924 0 ;dB f 7 0 32768 -7 -7 10922 -12 10922 -30 10924 -20 ;dB f 8 0 32768 -7 -9 10922 -9 10922 -16 10924 -32 ;dB f 9 0 32768 -7 -9 10922 -12 10922 -22 10924 -28 ;dB f 10 0 32768 -7 -20 10922 -18 10922 -28 10924 -36 ;dB f 11 0 32768 -7 60 10922 40 10922 60 10924 40 ;BAND WIDTH f 12 0 32768 -7 70 10922 80 10922 90 10924 80 ;BAND WIDTH f 13 0 32768 -7 110 10922 100 10922 100 10924 100 ;BAND WIDTH f 14 0 32768 -7 120 10922 120 10922 120 10924 120 ;BAND WIDTH f 15 0 32768 -7 130 10922 120 10922 120 10924 120 ;BAND WIDTH ;TENOR f 16 0 32768 -7 650 8192 400 8192 290 8192 400 8192 350 ;FREQ f 17 0 32768 -7 1080 8192 1700 8192 1870 8192 800 8192 600 ;FREQ f 18 0 32768 -7 2650 8192 2600 8192 2800 8192 2600 8192 2700 ;FREQ f 19 0 32768 -7 2900 8192 3200 8192 3250 8192 2800 8192 2900 ;FREQ f 20 0 32768 -7 3250 8192 3580 8192 3540 8192 3000 8192 3300 ;FREQ f 21 0 32768 -7 0 8192 0 8192 0 8192 0 8192 0 ;dB f 22 0 32768 -7 -6 8192 -14 8192 -15 8192 -10 8192 -20 ;dB f 23 0 32768 -7 -7 8192 -12 8192 -18 8192 -12 8192 -17 ;dB f 24 0 32768 -7 -8 8192 -14 8192 -20 8192 -12 8192 -14 ;dB f 25 0 32768 -7 -22 8192 -20 8192 -30 8192 -26 8192 -26 ;dB f 26 0 32768 -7 80 8192 70 8192 40 8192 40 8192 40 ;BAND WIDTH f 27 0 32768 -7 90 8192 80 8192 90 8192 80 8192 60 ;BAND WIDTH f 28 0 32768 -7 120 8192 100 8192 100 8192 100 8192 100 ;BAND WIDTH f 29 0 32768 -7 130 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH f 30 0 32768 -7 140 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH ;COUNTER TENOR f 31 0 32768 -7 660 8192 440 8192 270 8192 430 8192 370 ;FREQ f 32 0 32768 -7 1120 8192 1800 8192 1850 8192 820 8192 630 ;FREQ
f 33 0 32768 f 34 0 32768 f 35 0 32768 f 36 0 32768 f 37 0 32768 f 38 0 32768 f 39 0 32768 f 40 0 32768 f 41 0 32768 f 42 0 32768 f 43 0 32768 f 44 0 32768 f 45 0 32768 ;ALTO f 46 0 32768 f 47 0 32768 f 48 0 32768 f 49 0 32768 f 50 0 32768 f 51 0 32768 f 52 0 32768 f 53 0 32768 f 54 0 32768 f 55 0 32768 f 56 0 32768 f 57 0 32768 f 58 0 32768 f 59 0 32768 f 60 0 32768 ;SOPRANO f 61 0 32768 f 62 0 32768 f 63 0 32768 f 64 0 32768 f 65 0 32768 f 66 0 32768 f 67 0 32768 f 68 0 32768 f 69 0 32768 f 70 0 32768 f 71 0 32768 f 72 0 32768 f 73 0 32768 f 74 0 32768 f 75 0 32768
-7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7 -7
2750 8192 2700 8192 2900 8192 2700 8192 2750 ;FREQ 3000 8192 3000 8192 3350 8192 3000 8192 3000 ;FREQ 3350 8192 3300 8192 3590 8192 3300 8192 3400 ;FREQ 0 8192 0 8192 0 8192 0 8192 0 ;dB -6 8192 -14 8192 -24 8192 -10 8192 -20 ;dB -23 8192 -18 8192 -24 8192 -26 8192 -23 ;dB -24 8192 -20 8192 -36 8192 -22 8192 -30 ;dB -38 8192 -20 8192 -36 8192 -34 8192 -30 ;dB 80 8192 70 8192 40 8192 40 8192 40 ;BAND WIDTH 90 8192 80 8192 90 8192 80 8192 60 ;BAND WIDTH 120 8192 100 8192 100 8192 100 8192 100 ;BAND WIDTH 130 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH 140 8192 120 8192 120 8192 120 8192 120 ;BAND WIDTH 800 8192 400 8192 350 8192 450 8192 325 ;FREQ 1150 8192 1600 8192 1700 8192 800 8192 700 ;FREQ 2800 8192 2700 8192 2700 8192 2830 8192 2530 ;FREQ 3500 8192 3300 8192 3700 8192 3500 8192 2500 ;FREQ 4950 8192 4950 8192 4950 8192 4950 8192 4950 ;FREQ 0 8192 0 8192 0 8192 0 8192 0 ;dB -4 8192 -24 8192 -20 8192 -9 8192 -12 ;dB -20 8192 -30 8192 -30 8192 -16 8192 -30 ;dB -36 8192 -35 8192 -36 8192 -28 8192 -40 ;dB -60 8192 -60 8192 -60 8192 -55 8192 -64 ;dB 50 8192 60 8192 50 8192 70 8192 50 ;BAND WIDTH 60 8192 80 8192 100 8192 80 8192 60 ;BAND WIDTH 170 8192 120 8192 120 8192 100 8192 170 ;BAND WIDTH 180 8192 150 8192 150 8192 130 8192 180 ;BAND WIDTH 200 8192 200 8192 200 8192 135 8192 200 ;BAND WIDTH 800 8192 350 8192 270 8192 450 8192 325 ;FREQ 1150 8192 2000 8192 2140 8192 800 8192 700 ;FREQ 2900 8192 2800 8192 2950 8192 2830 8192 2700 ;FREQ 3900 8192 3600 8192 3900 8192 3800 8192 3800 ;FREQ 4950 8192 4950 8192 4950 8192 4950 8192 4950 ;FREQ 0 8192 0 8192 0 8192 0 8192 0 ;dB -6 8192 -20 8192 -12 8192 -11 8192 -16 ;dB -32 8192 -15 8192 -26 8192 -22 8192 -35 ;dB -20 8192 -40 8192 -26 8192 -22 8192 -40 ;dB -50 8192 -56 8192 -44 8192 -50 8192 -60 ;dB 80 8192 60 8192 60 8192 70 8192 50 ;BAND WIDTH 90 8192 90 8192 90 8192 80 8192 60 ;BAND WIDTH 120 8192 100 8192 100 8192 100 8192 170 ;BAND WIDTH 130 8192 150 8192 120 8192 130 8192 180 ;BAND WIDTH 140 8192 200 8192 120 8192 135 8192 200 ;BAND WIDTH ;EXPONENTIAL CURVE USED TO DEFINE THE ENVELOPE SHAPE OF FOF
f 101 0 4096 10 1 ;SINE WAVE f 102 0 1024 19 0.5 0.5 270 0.5 PULSES ; ; ; ; ; ; ;
p4 = fundamental begin value (c.p.s.) p5 = fundamental end value p6 = vowel begin value (0 - 1 : a e i o u) p7 = vowel end value p8 = bandwidth factor begin (suggested range 0 - 2) p9 = bandwidth factor end p10 = voice (0=bass; 1=tenor; 2=counter_tenor; 3=alto; 4=soprano) p4 50 78 150 200 400 p5 100 77 118 220 800 p6 0 1 0 1 0 p7 1 0 1 0 1 p8 2 1 1 0.2 0.2 p9 0 0 0 0 0 p10 0 1 2 3 4
; p1 p2 p3 i 1 0 10 i 1 8 . i 1 16 . i 1 24 . i 1 32 . e </CsScore>
</CsoundSynthesizer>
The next example is based on the design of example 04F01.csd. Two streams of grains are generated. The first stream begins as a synchronous stream, but as the note progresses the periodicity of grain generation is eroded through the addition of an increasing degree of gaussian noise; it will be heard how the tone metamorphoses from one characterized by steady purity to one of fuzzy airiness. The second stream applies a similar process of increasing indeterminacy to the formant parameter (the frequency of the material within each grain). Other parameters of granular synthesis, such as the amplitude of each grain, grain duration, spatial location etc., can be modulated with random functions in a similar way to offset the psychoacoustic effects of synchronicity when using constant values. EXAMPLE 04F03.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b512 -dm0 </CsOptions> <CsInstruments> ;Example by Iain McCurdy sr = 44100 ksmps = 1 nchnls = 1 0dbfs = 1 giWave ftgen 0,0,2^10,10,1,1/2,1/4,1/8,1/16,1/32,1/64
instr 1 ;grain generating instrument kRate = p4 kTrig metro kRate ; a trigger to generate grains kDur = p5 kForm = p6 ;note delay time (p2) is defined using a random function ;- beginning with no randomization but then gradually increasing kDelayRange transeg 0,1,0,0, p3-1,4,0.03 kDelay gauss kDelayRange ; p1 p2 p3 p4 schedkwhen kTrig,0,0,3, abs(kDelay), kDur,kForm ;trigger a note (grain) in instr 3 endin instr 2 ;grain generating instrument kRate = p4 kTrig metro kRate ; a trigger to generate grains kDur = p5 ;formant frequency (p4) is multiplied by a random function ;- beginning with no randomization but then gradually increasing kForm = p6 kFormOSRange transeg 0,1,0,0, p3-1,2,12 ;range defined in semitones kFormOS gauss kFormOSRange ; p1 p2 p3 p4 schedkwhen kTrig,0,0,3, 0, kDur,kForm*semitone(kFormOS) ;trigger a note (grain) in instr 3 endin instr 3 iForm aEnv aSig endin </CsInstruments> <CsScore> ;p4 = rate ;p5 = duration ;p6 = formant ; p1 p2 p3 p4 p5 p6 i 1 0 12 200 0.02 400 i 2 12.5 12 200 0.02 400 e </CsScore> </CsoundSynthesizer> ;grain sounding instrument = p4 linseg 0,0.005,0.2,p3-0.01,0.2,0.005,0 poscil aEnv, iForm, giWave out aSig
The next example introduces another of Csound's built-in granular synthesis opcodes to demonstrate the range of dynamic sound spectra that are possible with granular synthesis. Several parameters are modulated slowly using Csound's random spline generator rspline: formant frequency, grain duration and grain density (the rate of grain generation). The waveform used in generating the content of each grain is chosen using a slow sample-and-hold random function - a new waveform will be selected every 10 seconds. Five waveforms are provided: a sawtooth, a square wave, a triangle wave, a pulse wave and a band-limited buzz-like waveform. Some of these waveforms, particularly the sawtooth, square and pulse waveforms, can generate very high overtones; for this reason a high sample rate is recommended to reduce the risk of aliasing (see chapter 01A). Current values for formant (cps), grain duration, density and waveform are printed to the terminal every second. The key for the waveforms is: 1 = sawtooth, 2 = square, 3 = triangle, 4 = pulse, 5 = buzz. EXAMPLE 04F04.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b512 -dm0 </CsOptions> <CsInstruments> ;example by Iain McCurdy sr = 96000 ksmps = 16 nchnls = 1 0dbfs = 1 ;waveforms used for granulation giSaw ftgen 1,0,4096,7,0,4096,1 giSq ftgen 2,0,4096,7,0,2046,0,0,1,2046,1 giTri ftgen 3,0,4096,7,0,2046,1,2046,0 giPls ftgen 4,0,4096,7,1,200,1,0,0,4096-200,0 giBuzz ftgen 5,0,4096,11,20,1,1 ;window function - used as an amplitude envelope for each grain ;(hanning window) giWFn ftgen 7,0,16384,20,2,1 instr 1 ;random spline generates formant values in oct format kOct rspline 4,8,0.1,0.5 ;oct format values converted to cps format kCPS = cpsoct(kOct) ;phase location is left at 0 (the beginning of the waveform) kPhs = 0 ;formant(frequency) randomization and phase randomization are not used kFmd = 0 kPmd = 1 ;grain duration and density (rate of grain generation) created as random spline functions kGDur rspline 0.01,0.2,0.05,0.2 kDens rspline 10,200,0.05,0.5 ;maximum number of grain overlaps allowed. This is used as a CPU brake iMaxOvr = 1000 ;function table for source waveform for content of the grain is randomized ;kFn will choose a different wavefrom from the five provided once every 10 seconds kFn randomh 1,5.99,0.1 ;print info. to the terminal printks "CPS:%5.2F%TDur:%5.2F%TDensity:%5.2F%TWaveform:%1.0F%n",1,kCPS,kGDur,kDens,kFn aSig grain3 kCPS, kPhs, kFmd, kPmd, kGDur, kDens, iMaxOvr, kFn, giWFn, 0, 0 out aSig*0.06 endin </CsInstruments> <CsScore> i 1 0 300 e </CsScore> </CsoundSynthesizer>
The final example introduces grain3's two built-in randomizing functions for phase and pitch. Phase refers to the location in the source waveform from which a grain will be read; pitch refers to the pitch of the material within the grains. In this example a long note is played: initially no randomization is employed, but gradually phase randomization is increased and then reduced back to zero, and the same process is then applied to the pitch randomization amount. This time the grain size is relatively large (0.8 seconds) and the density correspondingly low (20 Hz). EXAMPLE 04F05.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b512 -dm0 </CsOptions> <CsInstruments> ;example by Iain McCurdy sr = 44100 ksmps = 16 nchnls = 1 0dbfs = 1 ;waveforms used for granulation giBuzz ftgen 1,0,4096,11,40,1,0.9 ;window function - used as an amplitude envelope for each grain ;(bartlett window) giWFn ftgen 2,0,16384,20,3,1 instr 1 kCPS = kPhs = kFmd transeg kPmd transeg kGDur = kDens = iMaxOvr = kFn = ;print info. to printks aSig grain3 out endin </CsInstruments> <CsScore> i 1 0 51 e </CsScore> </CsoundSynthesizer> 100 0 0,21,0,0, 10,4,15, 10,-4,0 0,1,0,0, 10,4,1, 10,-4,0 0.8 20 1000 1 the terminal "Random Phase:%5.2F%TPitch Random:%5.2F%n",1,kPmd,kFmd kCPS, kPhs, kFmd, kPmd, kGDur, kDens, iMaxOvr, kFn, giWFn, 0, 0 aSig*0.06
CONCLUSION
This chapter has introduced some of the concepts behind the synthesis of new sounds based on simple waveforms using granular synthesis techniques. Only two of Csound's built-in opcodes for granular synthesis, fof and grain3, have been used; it is beyond the scope of this work to cover all of the many opcodes for granulation that Csound provides. This chapter has focussed mainly on synchronous granular synthesis; chapter 05G, which introduces granulation of recorded sound files, makes greater use of asynchronous granular synthesis for time stretching and pitch shifting. That chapter will also introduce some of Csound's other opcodes for granular synthesis.
26. ENVELOPES
Envelopes are used to define how a value changes over time. In early synthesizers, envelopes were used to define the changes in amplitude of a sound across its duration, thereby imbuing sounds with characteristics such as 'percussive' or 'sustaining'. Of course envelopes can be applied to any parameter and not just amplitude. Csound offers a wide array of opcodes for generating envelopes, including ones which emulate the classic ADSR (attack-decay-sustain-release) envelopes found on hardware and commercial software synthesizers. A selection of these opcodes, which represent the basic types, shall be introduced here. The simplest opcode for defining an envelope is line. line describes a single envelope segment as a straight line between a start value and an end value over a given duration.
ares line ia, idur, ib
kres line ia, idur, ib
In the following example line is used to create a simple envelope which is then used as the amplitude control of a poscil oscillator. This envelope starts with a value of 0.5 then over the course of 2 seconds descends in linear fashion to zero. EXAMPLE 05A01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
aEnv line   0.5, 2, 0         ; amplitude envelope
aSig poscil aEnv, 500, giSine ; audio oscillator
     out    aSig              ; audio sent to output
endin
</CsInstruments>
<CsScore>
i 1 0 2
e
</CsScore>
</CsoundSynthesizer>
The envelope in the above example assumes that all notes played by this instrument will be 2 seconds long. In practice it is often beneficial to relate the duration of the envelope to the duration of the note (p3) in some way. In the next example the duration of the envelope is replaced with the value of p3 retrieved from the score, whatever that may be. The envelope will be stretched or contracted accordingly. EXAMPLE 05A02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
aEnv line   0.5, p3, 0        ; single segment envelope, time value defined by note duration
aSig poscil aEnv, 500, giSine ; an audio oscillator
     out    aSig              ; audio sent to output
endin
</CsInstruments>
<CsScore>
; notes of different durations stretch or contract the envelope
i 1 0 1
i 1 2 0.2
i 1 3 4
e
</CsScore>
</CsoundSynthesizer>
It may not be disastrous if an envelope's duration does not match p3, and indeed there are many occasions when we want an envelope duration to be independent of p3, but we need to remain aware that if p3 is shorter than an envelope's duration then that envelope will be truncated before it is allowed to complete, and if p3 is longer than an envelope's duration then the envelope will complete before the note ends (the consequences of this latter situation will be looked at in more detail later on in this section). line (and most of Csound's envelope generators) can output either k or a-rate variables. k-rate envelopes are computationally cheaper than a-rate envelopes, but in envelopes with fast moving segments quantization can occur if they output a k-rate variable, particularly when the control rate is low, which in the case of amplitude envelopes can lead to clicking artefacts or distortion. linseg is an elaboration of line and allows us to add an arbitrary number of segments by adding further pairs of time durations followed by envelope values. Provided we always end with a value and not a duration we can make this envelope as long as we like. In the next example a more complex amplitude envelope is employed using the linseg opcode. This envelope is also note duration (p3) dependent, but in a more elaborate way. An attack-decay stage is defined using explicitly declared time durations. A release stage is also defined with an explicitly declared duration. The sustain stage is the p3 dependent stage, but to ensure that the duration of the entire envelope still adds up to p3, the explicitly defined durations of the attack, decay and release stages are subtracted from the p3 dependent sustain stage duration. For this envelope to function correctly it is important that p3 is not less than the sum of all explicitly defined envelope segment durations. If necessary, additional code could be employed to prevent this from happening. EXAMPLE 05A03.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
;              |-attack-|-decay--|---sustain---|-release-|
aEnv  linseg   0, 0.01, 1,  0.1, 0.1, p3-0.21,  0.1, 0.1, 0 ; a more complex amplitude envelope
aSig  poscil   aEnv, 500, giSine
      out      aSig
endin
</CsInstruments>
<CsScore>
i 1 0 1
i 1 2 5
e
</CsScore>
</CsoundSynthesizer>
The next example illustrates an approach that can be taken whenever it is required that more than one envelope segment duration be p3 dependent. This time each segment is a fraction of p3. The sum of all segments still adds up to p3 so the envelope will complete across the duration of each note, regardless of how long that note is. EXAMPLE 05A04.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
aEnv linseg 0, p3*0.5, 1, p3*0.5, 0 ; rising then falling envelope
aSig poscil aEnv, 500, giSine
     out    aSig
endin
</CsInstruments>
<CsScore>
; notes of different durations
i 1 0 1
i 1 2 0.25
i 1 3 4
e
</CsScore>
</CsoundSynthesizer>
The next example highlights an important difference in the behaviours of line and linseg when p3 exceeds the duration of an envelope. When a note continues beyond the end of the final value of a linseg defined envelope the final value of that envelope is held. A line defined envelope behaves differently in that instead of holding its final value it continues in a trajectory defined by the last segment. This difference is illustrated in the following example. The linseg and line envelopes of instruments 1 and 2 appear to be the same but the difference in their behaviour as described above when they continue beyond the end of their final segment is clear when listening to the example. Note that information given in the Csound Manual in regard to this matter is incorrect at the time of writing. EXAMPLE 05A05.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1; linseg envelope
aCps linseg 300, 1, 600       ; linseg holds its last value
aSig poscil 0.2, aCps, giSine
     out    aSig
endin

instr 2; line envelope
aCps line   300, 1, 600       ; line continues its trajectory
aSig poscil 0.2, aCps, giSine
     out    aSig
endin
</CsInstruments>
<CsScore>
i 1 0 5; linseg envelope
i 2 6 5; line envelope
e
</CsScore>
</CsoundSynthesizer>
expon and expseg are versions of line and linseg that instead produce envelope segments with concave exponential rather than linear shapes. expon and expseg can often be more musically useful for envelopes that define amplitude or frequency as they will reflect the logarithmic nature of how these parameters are perceived. On account of the mathematics that is used to define these curves, we cannot define a value of zero at any node in the envelope and an envelope cannot cross the zero axis. If we require a value of zero we can instead provide a value very close to zero. If we still really need zero we can always subtract the offset value from the entire envelope in a subsequent line of code. The following example illustrates the difference between line and expon when applied as amplitude envelopes. EXAMPLE 05A06.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1; line envelope
aEnv line   1, p3, 0
aSig poscil aEnv, 500, giSine
     out    aSig
endin

instr 2; expon envelope
aEnv expon  1, p3, 0.0001
aSig poscil aEnv, 500, giSine
     out    aSig
endin
</CsInstruments>
<CsScore>
i 1 0 2; line envelope
i 2 2 1; expon envelope
e
</CsScore>
</CsoundSynthesizer>
The nearer our 'near-zero' values are to zero the more concave the segment curve will be. In the next example smaller and smaller envelope end values are passed to the expon opcode using p4 values in the score. The percussive 'ping' sounds are perceived to be increasingly short. EXAMPLE 05A07.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1; expon envelope
iEndVal = p4                  ; variable 'iEndVal' retrieved from score
aEnv expon  1, p3, iEndVal
aSig poscil aEnv, 500, giSine
     out    aSig
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4
i 1  0  1  0.001
i 1  1  1  0.000001
i 1  2  1  0.000000000000001
e
</CsScore>
</CsoundSynthesizer>
Note that expseg does not behave like linseg in that it will not hold its final value if p3 exceeds its entire duration; instead it continues its curving trajectory in a manner similar to line (and expon). This could have dangerous results if used as an amplitude envelope. When dealing with notes with an indefinite duration at the time of initiation (such as midi activated notes or score activated notes with a negative p3 value), we do not have the option of using p3 in a meaningful way. Instead we can use one of Csound's envelope opcodes that sense the ending of a note when it arrives and adjust their behaviour accordingly. The opcodes in question are linenr, linsegr, expsegr, madsr, mxadsr and envlpxr. These opcodes wait until a held note is turned off before executing their final envelope segment. To facilitate this mechanism they extend the duration of the note so that this final envelope segment can complete. The following example uses midi input (either hardware or virtual) to activate notes. The use of the linsegr envelope means that after the short attack stage, the penultimate value of 1 will be held for as long as the note is sustained, but as soon as the note is released the note will be extended by 0.5 seconds in order to allow the final envelope segment to decay to zero. EXAMPLE 05A08.csd
<CsoundSynthesizer>
<CsOptions>
-odac -+rtmidi=virtual -M0; activates real time sound output and virtual midi device
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
icps cpsmidi                  ; read the pitch of the midi note
;            |-attack-|-sustain-|-release-|
aEnv linsegr 0, 0.01,  1,       0.5, 0     ; envelope that senses note releases
aSig poscil  aEnv, icps, giSine            ; audio oscillator
     out     aSig                          ; audio sent to output
endin
</CsInstruments>
<CsScore>
f 0 240; extend csound performance for 4 minutes
e
</CsScore>
</CsoundSynthesizer>
Sometimes designing our envelope shape in a function table can provide us with shapes that are not possible using Csound's envelope generating opcodes. In this case the envelope can be read from the function table using an oscillator, and if the oscillator is given a frequency of 1/p3 then it will read through the envelope just once across the duration of the note. The following example generates an amplitude envelope which is the shape of the first half of a sine wave. EXAMPLE 05A09.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1        ; a sine wave
giEnv  ftgen 0, 0, 2^12, 9, 0.5, 1, 0 ; the envelope shape: a half sine

instr 1
aEnv poscil 1, 1/p3, giEnv    ; read the envelope once during the note
aSig poscil aEnv, 500, giSine ; audio oscillator
     out    aSig              ; audio sent to output
endin
</CsInstruments>
<CsScore>
;7 notes, increasingly short
i 1 0 2
i 1 2 1
i 1 3 0.5
i 1 4 0.25
i 1 5 0.125
i 1 6 0.0625
i 1 7 0.03125
f 0 7.1
e
</CsScore>
</CsoundSynthesizer>
lpshold generates an envelope in which each break point is held constant until a new break point is encountered. The resulting envelope will contain horizontal line segments. In our example this opcode will be used to generate a looping bassline in the fashion of a Roland TB303. The duration of the entire envelope is wholly dependent upon the frequency with which the envelope repeats - in fact it is the reciprocal of that frequency - so the values given for the durations of individual envelope segments are not times in seconds but instead represent proportions of the entire envelope duration. The values given for all these segments do not need to add up to any specific value as Csound rescales the proportionality according to the sum of all segment durations. You might find it convenient to contrive to have them all add up to 1, or to 100 - either is equally valid. The other looping envelope opcodes discussed here use the same method for defining segment durations.

loopseg allows us to define a looping envelope with linear segments. In this example it is used to define the amplitude envelope of each individual note. Take note that whereas the lpshold envelope used to define the pitches of the melody repeats once per phrase, the amplitude envelope repeats once for each note of the melody; therefore its frequency is 16 times that of the melody envelope (there are 16 notes in our melodic phrase).

looptseg is an elaboration of loopseg in that it allows us to define the shape of each segment individually, whether that be convex, linear or concave. This aspect is defined using the 'type' parameters. A 'type' value of 0 denotes a linear segment, a positive value denotes a convex segment (with higher positive values resulting in increasingly convex curves) and a negative value denotes a concave segment (with increasingly negative values resulting in increasingly concave curves). In this example looptseg is used to define a filter envelope which, like the amplitude envelope, repeats for every note. The addition of the 'type' parameter allows us to modulate the sharpness of the decay of the filter envelope. This is a crucial element of the TB303 design. Note that looptseg is only available in Csound 5.12 or later.

Other crucial features of this instrument such as 'note on/off' and 'hold' for each step are also implemented using lpshold. A number of the input parameters of this example are modulated automatically using the randomi opcodes in order to keep it interesting. It is suggested that these modulations could be replaced by linkages to other controls such as QuteCsound widgets, FLTK widgets or MIDI controllers. Suggested ranges for each of these values are given in the .csd. The filter used in this example is moogladder, an excellent implementation of the classic Moog filter. This filter is, however, rather computationally expensive; if you encounter problems running this example in realtime you might like to swap it for the moogvcf opcode, which is provided as an alternative in a commented out line. EXAMPLE 05A10.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 4
nchnls = 1
0dbfs  = 1
seed 0; seed random number generators from system clock

instr 1; Bassline instrument
kTempo    =       90           ; tempo in beats per minute
kCfBase   randomi 1, 4, 0.2    ; base filter cutoff frequency described in octaves above the current pitch. Values should be greater than 0 up to about 8
kCfEnv    randomi 0, 4, 0.2    ; filter envelope depth. Values probably in the range 0 - 4 although negative numbers could be used for special effects
kRes      randomi 0.5, 0.9, 0.2; filter resonance. Suggested range 0 - 0.99
kVol      =       0.5          ; volume control. Suggested range 0 - 1
kDecay    randomi -10, 10, 0.2 ; decay shape of the filter. Suggested range -10 to +10. Zero=linear, negative=increasingly_concave, positive=increasingly_convex
kWaveform =       0            ; waveform of the audio oscillator. 0=sawtooth 2=square
kDist     randomi 0, 0.8, 0.1  ; amount of distortion. Suggested range 0 - 1
;read in phrase event widgets - use a macro to save typing
kPhFreq   =       kTempo/240   ; frequency with which to repeat the entire phrase
kBtFreq   =       (kTempo)/15  ; frequency of each 1/16th note
; the first value of each pair defines the relative duration of that segment (just leave these as they are unless you want to create quirky rhythmic variations)
; the second, the value itself. Note numbers (kNum) are defined as MIDI note numbers. Note On/Off (kOn) and hold (kHold) are defined as on/off switches, 1 or zero
;envelopes with held segments
;                     note: 1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   DUMMY
kNum  lpshold kPhFreq, 0, 0,40, 1,42, 1,50, 1,49, 1,60, 1,54, 1,39, 1,40, 1,46, 1,36, 1,40, 1,46, 1,50, 1,56, 1,44, 1,47, 1,45; need an extra 'dummy' value
kOn   lpshold kPhFreq, 0, 0,1,  1,1,  1,1,  1,1,  1,1,  1,1,  1,0,  1,1,  1,1,  1,1,  1,1,  1,1,  1,1,  1,1,  1,0,  1,1,  1,1
kHold lpshold kPhFreq, 0, 0,0,  1,1,  1,1,  1,0,  1,0,  1,0,  1,0,  1,1,  1,0,  1,0,  1,1,  1,1,  1,1,  1,1,  1,0,  1,0,  1,0; need an extra 'dummy' value
kHold vdel_k  kHold, 1/kBtFreq, 1   ; offset hold by 1/2 note duration
kNum  portk   kNum, (0.01*kHold)    ; apply portamento to pitch changes - if note is not held, no portamento will be applied
kCps  =       cpsmidinn(kNum)
kOct  =       octcps(kCps)
; amplitude envelope
;                          attack  sustain        decay  gap
kAmpEnv loopseg kBtFreq, 0, 0, 0, 0.1, 1, 55/kTempo, 1, 0.1, 0, 5/kTempo, 0 ; sustain segment duration (and therefore attack and decay segment durations) are dependent upon tempo
kAmpEnv =       (kHold=0?kAmpEnv:1) ; if hold is off, use amplitude envelope, otherwise use constant value
; filter envelope
kCfOct  looptseg kBtFreq, 0, 0, kCfBase+kCfEnv+kOct, kDecay, 1, kCfBase+kOct
kCfOct  =       (kHold=0?kCfOct:kCfBase+kOct) ; if hold is off, use filter envelope, otherwise use steady state value
kCfOct  limit   kCfOct, 4, 14       ; limit the cutoff frequency to be within sensible limits
;kCfOct port    kCfOct, 0.05        ; smooth the cutoff frequency envelope with portamento
kWavTrig changed kWaveform          ; generate a 'bang' if waveform selector changes
if kWavTrig=1 then                  ; if a 'bang' has been generated...
  reinit REINIT_VCO                 ; begin a reinitialization pass from the label 'REINIT_VCO'
endif
REINIT_VCO:                         ; a label
aSig    vco2    0.4, kCps, i(kWaveform)*2, 0.5 ; generate audio using VCO oscillator
        rireturn                    ; return from initialization pass to performance passes
aSig    moogladder aSig, cpsoct(kCfOct), kRes  ; filter audio
;aSig   moogvcf    aSig, cpsoct(kCfOct), kRes  ; use moogvcf if CPU is struggling with moogladder
; distortion
iSclLimit ftgentmp 0, 0, 1024, -16, 1, 1024, -8, 0.01 ; rescaling curve for clip 'limit' parameter
iSclGain  ftgentmp 0, 0, 1024, -16, 1, 1024, 4, 10    ; rescaling curve for gain compensation
kLimit    table    kDist, iSclLimit, 1 ; read Limit value from rescaling curve
kGain     table    kDist, iSclGain, 1  ; read Gain value from rescaling curve
kTrigDist changed  kLimit           ; if limit value changes generate a 'bang'
if kTrigDist=1 then                 ; if a 'bang' has been generated...
  reinit REINIT_CLIP                ; begin a reinitialization pass from label 'REINIT_CLIP'
endif
REINIT_CLIP:
aSig    clip    aSig, 0, i(kLimit)  ; clip distort audio signal
        rireturn
aSig    =       aSig * kGain        ; compensate for gain loss from 'clip' processing
kOn     port    kOn, 0.006
        out     aSig * kAmpEnv * kVol * kOn ; audio sent to output, apply amp. envelope, volume control and note On/Off status
endin
</CsInstruments>
<CsScore>
i 1 0 3600
e
</CsScore>
</CsoundSynthesizer>
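The simplest way to pan a monophonic signal between two channels is to scale it by a panning variable and by that variable's complement. A minimal sketch of this basic method (assuming an existing audio signal aSig and a panning control kPan) looks like this:

aSigL = aSig * kPan       ; left channel amplitude follows kPan
aSigR = aSig * (1 - kPan) ; right channel amplitude follows the complement of kPan
        outs  aSigL, aSigR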
where kPan is a panning variable within the range zero to 1. If kPan is 1 all the signal will be in the left channel, if it is zero all the signal will be in the right channel, and if it is 0.5 there will be signal of equal amplitude in both the left and the right channels. This way the signal can be continuously panned between the left and right channels. The problem with this method is that the overall power drops as the sound is panned to the middle. One possible solution to this problem is to take the square root of the panning variable for each channel before multiplying it by the audio signal, like this:
aSigL = aSig * sqrt(kPan)
aSigR = aSig * sqrt(1 - kPan)
        outs  aSigL, aSigR
By doing this, the straight line function of the input panning variable becomes a convex curve, so that less power is lost as the sound is panned centrally. Using 90 degree sections of a sine wave for the mapping produces a more convex curve and a less immediate drop in power as the sound is panned away from the extremities. This can be implemented using the code shown below.
aSigL = aSig * sin(kPan*$M_PI_2)
aSigR = aSig * cos(kPan*$M_PI_2)
        outs  aSigL, aSigR
(Note that '$M_PI_2' is one of Csound's built in macros and is equivalent to pi/2.) A fourth method, devised by Michael Gogins, places the point of maximum power for each channel slightly before the panning variable reaches its extremity. The result of this is that when the sound is panned dynamically it appears to move beyond the point of the speaker it is addressing. This method is an elaboration of the previous one and makes use of a different 90 degree section of a sine wave. It is implemented using the following code:
aSigL = aSig * sin((kPan + 0.5) * $M_PI_2)
aSigR = aSig * cos((kPan + 0.5) * $M_PI_2)
        outs  aSigL, aSigR
The following example demonstrates all four methods one after the other for comparison. Panning movement is controlled by a slow moving LFO. The input sound is filtered pink noise. EXAMPLE 05B01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 10
nchnls = 2
0dbfs  = 1

instr 1
imethod = p4 ; read panning method variable from score (p4)
;generate a source sound================
a1   pinkish 0.3             ; pink noise
a1   reson   a1, 500, 30, 1  ; bandpass filtered
aPan lfo     0.5, 1, 1       ; panning controlled by an lfo
aPan =       aPan + 0.5      ; offset shifted +0.5
;=======================================
if imethod=1 then
;method 1===============================
aPanL = aPan
aPanR = 1 - aPan
;=======================================
endif
if imethod=2 then
;method 2===============================
aPanL = sqrt(aPan)
aPanR = sqrt(1 - aPan)
;=======================================
endif
if imethod=3 then
;method 3===============================
aPanL = sin(aPan*$M_PI_2)
aPanR = cos(aPan*$M_PI_2)
;=======================================
endif
if imethod=4 then
;method 4===============================
aPanL = sin((aPan + 0.5) * $M_PI_2)
aPanR = cos((aPan + 0.5) * $M_PI_2)
;=======================================
endif
     outs a1*aPanL, a1*aPanR ; audio sent to outputs
endin
</CsInstruments>
<CsScore>
;4 notes one after the other to demonstrate 4 different methods of panning
;p1 p2 p3  p4(method)
i 1  0  4.5 1
i 1  5  4.5 2
i 1  10 4.5 3
i 1  15 4.5 4
e
</CsScore>
</CsoundSynthesizer>
An opcode called pan2 exists which makes it slightly easier for us to implement simple panning employing various methods. The following example demonstrates the three methods that this opcode offers, one after the other. The first is the 'equal power' method, the second 'square root' and the third is simple linear. The Csound Manual alludes to a fourth method but this does not seem to function currently. EXAMPLE 05B02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 10
nchnls = 2
0dbfs  = 1

instr 1
imethod = p4 ; read panning method variable from score (p4)
;generate a source sound====================
aSig pinkish 0.5              ; pink noise
aSig reson   aSig, 500, 30, 1 ; bandpass filtered
aPan lfo     0.5, 1, 1        ; panning controlled by an lfo
aPan =       aPan + 0.5       ; offset shifted +0.5
;===========================================
aSigL, aSigR pan2 aSig, aPan, imethod ; create stereo panned output
             outs aSigL, aSigR        ; audio sent to outputs
endin
</CsInstruments>
<CsScore>
;3 notes one after the other to demonstrate 3 methods used by pan2
;p1 p2 p3  p4
i 1  0  4.5 0; equal power (harmonic)
i 1  5  4.5 1; square root method
i 1  10 4.5 2; linear
e
</CsScore>
</CsoundSynthesizer>
instr 1
; create an audio signal (noise impulses)
krate oscil   30, 0.2, giLFOShape                 ; rate of impulses
kEnv  loopseg krate+3, 0, 0,1, 0.1,0, 0.9,0       ; amplitude envelope: a repeating pulse
aSig  pinkish kEnv                                ; pink noise. pulse envelope applied
; apply binaural 3d processing
kAz   linseg  0, 8, 360                           ; break point envelope defines azimuth (one complete circle)
kElev linseg  0, 8, 0, 4, 90, 8, -40, 4, 0        ; break point envelope defines elevation (held horizontal for 8 seconds, then up, then down, then back to horizontal)
aLeft, aRight hrtfmove2 aSig, kAz, kElev, "hrtf-44100-left.dat", "hrtf-44100-right.dat" ; apply hrtfmove2 opcode to audio source - create stereo output
      outs    aLeft, aRight                       ; audio sent to outputs
endin
</CsInstruments>
<CsScore>
i 1 0 60; instr 1 plays a note for 60 seconds
e
</CsScore>
</CsoundSynthesizer>
28. FILTERS
Audio filters can range from devices that subtly shape the tonal characteristics of a sound to ones that dramatically remove whole portions of a sound spectrum to create new sounds. Csound includes several versions of each of the commonest types of filters and some more esoteric ones also. The full list of Csound's standard filters, and a list of the more specialized filters, can be found in the Canonical Csound Reference Manual.
LOWPASS FILTERS
The first type of filter encountered is normally the lowpass filter. As its name suggests it allows lower frequencies to pass through unimpeded and therefore filters higher frequencies. The crossover frequency is normally referred to as the 'cutoff' frequency. Filters of this type do not really cut frequencies off at the cutoff point like a brick wall but instead attenuate increasingly according to a cutoff slope. Different filters offer different steepnesses of cutoff slopes. Another aspect of a lowpass filter that we may be concerned with is a ripple that might emerge at the cutoff point. If this is exaggerated intentionally it is referred to as resonance or 'Q'. In the following example, three lowpass filters are demonstrated: tone, butlp and moogladder. tone offers a quite gentle cutoff slope and is therefore better suited to subtle spectral enhancement tasks. butlp is based on the Butterworth filter design and produces a much sharper cutoff slope at the expense of a slightly greater CPU overhead. moogladder is an interpretation of an analogue filter found in a Moog synthesizer; it includes a resonance control. In the example a sawtooth waveform is played in turn through each filter. Each time the cutoff frequency is modulated using an envelope, starting high and descending low so that more and more of the spectral content of the sound is removed as the note progresses. A sawtooth waveform has been chosen as it contains strong higher frequencies and therefore demonstrates the filters' characteristics well; a sine wave would be a poor choice of source sound on account of its lack of spectral richness. EXAMPLE 05C01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1
     prints     "tone%n"       ; indicate filter type in console
aSig vco2       0.5, 150       ; input signal is a sawtooth waveform
kcf  expon      10000, p3, 20  ; descending cutoff frequency
aSig tone       aSig, kcf      ; filter audio signal
     out        aSig           ; filtered audio sent to output
endin

instr 2
     prints     "butlp%n"      ; indicate filter type in console
aSig vco2       0.5, 150       ; input signal is a sawtooth waveform
kcf  expon      10000, p3, 20  ; descending cutoff frequency
aSig butlp      aSig, kcf      ; filter audio signal
     out        aSig           ; filtered audio sent to output
endin

instr 3
     prints     "moogladder%n" ; indicate filter type in console
aSig vco2       0.5, 150       ; input signal is a sawtooth waveform
kcf  expon      10000, p3, 20  ; descending cutoff frequency
aSig moogladder aSig, kcf, 0.9 ; filter audio signal
     out        aSig           ; filtered audio sent to output
endin
</CsInstruments>
<CsScore>
; 3 notes to demonstrate each filter in turn
i 1 0 3; tone
i 2 4 3; butlp
i 3 8 3; moogladder
e
</CsScore>
</CsoundSynthesizer>
HIGHPASS FILTERS
A highpass filter is the converse of a lowpass filter; frequencies higher than the cutoff point are allowed to pass whilst those lower are attenuated. atone and buthp are the analogues of tone and butlp. Resonant highpass filters are harder to find but Csound has one in bqrez. bqrez is actually a multi-mode filter and could also be used as a resonant lowpass filter amongst other things. We can choose which mode we want by setting one of its input arguments appropriately. Resonant highpass is mode 1. In this example a sawtooth waveform is again played through each of the filters in turn but this time the cutoff frequency moves from low to high. Spectral content is increasingly removed but from the opposite spectral direction. EXAMPLE 05C02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1
     prints "atone%n"         ; indicate filter type in console
aSig vco2   0.2, 150          ; input signal is a sawtooth waveform
kcf  expon  20, p3, 20000     ; define envelope for cutoff frequency
aSig atone  aSig, kcf         ; filter audio signal
     out    aSig              ; filtered audio sent to output
endin

instr 2
     prints "buthp%n"         ; indicate filter type in console
aSig vco2   0.2, 150          ; input signal is a sawtooth waveform
kcf  expon  20, p3, 20000     ; define envelope for cutoff frequency
aSig buthp  aSig, kcf         ; filter audio signal
     out    aSig              ; filtered audio sent to output
endin

instr 3
     prints "bqrez(mode:1)%n" ; indicate filter type in console
aSig vco2   0.03, 150         ; input signal is a sawtooth waveform
kcf  expon  20, p3, 20000     ; define envelope for cutoff frequency
aSig bqrez  aSig, kcf, 30, 1  ; filter audio signal
     out    aSig              ; filtered audio sent to output
endin
</CsInstruments>
<CsScore>
; 3 notes to demonstrate each filter in turn
i 1 0  3; atone
i 2 5  3; buthp
i 3 10 3; bqrez(mode 1)
e
</CsScore>
</CsoundSynthesizer>
BANDPASS FILTERS
A bandpass filter allows just a narrow band of sound to pass through unimpeded and as such is a little bit like a combination of a lowpass and highpass filter connected in series. We normally expect at least one additional parameter of control: control over the width of the band of frequencies allowed to pass through, or 'bandwidth'. In the next example cutoff frequency and bandwidth are demonstrated independently for two different bandpass filters offered by Csound. First of all a sawtooth waveform is passed through a reson filter and a butbp filter in turn while the cutoff frequency rises (bandwidth remains static). Then pink noise is passed through reson and butbp in turn again, but this time the cutoff frequency remains static at 5000Hz while the bandwidth expands from 8 to 5000Hz. In the latter two notes it will be heard how the resultant sound moves from almost a pure sine tone to unpitched noise. butbp is obviously the Butterworth based bandpass filter. reson can produce dramatic variations in amplitude depending on the bandwidth value and therefore some balancing of amplitude in the output signal may be necessary if out of range samples and distortion are to be avoided. Fortunately the opcode itself includes two modes of amplitude balancing built in, but by default neither of these methods is active, in which case the use of the balance opcode may be required. Mode 1 seems to work well with spectrally sparse sounds like harmonic tones while mode 2 works well with spectrally dense sounds such as white or pink noise.
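Where such balancing is needed, a minimal sketch (not drawn from the example below, and assuming a simple sawtooth source) might pass the unfiltered signal to balance as the comparator:

aSrc vco2    0.2, 110       ; a sawtooth source signal (assumed for this sketch)
aBp  reson   aSrc, 1000, 50 ; bandpass filter with no internal scaling
aBp  balance aBp, aSrc      ; match the power of the filtered signal to that of the source
     out     aBp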
EXAMPLE 05C03.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1
     prints  "reson%n"             ; indicate filter type in console
aSig vco2    0.5, 150              ; input signal is a sawtooth waveform
kcf  expon   20, p3, 10000         ; rising cutoff frequency
aSig reson   aSig, kcf, kcf*0.1, 1 ; filter audio signal
     out     aSig                  ; send filtered audio to output
endin

instr 2
     prints  "butbp%n"             ; indicate filter type in console
aSig vco2    0.5, 150              ; input signal is a sawtooth waveform
kcf  expon   20, p3, 10000         ; rising cutoff frequency
aSig butbp   aSig, kcf, kcf*0.1    ; filter audio signal
     out     aSig                  ; send filtered audio to output
endin

instr 3
     prints  "reson%n"             ; indicate filter type in console
aSig pinkish 0.5                   ; input signal is pinkish
kbw  expon   10000, p3, 8          ; contracting bandwidth
aSig reson   aSig, 5000, kbw, 2    ; filter audio signal
     out     aSig                  ; send filtered audio to output
endin

instr 4
     prints  "butbp%n"             ; indicate filter type in console
aSig pinkish 0.5                   ; input signal is pinkish
kbw  expon   10000, p3, 8          ; contracting bandwidth
aSig butbp   aSig, 5000, kbw       ; filter audio signal
     out     aSig                  ; send filtered audio to output
endin
</CsInstruments>
<CsScore>
i 1 0  3; reson - cutoff frequency rising
i 2 4  3; butbp - cutoff frequency rising
i 3 8  6; reson - bandwidth increasing
i 4 15 6; butbp - bandwidth increasing
e
</CsScore>
</CsoundSynthesizer>
COMB FILTERING
A comb filter is a special type of filter that creates a harmonically related stack of resonance peaks on an input sound. A comb filter is really just a very short delay effect with feedback. Typically the delay times involved would be less than 0.05 seconds. Many of the comb filters documented in the Csound Manual term this delay time 'loop time'. The fundamental of the harmonic stack of resonances produced will be 1/loop time. Loop time and the frequencies of the resonance peaks are inversely proportional - as loop time gets smaller, the frequencies rise. For a loop time of 0.02 seconds the fundamental resonance peak will be 50Hz, the next peak 100Hz, the next 150Hz and so on. Feedback is normally implemented as 'reverb time' - the time taken for the amplitude to drop to 1/1000 of its original level, i.e. by 60dB. This use of reverb time as opposed to feedback alludes to the use of comb filters in the design of reverb algorithms. Negative reverb times will result in only the odd numbered partials of the harmonic stack being present. The following example demonstrates a comb filter using the vcomb opcode. This opcode allows for performance time modulation of the loop time parameter. For the first 5 seconds of the demonstration the reverb time increases from 0.1 seconds to 2 while the loop time remains constant at 0.005 seconds. Then the loop time decreases to 0.0005 seconds over 6 seconds (the resonant peaks rise in frequency); finally, over the course of 10 seconds, the loop time rises to 0.1 seconds (the resonant peaks fall in frequency). A repeating noise impulse is used as a source sound to best demonstrate the qualities of a comb filter.
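As a quick worked illustration of this relationship (a sketch independent of the example below, assuming an existing input signal aSig), a loop time can be derived directly from a desired fundamental:

iFund     =     100      ; desired fundamental resonance in Hz
iLoopTime =     1/iFund  ; a loop time of 0.01 seconds gives resonance peaks at 100Hz, 200Hz, 300Hz...
aRes      vcomb aSig, 2, iLoopTime, 0.1 ; 2 second reverb time, maximum loop time of 0.1 seconds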
EXAMPLE 05C04.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1
; generate an input audio signal (noise impulses)
kEnv loopseg 1,0, 0,1,0.005,1,0.0001,0,0.9949,0 ; repeating amplitude envelope
aSig pinkish kEnv*0.6                           ; pink noise signal - repeating amplitude envelope applied
; apply comb filter to input signal
krvt linseg  0.1, 5, 2                          ; reverb time envelope for comb filter
alpt expseg  0.005, 5, 0.005, 6, 0.0005, 10, 0.1, 1, 0.1 ; loop time envelope for comb filter - using an a-rate variable here will produce better results
aRes vcomb   aSig, krvt, alpt, 0.1              ; comb filter
     out     aRes                               ; comb filtered audio sent to output
endin
</CsInstruments>
<CsScore>
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
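In its simplest form a fixed delay buffer in Csound is declared with a delayr/delayw pair, along these lines:

aSigOut delayr 1       ; read audio from the end of a 1 second buffer
        delayw aSigIn  ; write audio into the beginning of the buffer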
where 'aSigIn' is the input signal written into the beginning of the buffer and 'aSigOut' is the output signal read from the end of the buffer. The fact that we declare reading from the buffer before writing to it is sometimes initially confusing but, as alluded to before, one reason this is done is to declare the length of the buffer. The buffer length in this case is 1 second and this will be the apparent time delay between the input audio signal and the audio read from the end of the buffer. The following example implements the delay described above in a .csd file. An input sound of sparse sine tone pulses is created. This is written into the delay buffer, from which a new audio signal is created by reading from the end of the buffer. The input signal (sometimes referred to as the dry signal) and the delay output signal (sometimes referred to as the wet signal) are mixed and sent to the output. The delayed signal is attenuated with respect to the input signal. EXAMPLE 05D01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
; create an input signal
kEnv loopseg 0.5, 0, 0, 0, 0.0005, 1, 0.1, 0, 1.9, 0
kCps randomh 400, 600, 0.5
aEnv interp  kEnv
aSig poscil  aEnv, kCps, giSine
; create a delay buffer
aBufOut delayr 0.3
        delayw aSig
; send audio to output (input and output of the buffer are mixed)
        out    aSig + (aBufOut*0.2)
endin
</CsInstruments>
<CsScore>
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
If we mix some of the delayed signal into the input signal that is written into the buffer then we will delay some of the delayed signal, thus creating more than a single echo from each input sound. Typically the sound that is fed back into the delay input is attenuated, so that sound does not cycle through the buffer indefinitely but instead eventually dies away. We can attenuate the feedback signal by multiplying it by a value in the range zero to 1. The rapidity with which echoes die away is defined by how close to zero this value is. The following example implements a simple delay with feedback. EXAMPLE 05D02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
; create an input signal
kEnv loopseg 0.5, 0, 0, 0, 0.0005, 1, 0.1, 0, 1.9, 0 ; repeating envelope
kCps randomh 400, 600, 0.5                           ; 'held' random values
aEnv interp  kEnv                                    ; interpolate kEnv to create a-rate version
aSig poscil  aEnv, kCps, giSine                      ; generate audio
; create a delay buffer
iFdback =      0.5                      ; this value defines the amount of delayed signal fed back into the delay
aBufOut delayr 0.3                      ; read audio from end of 0.3s buffer
        delayw aSig + (aBufOut*iFdback) ; write audio into buffer (mix in feedback signal)
; send audio to the output (mix the input signal with the delayed signal)
        out    aSig + (aBufOut*0.2)
endin
</CsInstruments>
<CsScore>
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
Constructing a delay effect in this way is rather limited as the delay time is static. If we want to change the delay time we need to reinitialise the code that implements the delay buffer. A more flexible approach is to read audio from within the buffer using one of Csound's opcodes for 'tapping' a delay buffer: deltap, deltapi, deltap3 or deltapx. The opcodes are listed in order of increasing quality, which also reflects an increase in computational expense. In the next example a delay tap is inserted within the delay buffer (between the delayr and the delayw opcodes). As our delay time is modulating quite quickly we will use deltapi, which uses linear interpolation to rebuild the audio signal whenever the delay time is moving. Note that this time we are not using the audio output from the delayr opcode as we are using the audio output from deltapi instead. The delay time used by deltapi is created by randomi, which creates a random function of straight line segments. A-rate is used for the delay time to improve the accuracy of its values; use of k-rate would result in a noticeably poorer sound quality. You will notice that as well as modulating the time gap between echoes, this example also modulates the pitch of the echoes - if the delay tap is static within the buffer there will be no change in pitch, if it is moving towards the beginning of the buffer then pitch will rise, and if it is moving towards the end of the buffer then pitch will drop. This side effect has led to digital delay buffers being used in the design of many pitch shifting effects. The user must take care that the delay time demanded from the delay tap does not exceed the length of the buffer as defined in the delayr line. If it does, it will attempt to read data beyond the end of the RAM buffer - the results of this are unpredictable. The user must also take care that the delay time does not go below zero; in fact the minimum delay time that will be permissible will be the duration of one k cycle (ksmps/sr). EXAMPLE 05D03.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
; create an input signal
kEnv loopseg 0.5, 0, 0, 0, 0.0005, 1, 0.1, 0, 1.9, 0
aEnv interp  kEnv
aSig poscil  aEnv, 500, giSine
aDelayTime randomi 0.05, 0.2, 1 ; modulating delay time
; create a delay buffer
aBufOut delayr  0.2               ; read audio from end of 0.2s buffer
aTap    deltapi aDelayTime        ; 'tap' the delay buffer somewhere along its length
        delayw  aSig + (aTap*0.9) ; write audio into buffer (mix in feedback signal)
; send audio to the output (mix the input signal with the delayed signal)
        out     aSig + (aTap*0.4)
endin
</CsInstruments>
<CsScore>
i 1 0 30
e
</CsScore>
</CsoundSynthesizer>
We are not limited to inserting only a single delay tap within the buffer. If we add further taps we create what is known as a multi-tap delay. The following example implements a multi-tap delay with three delay taps. Note that only the final delay (the one closest to the end of the buffer) is fed back into the input in order to create feedback but all three taps are mixed and sent to the output. There is no reason not to experiment with arrangements other than this but this one is most typical.
EXAMPLE 05D04.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine ftgen 0, 0, 2^12, 10, 1; a sine wave

instr 1
; create an input signal
kEnv loopseg 0.5, 0, 0, 0, 0.0005, 1, 0.1, 0, 1.9, 0 ; repeating envelope
kCps randomh 400, 1000, 0.5                          ; 'held' random values
aEnv interp  kEnv                                    ; interpolate kEnv to create a-rate version
aSig poscil  aEnv, kCps, giSine                      ; generate audio
; create a delay buffer
aBufOut delayr 0.5                ; read audio from end of 0.5s buffer
aTap1   deltap 0.1373             ; delay tap 1
aTap2   deltap 0.2197             ; delay tap 2
aTap3   deltap 0.4139             ; delay tap 3
        delayw aSig + (aTap3*0.4) ; write audio into buffer (mix in feedback signal)
; send audio to the output (mix the input signal with the delayed signals)
        out    aSig + ((aTap1+aTap2+aTap3)*0.4)
endin
</CsInstruments>
<CsScore>
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
As mentioned at the top of this section many familiar effects are actually created from using delay buffers in various ways. We will briefly look at one of these effects: the flanger. Flanging derives from a phenomenon which occurs when the delay time becomes so short that we begin to no longer perceive individual echoes; instead a stack of harmonically related resonances is perceived, the frequencies of which are in simple ratio with 1/delay_time. This effect is known as a comb filter. When the delay time is slowly modulated, with the resonances shifting up and down in sympathy, the effect becomes known as a flanger. In this example the delay time of the flanger is modulated using an LFO that employs a U-shaped parabola as its waveform, as this seems to provide the smoothest comb filter modulations. EXAMPLE 05D05.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

giSine     ftgen 0, 0, 2^12, 10, 1              ; a sine wave
giLFOShape ftgen 0, 0, 2^12, 19, 0.5, 1, 180, 1 ; u-shaped parabola

instr 1
aSig    pinkish 0.1                     ; pink noise
aMod    poscil  0.005, 0.05, giLFOShape ; oscillator that makes use of the positive-domain-only u-shape parabola (function table giLFOShape)
iOffset =       ksmps/sr                ; minimum delay time
iFdback =       0.9                     ; amount of signal that will be fed back into the input
; create a delay buffer
aBufOut delayr  0.5                     ; read audio from end of 0.5s buffer
aTap    deltapi aMod + iOffset          ; tap audio from within delay buffer with a modulating delay time
        delayw  aSig + (aTap*iFdback)   ; write audio into the delay buffer
; send audio to the output (mix the input signal with the delayed signal)
        out     aSig + aTap
endin
</CsInstruments>
<CsScore>
i 1 0 25
e
</CsScore>
</CsoundSynthesizer>
30. REVERBERATION
Reverb is the effect a room or space has on a sound, where the sound we perceive is a mixture of the direct sound and the dense overlapping echoes of that sound reflecting off walls and objects within the space. Csound's earliest reverb opcodes are reverb and nreverb. By today's standards these sound rather crude and as a consequence modern Csound users tend to prefer the more recent opcodes freeverb and reverbsc.

The typical way to use a reverb is to run it as an effect throughout the entire Csound performance and to send it audio from other instruments, to which it adds reverb. This is more efficient than initiating a new reverb effect for every note that is played. This arrangement is a reflection of how a reverb effect would be used with a mixing desk in a conventional studio. There are several methods of sending audio from sound producing instruments to the reverb instrument, three of which will be introduced in the coming examples.

The first method uses Csound's global variables so that an audio variable created in one instrument can be read in another instrument. There are several points to highlight here. First, the global audio variable that is used to send audio to the reverb instrument is initialized to zero (silence) in the header area of the orchestra. This is done so that if no sound generating instruments are playing at the beginning of the performance this variable still exists and has a value - an error would result otherwise and Csound would not run. When audio is written into this variable in the sound generating instrument it is added to the current value of the global variable. This is done in order to permit polyphony and so that the state of this variable created by other sound producing instruments is not overwritten. Finally, it is important that the global variable is cleared (assigned a value of zero) when it is finished with at the end of the reverb instrument. If this were not done then the variable would quickly 'explode' (get astronomically high) as all previous instruments are merely adding values to it rather than redeclaring it. Clearing could be done simply by setting it to zero, but the clear opcode might prove useful in the future as it provides us with the opportunity to clear many variables simultaneously.

This example uses the freeverb opcode and is based on a plugin of the same name. Freeverb has a smooth reverberant tail and is perhaps similar in sound to a plate reverb. It provides us with two main parameters of control: 'room size', which is essentially a control of the amount of internal feedback and therefore reverb time, and 'high frequency damping', which controls the amount of attenuation of high frequencies. Both of these parameters should be set within the range 0 to 1. For room size a value of zero results in a very short reverb and a value of 1 results in a very long reverb. For high frequency damping a value of zero provides minimum damping of higher frequencies, giving the impression of a space with hard walls; a value of 1 provides maximum high frequency damping, thereby giving the impression of a space with soft surfaces such as thick carpets and heavy curtains. EXAMPLE 05E01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

gaRvbSend init 0 ; global audio variable initialized to zero

instr 1 ;sound generating instrument (sparse noise bursts)
kEnv loopseg 0.5,0, 0,1,0.003,1,0.0001,0,0.9969,0 ; amplitude envelope: a repeating pulse
aSig pinkish kEnv                                 ; pink noise. pulse envelope applied
     outs    aSig, aSig                           ; send audio to outputs
iRvbSendAmt = 0.4                                 ; reverb send amount (try range 0 - 1)
gaRvbSend   = gaRvbSend + (aSig * iRvbSendAmt)    ; add a proportion of the audio to the global reverb send variable
endin

instr 5 ; reverb - always on
kroomsize init 0.85 ; room size (range zero to 1)
kHFDamp   init 0.5  ; high frequency damping (range zero to 1)
aRvbL,aRvbR freeverb gaRvbSend, gaRvbSend, kroomsize, kHFDamp ; create reverberated version of input signal (note stereo input and output)
     outs  aRvbL, aRvbR ; send audio to outputs
     clear gaRvbSend
endin
</CsInstruments>
<CsScore>
i 1 0 300; noise pulses (input sound)
i 5 0 300; start reverb
e
</CsScore>
</CsoundSynthesizer>
The next example uses Csound's zak patching system to send audio from one instrument to another. The zak system is a little like a patch bay you might find in a recording studio. Zak channels can be a, k or i-rate. These channels are addressed using numbers, so it is important to keep track of which numbered channel does what. Our example will be very simple in that we will only be using one zak audio channel. Before using any of the zak opcodes for reading and writing data we must initialize zak storage space. This is done in the orchestra header area using the zakinit opcode. This opcode initializes both a-rate and k-rate channels; we must initialize at least one of each even if we don't need them.
zakinit 1, 1
The audio from the sound generating instrument is mixed into a zak audio channel:
zawm aSig * iRvbSendAmt, 1
Because audio is being mixed into our zak channel but the channel is never redefined, it needs to be cleared after we have finished with it. This is accomplished at the bottom of the reverb instrument.
zacl 0, 1
This example uses the reverbsc opcode. It too has a stereo input and output. The arguments that define its character are feedback level and cutoff frequency. Feedback level should be in the range zero to 1 and controls reverb time. Cutoff frequency should be within the range of human hearing (20Hz -20kHz) and it controls the cutoff frequencies of low pass filters within the algorithm. EXAMPLE 05E02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

zakinit 1, 1 ; initialize zak space (one a-rate and one k-rate variable; we will only be using the a-rate variable)

instr 1 ;sound generating instrument (sparse noise bursts)
kEnv loopseg 0.5,0, 0,1,0.003,1,0.0001,0,0.9969,0 ; amplitude envelope: a repeating pulse
aSig pinkish kEnv                                 ; pink noise. pulse envelope applied
     outs    aSig, aSig                           ; send audio to outputs
iRvbSendAmt = 0.4                                 ; reverb send amount (try range 0 - 1)
     zawm    aSig*iRvbSendAmt, 1                  ; write to zak audio channel 1 with mixing
endin

instr 5 ; reverb - always on
aInSig zar  1    ; read first zak audio channel
kFblvl init 0.85 ; feedback level - i.e. reverb time
kFco   init 7000 ; cutoff frequency of a filter within the feedback loop
aRvbL,aRvbR reverbsc aInSig, aInSig, kFblvl, kFco ; create reverberated version of input signal (note stereo input and output)
     outs   aRvbL, aRvbR ; send audio to outputs
     zacl   0, 1         ; clear zak audio channels
endin
</CsInstruments>
<CsScore>
i 1 0 10; noise pulses (input sound)
i 5 0 12; start reverb
e
</CsScore>
</CsoundSynthesizer>
reverbsc contains a mechanism to modulate delay times internally, which has the effect of harmonically blurring sounds the longer they are reverberated. This contrasts with freeverb's rather static reverberant tail. On the other hand, reverbsc's tail is not as smooth as that of freeverb - individual echoes are sometimes discernible - so it may not be as well suited to the reverberation of percussive sounds. Also be aware that, as well as reducing the reverb time, the feedback level parameter reduces the overall amplitude of the effect, to the point where a setting of 1 will result in silence from the opcode. A more recent option for sending sound from instrument to instrument in Csound is to use the chn... opcodes. These opcodes can also be used to allow Csound to interface with external programs using the software bus and the Csound API. EXAMPLE 05E03.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1 ;sound generating instrument (sparse noise bursts)
kEnv loopseg 0.5,0, 0,1,0.003,1,0.0001,0,0.9969,0 ; amplitude envelope: a repeating pulse
aSig pinkish kEnv                                 ; pink noise. pulse envelope applied
     outs    aSig, aSig                           ; send audio to outputs
iRvbSendAmt = 0.4                                 ; reverb send amount (try range 0 - 1)
     chnmix  aSig*iRvbSendAmt, "ReverbSend"       ; write audio into the named software channel
endin

instr 5 ; reverb - always on
aInSig chnget "ReverbSend" ; read audio from the named software channel
kTime  init   4            ; reverb time
kHDif  init   0.5          ; 'high frequency diffusion' - control of a filter within the feedback loop: 0=no damping, 1=maximum damping
aRvb   nreverb aInSig, kTime, kHDif ; create reverberated version of input signal
       outs    aRvb, aRvb           ; send audio to outputs
       chnclear "ReverbSend"        ; clear the named channel
endin
</CsInstruments>
<CsScore>
i 1 0 10; noise pulses (input sound)
i 5 0 12; start reverb
e
</CsScore>
</CsoundSynthesizer>
The next example implements the basic Schroeder reverb with four parallel comb filters followed by three series allpass filters. This also proves a useful exercise in routing audio signals within Csound. Perhaps the most crucial element of the Schroeder reverb is the choice of loop times for the comb and allpass filters - careful choices here should obviate the undesirable artefacts mentioned in the previous paragraph. If loop times are too long, individual echoes will become apparent; if they are too short, the characteristic ringing of comb filters will become apparent. If loop times between filters differ too much, the outputs from the various filters will not fuse. It is also important that the loop times are prime numbers so that echoes between different filters do not reinforce each other. It may also be necessary to adjust loop times when implementing very short reverbs or very long reverbs. The duration of the reverb is effectively determined by the reverb times for the comb filters. There is certainly scope for experimentation with the design of this example and exploration of settings other than the ones suggested here. This example consists of five instruments. The fifth instrument implements the reverb algorithm described above. The first four instruments act as a kind of generative drum machine to provide source material for the reverb. Generally sharp percussive sounds provide the sternest test of a reverb effect. Instrument 1 triggers the various synthesized drum sounds (bass drum, snare and closed hi-hat) produced by instruments 2 to 4. EXAMPLE 05E04.csd
<CsoundSynthesizer>
<CsOptions>
-odac ;activates real time sound output
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr     = 44100
ksmps  = 1
nchnls = 2
0dbfs  = 1

giSine       ftgen 0, 0, 2^12, 10, 1 ; a sine wave
gaRvbSend    init  0                 ; global audio variable initialized to zero
giRvbSendAmt init  0.4               ; reverb send amount (try range 0 - 1)

instr 1 ;trigger drum hits
ktrigger metro      5        ; rate of drum strikes
kdrum    random     2, 4.999 ; randomly choose drum to strike
         schedkwhen ktrigger, 0, 0, kdrum, 0, 0.1 ; strike a drum
endin

instr 2; sound 1 - bass drum
iamp  random  0, 0.5           ; amplitude randomly chosen from between the given values
p3    =       0.2              ; define duration for this sound
aenv  line    1, p3, 0.001     ; amplitude envelope - percussive decay
icps  exprand 30               ; cycles-per-second offset randomly chosen from an exponential distribution
kcps  expon   icps+120, p3, 20 ; pitch glissando
aSig  oscil   aenv*0.5*iamp, kcps, giSine ; oscillator
      outs    aSig, aSig       ; send audio to outputs
gaRvbSend = gaRvbSend + (aSig * giRvbSendAmt) ; add portion of signal to global reverb send audio variable
endin

instr 3; sound 3 - snare
iAmp  random  0, 0.5           ; amplitude randomly chosen from between the given values
p3    =       0.3              ; define duration for this sound
aEnv  expon   1, p3, 0.001     ; amplitude envelope - percussive decay
aNse  noise   1, 0             ; create noise component for snare drum sound
iCps  exprand 20               ; cycles-per-second offset randomly chosen from an exponential distribution
kCps  expon   250 + iCps, p3, 200+iCps ; create tone component frequency glissando for snare drum sound
aJit  randomi 0.2, 1.8, 10000  ; jitter on frequency for tone component
aTne  oscil   aEnv, kCps*aJit, giSine ; create tone component
aSig  sum     aNse*0.1, aTne   ; mix noise and tone sound components
aRes  comb    aSig, 0.02, 0.0035 ; pass signal through a comb filter to create static harmonic resonance
aSig  =       aRes * aEnv * iAmp ; apply envelope and amplitude factor to sound
      outs    aSig, aSig       ; send audio to outputs
gaRvbSend = gaRvbSend + (aSig * giRvbSendAmt) ; add portion of signal to global reverb send audio variable
endin

instr 4; sound 4 - closed hi-hat
iAmp  random  0, 1.5           ; amplitude randomly chosen from between the given values
p3    =       0.1              ; define duration for this sound
aEnv  expon   1, p3, 0.001     ; amplitude envelope - percussive decay
aSig  noise   aEnv, 0          ; create sound for closed hi-hat
aSig  buthp   aSig*0.5*iAmp, 12000 ; highpass filter sound
aSig  buthp   aSig, 12000      ; highpass filter sound again to sharpen cutoff
      outs    aSig, aSig       ; send audio to outputs
gaRvbSend = gaRvbSend + (aSig * giRvbSendAmt) ; add portion of signal to global reverb send audio variable
endin

instr 5; schroeder reverb - always on
; read in variables from the score
kRvt = p4
kMix = p5
; print some information about current settings gleaned from the score
prints "Type:"
prints p6
prints "\\nReverb Time:%2.1f\\nDry/Wet Mix:%2.1f\\n\\n", p4, p5
; four parallel comb filters
a1   comb gaRvbSend, kRvt, 0.0297 ; comb filter 1
a2   comb gaRvbSend, kRvt, 0.0371 ; comb filter 2
a3   comb gaRvbSend, kRvt, 0.0411 ; comb filter 3
a4   comb gaRvbSend, kRvt, 0.0437 ; comb filter 4
asum sum  a1, a2, a3, a4          ; sum (mix) the outputs of all 4 comb filters
; two allpass filters in series
a5   alpass asum, 0.1, 0.005      ; send comb filter mix through first allpass filter
aOut alpass a5, 0.1, 0.02291      ; send comb filter mix through second allpass filter
amix ntrpol gaRvbSend, aOut, kMix ; create a dry/wet mix between the dry and the reverberated signal
     outs   amix, amix            ; send audio to outputs
     clear  gaRvbSend             ; clear global audio variables
endin
</CsInstruments>
<CsScore>
; room reverb
i 1 0  10                                              ; start drum machine trigger instr
i 5 0  11 1   0.5 "Room Reverb"                        ; start reverb
; tight ambience
i 1 11 10                                              ; start drum machine trigger instr
i 5 11 11 0.3 0.9 "Tight Ambience"                     ; start reverb
; long reverb (low in the mix)
i 1 22 10                                              ; start drum machine trigger instr
i 5 22 15 5   0.1 "Long Reverb (Low In the Mix)"       ; start reverb
; very long reverb (high in the mix)
i 1 37 10                                              ; start drum machine trigger instr
i 5 37 25 8   0.9 "Very Long Reverb (High in the Mix)" ; start reverb
e
</CsScore>
</CsoundSynthesizer>
31. AM / RM / WAVESHAPING
A theoretical introduction to amplitude modulation, ring modulation and waveshaping is given in the "sound-synthesis" chapter 4.
AMPLITUDE MODULATION
In "sound-synthesis" the principle of AM was shown as a amplitude multiplication of two sine oscillators. Later we've used a more complex modulators, to generate more complex spectrums. The principle also works very well with sound-files (samples) or live-audio-input. Karlheinz Stockhausens "Mixtur fur Orchester, vier Sinusgeneratoren und vier Ringmodulatoren (1964) was the first piece which used analog ringmodulation (AM without DC-offset) to alter the acoustic instruments pitch in realtime during a live-performance. The word ringmodulation inherites from the analog four-diode circuit which was arranged in a "ring". In the following example shows how this can be done digitally in Csound. In this case a sound-file works as the carrier which is modulated by a sine-wave-osc. The result sounds like old 'Harald Bode' pitch-shifters from the 1960's. Example: 05F01.csd
<CsoundSynthesizer> <CsOptions> -o dac </CsOptions> <CsInstruments> sr = 48000 ksmps = 32 nchnls = 1 0dbfs = 1 instr 1 ; Ringmodulation aSine1 poscil 0.8, p4, 1 aSample diskin2 "fox.wav", 1, 0, 1, 0, 32 out aSine1*aSample endin </CsInstruments> <CsScore> f 1 0 1024 10 1 ; sine i 1 0 2 400 i 1 2 2 800 i 1 4 2 1600 i 1 6 2 200 i 1 8 2 2400 e </CsScore> </CsoundSynthesizer> ; written by Alex Hofmann (Mar. 2011)
WAVESHAPING
coming soon..
32. GRANULAR SYNTHESIS
This chapter will focus upon granular synthesis used as a DSP technique on recorded sound files and will introduce techniques including time stretching, time compressing and pitch shifting. The emphasis will be upon asynchronous granulation. For an introduction to synchronous granular synthesis using simple waveforms please refer to chapter 04F. Csound offers a wide range of opcodes for sound granulation. Each has its own strengths, weaknesses and suitability for a particular task. Some are easier to use than others; some, such as granule and partikkel, are, at least in terms of the number of input arguments they demand, amongst Csound's most complex opcodes.
giWFn ftgen 2,0,16384,9,0.5,1,0 ;window function - half of a sine wave, used as an amplitude envelope for each grain

instr 1
kamp        =         0.1
ktimewarp   line      p4, p3, p5 ;timestretch factor / pointer
kresample   line      p6, p3, p7 ;transposition
ifn1        =         giSound
ifn2        =         giWFn
ibeg        =         0
iwsize      =         3000
irandw      =         3000
ioverlap    =         50
itimemode   =         0
            prints    p8         ;print a description of this note
aSigL,aSigR sndwarpst kamp,ktimewarp,kresample,ifn1,ibeg, \
                      iwsize,irandw,ioverlap,ifn2,itimemode
            outs      aSigL, aSigR
endin
</CsInstruments>
<CsScore>
;p4 = stretch factor begin / pointer location begin
;p5 = stretch factor end / pointer location end
;p6 = resample begin (transposition)
;p7 = resample end (transposition)
;p8 = description string
; p1 p2   p3 p4 p5 p6    p7    p8
i 1  0    10 1  1  1     1     "No time stretch. No pitch shift."
i 1  10.5 10 2  2  1     1     "%nTime stretch x 2."
i 1  21   20 1  20 1     1     "%nGradually increasing time stretch factor from x 1 to x 20."
i 1  41.5 10 1  1  2     2     "%nPitch shift x 2 (up 1 octave)."
i 1  52   10 1  1  0.5   0.5   "%nPitch shift x 0.5 (down 1 octave)."
i 1  62.5 10 1  1  4     0.25  "%nPitch shift glides smoothly from 4 (up 2 octaves) to 0.25 (down 2 octaves)."
i 1  73   15 4  4  1     1     "%nA chord containing three transpositions: unison, +5th, +10th. (x4 time stretch.)"
i 1  73   15 4  4  [3/2] [3/2] ""
i 1  73   15 4  4  3     3     ""
e
</CsScore>
</CsoundSynthesizer>
The next example uses sndwarp's other timestretch mode, with which we explicitly define a pointer position from where in the source file grains shall begin. This method allows us much greater freedom with how a sound will be time warped; we can even freeze movement and go backwards in time - something that is not possible with the stretch factor mode. This example is self generative in that instrument 2, the instrument that actually creates the granular synthesis textures, is repeatedly triggered by instrument 1. Instrument 2 is triggered once every 12.5s and these notes then last for 40s each, so they will overlap. Instrument 1 is played from the score for 1 hour so this entire process will last that length of time. Many of the parameters of granulation are chosen randomly when a note begins so that each note will have unique characteristics. The timestretch is created by a line function, the start and end points of which are defined randomly when the note begins. Grain/window size and window size randomization are also defined randomly when a note begins - notes with smaller window sizes will have a fuzzy, airy quality whereas notes with a larger window size will produce a clearer tone. Each note will be randomly transposed (within a range of +/- 2 octaves) but that transposition will be quantized to a rounded number of semitones - this is done as a response to the equally tempered nature of the source sound material used. Each entire note is enveloped by an amplitude envelope and a resonant lowpass filter, in each case encasing each note under a smooth arc. Finally a small amount of reverb is added to smooth the overall texture slightly. EXAMPLE 05G02.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b1024 -dm0 </CsOptions> <CsInstruments> ;example by Iain McCurdy sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 ;the name of the sound file used is defined as a string variable ;- as it will be used twice in the code. ;The simplifies the task of adapting the orchestra ;to use a different sound file gSfile = "ClassicalGuitar.wav" ;waveform used for granulation giSound ftgen 1,0,2097152,1,gSfile,0,0,0 ;window function - used as an amplitude envelope for each grain ;(first half of a sine wave) giWFn ftgen 2,0,16384,9,0.5,1,0 seed 0; seed the random generators from the system clock gaSendL init 0 gaSendR init 0 instr 1 ; triggers instrument 2 ktrigger metro 0.08 ;metronome of triggers. One every 12.5s schedkwhen ktrigger,0,0,2,0,40 ;trigger instr. 2 for 40s endin instr 2 ; generates granular synthesis textures ;define the input variables ifn1 = giSound ilen = nsamp(ifn1)/sr iPtrStart random 1,ilen-1 iPtrTrav random -1,1 ktimewarp line iPtrStart,p3,iPtrStart+iPtrTrav kamp linseg 0,p3/2,0.2,p3/2,0 iresample random -24,24.99 iresample = semitone(int(iresample)) ifn2 = giWFn ibeg = 0 iwsize random 400,10000 irandw = iwsize/3 ioverlap = 50 itimemode = 1 ;create a stereo granular synthesis texture using sndwarp aSigL,aSigR sndwarpst kamp,ktimewarp,iresample,ifn1,ibeg,\ iwsize,irandw,ioverlap,ifn2,itimemode ;envelope the signal with a lowpass filter kcf expseg 50,p3/2,12000,p3/2,50 aSigL moogvcf2 aSigL, kcf, 0.5 aSigR moogvcf2 aSigR, kcf, 0.5 ; add a little of out audio signals to the global send variables ; these will be sent to the reverb instrument (2) gaSendL = gaSendL+(aSigL*0.4) gaSendR = gaSendR+(aSigR*0.4) outs aSigL,aSigR endin instr 3 ; global reverb instrument (always on) ; use Sean Costello's high quality reverbsc opcode for creating reverb signal aRvbL,aRvbR reverbsc gaSendL,gaSendR,0.85,8000 outs aRvbL,aRvbR ;clear variables to prevent out of control accumulation clear gaSendL,gaSendR endin </CsInstruments> <CsScore> ; p1 p2 p3 i 1 0 3600 ; triggers instr 2 i 3 0 3600 ; reverb instrument e </CsScore> </CsoundSynthesizer>
The granule opcode is one of Csound's most complex opcodes, requiring up to 22 input arguments in order to function. Only a few of these arguments are available during performance (k-rate), so it is less well suited to real-time modulation; for real-time work a more nimble implementation such as syncgrain, fog or grain3 would be recommended. Instead granule proves itself ideally suited to the production of massive clouds of granulated sound in which individual grains are often completely indistinguishable. There are still two important k-rate variables that have a powerful effect on the texture created when they are modulated during a note: grain gap - effectively density - and grain size, which will affect the clarity of the texture - textures with smaller grains will sound fuzzier and airier, textures with larger grains will sound clearer. In the following example transeg envelopes move the grain gap and grain size parameters through a variety of different states across the duration of each note. With granule we define a number of grain streams for the opcode using its 'ivoice' input argument. This will also have an effect on the density of the texture produced. Like sndwarp's first timestretching mode, granule also has a stretch ratio parameter. Confusingly it works the other way around though: a value of 0.5 will slow movement through the file by 1/2, 2 will double it and so on. Increasing grain gap will also slow progress through the sound file. granule also provides up to four pitch shift voices so that we can create chord-like structures without having to use more than one iteration of the opcode. We define the number of pitch shifting voices we would like to use with the 'ipshift' parameter. If this is given a value of zero, all pitch shifting intervals will be ignored and grain-by-grain transpositions will be chosen randomly within the range +/- 1 octave. granule contains built-in randomization for several of its parameters in order to facilitate asynchronous granular synthesis more easily. In the case of grain gap and grain size randomization these are defined as percentages by which to randomize the fixed values. Unlike Csound's other granular synthesis opcodes, granule does not use a function table to define the amplitude envelope for each grain; instead attack and decay times are defined as percentages of the total grain duration using input arguments. The sum of these two values should total less than 100. Five notes are played by this example. While each note explores grain gap and grain size in the same way each time, different permutations of the four pitch transpositions are explored in each note. Information about these transpositions is printed to the terminal as each note begins. EXAMPLE 05G03.csd
<CsoundSynthesizer> <CsOptions> -odevaudio -b1024 -dm0 </CsOptions> <CsInstruments> ;example by Iain McCurdy sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 ;waveforms used for granulation giSoundL ftgen 1,0,1048576,1,"ClassicalGuitar.wav",0,0,1 giSoundR ftgen 2,0,1048576,1,"ClassicalGuitar.wav",0,0,2 seed 0; seed the random generators from the system clock gaSendL init 0 gaSendR init 0 instr 1 ; generates granular synthesis textures prints p9 ;define the input variables kamp linseg 0,1,0.1,p3-1.2,0.1,0.2,0 ivoice = 64 iratio = 0.5 imode = 1 ithd = 0 ipshift = p8 igskip = 0.1 igskip_os = 0.5 ilength = nsamp(giSoundL)/sr kgap transeg 0,20,14,4, 5,8,8, 8,-10,0, 15,0,0.1 igap_os = 50 kgsize transeg 0.04,20,0,0.04, 5,-4,0.01, 8,0,0.01, 15,5,0.4 igsize_os = 50 iatt = 30 idec = 30 iseedL = 0 iseedR = 0.21768 ipitch1 = p4 ipitch2 = p5 ipitch3 = p6 ipitch4 = p7 ;create the granular synthesis textures; one for each channel aSigL granule kamp,ivoice,iratio,imode,ithd,giSoundL,ipshift,igskip,\ igskip_os,ilength,kgap,igap_os,kgsize,igsize_os,iatt,idec,iseedL,\ ipitch1,ipitch2,ipitch3,ipitch4 aSigR granule kamp,ivoice,iratio,imode,ithd,giSoundR,ipshift,igskip,\ igskip_os,ilength,kgap,igap_os,kgsize,igsize_os,iatt,idec,iseedR,\ ipitch1,ipitch2,ipitch3,ipitch4 ;send a little to the reverb effect gaSendL = gaSendL+(aSigL*0.3) gaSendR = gaSendR+(aSigR*0.3) outs aSigL,aSigR endin instr 2 ; global reverb instrument (always on) ; use reverbsc opcode for creating reverb signal aRvbL,aRvbR reverbsc gaSendL,gaSendR,0.85,8000 outs aRvbL,aRvbR ;clear variables to prevent out of control accumulation clear gaSendL,gaSendR endin </CsInstruments> <CsScore> ;p4 = pitch 1 ;p5 = pitch 2 ;p6 = pitch 3 ;p7 = pitch 4 ;p8 = number of pitch shift voices (0=random pitch) ; p1 p2 p3 p4 p5 p6 p7 p8 p9 i 1 0 48 1 1 1 1 4 "pitches: all unison" i 1 + . 1 0.5 0.25 2 4 "%npitches: 1(unison) 0.5(down 1 octave) 0.25(down 2 octaves) 2(up 1 octave)" i 1 + . 1 2 4 8 4 "%npitches: 1 2 4 8" i 1 + . 1 [3/4] [5/6] [4/3] 4 "%npitches: 1 3/4 5/6 4/3" i 1 + . 1 1 1 1 0 "%npitches: all random" i 2 0 [48*5+2]; reverb instrument e </CsScore> </CsoundSynthesizer>
CONCLUSION
Two contrasting opcodes for granular synthesis have been considered in this chapter, but this is in no way meant to suggest that these are the best; in fact it is strongly recommended to explore all of Csound's other granular opcodes, as they each have their own unique character. The syncgrain family of opcodes (including also syncloop and diskgrain) are deceptively simple, as their k-rate controls encourage further abstractions of grain manipulation. fog is designed for FOF-synthesis-style synchronous granulation but with sound files, and partikkel offers comprehensive control of grain characteristics on a grain-by-grain basis, inspired by Curtis Roads' encyclopedic book on granular synthesis, 'Microsound'.
33. CONVOLUTION
coming in the next release ...
HOW TO DO IT IN CSOUND?
As usual, there is not just one way to work with FFT and spectral processing in Csound. There are several families of opcodes. Each family can be very useful for a specific approach to working in the frequency domain. Have a look at the Spectral Processing overview in the Csound Manual. This introduction will focus on the so-called "Phase Vocoder Streaming" opcodes (all of these opcodes begin with the characters "pvs"), which came into Csound through the work of Richard Dobson, Victor Lazzarini and others. They are designed to work in realtime in the frequency domain in Csound; and indeed they are not just very fast but also easier to use than FFT implementations in some other applications.
the audio signal derives from playing back a soundfile from the hard disk (instr 1)
the audio signal is the live input (instr 2)
(Be careful - the example can produce feedback three seconds after the start. Best results are with headphones.)
EXAMPLE 04I01.csd 1
<CsoundSynthesizer>
<CsOptions>
-i adc -o dac
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
;uses the file "fox.wav" (distributed with the Csound Manual)
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

;general values for fourier transform
gifftsiz  = 1024
gioverlap = 256
giwintyp  = 1 ;von hann window

instr 1 ;soundfile to fsig
asig    soundin "fox.wav"
fsig    pvsanal asig, gifftsiz, gioverlap, gifftsiz*2, giwintyp
aback   pvsynth fsig
        outs    aback, aback
endin

instr 2 ;live input to fsig
        prints  "LIVE INPUT NOW!%n"
ain     inch    1 ;live input from channel 1
fsig    pvsanal ain, gifftsiz, gioverlap, gifftsiz, giwintyp
alisten pvsynth fsig
        outs    alisten, alisten
endin

</CsInstruments>
<CsScore>
i 1 0 3
i 2 3 10
</CsScore>
</CsoundSynthesizer>
You should hear first the "fox.wav" sample, and then, slightly delayed, the live input signal. The delay depends first on the general settings for realtime input (ksmps, -b and -B: see chapter 2D). But second, there is also a delay added by the FFT. The window size here is 1024 samples, so the additional delay is 1024/44100 = 0.023 seconds. If you change the window size gifftsiz to 2048 or to 512 samples, you should get a larger or shorter delay. So for realtime applications, the decision about the FFT size is not only a question of "better time resolution versus better frequency resolution", but also a question of tolerable latency. What happens in the example above? At first, the audio signal (asig, ain) is analyzed and transformed into an f-signal. This is done via the opcode pvsanal. Then nothing happens but transforming the frequency domain signal back into an audio signal. This is called inverse Fourier transformation (IFT or IFFT) and is done by the opcode pvsynth.2 In this case, it is just a test: to see if everything works, to hear the results of different window sizes, to check the latency. But potentially you can insert any other pvs opcode(s) in between this entrance and exit:
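As a minimal sketch of such an insertion - reusing the global values from the example above and picking pvsblur as one arbitrary processing opcode - the middle of an instrument could look like this:

asig  soundin "fox.wav"                                     ;audio input
fsig  pvsanal asig, gifftsiz, gioverlap, gifftsiz, giwintyp ;forward transform: audio to fsig
fproc pvsblur fsig, 0.2, 0.2                                ;any pvs processing in between (pvsblur is just an example)
aout  pvsynth fproc                                         ;inverse transform: fsig back to audio
      outs    aout, aout

Any other pvs opcode, or a chain of several, could take the place of the pvsblur line.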
PITCH SHIFTING
Simple pitch shifting can be done by the opcode pvscale. All the frequency data in the f-signal are scaled by a certain value. Multiplying by 2 results in transposing an octave upwards; multiplying by 0.5 in transposing an octave downwards. For accepting cent values instead of ratios as input, the cent opcode can be used. EXAMPLE 04I02.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
;example by joachim heintz
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

gifftsize  = 1024
gioverlap  = gifftsize / 4
giwinsize  = gifftsize
giwinshape = 1 ;von-Hann window
instr 1 ;scaling by a factor ain soundin "fox.wav" fftin pvsanal ain, gifftsize, gioverlap, giwinsize, giwinshape fftscal pvscale fftin, p4 aout pvsynth fftscal out aout endin instr 2 ;scaling by a cent value ain soundin "fox.wav" fftin pvsanal ain, gifftsize, gioverlap, giwinsize, giwinshape fftscal pvscale fftin, cent(p4) aout pvsynth fftscal out aout/3 endin </CsInstruments> <CsScore> i 1 0 3 1; original pitch i 1 3 3 .5; octave lower i 1 6 3 2 ;octave higher i 2 9 3 0 i 2 9 3 400 ;major third i 2 9 3 700 ;fifth e </CsScore> </CsoundSynthesizer>
Pitch shifting via FFT resynthesis is very simple in general, but more or less complicated in detail. With speech, for instance, there is a problem because of the formants. If you simply scale the frequencies, the formants are shifted too, and the sound takes on the typical "Mickey Mouse" effect. There are some parameters in the pvscale opcode, and some other pvs opcodes, which can help to avoid this, but the result always depends on the individual sounds and on your ideas.
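One of those parameters is pvscale's optional formant-keeping argument. A minimal sketch - a hypothetical instr 3 that could be added to the example above, reusing its global FFT settings - might look like this:

instr 3 ;scaling by a factor, trying to keep the formants
ain     soundin "fox.wav"
fftin   pvsanal ain, gifftsize, gioverlap, giwinsize, giwinshape
fftscal pvscale fftin, 1.5, 2 ;third argument: 0 = plain scaling, 1 or 2 = try to keep the spectral envelope (formants)
aout    pvsynth fftscal
        out     aout
endin

Which of the two formant-keeping modes sounds better depends on the source material.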
TIME STRETCH/COMPRESS
As the Fourier transformation separates the spectral information from its progression in time, both elements can be varied independently. Pitch shifting via the pvscale opcode, as in the previous example, is independent of the speed of reading the audio data. The complement is changing the time without changing the pitch: time stretching or time compression. The simplest way to alter the speed of a sampled sound is to use pvstanal (new in Csound 5.13). This opcode transforms a sound stored in a function table into an f-signal, and time manipulations are simply done by altering the ktimescal parameter. EXAMPLE 04I03.csd
<CsoundSynthesizer> <CsOptions> -odac </CsOptions> <CsInstruments> ;example by joachim heintz sr = 44100
ksmps = 32 nchnls = 1 0dbfs = 1 ;store the sample "fox.wav" in a function table (buffer) gifil ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1 ;general values for the pvstanal opcode giamp = 1 ;amplitude scaling gipitch = 1 ;pitch scaling gidet = 0 ;onset detection giwrap = 0 ;no loop reading giskip = 0 ;start at the beginning gifftsiz = 1024 ;fft size giovlp = gifftsiz/8 ;overlap size githresh = 0 ;threshold instr 1 ;simple time stretching / compressing fsig pvstanal p4, giamp, gipitch, gifil, gidet, giwrap, giskip, gifftsiz, giovlp, githresh aout pvsynth fsig out aout endin instr 2 ;automatic scratching kspeed randi 2, 2, 2 ;speed randomly between -2 and 2 kpitch randi p4, 2, 2 ;pitch between 2 octaves lower or higher fsig pvstanal kspeed, 1, octave(kpitch), gifil aout pvsynth fsig aenv linen aout, .003, p3, .1 out aout endin </CsInstruments> <CsScore> ; speed i 1 0 3 1 i . + 10 .33 i . + 2 3 s i 2 0 10 0;random scratching without ... i . 11 10 2 ;... and with pitch changes </CsScore> </CsoundSynthesizer>
CROSS SYNTHESIS
Working in the frequency domain makes it possible to combine or "cross" the spectra of two sounds. As the Fourier transform of an analysis frame results in a frequency and an amplitude value for each frequency "bin", there are many different ways of performing cross synthesis. These are the most common methods:
Combine the amplitudes of sound A with the frequencies of sound B. This is the classical phase vocoder approach. If the frequencies are not taken completely from sound B, but can be scaled between A and B, the crossing is more flexible and adjustable to the sounds being used. This is what pvsvoc does.
Combine the frequencies of sound A with the amplitudes of sound B. Give more flexibility by scaling the amplitudes between A and B: pvscross.
Get the frequencies from sound A. Multiply the amplitudes of A and B. This can be described as spectral filtering. pvsfilter gives a flexible version of this filtering effect.
This is an example of phase vocoding. It is nice to have speech as sound A, and a rich sound, like classical music, as sound B. Here the "fox" sample is being played at half speed and "sings" through the music of sound B: EXAMPLE 04I04.csd
<CsoundSynthesizer> <CsOptions> -odac </CsOptions> <CsInstruments> ;example by joachim heintz sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 1 ;store the samples in function tables (buffers)
gifilA ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1
gifilB ftgen 0, 0, 0, 1, "ClassGuit.wav", 0, 0, 1
;general values for the pvstanal opcode giamp = 1 ;amplitude scaling gipitch = 1 ;pitch scaling gidet = 0 ;onset detection giwrap = 1 ;loop reading giskip = 0 ;start at the beginning gifftsiz = 1024 ;fft size giovlp = gifftsiz/8 ;overlap size githresh = 0 ;threshold instr 1 ;read "fox.wav" in half speed and cross with classical guitar sample fsigA pvstanal .5, giamp, gipitch, gifilA, gidet, giwrap, giskip, gifftsiz, giovlp, githresh fsigB pvstanal 1, giamp, gipitch, gifilB, gidet, giwrap, giskip, gifftsiz, giovlp, githresh fvoc pvsvoc fsigA, fsigB, 1, 1 aout pvsynth fvoc aenv linen aout, .1, p3, .5 out aout endin </CsInstruments> <CsScore> i 1 0 11 </CsScore> </CsoundSynthesizer>
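pvscross, mentioned above as the complementary method (frequencies from sound A, amplitudes scaled between A and B), is not shown in its own example here. As a minimal sketch under the same assumptions as the code above (same function tables and pvstanal settings), the pvsvoc line could be replaced like this:

fsigA  pvstanal .5, giamp, gipitch, gifilA, gidet, giwrap, giskip, gifftsiz, giovlp, githresh
fsigB  pvstanal 1, giamp, gipitch, gifilB, gidet, giwrap, giskip, gifftsiz, giovlp, githresh
fcross pvscross fsigA, fsigB, 0, 1 ;frequencies from A, amplitudes taken entirely from B
aout   pvsynth  fcross
       out      aout

Raising the third argument towards 1 mixes more of sound A's own amplitudes back in.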
The last example shows spectral filtering via pvsfilter. The well-known "fox" (sound A) is now filtered by the viola (sound B). Its resulting intensity depends on the amplitudes of sound B, and if the amplitudes are strong enough, you hear a resonating effect: EXAMPLE 04I06.csd
<CsoundSynthesizer> <CsOptions> -odac </CsOptions> <CsInstruments> ;example by joachim heintz sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 1 ;store the samples in function tables (buffers) gifilA ftgen 0, 0, 0, 1, "fox.wav", 0, 0, 1 gifilB ftgen 0, 0, 0, 1, "BratscheMono.wav", 0, 0, 1 ;general values for the pvstanal opcode giamp = 1 ;amplitude scaling gipitch = 1 ;pitch scaling gidet = 0 ;onset detection giwrap = 1 ;loop reading giskip = 0 ;start at the beginning gifftsiz = 1024 ;fft size giovlp = gifftsiz/4 ;overlap size githresh = 0 ;threshold instr 1 ;filters "fox.wav" (half speed) by the spectrum of the viola (double speed) fsigA pvstanal .5, giamp, gipitch, gifilA, gidet, giwrap, giskip, gifftsiz, giovlp, githresh fsigB pvstanal 2, 5, gipitch, gifilB, gidet, giwrap, giskip, gifftsiz, giovlp, githresh ffilt pvsfilter fsigA, fsigB, 1 aout pvsynth ffilt aenv linen aout, .1, p3, .5 out aout endin </CsInstruments> <CsScore> i 1 0 11 </CsScore> </CsoundSynthesizer>
There are many more ways of working with the pvs opcodes. Have a look at the Signal Processing II section of the Opcodes Overview to find some hints.
1. All soundfiles used in this manual are free and can be downloaded at www.csoundtutorial.net ^
2. For some cases it is good to have pvsadsyn as an alternative, which uses a bank of oscillators for resynthesis. ^

SAMPLES
35. RECORD AND PLAY SOUNDFILES
36. RECORD AND PLAY BUFFERS
instr 1; a simple tone generator aEnv expon 0.2, p3, 0.001; a percussive amplitude envelope aSig poscil aEnv, cpsmidinn(p4), giSine; audio oscillator out aSig; send audio to output endin </CsInstruments> <CsScore> ; two chords i 1 0 5 60 i 1 0.1 5 65 i 1 0.2 5 67 i 1 0.3 5 71 i 1 3 5 65 i 1 3.1 5 67 i 1 3.2 5 73 i 1 3.3 5 78 e </CsScore> </CsoundSynthesizer>
instr 1; a simple tone generator
aEnv  expon  0.2, p3, 0.001; percussive amplitude envelope
aSig  poscil aEnv, cpsmidinn(p4), giSine; audio oscillator
gaSig =      gaSig + aSig; accumulate this note with the global audio variable
endin
instr 2; write to a file (always on) ; USE FOUT TO WRITE TO A FILE ON DISK ; FORMAT 4 RESULTS IN A 16BIT WAV ; NUMBER OF CHANNELS IS DETERMINED BY THE NUMBER OF AUDIO VARIABLES SUPPLIED TO fout fout "WriteToDisk2.wav", 4, gaSig out gaSig; send audio for all notes combined to the output clear gaSig; clear global audio variable to prevent run away accumulation endin </CsInstruments> <CsScore> ; activate recording instrument to encapsulate the entire performance i 2 0 8.3 ; two chords
i i i i
; STORE AUDIO IN RAM USING GEN01 FUNCTION TABLE giSoundFile ftgen 0, 0, 1048576, 1, "loop.wav", 0, 0, 0 instr 1; play audio from function table using flooper2 opcode kAmp init 1; amplitude parameter kPitch init 1; pitch/speed parameter kLoopStart init 0; point where looping begins (in seconds) - in this case the very beginning of the file kLoopEnd = nsamp(giSoundFile)/sr; point where looping ends (in seconds) - in this case the end of the file kCrossFade = 0; cross-fade time ; READ AUDIO FROM FUNCTION TABLE USING flooper2 OPCODE aSig flooper2 kAmp, kPitch, kLoopStart, kLoopEnd, kCrossFade, giSoundFile out aSig; send audio to output endin </CsInstruments> <CsScore> i 1 0 6 </CsScore> </CsoundSynthesizer>
EXAMPLE 06B02.csd
<CsoundSynthesizer> <CsOptions> ; audio in and out are required -iadc -odac -d -m0 </CsOptions> <CsInstruments> ;example written by Iain McCurdy sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 1 ; maximum amplitude regardless of bit depth instr 1 ; PRINT INSTRUCTIONS prints "Press 'r' to record, 's' to stop playback, '+' to increase pitch, '-' to decrease pitch.\\n" ; SENSE KEYBOARD ACTIVITY kKey sensekey; sense activity on the computer keyboard aIn inch 1; read audio from first input channel kPitch init 1; initialize pitch parameter iDur init 2; inititialize duration of loop parameter iFade init 0.05 ;initialize crossfade time parameter if kKey = 114 then; if 'r' has been pressed... kTrig = 1; set trigger to begin record-playback process elseif kKey = 115 then; if 's' has been pressed... kTrig = 0; set trigger to deactivate sndloop record-playback process elseif kKey = 43 then; if '+' has been pressed... kPitch = kPitch + 0.02; increment pitch parameter elseif kKey = 95 then; if ''-' has been pressed kPitch = kPitch - 0.02; decrement pitch parameter endif; end of conditional branch ; CREATE SNDLOOP INSTANCE aOut, kRec sndloop aIn, kPitch, kTrig, iDur, iFade; (kRec output is not used) out aOut; send audio to output endin </CsInstruments> <CsScore> i 1 0 3600; sense keyboard activity instrument </CsScore> </CsoundSynthesizer>
<CsOptions> ; audio in and out are required -iadc -odac -d -m0 </CsOptions> <CsInstruments> ;example written by Iain McCurdy sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 1 ; maximum amplitude regardless of bit depth giBuffer ftgen 0, 0, 2^17, 7, 0; table for audio data storage maxalloc 2,1; allow only one instance of the recordsing instrument at a time instr 1; sense keyboard activity and start record or playback instruments accordingly prints "Press 'r' to record, 'p' for playback.\\n" iTableLen = ftlen(giBuffer); derive buffer function table length in points idur = iTableLen / sr; derive storage time potential of buffer function table kKey sensekey; sense activity on the computer keyboard if kKey=114 then; if ASCCI value of 114 is output, i.e. 'r' has been pressed... event "i", 2, 0, idur, iTableLen; activate recording instrument for the duration of the buffer storage potential. Pass it table length in point as a p-field variable endif; end of conditional branch if kKey=112 then; if ASCCI value of 112 is output, i.e. 'p' has been pressed... event "i", 3, 0, idur, iTableLen; activate recording instrument for the duration of the buffer storage potential. Pass it table length in point as a p-field variable endif; end of conditional branch endin instr 2; record to buffer iTableLen = p4; read in value from p-field (length of function table in samples) ;PRINT PROGRESS INFORMATION TO TERMINAL prints "recording" printks ".", 0.25; print a '.' every quarter of a second krelease release; sense when note is in final performance pass (output=1) if krelease=1 then; if note is in final performance pass and about to end... printks "\\ndone\\n", 0; print a message bounded by 'newlines' endif; end of conditional branch ; WRITE TO TABLE ain inch 1; read audio from live input channel 1 andx line 0, p3, iTableLen; create a pointer for writing to table tablew ain, andx, giBuffer ;write audio to audio storage table endin instr 3; playback from buffer iTableLen = p4; read in value from p-field (length of function table in samples) ;PRINT PROGRESS INFORMATION TO TERMINAL prints "playback" printks ".", 0.25; print a '.' every quarter of a second krelease release; sense when note is in final performance pass (output=1) if krelease=1 then; if note is in final performance pass and about to end... printks "\\ndone\\n", 0; print a message bounded by 'newlines' endif; end of conditional branch ; READ FROM TABLE aNdx line 0, p3, iTableLen; create a pointer for reading from the table a1 table aNdx, giBuffer ;read audio to audio storage table out a1; send audio to output endin </CsInstruments> <CsScore> i 1 0 3600; sense keyboard activity instrument </CsScore> </CsoundSynthesizer>
You can decide whether you want to assign a certain number to the table, or let Csound do this job and call the table via its variable, in this case giBuf1. So let's start by writing a UDO for creating a mono buffer, and another UDO for creating a stereo buffer:
opcode BufCrt1, i, io ilen, inum xin ift ftgen inum, 0, -(ilen*sr), 2, 0 xout ift endop opcode BufCrt2, ii, io ilen, inum xin iftL ftgen inum, 0, -(ilen*sr), 2, 0 iftR ftgen inum, 0, -(ilen*sr), 2, 0 xout iftL, iftR endop
This simplifies the procedure of creating a record/play buffer, because the user is just asked for the length of the buffer. A table number can also be given, but by default Csound will assign it. This statement will create an empty stereo table for 5 seconds of recording:
iBufL,iBufR BufCrt2 5
A first, simple version of a UDO for recording will just write the incoming audio to sequential locations of the table. This can be done by setting the ksmps value to 1 inside this UDO (setksmps 1), so that each audio sample has its own discrete k-value. Then we can directly assign the write index for the table via the statement andx = kndx, and increase the index by one for the next k-cycle. An additional k-input turns recording on and off:
opcode BufRec1, 0, aik ain, ift, krec xin setksmps 1 if krec == 1 then ;record as long as krec=1 kndx init 0 andx = kndx tabw ain, andx, ift kndx = kndx+1 endif endop
The reading procedure is simple, too. Actually we can use the same code and just replace the opcode for writing (tabw) with the opcode for reading (tab):
opcode BufPlay1, a, ik ift, kplay xin setksmps 1 if kplay == 1 then ;play as long as kplay=1 kndx init 0 andx = kndx aout tab andx, ift kndx = kndx+1 endif endop
So - let's use these first simple UDOs in a Csound instrument. Press the "r" key as long as you want to record, and the "p" key for playing back. Note that you must disable the key repeats on your computer keyboard for this example (in QuteCsound, disable "Allow key repeats" in Configuration -> General). EXAMPLE 06B04.csd
<CsoundSynthesizer> <CsOptions> -i adc -o dac -d -m0 </CsOptions> <CsInstruments> ;example written by Joachim Heintz sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 1 opcode BufCrt1, i, io ilen, inum xin ift ftgen inum, 0, -(ilen*sr), 2, 0 xout ift endop opcode BufRec1, 0, aik
ain, ift, krec xin
setksmps 1
imaxindx = ftlen(ift)-1 ;max index to write
knew     changed krec
if krec == 1 then ;record as long as krec=1
 if knew == 1 then ;reset index if restarted
  kndx = 0
 endif
 kndx = (kndx > imaxindx ? imaxindx : kndx)
 andx = kndx
 tabw ain, andx, ift
 kndx = kndx+1
endif
endop

opcode BufPlay1, a, ik
ift, kplay xin
setksmps 1
imaxindx = ftlen(ift)-1 ;max index to read
knew     changed kplay
if kplay == 1 then ;play as long as kplay=1
 if knew == 1 then ;reset index if restarted
  kndx = 0
 endif
 kndx = (kndx > imaxindx ? imaxindx : kndx)
 andx = kndx
 aout tab andx, ift
 kndx = kndx+1
endif
xout aout
endop

opcode KeyStay, k, kkk ;returns 1 as long as a certain key is pressed
key, k0, kascii xin ;ascii code of the key (e.g. 32 for space)
kprev init 0 ;previous key value
kout  = (key == kascii || (key == -1 && kprev == kascii) ? 1 : 0)
kprev = (key > 0 ? key : kprev)
kprev = (kprev == key && k0 == 0 ? 0 : kprev)
xout kout
endop

opcode KeyStay2, kk, kk ;combines two KeyStay UDO's (this way is necessary because just one sensekey opcode is possible in an orchestra)
kasci1, kasci2 xin ;two ascii codes as input
key,k0 sensekey
kout1  KeyStay key, k0, kasci1
kout2  KeyStay key, k0, kasci2
xout kout1, kout2
endop

instr 1
ain        inch     1        ;audio input on channel 1
iBuf       BufCrt1  3        ;buffer for 3 seconds of recording
kRec,kPlay KeyStay2 114, 112 ;define keys for record and play
           BufRec1  ain, iBuf, kRec ;record if kRec=1
aout       BufPlay1 iBuf, kPlay     ;play if kPlay=1
           out      aout            ;send out
endin

</CsInstruments>
<CsScore>
i 1 0 1000
</CsScore>
</CsoundSynthesizer>
Let's now realize a more extended and easier-to-operate version of these two UDOs for recording and playing a buffer. The wishes of a user might be the following:
Recording:
allow recording not just from the beginning of the buffer, but also from any arbitrary starting point kstart
allow circular recording (wrap around) if the end of the buffer has been reached: kwrap=1
Playing:
allow certain modes of wraparound kwrap while playing:
kwrap=0 stops at the defined end point of the buffer
kwrap=1 repeats playback between the defined end and start points
kwrap=2 starts at the defined starting point but wraps between the end point and the beginning of the buffer
kwrap=3 wraps between kstart and the end of the table
The following example provides versions of BufRec and BufPlay which do this job. We will use the table3 opcode instead of the simple tab or table opcodes in this case, because we want to translate any number of samples in the table to any number of output samples using different speed values:
For speed values higher or lower than the original recording speed, we must interpolate between sample values if we want to keep the original shape of the wave as truly as possible. table3 does this job well, using cubic interpolation. Recording and playing buffers is by nature an interactive task, so we need controls for the following jobs:
starting and stopping recording
adjusting the start and end points of recording
enabling or disabling wraparound while recording
starting and stopping playback
adjusting the start and end points of playback
selecting one of the specified wraparound modes for playback
applying volume during playback
These interactive controls can be widgets, MIDI, OSC or something else. As we want to provide examples here which can be used with any Csound frontend, we restrict the live input to audio and trigger the record and play events by hitting the space bar of the computer keyboard. See, for instance, the QuteCsound version of this example for a more interactive version. EXAMPLE 06B05.csd
<CsoundSynthesizer> <CsOptions> -i adc -o dac -d </CsOptions> <CsInstruments> ;example written by joachim heintz sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 opcode BufCrt2, ii, io ;creates a stereo buffer ilen, inum xin ;ilen = length of the buffer (table) in seconds iftL ftgen inum, 0, -(ilen*sr), 2, 0 iftR ftgen inum, 0, -(ilen*sr), 2, 0 xout iftL, iftR endop opcode BufRec1, k, aikkkk ;records to a buffer ain, ift, krec, kstart, kend, kwrap xin setksmps 1 kendsmps = kend*sr ;end point in samples kendsmps = (kendsmps == 0 || kendsmps > ftlen(ift) ? ftlen(ift) : kendsmps) kfinished = 0 knew changed krec ;1 if record just started if krec == 1 then if knew == 1 then kndx = kstart * sr - 1 ;first index to write endif if kndx >= kendsmps-1 && kwrap == 1 then kndx = -1 endif if kndx < kendsmps-1 then kndx = kndx + 1 andx = kndx tabw ain, andx, ift else kfinished = 1 endif endif xout kfinished endop opcode BufRec2, k, aaiikkkk ;records to a stereo buffer ainL, ainR, iftL, iftR, krec, kstart, kend, kwrap xin kfin BufRec1 ainL, iftL, krec, kstart, kend, kwrap kfin BufRec1 ainR, iftR, krec, kstart, kend, kwrap xout kfin endop opcode BufPlay1, ak, ikkkkkk ift, kplay, kspeed, kvol, kstart, kend, kwrap xin ;kstart = begin of playing the buffer in seconds ;kend = end of playing in seconds. 0 means the end of the table ;kwrap = 0: no wrapping. stops at kend (positive speed) or kstart (negative speed). this makes just sense if the direction does not change and you just want to play the table once ;kwrap = 1: wraps between kstart and kend ;kwrap = 2: wraps between 0 and kend ;kwrap = 3: wraps between kstart and end of table ;CALCULATE BASIC VALUES kfin init 0 iftlen = ftlen(ift)/sr ;ftlength in seconds kend = (kend == 0 ? iftlen : kend) ;kend=0 means end of table kstart01 = kstart/iftlen ;start in 0-1 range kend01 = kend/iftlen ;end in 0-1 range kfqbas = (1/iftlen) * kspeed ;basic phasor frequency ;DIFFERENT BEHAVIOUR DEPENDING ON WRAP: if kplay == 1 && kfin == 0 then ;1. STOP AT START- OR ENDPOINT IF NO WRAPPING REQUIRED (kwrap=0) if kwrap == 0 then kfqrel = kfqbas / (kend01-kstart01) ;phasor freq so that 0-1 values match distance startend andxrel phasor kfqrel ;index 0-1 for distance start-end andx = andxrel * (kend01-kstart01) + (kstart01) ;final index for reading the table (0-1) kfirst init 1 ;don't check condition below at the first k-cycle (always true) kndx downsamp andx kprevndx init 0
;end of table check:
;for positive speed, check if this index is lower than the previous one
 if kfirst == 0 && kspeed > 0 && kndx < kprevndx then
  kfin = 1
;for negative speed, check if this index is higher than the previous one
 else
  kprevndx = (kprevndx == kstart01 ? kend01 : kprevndx)
  if kfirst == 0 && kspeed < 0 && kndx > kprevndx then
   kfin = 1
  endif
  kfirst = 0 ;end of first cycle in wrap = 0
 endif
;sound out if end of table has not yet been reached
 asig     table3 andx, ift, 1
 kprevndx = kndx ;next previous is this index
;2. WRAP BETWEEN START AND END (kwrap=1)
elseif kwrap == 1 then
 kfqrel   = kfqbas / (kend01-kstart01) ;same as for kwrap=0
 andxrel  phasor kfqrel
 andx     = andxrel * (kend01-kstart01) + (kstart01)
 asig     table3 andx, ift, 1 ;sound out
;3. START AT kstart BUT WRAP BETWEEN 0 AND END (kwrap=2)
elseif kwrap == 2 then
 kw2first init 1
 if kw2first == 1 then ;at first k-cycle:
  reinit wrap3phs ;reinitialize for getting the correct start phase
  kw2first = 0
 endif
 kfqrel   = kfqbas / kend01 ;phasor freq so that 0-1 values match distance start-end
wrap3phs:
 andxrel  phasor kfqrel, i(kstart01) ;index 0-1 for distance start-end
          rireturn ;end of reinitialization
 andx     = andxrel * kend01 ;final index for reading the table
 asig     table3 andx, ift, 1 ;sound out
;4. WRAP BETWEEN kstart AND END OF TABLE (kwrap=3)
elseif kwrap == 3 then
 kfqrel   = kfqbas / (1-kstart01) ;phasor freq so that 0-1 values match distance start-end
 andxrel  phasor kfqrel ;index 0-1 for distance start-end
 andx     = andxrel * (1-kstart01) + kstart01 ;final index for reading the table
 asig     table3 andx, ift, 1
endif
else ;if either not started or finished at wrap=0
 asig = 0 ;don't produce any sound
endif
xout asig*kvol, kfin
endop

opcode BufPlay2, aak, iikkkkkk ;plays a stereo buffer
iftL, iftR, kplay, kspeed, kvol, kstart, kend, kwrap xin
aL,kfin BufPlay1 iftL, kplay, kspeed, kvol, kstart, kend, kwrap
aR,kfin BufPlay1 iftR, kplay, kspeed, kvol, kstart, kend, kwrap
xout aL, aR, kfin
endop

opcode In2, aa, kk ;stereo audio input
kchn1, kchn2 xin
ain1 inch kchn1
ain2 inch kchn2
xout ain1, ain2
endop

opcode Key, kk, k ;returns '1' just in the k-cycle a certain key has been pressed (kdown) or released (kup)
kascii xin ;ascii code of the key (e.g. 32 for space)
key,k0 sensekey
knew   changed key
kdown  = (key == kascii && knew == 1 && k0 == 1 ? 1 : 0)
kup    = (key == kascii && knew == 1 && k0 == 0 ? 1 : 0)
xout kdown, kup
endop

instr 1
giftL,giftR BufCrt2 3   ;creates a stereo buffer for 3 seconds
gainL,gainR In2     1,2 ;read input channels 1 and 2 and write as global audio
            prints  "PLEASE PRESS THE SPACE BAR ONCE AND GIVE AUDIO INPUT ON CHANNELS 1 AND 2.\n"
            prints  "AUDIO WILL BE RECORDED AND THEN AUTOMATICALLY PLAYED BACK IN SEVERAL MANNERS.\n"
krec,k0     Key     32
if krec == 1 then
            event   "i", 2, 0, 10
endif
endin
instr 2
kfin     BufRec2 gainL, gainR, giftL, giftR, 1, 0, 0, 0 ;records the whole buffer and returns 1 at the end
if kfin == 0 then
         printks "Recording!\n", 1
endif
if kfin == 1 then
ispeed   random  -2, 2
istart   random  0, 1
iend     random  2, 3
iwrap    random  0, 1.999
iwrap    =       int(iwrap)
         printks "Playing back with speed = %.3f, start = %.3f, end = %.3f, wrap = %d\n", p3, ispeed, istart, iend, iwrap
aL,aR,kf BufPlay2 giftL, giftR, 1, ispeed, 1, istart, iend, iwrap
 if kf == 0 then
         printks "Playing!\n", 1
 endif
endif
krel     release
if kfin == 1 && kf == 1 && krel == 1 then
         printks "Playback finished.\n", 0
         turnoff
endif
         outs    aL, aR
endin
</CsInstruments>
<CsScore>
i 1 0 1000
e
</CsScore>
</CsoundSynthesizer>
MIDI
37. RECEIVING EVENTS BY MIDIIN
38. TRIGGERING INSTRUMENT INSTANCES
39. WORKING WITH CONTROLLERS
40. MIDI OUTPUT
41. READING MIDI FILES
just after the header statement. For this example to work you will need to ensure that you have activated live midi input within Csound (either by using the -M flag or from within the QuteCsound configuration menu) and that you have a midi keyboard or controller connected. (You may also want to include the -m0 flag which will disable some of Csound's additional messaging output and therefore allow our midi printout to be presented more clearly.) The status byte tells us what sort of midi information has been received. For example, a value of 144 tells us that a midi note event has been received, a value of 176 tells us that a midi controller event has been received, a value of 224 tells us that pitch bend has been received and so on. The meaning of the two data bytes depends on what sort of status byte has been received. For example, if a midi note event has been received then data byte 1 gives us the note number and data byte 2 gives us the note velocity; if a midi controller event has been received then data byte 1 gives us the controller number and data byte 2 gives us the controller value. EXAMPLE 07A01.csd
<CsoundSynthesizer> <CsOptions> -Ma ;activates all midi devices </CsOptions> <CsInstruments> ;Example by Iain McCurdy ;no audio so no 'sr' or 'nchnls' ksmps = 32 ;using massign with these arguments disables Csound's default instrument triggering massign 0,0 instr 1 kstatus, kchan, kdata1, kdata2 midiin; read in midi ktrigger changed kstatus, kchan, kdata1, kdata2; trigger if midi data changes if ktrigger=1&&kstatus!=0 then; conditionally branch when trigger is received and when status byte is something other than zero printks "status:%d%tchannel:%d%tdata1:%d%tdata2:%d%n", 0, kstatus, kchan, kdata1, kdata2; print midi data to the terminal with formatting endif endin </CsInstruments> <CsScore> i 1 0 3600; run midi scanning for 1 hour </CsScore> </CsoundSynthesizer>
The principal advantage of the midiin opcode is that, unlike opcodes such as cpsmidi, ampmidi and ctrl7 which only receive specific midi data types on a specific channel, midiin listens to all incoming data, including system exclusive messages. In situations where elaborate instrument-triggering mappings beyond the capabilities of Csound's default triggering mechanism are required, midiin can prove useful.
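As a small sketch of such a mapping - purely illustrative, with a hypothetical receiving instrument 10 that is not defined here - midiin's output could be filtered so that only note-ons arriving on channel 2 trigger it:

instr 1
kstatus, kchan, kdata1, kdata2 midiin ;read all incoming midi
if kstatus == 144 && kchan == 2 && kdata2 > 0 then ;note-on on channel 2 (velocity > 0)
 event "i", 10, 0, 1, kdata1, kdata2 ;start hypothetical instr 10, passing note number and velocity as p4/p5
endif
endin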
prints "instrument/midi channel: %d%n",p1; print instrument number to terminal reset: timout 0, 1, impulse; jump to pulse generation section for 1 second reinit reset; reninitialize pass from label 'reset' impulse: aenv expon 1, 0.3, 0.0001; a short percussive amplitude envelope aSig poscil aenv, 500, gisine a2 delay aSig, 0.15 a3 delay a2, 0.15 out aSig+a2+a3 rireturn endin </CsInstruments> <CsScore> f 0 300 e </CsScore> <CsoundSynthesizer>
rireturn endin instr 3; 3 impulses (midi channel 3) iChn midichn; discern what midi channel this instrument was activated on prints "channel:%d%tinstrument: %d%n",iChn,p1; print instrument number and midi channel to terminal reset: timout 0, 1, impulse; jump to pulse generation section for 1 second reinit reset; reninitialize pass from label 'reset' impulse: aenv expon 1, 0.3, 0.0001; a short percussive amplitude envelope aSig poscil aenv, 500, gisine a2 delay aSig, 0.15 a3 delay a2, 0.15 out aSig+a2+a3 rireturn endin </CsInstruments> <CsScore> f 0 300 e </CsScore> <CsoundSynthesizer>
massign also has a couple of additional functions that may come in useful. A channel number of zero is interpreted as meaning 'any'. The following instruction will map notes on any channel to instrument 1.
massign 0,1
An instrument number of zero is interpreted as meaning 'none', so the following instruction will tell Csound to ignore note triggering on any channel.
massign 0,0
The above feature is useful when we want to scan midi data from an already active instrument using the midiin opcode, as we did in EXAMPLE 07A01.csd.
instr 1 ;global midi instrument, calling instr 2.cc.nnn (c=channel, n=note number)
inote    notnum  ;get midi note number
ichn     midichn ;get midi channel
instrnum =       2 + ichn/100 + inote/100000 ;make fractional instr number
         event_i "i", instrnum, 0, -1, ichn, inote ;call with indefinite duration
kend     release ;get a "1" if instrument is turned off
if kend == 1 then
         event   "i", -instrnum, 0, 1 ;then turn this instance off
endif
endin

instr 2
ichn     =       int(frac(p1)*100)
inote    =       round(frac(frac(p1)*100)*1000)
         prints  "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
         printks "instr %f playing!%n", 1, p1
endin
In this case it is more of a toy example, because the fractional instrument number is only used to decode, in instrument 2, information you already have in instrument 1. But imagine you want to call several instruments depending on certain regions of your keyboard. Then you just need to change the line
instrnum = 2 + ichn/100 + inote/100000
to this:
if inote < 48 then instrnum = 2 elseif inote < 72 then instrnum = 3 else instrnum = 4 endif instrnum = instrnum + ichn/100 + inote/100000
In this case any key below C3 will call instrument 2, any key between C3 and B4 instrument 3, and any higher key instrument 4. With this kind of multiple triggering you are also able to trigger more than one instrument at the same time (which is not possible with the massign opcode). This is an example using a User Defined Opcode (see the UDO chapter of this manual): EXAMPLE 07B04.csd
<CsoundSynthesizer>
<CsOptions>
-Ma
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz, using code of Victor Lazzarini
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

         massign 0, 1 ;assign all incoming midi to instr 1
giInstrs ftgen   0, 0, -5, -2, 2, 3, 4, 10, 100 ;instruments to be triggered
opcode MidiTrig, 0, io ;triggers the first inum instruments in the function table ifn by a midi event, with fractional numbers containing channel and note number information ifn, inum xin ;if inum=0 or not given, all instrument numbers in ifn are triggered inum = (inum == 0 ? ftlen(ifn) : inum) inote notnum ichn midichn iturnon = 0 turnon: iinstrnum tab_i iturnon, ifn if iinstrnum > 0 then ifracnum = iinstrnum + ichn/100 + inote/100000 event_i "i", ifracnum, 0, -1 endif loop_lt iturnon, 1, inum, turnon kend release if kend == 1 then kturnoff = 0 turnoff: kinstrnum tab kturnoff, ifn if kinstrnum > 0 then kfracnum = kinstrnum + ichn/100 + inote/100000 event "i", -kfracnum, 0, 1 loop_lt kturnoff, 1, inum, turnoff endif endif endop instr 1 ;global midi instrument MidiTrig giInstrs, 2; triggers the first two instruments in the giInstrs table endin instr 2 ichn = int(frac(p1)*100)
inote =       round(frac(frac(p1)*100)*1000)
      prints  "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
      printks "instr %f playing!%n", 1, p1
endin

instr 3
ichn  =       int(frac(p1)*100)
inote =       round(frac(frac(p1)*100)*1000)
      prints  "instr %f: ichn = %f, inote = %f%n", p1, ichn, inote
      printks "instr %f playing!%n", 1, p1
endin
There are also 14 bit and 21 bit versions of ctrl7 (ctrl14 and ctrl21) which improve upon the 7 bit resolution of ctrl7, but hardware that outputs 14 or 21 bit controller information is rare, so these opcodes are seldom used.
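If you do have such hardware, ctrl14 reads the two 7 bit controllers (MSB and LSB) that the device pairs together. A minimal sketch, assuming the pairing of controllers 1 and 33 on channel 1 (a common convention, but check your own hardware):

instr 1
kVal ctrl14 1, 1, 33, 0, 1 ;14 bit value from cc1 (MSB) and cc33 (LSB), rescaled to between 0 and 1
     printk2 kVal          ;print the value whenever it changes
endin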
kTrig1 changed kPchBnd; if 'kPchBnd' changes generate a trigger ('bang') if kTrig1=1 then printks "Pitch Bend Value: %f%n", 0, kPchBnd; print kPchBnd to console only when its value changes endif kAfttch aftouch; read in aftertouch information kTrig2 changed kAfttch; if 'kAfttch' changes generate a trigger ('bang') if kTrig2=1 then printks "Aftertouch Value: %d%n", 0, kAfttch; print kAfttch to console only when its value changes endif endin </CsInstruments> <CsScore> f 0 300 e </CsScore> <CsoundSynthesizer>
instr 1 iCps cpsmidi ;read in midi pitch in cycles-per-second iAmp ampmidi 1; read in note velocity - re-range to be from 0 to 1 kVol ctrl7 1,1,0,1; read in controller 1, channel 1. Re-range to be from 0 to 1 kPortTime linseg 0,0.001,0.01; create a value that quickly ramps up to 0.01 kVol portk kVol, kPortTime; create a new version of kVol that has been filtered (smoothed) using portk aVol interp kVol; create an a-rate version of kVol. Use intepolation to smooth this signal even further aSig poscil iAmp*aVol, iCps, giSine out aSig endin </CsInstruments> <CsScore> f 0 300 e </CsScore> <CsoundSynthesizer>
All of the techniques introduced in this section are combined in the final example, which includes a 2-semitone pitch bend and a tone control governed by aftertouch. For tone generation this example uses the gbuzz opcode.
EXAMPLE 07C05.csd
<CsoundSynthesizer>
<CsOptions>
-Ma -odac
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

giSine ftgen  0,0,2^12,10,1
       initc7 1,1,1 ;initialize controller 1 on midi channel 1 to its maximum level
instr 1 iOct octmidi; read in midi pitch in Csound's 'oct' format iAmp ampmidi 0.1; read in note velocity - re-range to be from 0 to 0.2 kVol ctrl7 1,1,0,1; read in controller 1, channel 1. Re-range to be from 0 to 1 kPortTime linseg 0,0.001,0.01; create a value that quickly ramps up to 0.01 kVol portk kVol, kPortTime; create a new version of kVol that has been filtered (smoothed) using portk aVol interp kVol; create an a-rate version of kVol. Use intepolation to smooth this signal even further iBndRange = 2; pitch bend range in semitones imin = 0; equilibrium position imax = iBndRange * 1/12; max pitch displacement (in oct format) kPchBnd pchbend imin, imax; pitch bend variable (in oct format) kPchBnd portk kPchBnd, kPortTime; create a new version of kPchBnd that has been filtered (smoothed) using portk aEnv linsegr 0,0.005,1,0.1,0; amplitude envelope with release stage kMul aftouch 0.4,0.85; read in a value that will be used with gbuzz as a kind of tone control kMul portk kMul,kPortTime; create a new version of kPchBnd that has been filtered (smoothed) using portk aSig gbuzz iAmp*aVol*aEnv, cpsoct(iOct+kPchBnd), 70,0,kMul,giSine out aSig endin </CsInstruments> <CsScore> f 0 300 e </CsScore> <CsoundSynthesizer>
instr 1
;arguments for midiout are read from p-fields
istatus init p4
ichan   init p5
idata1  init p6
idata2  init p7
        midiout istatus, ichan, idata1, idata2; send raw midi data
        turnoff; turn this instrument off to prevent repeated iterations of midiout
endin
</CsInstruments>
<CsScore>
;p1 p2 p3   p4  p5 p6 p7
i 1  0  0.01 144 1  60 100; note on
i 1  2  0.01 144 1  60 0  ; note off (using velocity zero)
i 1  3  0.01 144 1  60 100; note on
i 1  5  0.01 128 1  60 100; note off (using 'note off' status byte)
</CsScore>
</CsoundSynthesizer>
The use of separate score events for note-ons and note-offs is rather a hassle. It would be more sensible to use the Csound note duration (p3) to define when the midi note-off is sent. The next example does this by utilizing a release flag generated by the release opcode whenever a note ends and sending the note-off then. EXAMPLE 07E02.csd
<CsoundSynthesizer> <CsOptions> ; amend device number accordingly -Q999 </CsOptions> <CsInstruments> ;Example by Iain McCurdy ksmps = 32 ;no audio so sr and nchnls omitted instr 1 ;arguments for midiout are read from p-fields istatus init p4 ichan init p5 idata1 init p6 idata2 init p7 kskip init 0 if kskip=0 then midiout istatus, ichan, idata1, idata2; send raw midi data (note on) kskip = 1; ensure that the note on will only be executed once endif krelease release; normally output is zero, on final k pass output is 1 if krelease=1 then; i.e. if we are on the final k pass... midiout istatus, ichan, idata1, 0; send raw midi data (note off) endif endin </CsInstruments> <CsScore> ;p1 p2 p3 p4 p5 p6 p7 i 1 0 4 144 1 60 100 i 1 1 3 144 1 64 100 i 1 2 2 144 1 67 100 f 0 5; extending performance time prevents note-offs from being lost </CsScore> </CsoundSynthesizer>
Obviously midiout is not limited to sending only midi note information; this information could also include continuous controller data, pitch bend, system exclusive data and so on. The next example, as well as playing a note, sends controller 1 (modulation) data which rises from zero to maximum (127) across the duration of the note. To ensure that unnecessary midi data is not sent out, the output of the line function is first converted into integers, and midiout for the continuous controller data is only executed whenever this integer value changes. The function that creates this stream of data goes slightly above this maximum value (it finishes at a value of 127.1) to ensure that a rounded value of 127 is actually achieved.
In practice it may be necessary to start sending the continuous controller data slightly before the note-on to allow the hardware time to respond. EXAMPLE 07E03.csd
<CsoundSynthesizer> <CsOptions> ; amend device number accordingly -Q999 </CsOptions> <CsInstruments> ;Example by Iain McCurdy ksmps = 32 ;no audio so sr and nchnls omitted instr 1 ; play a midi note ; read in values from p-fields ichan init p4 inote init p5 iveloc init p6 kskip init 0; 'skip' flag will ensure that note-on is executed just once if kskip=0 then midiout 144, ichan, inote, iveloc; send raw midi data (note on) kskip = 1; ensure that the note on will only be executed once by flipping flag endif krelease release; normally output is zero, on final k pass output is 1 if krelease=1 then; i.e. if we are on the final k pass... midiout 144, ichan, inote, 0; send raw midi data (note off) endif ; send continuous controller data iCCnum = p7 kCCval line 0, p3, 127.1; continuous controller data function kCCval = int(kCCval); convert data function to integers ktrig changed kCCval; generate a trigger each time kCCval (integers) changes if ktrig=1 then; if kCCval has changed midiout 176, ichan, iCCnum, kCCval; send a continuous controller message endif endin </CsInstruments> <CsScore> ;p1 p2 p3 p4 p5 p6 p7 i 1 0 5 1 60 100 1 f 0 7; extending performance time prevents note-offs from being lost </CsScore> </CsoundSynthesizer>
p6 100 100 100 100 performance time prevents note-offs from being lost
</CsoundSynthesizer>
Changing any of midion's k-rate input arguments in realtime will force it to stop the current midi note and send out a new one with the new parameters. midion2 allows us to control when new notes are sent (and the current note is stopped) through the use of a trigger input. The next example uses midion2 to algorithmically generate a melodic line. New note generation is controlled by a metro, the rate of which undulates slowly through the use of a randomi function. EXAMPLE 07E05.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
ksmps = 32 ;no audio so sr and nchnls omitted

instr 1
; read values in from p-fields
kchn  =       p4
knum  random  48, 72.99 ;note numbers will be chosen randomly across a 2 octave range
kvel  random  40, 115   ;velocities are chosen randomly
krate randomi 1, 2, 1   ;rate at which new notes will be output
ktrig metro   krate^2   ;'new note' trigger
      midion2 kchn, int(knum), int(kvel), ktrig ;send a midi note whenever ktrig=1
endin

</CsInstruments>
<CsScore>
i 1 0 20 1
f 0 21; extending performance time prevents the final note-off from being lost
</CsScore>
</CsoundSynthesizer>
midion and midion2 generate monophonic melody lines with no gaps between notes. moscil works in a slightly different way and allows us to explicitly define note durations as well as the pauses between notes thereby permitting the generation of more staccato melodic lines. Like midion and midion2, moscil will not generate overlapping notes (unless two or more instances of it are concurrent). The next example algorithmically generates a melodic line using moscil. EXAMPLE 07E06.csd
<CsoundSynthesizer>
<CsOptions>
; amend device number accordingly
-Q999
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
ksmps = 32 ;no audio so sr and nchnls omitted
       seed 0 ; random number generators seeded by system clock

instr 1
; read value in from p-field
kchn   =      p4
knum   random 48, 72.99 ; note numbers will be chosen randomly across a 2 octave range
kvel   random 40, 115   ; velocities are chosen randomly
kdur   random 0.2, 1    ; note durations will be chosen randomly from between the given limits
kpause random 0, 0.4    ; pauses between notes will be chosen randomly from between the given limits
       moscil kchn, knum, kvel, kdur, kpause ; send a stream of midi notes
endin
</CsInstruments>
<CsScore>
;p1 p2 p3 p4
i 1  0  20 1
f 0 21 ; extending performance time prevents the final note-off from being lost
</CsScore>
</CsoundSynthesizer>
will simultaneously stream realtime midi to midi output device number 2 and render to a file named 'midiout.mid' which will be saved in our home directory.
This dummy 'f' event will force Csound to wait for 3600 seconds (1 hour) before terminating the performance. It doesn't really matter what number of seconds we put in here, as long as it is longer than the duration of the midi file. Alternatively a conventional 'i' score event can also keep the performance going; sometimes we will have, for example, a reverb effect running throughout the performance which will also prevent Csound from terminating. The following example plays back a midi file using Csound's 'fluidsynth' family of opcodes to facilitate playing soundfonts (sample libraries). For more information on these opcodes please consult the Csound Reference Manual. In order to run the example you will need to download a midi file and two (ideally contrasting) soundfonts. Adjust the references to these files in the example accordingly. Free midi files and soundfonts are readily available on the internet. Contrasting soundfonts, such as a marimba and a trumpet, are suggested so that you can easily hear how the midi channels in the midi file are parsed to different Csound instruments. In the example channels 1,3,5,7,9,11,13 and 15 play back using soundfont 1 and channels 2,4,6,8,10,12,14 and 16 play back using soundfont 2. When using fluidsynth in Csound we normally use an 'always on' instrument to gather all the audio from the various soundfonts (in this example instrument 99), which also conveniently keeps the performance going while our midi file plays back. EXAMPLE 07D01.csd
<CsoundSynthesizer>
<CsOptions>
;'-F' flag reads in a midi file
-F AnyMIDIfile.mid
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giEngine fluidEngine                                      ; start fluidsynth engine
iSfNum1  fluidLoad "ASoundfont.sf2", giEngine, 1          ; load a soundfont
iSfNum2  fluidLoad "ADifferentSoundfont.sf2", giEngine, 1 ; load a different soundfont
         fluidProgramSelect giEngine, 1, iSfNum1, 0, 0    ; direct each midi channel to a particular soundfont
         fluidProgramSelect giEngine, 3, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 5, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 7, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 9, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 11, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 13, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 15, iSfNum1, 0, 0
         fluidProgramSelect giEngine, 2, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 4, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 6, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 8, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 10, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 12, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 14, iSfNum2, 0, 0
         fluidProgramSelect giEngine, 16, iSfNum2, 0, 0

instr 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 ; fluid synths for midi channels 1-16
iKey notnum                             ; read in midi note number
iVel ampmidi 127                        ; read in key velocity
     fluidNote giEngine, p1, iKey, iVel ; apply note to relevant soundfont
endin

instr 99 ; gathering of fluidsynth audio and audio output
aSigL, aSigR fluidOut giEngine ; read all audio from the given soundfont
             outs     aSigL, aSigR ; send audio to outputs
endin
</CsInstruments>
<CsScore>
i 99 0 3600 ; audio output instrument also keeps performance going
e
</CsScore>
</CsoundSynthesizer>
Midi file input can be combined with other Csound inputs from the score or from live midi. Also bear in mind that a midi file doesn't need to contain midi note events; it could instead contain, for example, a sequence of controller data used to automate parameters of effects during a live performance. Rather than directly playing back a midi file using Csound instruments, it might be useful to import midi note events as a standard Csound score. This way events can be edited within the Csound editor, or several scores can be combined. The following example takes a midi file as input and outputs standard Csound .sco files of the events contained therein. For convenience each midi channel is output to a separate .sco file, therefore up to 16 .sco files will be created. Multiple .sco files can later be recombined by using #include... statements or simply by copy and paste. The only tricky aspect of this example is that note-ons followed by note-offs need to be sensed and calculated as p3 duration values. This is implemented by sensing the note-off using the release opcode and at that moment triggering a note in another instrument with the required score data. It is this second instrument that is responsible for writing this data to a score file. Midi channels are rendered as p1 values, midi note numbers as p4 and velocity values as p5. EXAMPLE 07D02.csd
<CsoundSynthesizer>
<CsOptions>
-F InputMidiFile.mid
</CsOptions>
<CsInstruments>
;Example by Iain McCurdy
;ksmps needs to be 10 to ensure accurate rendering of timings
ksmps = 10

massign 0,1

instr 1
iChan    midichn
iCps     cpsmidi          ; read pitch in frequency from midi notes
iVel     veloc   0, 127   ; read in velocity from midi notes
kDur     timeinsts        ; running total of duration of this note
kRelease release          ; sense when note is ending
if kRelease=1 then        ; if note is about to end
  ;     p1   p2 p3    p4     p5    p6
  event "i", 2, 0, kDur, iChan, iCps, iVel ; send full note data to instr 2
endif
endin

instr 2
iDur       =       p3
iChan      =       p4
iCps       =       p5
iVel       =       p6
iStartTime times            ; read current time since the start of performance
SFileName  sprintf "Channel%d.sco", iChan ; form file name for this channel (1-16) as a string variable
           fprints SFileName, "i%d\\t%f\\t%f\\t%f\\t%d\\n", iChan, iStartTime-iDur, iDur, iCps, iVel ; write a line to the score for this channel's .sco file
endin
</CsInstruments>
<CsScore>
f 0 480 ; ensure that this duration is as long or longer than the duration of the input midi file
e
</CsScore>
</CsoundSynthesizer>
The example above ignores continuous controller data, pitch bend and aftertouch. The second example on the page in the Csound Manual for the opcode fprintks renders all midi data to a score file.

OPEN SOUND CONTROL

42. OPEN SOUND CONTROL - NETWORK COMMUNICATION
instr 1000 ; this instrument sends OSC-values
kValue1  randomh 0, 0.8, 4
kNum     randomh 0, 8, 8
kMidiKey tab     (int(kNum)), 2
kOctave  randomh 0, 7, 4
kValue2  =       cpsmidinn (kMidiKey*kOctave+33)
kValue3  randomh 0.4, 1, 4
Stext    sprintf "%i", $S_PORT
         OSCsend kValue1+kValue2, $IPADDRESS, $S_PORT, "/QuteCsound", "fff", kValue1, kValue2, kValue3
endin

instr 1001 ; this instrument receives OSC-values
kValue1Received init 0.0
kValue2Received init 0.0
kValue3Received init 0.0
Stext    sprintf   "%i", $R_PORT
ihandle  OSCinit   $R_PORT
kAction  OSClisten ihandle, "/QuteCsound", "fff", kValue1Received, kValue2Received, kValue3Received
if (kAction == 1) then
         printk2   kValue2Received
         printk2   kValue1Received
endif
aSine    poscil3   kValue1Received, kValue2Received, 1
; a bit of reverberation
aInVerb  =         aSine*kValue3Received
aWetL, aWetR freeverb aInVerb, aInVerb, 0.4, 0.8
         outs      aWetL+aSine, aWetR+aSine
endin
</CsInstruments>
<CsScore>
f 1 0 1024 10 1
f 2 0 8 -2 0 2 4 7 9 11 0 2
e 3600
</CsScore>
</CsoundSynthesizer>
; example by Alex Hofmann (Mar. 2011)
43. CSOUND IN PD
INSTALLING
You can embed Csound in PD via the external csoundapi~, which has been written by Victor Lazzarini. This external is part of the Csound distribution. On OSX, for instance, you will find it in the following path: /Library/Frameworks/CsoundLib.framework/Versions/5.2/Resources/PD/csoundapi~.pd_darwin Put this file in a folder which is in PD's search path. For PD-extended, this is by default ~/Library/Pd, but you can put it anywhere; just make sure that the location is specified in PD's Preferences > Path... menu. Once this is done, you should be able to create the csoundapi~ object in PD: open a PD window, create a new object, and type "csoundapi~":
CONTROL DATA
You can send control data from PD to your Csound instrument via the keyword "control" in a message box. In your Csound code, you must receive the data via invalue or chnget. This is a simple example: EXAMPLE 09A01.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 8

giSine ftgen   0, 0, 2^10, 10, 1

instr 1
kFreq  invalue "freq"
kAmp   invalue "amp"
aSin   oscili  kAmp, kFreq, giSine
       outs    aSin, aSin
endin
Save this file under the name "control.csd". Save a PD window in the same folder and create the following patch:
Note that for invalue channels, you must first register these channels by a "set" message. As you can see, the first two outlets of the csoundapi~ object are the signal outlets for audio channels 1 and 2. The third outlet is an outlet for control data (not used here, see below). The rightmost outlet sends a bang when the score has finished.
LIVE INPUT
Audio streams from PD can be received in Csound via the inch opcode. The csoundapi~ object creates as many audio inlets as there are input channels in the Csound orchestra. The following CSD uses two audio inputs: EXAMPLE 09A02.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
0dbfs = 1
ksmps = 8
nchnls = 2

instr 1
aL     inch     1
aR     inch     2
kcfL   randomi  100, 1000, 1 ; center frequency (the third, rate argument was lost in the source and is assumed here)
kcfR   randomi  100, 1000, 1 ; for band pass filter
aFiltL butterbp aL, kcfL, kcfL/10
aoutL  balance  aFiltL, aL
aFiltR butterbp aR, kcfR, kcfR/10
aoutR  balance  aFiltR, aR
       outch    1, aoutL
       outch    2, aoutR
endin
</CsInstruments>
<CsScore>
i 1 0 10000
</CsScore>
</CsoundSynthesizer>
MIDI
The csoundapi~ object receives MIDI data via the keyword "midi". Csound triggers instrument instances when it receives a "note on" message, and turns them off when it receives a "note off" message (or a note-on message with velocity=0). So this is a very simple way to build a synthesizer with arbitrary polyphony:
This is the corresponding midi.csd. It must contain the options -+rtmidi=null -M0 in the <CsOptions> tag. It's an FM synth which changes the modulation index according to the velocity: the harder you press a key, the higher the index, and the more partials you get. The ratio is calculated randomly between two limits which can be adjusted. EXAMPLE 09A03.csd
<CsoundSynthesizer>
<CsOptions>
-+rtmidi=null -M0
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 8
nchnls = 2
0dbfs = 1

giSine ftgen   0, 0, 2^10, 10, 1

instr 1
iFreq  cpsmidi                    ;gets frequency of a pressed key
iAmp   ampmidi 8                  ;gets amplitude and scales 0-8
iRatio random  .9, 1.1            ;ratio randomly between 0.9 and 1.1
aTone  foscili .1, iFreq, 1, iRatio/5, iAmp+1, giSine ;fm
aEnv   linenr  aTone, 0, .01, .01 ;for avoiding clicks at the end of a note
       outs    aEnv, aEnv
endin
SCORE EVENTS
Score events can be sent from PD to Csound by a message with the keyword event. You can send any kind of score events, like instrument calls or function table statements. The following example triggers Csound's instrument 1 whenever you press the message box on the top. Different sounds can be selected by sending f events (building/replacing a function table) to Csound.
EXAMPLE 09A04.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
ksmps = 8
nchnls = 2
0dbfs = 1

       seed  0                 ; each time different seed
giSine ftgen 1, 0, 2^10, 10, 1 ; function table 1

instr 1
iDur   random  0.5, 3
p3     =       iDur
iFreq1 random  400, 1200
iFreq2 random  400, 1200
idB    random  -18, -6
kFreq  linseg  iFreq1, iDur, iFreq2
kEnv   transeg ampdb(idB), p3, -10, 0
aTone  oscili  kEnv, kFreq, 1
       outs    aTone, aTone
endin
</CsInstruments>
<CsScore>
f 0 36000 ; play for 10 hours
e
</CsScore>
</CsoundSynthesizer>
CONTROL OUTPUT
If you want Csound to give any sort of control data to PD, you can use the opcodes outvalue or chnset. You will receive this data at the second outlet from the right of the csoundapi~ object. This is a simple example:
EXAMPLE 09A05.csd
<CsoundSynthesizer>
<CsOptions>
</CsOptions>
<CsInstruments>
;Example by Joachim Heintz
sr = 44100
nchnls = 2
0dbfs = 1
ksmps = 8

instr 1
ktim  times
kphas phasor   1              ; (the arguments of this example were lost in the source; the values and channel names below are assumed)
      outvalue "time", ktim
      outvalue "phas", kphas
endin
</CsInstruments>
<CsScore>
i 1 0 30
</CsScore>
</CsoundSynthesizer>
SETTINGS
Make sure that the Csound vector size, given by the ksmps value, is not larger than the internal PD vector size. It should be a power of 2. I'd recommend starting with ksmps=8. If there are performance problems, try increasing this value to 16, 32, or 64. The csoundapi~ object runs by default when you turn on audio in PD. You can stop it by sending a "run 0" message, and start it again with a "run 1" message. You can recompile the .csd file of a csoundapi~ object by sending a "reset" message. By default, you see all the messages of Csound in the PD window. If you don't want to see them, send a "message 0" message; "message 1" turns printing on again. If you want to open a new .csd file in the csoundapi~ object, send the message "open", followed by the path of the .csd file you want to load.
A "rewind" message rewinds the score without recompilation. The message "offset", followed by a number, offsets the score playback by an amount of seconds.
INTRODUCTION
Csound can be embedded in a Max patch using the csound~ object. This allows you to synthesize and process audio, MIDI, or control data with Csound.
INSTALLING
Before installing csound~, install Csound5. csound~ needs a normal Csound5 installation in order to work. You can download Csound5 from here. Once Csound5 is installed, download the csound~ zip file from here.
INSTALLING ON MAC OS X
1. Expand the zip file and navigate to binaries/MacOSX/.
2. Choose an mxo file based on what kind of CPU you have (intel or ppc) and which type of floating point numbers are used in your Csound5 version (double or float). The name of the Csound5 installer may give a hint with the letters "f" or "d" or explicitly with the words "double" or "float". However, if you do not see a hint, then that means the installer contains both, in which case you only have to match your CPU type.
3. Copy the mxo file to:
   Max 4.5: /Library/Application Support/Cycling '74/externals/
   Max 4.6: /Applications/MaxMSP 4.6/Cycling'74/externals/
   Max 5: /Applications/Max5/Cycling '74/msp-externals/
4. Rename the mxo file to "csound~.mxo".
5. If you would like to install the help patches, navigate to the help_files folder and copy all files to:
   Max 4.5: /Applications/MaxMSP 4.5/max-help/
   Max 4.6: /Applications/MaxMSP 4.6/max-help/
   Max 5: /Applications/Max5/Cycling '74/msp-help/
INSTALLING ON WINDOWS
1. Expand the zip file and navigate to binaries\Windows\.
2. Choose an mxe file based on the type of floating point numbers used in your Csound5 version (double or float). The name of the Csound5 installer may give a hint with the letters "f" or "d" or explicitly with the words "double" or "float".
3. Copy the mxe file to:
   Max 4.5: C:\Program Files\Common Files\Cycling '74\externals\
   Max 4.6: C:\Program Files\Cycling '74\MaxMSP 4.6\Cycling '74\externals\
   Max 5: C:\Program Files\Cycling '74\Max 5.0\Cycling '74\msp-externals\
4. Rename the mxe file to "csound~.mxe".
5. If you would like to install the help patches, navigate to the help_files folder and copy all files to:
   Max 4.5: C:\Program Files\Cycling '74\MaxMSP 4.5\max-help\
   Max 4.6: C:\Program Files\Cycling '74\MaxMSP 4.6\max-help\
   Max 5: C:\Program Files\Cycling '74\Max 5.0\Cycling '74\msp-help\
KNOWN ISSUES
On Windows (only), various versions of Csound5 have a known incompatibility with csound~ that has to do with the fluid opcodes. How can you tell if you're affected? Here's how: if you stop a Csound performance (or it stops by itself) and you click on a non-MaxMSP or non-Live window and it crashes, then you are affected. Until this is fixed, an easy solution is to remove/delete fluidOpcodes.dll from your plugins or plugins64 folder. Here are some common locations for that folder: C:\Program Files\Csound\plugins C:\Program Files\Csound\plugins64
2. Save as "helloworld.maxpat" and close it. 3. Create a text file called "helloworld.csd" within the same folder as your patch. 4. Add the following to the text file: EXAMPLE 09B01.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
aNoise noise .1, 0
       outch 1, aNoise, 2, aNoise
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
5. Open the patch, press the bang button, then press the speaker icon. At this point, you should hear some noise. Congratulations! You created your first csound~ patch. You may be wondering why we had to save, close, and reopen the patch. This is needed in order for csound~ to find the csd file. In effect, saving and opening the patch allows csound~ to "know" where the patch is. Using this information, csound~ can then find csd files specified using a relative pathname (e.g. "helloworld.csd"). Keep in mind that this is only necessary for newly created patches that have not been saved yet. By the way, had we specified an absolute pathname (e.g. "C:/Mystuff/helloworld.csd"), the process of saving and reopening would have been unnecessary.
The "@scale 0" argument tells csound~ not to scale audio data between Max and Csound. By default, csound~ will scale audio to match 0dB levels. Max uses a 0dB level equal to one, while Csound uses a 0dB level equal to 32768. Using "@scale 0" and adding the statement "0dbfs = 1" within the csd file allows you to work with a 0dB level equal to one everywhere. This is highly recommended.
AUDIO I/O
All csound~ inlets accept an audio signal and some outlets send an audio signal. The number of audio outlets is determined by the arguments to the csound~ object. Here are four ways to specify the number of inlets and outlets:

[csound~ @io 3]
[csound~ @i 4 @o 7]
[csound~ 3]
[csound~ 4 7]
"@io 3" creates 3 audio inlets and 3 audio outlets. "@i 4 @o 7" creates 4 audio inlets and 7 audio outlets. The third and fourth lines accomplish the same thing as the first two. If you don't specify the number of audio inlets or outlets, then csound~ will have two audio inlets and two audio oulets. By the way, audio outlets always appear to the left of non-audio outlets. Let's create a patch called audio_io.maxpat that demonstrates audio i/o:
Here is the corresponding text file (let's call it audio_io.csd): EXAMPLE 09B02.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 3
0dbfs = 1

instr 1
aTri1 inch  1
aTri2 inch  2
aTri3 inch  3
aMix  =     (aTri1 + aTri2 + aTri3) * .2
      outch 1, aMix, 2, aMix
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
In audio_io.maxpat, we are mixing three triangle waves into a stereo pair of outlets. In audio_io.csd, we use inch and outch to receive and send audio from and to csound~. inch and outch both use a numbering system that starts with one (the left-most inlet or outlet).
Notice the statement "nchnls = 3" in the orchestra header. This tells the Csound compiler to create three audio input channels and three audio output channels. Naturally, this means that our csound~ object should have no more than three audio inlets or outlets.
CONTROL MESSAGES
Control messages allow you to send numbers to Csound. This is the primary way to control Csound parameters at i-rate or k-rate. To control a-rate (audio) parameters, you must use an audio inlet. Here are two examples:

control frequency 2000
c resonance .8

Notice that you can use either "control" or "c" to indicate a control message. The second argument specifies the name of the channel you want to control and the third argument specifies the value. The following patch and text file demonstrate control messages:
EXAMPLE 09B03.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

giSine ftgen 1, 0, 16384, 10, 1 ; Generate a sine wave table.

instr 1
kPitch chnget  "pitch"
kMod   invalue "mod"
aFM    foscil  .2, cpsmidinn(kPitch), 2, kMod, 1.5, giSine
       outch   1, aFM, 2, aFM
endin
</CsInstruments>
<CsScore>
f0 86400
i1 0 86400
e
</CsScore>
</CsoundSynthesizer>
In the patch, notice that we use two different methods to construct control messages. The "pak" method is a little faster than the message box method, but do whatever looks best to you. You may be wondering how we can send messages to an audio inlet (remember, all inlets are audio inlets). Don't worry about it. In fact, we can send a message to any inlet and it will work. In the text file, notice that we use two different opcodes to receive the values sent in the control messages: chnget and invalue. chnget is more versatile (it works at i-rate and k-rate, and it accepts strings) and is a tiny bit faster than invalue. On the other hand, the limited nature of invalue (only works at k-rate, never requires any declarations in the header section of the orchestra) may be easier for newcomers to Csound.
MIDI
csound~ accepts raw MIDI numbers in its first inlet. This allows you to create Csound instrument instances with MIDI notes and also to control parameters using MIDI Control Change. csound~ accepts all types of MIDI messages, except for sysex, time code, and sync. Let's look at a patch and text file that use MIDI:
EXAMPLE 09B04.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

massign 0, 0 ; Disable default MIDI assignments.
massign 1, 1 ; Assign MIDI channel 1 to instr 1.

giSine ftgen 1, 0, 16384, 10, 1 ; Generate a sine wave table.

instr 1
iPitch cpsmidi
kMod   midic7  1, 0, 10
aFM    foscil  .2, iPitch, 2, kMod, 1.5, giSine
       outch   1, aFM, 2, aFM
endin
</CsInstruments>
<CsScore>
f0 86400
e
</CsScore>
</CsoundSynthesizer>
In the patch, notice how we're using midiformat to format note and control change lists into raw MIDI bytes. The "1" argument for midiformat specifies that all MIDI messages will be on channel one. In the text file, notice the massign statements in the header of the orchestra. "massign 0,0" tells Csound to clear all mappings between MIDI channels and Csound instrument numbers. This is highly recommended because forgetting to add this statement may cause confusion somewhere down the road. The next statement "massign 1,1" tells Csound to map MIDI channel one to instrument one. To get the MIDI pitch, we use the opcode cpsmidi. To get the FM modulation factor, we use midic7 in order to read the last known value of MIDI CC number one (mapped to the range [0,10]).
Notice that in the score section of the text file, we no longer have the statement "i1 0 86400" as we had in earlier examples. This is a good thing as you should never instantiate an instrument via both MIDI and score events (at least that has been this writer's experience).
EVENTS
To send Csound events (i.e. score statements), use the "event" or "e" message. You can send any type of event that Csound understands. The following patch and text file demonstrates how to send events:
EXAMPLE 09B05.csd
<CsoundSynthesizer>
<CsInstruments>
;Example by Davis Pyon
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
iDur   = p3
iCps   = cpsmidinn(p4)
iMeth  = 1
       print iDur, iCps, iMeth
aPluck pluck .2, iCps, iCps, 0, iMeth
       outch 1, aPluck, 2, aPluck
endin
</CsInstruments>
<CsScore>
f0 86400
e
</CsScore>
</CsoundSynthesizer>
In the patch, notice how the arguments to the pack object are declared. The "i1" statement tells Csound that we want to create an instance of instrument one. There is no space between "i" and "1" because pack considers "i" as a special symbol signifying an integer. The next number specifies the start time. Here, we use "0" because we want the event to start right now. The duration "3." is specified as a floating point number so that we can have non-integer durations. Finally, the number "64" determines the MIDI pitch. You might be wondering why the pack object output is being sent to a message box. This is good practice as it will reveal any mistakes you made in constructing an event message.
In the text file, we access the event parameters using p-statements. We never access p1 (instrument number) or p2 (start time) because they are not important within the context of our instrument. Although p3 (duration) is not used for anything here, it is often used to create audio envelopes. Finally, p4 (MIDI pitch) is converted to cycles-per-second. The print statement is there so that we can verify the parameter values.
46. QUTECSOUND
QuteCsound is a free, cross-platform graphical frontend to Csound. It features syntax highlighting, code completion and a graphical widget editor for realtime control of Csound. It comes with many useful code examples, from basic tutorials to complex synthesizers and pieces written in Csound. It also features an integrated Csound language help display. QuteCsound can be used as a code editor tailored for Csound, as it facilitates running and rendering Csound files without typing on the command line, using the Run and Render buttons.
In the widget editor panel, you can create a variety of widgets to control Csound. To link the value from a widget, you first need to set its channel, and then use the Csound opcode invalue. To send values to widgets, e.g. for data display, you need to use the outvalue opcode.
QuteCsound also offers convenient facilities for score editing in a spreadsheet-like environment, which can be transformed using Python scripting.
You will find more detailed information and video tutorials in the QuteCsound home page at http://qutecsound.sourceforge.net.
47. WINXOUND
WinXound Description: WinXound is a free and open-source Front-End GUI Editor for CSound 5, CSoundAV, CSoundAC, with Python and Lua support, developed by Stefano Bonetti. It runs on Microsoft Windows, Apple Mac OsX and Linux. WinXound is optimized to work with the new CSound 5 compiler.
WinXound Features:
Edit CSound, Python and Lua files (csd, orc, sco, py, lua) with Syntax Highlight and Rectangular Selection;
Run CSound, CSoundAV, CSoundAC, Python and Lua compilers;
Run external language tools (QuteCsound, Idle, or other GUI Editors);
CSound analysis user friendly GUI;
Integrated CSound manual help;
Possibility to set personal colors for the syntax highlighter;
Convert orc/sco to csd or csd to orc/sco;
Split code into two windows horizontally or vertically;
CSound csd explorer (File structure for Tags and Instruments);
CSound Opcodes autocompletion menu;
Line numbers;
Bookmarks;
...and much more... (Download it!)

Web Site and Contacts:
- Web: winxound.codeplex.com
- Email: [email protected] (or [email protected])
REQUIREMENTS

System requirements for Microsoft Windows:
Supported: Xp, Vista, Seven (32/64 bit versions);
(Note: for Windows Xp you also need the Microsoft .Net Framework version 2.0 or higher. You can download it from the www.microsoft.com site);
CSound 5: http://sourceforge.net/projects/csound - (needed for the CSound and LuaJit compilers);
Not required but suggested: CSoundAV by Gabriel Maldonado (http://www.csounds.com/maldonado/);
Required to work with Python: Python compiler (http://www.python.org/download/)

System requirements for Apple Mac OsX:
OsX 10.5 or higher;
CSound 5: http://sourceforge.net/projects/csound - (needed for the CSound compiler);

System requirements for Linux:
Gnome environment or libraries;
Please read the "ReadMe" file in the source code carefully.
Microsoft Windows Installation and Usage:
Download and install the Microsoft .Net Framework version 2.0 or higher (only for Windows Xp);
Download and install the latest version of CSound 5 (http://sourceforge.net/projects/csound);
Download the WinXound zipped file, decompress it where you want (see the (*)note below), and double-click on the "WinXound_Net" executable;
(*)note: THE WINXOUND FOLDER MUST BE LOCATED IN A PATH WHERE YOU HAVE FULL READ AND WRITE PERMISSION (for example in your User Personal folder).

Apple Mac OsX Installation and Usage:
Download and install the latest version of CSound 5 (http://sourceforge.net/projects/csound);
Download the WinXound zipped file, decompress it and drag WinXound.app to your Applications folder (or wherever you want). Launch it from there.

Linux Installation and Usage:
Download and install the latest version of CSound 5 for your distribution;
Ubuntu (32/64 bit): Download the WinXound zipped file and decompress it in a location where you have full read and write permissions;
To compile the source code:
1) Before compiling WinXound you need to install:
   - gtkmm-2.4 (libgtkmm-2.4-dev) >= 2.12
   - vte (libvte-dev)
   - webkit-1.0 (libwebkit-dev)
2) To compile WinXound, open the terminal window, go into the uncompressed "winxound_gtkmm" directory and type:
   ./configure
   make
3) To use WinXound without installing it:
   make standalone
   ./bin/winxound
   [Note: the WinXound folder must be located in a path where you have full read and write permission.]
4) To install WinXound:
   make install
Source Code: Windows: The source code is written in C# using Microsoft Visual Studio C# Express Edition 2008. OsX: The source code is written in Cocoa and Objective-C using XCode 3.2 version. Linux: The source code is written in C++ (Gtkmm) using Anjuta. Note: The TextEditor is entirely based on the wonderful SCINTILLA text control by Neil Hodgson (http://www.scintilla.org).
Credits: Many thanks for suggestions and debugging help to Roberto Doati, Gabriel Maldonado, Mark Jamerson, Andreas Bergsland, Oeyvind Brandtsegg, Francesco Biasiol, Giorgio Klauer, Paolo Girol, Francesco Porta, Eric Dexter, Menno Knevel, Joseph Alford, Panos Katergiathis, James Mobberley, Fabio Macelloni, Giuseppe Silvi, Maurizio Goina, Andrés Cabrera, Peiman Khosravi, Rory Walsh and Luis Jure.
48. BLUE
blue is a Java-based music composition environment for use with Csound. It provides higher level abstractions such as a timeline, GUI-based instruments, score generating soundObjects like pianoRolls, scripting, and more. It is available at: http://blue.kunstmusik.com

CSOUND UTILITIES

49. CSOUND UTILITIES
First we create an instance of Csound, getting an opaque pointer that will be passed to most C API functions we will use. Then we compile the orc, sco pair of files or the csd file given as input argument through the argv parameter of the main function. If the compilation is successful (result == 0), we call the csoundPerform function. Finally, when csoundPerform returns, we destroy our instance before ending the program. On a linux system, with libcsound named libcsound64 (double version of the csound library), supposing that all include and library paths are set correctly, we would build the above example with the following command:
gcc -DUSE_DOUBLE -o csoundCommand csoundCommand.c -lcsound64
The C API has been wrapped in a C++ class for convenience. This gives the Csound basic C++ API. With this API, the above example would become:
#include <csound/csound.hpp>

int main(int argc, char **argv)
{
  Csound *cs = new Csound();
  int result = cs->Compile(argc, argv);
  if (result == 0) {
    result = cs->Perform();
  }
  return (result >= 0 ? 0 : result);
}
Here, we get a pointer to a Csound object instead of the csound opaque pointer. We call methods of this object instead of C functions, and we don't need to call csoundDestroy at the end of the program, because the C++ object destruction mechanism takes care of this. On our linux system, the example would be built with the following command:
g++ -DUSE_DOUBLE -o csoundCommandCpp csoundCommand.cpp -lcsound64
The Csound API has been wrapped to other languages. The Csound Python API wraps the Csound API to the Python language. To use this API, you have to import the csnd module. The csnd module is normally installed in the site-packages or dist-packages directory of your python distribution as a csnd.py file. Our csound command example becomes:
import sys
import csnd

def csoundCommand(args):
    csound = csnd.Csound()
    arguments = csnd.CsoundArgVList()
    for s in args:
        arguments.Append(s)
    result = csound.Compile(arguments.argc(), arguments.argv())
    if result == 0:
        result = csound.Perform()
    return result

def main():
    csoundCommand(sys.argv)   # (the end of this example was lost in the source; this call is the presumed continuation)

if __name__ == '__main__':
    main()
We use a Csound object (remember that Python has OOP features). Note the use of the CsoundArgVList helper class to wrap the program input arguments into a C++ manageable object. In fact, the Csound class has syntactic sugar (thanks to method overloading) for the Compile method: if you have fewer than six string arguments to pass to this method, you can pass them directly. But here, as we don't know the number of arguments to our csound command, we use the more general mechanism of the CsoundArgVList helper class. The Csound Java API wraps the Csound API to the Java language. To use this API, you have to import the csnd package. The csnd package is located in the csnd.jar archive, which has to be in your Java class path. Our csound command example becomes:
import csnd.*;

public class CsoundCommand
{
  private Csound csound = null;
  private CsoundArgVList arguments = null;

  public CsoundCommand(String[] args) {
    csound = new Csound();
    arguments = new CsoundArgVList();
    arguments.Append("dummy");
    for (int i = 0; i < args.length; i++) {
      arguments.Append(args[i]);
    }
    int result = csound.Compile(arguments.argc(), arguments.argv());
    if (result == 0) {
      result = csound.Perform();
    }
    System.out.println(result);
  }

  public static void main(String[] args) {
    CsoundCommand csCmd = new CsoundCommand(args);
  }
}
Note the "dummy" string as first argument in the arguments list. C, C++ and Python expect that the first argument in a program argv input array is implicitly the name of the calling program. This is not the case in Java: the first location in the program argv input array contains the first command line argument if any. So we have to had this "dummy" string value in the first location of the arguments array so that the C API function called by our csound.Compile method is happy. This illustrates a fundamental point about the Csound API. Whichever API wrapper is used (C++, Python, Java, etc), it is the C API which is working under the hood. So a thorough knowledge of the Csound C API is highly recommended if you plan to use the Csound API in any of its different flavours. The main source of information about the Csound C API is the csound.h header file which is fully commented. On our linux system, with csnd.jar located in /usr/local/lib/csound/java, our Java Program would be compiled and run with the following commands:
javac -cp /usr/local/lib/csound/java/csnd.jar CsoundCommand.java java -cp /usr/local/lib/csound/java/csnd.jar:. CsoundCommand
There is also an extended Csound C++ API, which adds a CsoundFile class to the basic Csound C++ API; the CsoundAC C++ API, which provides a class hierarchy for doing algorithmic composition using Michael Gogins' concept of music graphs; and API wrappers for the LISP, LUA and HASKELL languages. In this introductory chapter we have focused on the basic C++ API and the Python and Java APIs.
ENVELOPES
Simple Standard Envelopes
linen linenr adsr madsr more
Envelopes By Linear And Exponential Generators
linseg expseg transeg (linsegr expsegr transegr) more
Envelopes By Function Tables
DELAYS
Audio Delays
vdelay vdelayx vdelayw delayr delayw deltap deltapi deltap3 deltapx deltapxw deltapn
Control Delays
delk vdel_k
FILTERS
Compare Standard Filters and Specialized Filters overviews.
Low Pass Filters
tone tonex butlp clfilt
High Pass Filters
atone atonex buthp clfilt
Band Pass And Resonant Filters
reson resonx resony resonr resonz butbp
Band Reject Filters
areson butbr
Filters For Smoothing Control Signals
port portk
REVERB
(pconvolve) freeverb reverbsc reverb nreverb babo
SPATIALIZATION
Panning
pan2 pan
VBAP
vbaplsinit vbap4 vbap8 vbap16
Ambisonics
bformenc1 bformdec1
Binaural / HRTF
hrtfstat hrtfmove hrtfmove2 hrtfer
GRANULAR SYNTHESIS
partikkel others sndwarp
CONVOLUTION
pconvolve ftconv dconv
DATA
BUFFER / FUNCTION TABLES
Creating Function Tables (Buffers)
ftgen GEN Routines
Writing To Tables
tableiw / tablew tabw_i / tabw
Reading From Tables
table / tablei / table3 tab_i / tab
Saving Tables To Files
ftsave / ftsavek TableToSF
REALTIME INTERACTION
MIDI
Opcodes For Use In MIDI-Triggered Instruments
massign pgmassign notnum cpsmidi veloc ampmidi midichn pchbend aftouch polyaft
Opcodes For Use In All Instruments
ctrl7 (ctrl14/ctrl21) initc7 ctrlinit (initc14/initc21) midiin midiout
HUMAN INTERFACES
Widgets
FLTK overview here
Keys
sensekey
Mouse
xyin
WII
wiiconnect wiidata wiirange wiisend
P5 Glove
p5gconnect p5gdata
INSTRUMENT CONTROL
SCORE PARAMETER ACCESS
p(x) pindex pset passign pcount
Tempo Reading
tempo miditempo tempoval
Duration Modifications
ihold xtratim
Time Signal Generators
metro mpulse
PROGRAM FLOW
init igoto kgoto timout reinit/rigoto/rireturn
EVENT TRIGGERING
event_i / event scoreline_i / scoreline schedkwhen seqtime /seqtime2 timedseq
INSTRUMENT SUPERVISION
Instances And Allocation
active maxalloc prealloc
Turning On And Off
turnon turnoff/turnoff2 mute remove exitnow
MATH
MATHEMATICAL CALCULATIONS
Arithmetic Operations
+ * / ^ %
exp(x) abs(x) int(x) frac(x) round(x) ceil(x) floor(x)
Trigonometric Functions
sin(x) cos(x) tan(x) sinh(x) cosh(x) tanh(x) sininv(x) cosinv(x) taninv(x) taninv2(x)
Logic Operators
&& ||
CONVERTERS
MIDI To Frequency
cpsmidi cpsmidinn more
Frequency To MIDI
F2M F2MC (UDO's)
Cent Values To Frequency
cent
Amplitude Converters
ampdb ampdbfs dbamp dbfsamp
Scaling
Scali Scalk Scala (UDO's)
PLUGINS
PLUGIN HOSTING
LADSPA
dssiinit dssiactivate dssilist dssiaudio dssictls
VST
vstinit vstaudio/vstaudiog vstmidiout vstparamset/vstparamget vstnote vstinfo vstbankload vstprogset vstedit
EXPORTING CSOUND FILES TO PLUGINS
ENVELOPES
Simple Standard Envelopes
linen applies a linear rise (fade in) and decay (fade out) to a signal. It is very easy to use, as you put the raw audio signal in and get the enveloped signal out. linenr does the same for any note whose duration is not fixed at the beginning, like MIDI notes or other real-time events: linenr begins to fade out exactly when the instrument is turned off, adding an extra time after this turnoff. adsr calculates the classical attack-decay-sustain-release envelope. The result is to be multiplied with the audio signal to get the enveloped signal. madsr does the same for a realtime note (as explained above for linenr). Other standard envelope generators can be found in the Envelope Generators overview of the Canonical Csound Manual.
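A minimal sketch of the two approaches (all amplitude, frequency and duration values below are freely chosen for illustration): instrument 1 envelopes the audio signal directly with linen, while instrument 2 generates a control envelope with adsr and multiplies it with the signal.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1                        ; linen shapes the audio signal directly
aSig poscil 0.3, 440, giSine
aOut linen  aSig, 0.1, p3, 0.5 ; fade in 0.1 sec, fade out 0.5 sec
     out    aOut
endin

instr 2                        ; adsr creates a control envelope which is multiplied with the signal
kEnv adsr   0.05, 0.1, 0.7, 0.3
aSig poscil 0.3, 330, giSine
     out    aSig*kEnv
endin
</CsInstruments>
<CsScore>
i 1 0 2
i 2 3 2
</CsScore>
</CsoundSynthesizer>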
DELAYS
Audio Delays
The vdelay family of opcodes is easy to use and implements all the necessary features for working with delays: vdelay implements a variable delay at audio rate with linear interpolation. vdelay3 offers cubic interpolation. vdelayx has an even higher quality interpolation (and is for this reason slower). vdelayxs lets you input and output two channels, and vdelayxq four. vdelayw changes the position of the write tap in the delay line instead of the read tap. vdelayws is for stereo, and vdelaywq for quadro. The delayr/delayw opcodes establish a delay line in a more elaborate way. The advantage is that you can have as many taps in one delay line as you need. delayr establishes a delay line and reads from it. delayw writes an audio signal to the delay line. deltap, deltapi, deltap3, deltapx and deltapxw work similarly to the relevant opcodes of the vdelay family (see above). deltapn offers a tap delay measured in samples, not seconds.
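The following sketch (all delay times and mix values freely chosen) shows both approaches side by side: a single variable delay with vdelay, and a delayr/delayw delay line with two taps read by deltap.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1
aSrc  poscil  0.2, 440, giSine
aSrc  linen   aSrc, 0.01, p3, 0.05
; a single variable delay, delay time moving between 10 and 100 ms
kDlt  randomi 10, 100, 0.5
aDlt  interp  kDlt
aVdel vdelay  aSrc, aDlt, 1000 ; maximum delay time in ms
; a delay line with two fixed taps
aDump delayr  1                ; establish a delay line of 1 second
aTap1 deltap  0.25             ; first tap after 250 ms
aTap2 deltap  0.5              ; second tap after 500 ms
      delayw  aSrc             ; write the source into the delay line
      out     (aSrc + aVdel + aTap1*0.7 + aTap2*0.4) * 0.5
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>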
Control Delays
delk and vdel_k let you delay any k-signal by some time interval (usable for instance as a kind of wait mode).
FILTERS
Csound has an extremely rich collection of filters; good overviews are available on the Csound Manual pages for Standard Filters and Specialized Filters. So here only some of the most frequently used filters are mentioned, and some tips are given. Note that filters usually change the signal level, so you will often need the balance opcode.
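A minimal sketch of this filter-plus-balance idiom, using reson on white noise (all frequency and amplitude values freely chosen):

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1
aNoise rand    0.3                 ; white noise as input signal
kCf    randomi 200, 2000, 1        ; moving center frequency
aFilt  reson   aNoise, kCf, kCf/10 ; band pass filter (changes the signal level considerably)
aOut   balance aFilt, aNoise       ; restore the level of the unfiltered signal
       out     aOut
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>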
REVERB
Note that you can easily work with convolution reverbs based on impulse response files in Csound, for instance with pconvolve. freeverb is the implementation of Jezar's well-known free (stereo) reverb. reverbsc is a stereo FDN reverb, based on work by Sean Costello. reverb and nreverb are the traditional Csound reverb units. babo is a physical model reverberator ("ball within the box").
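As a small illustration, assuming a simple sine tone as the dry signal and freely chosen room size, damping and mix values, freeverb can be used like this:

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1
aDry  poscil 0.3, 440, giSine
aDry  linen  aDry, 0.01, p3, 0.05
aWetL, aWetR freeverb aDry, aDry, 0.8, 0.5 ; room size, high frequency damping
      outs   aDry*0.7 + aWetL*0.3, aDry*0.7 + aWetR*0.3
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>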
Pitch Estimation
ptrack, pitch and pitchamdf track the pitch of an incoming audio signal, using different methods. pvscent calculates the spectral centroid for FFT streaming signals (see below under "FFT And Spectral Processing")
Tempo Estimation
tempest estimates the tempo of beat patterns in a control signal.
Dynamic Processing
compress compresses, limits, expands, ducks or gates an audio signal. dam is a dynamic compressor/expander. clip clips an a-rate signal to a predefined limit, in a soft manner.
SPATIALIZATION
Panning
pan2 distributes a mono audio signal across two channels, with different envelope options. pan distributes a mono audio signal amongst four channels.
VBAP
vbaplsinit configures VBAP output according to loudspeaker parameters for a 2- or 3-dimensional space. vbap4 / vbap8 / vbap16 distribute an audio signal among up to 16 channels, with k-rate control over azimuth, elevation and spread.
Ambisonics
bformenc1 encodes an audio signal to the Ambisonics B format. bformdec1 decodes Ambisonics B format signals to loudspeaker signals in different possible configurations.
Binaural / HRTF
hrtfstat, hrtfmove and hrtfmove2 are opcodes for creating 3d binaural audio for headphones. hrtfer is an older implementation, using an external file.
Doppler Shift
doppler lets you calculate the doppler shift depending on the position of the sound source and the microphone.
GRANULAR SYNTHESIS
partikkel is the most flexible opcode for granular synthesis. You should be able to do everything you like in this field. The only drawback is the large number of input arguments, so you may want to use other opcodes for certain purposes. You can find a list of other relevant opcodes here. sndwarp focusses granular synthesis on time stretching and/or pitch modifications. Compare waveset and the pvs-opcodes pvsfread, pvsdiskin, pvscale, pvshift for other implementations of time and/or pitch modifications.
CONVOLUTION
pconvolve performs convolution based on a uniformly partitioned overlap-save algorithm. ftconv is similar to pconvolve, but you can also use parts of the impulse response file, instead of reading the whole file. dconv performs direct convolution.
FFT Info
pvsinfo gets info either from a realtime f-signal or from a .pvx file. pvsbin gets the amplitude and frequency values from a single bin of a f-signal. pvscent calculates the spectral centroid of a signal.
FM Instrument Models
see here
Writing To Tables
tableiw / tablew: Write values to a function table at i-rate (tableiw), or at k-rate and a-rate (tablew). These opcodes provide many options and are safe because of boundary checking, but you may have problems with non-power-of-two tables. tabw_i / tabw: Write values to a function table at i-rate (tabw_i), or at k-rate or a-rate (tabw). They offer fewer options than the tableiw/tablew opcodes, but also work for non-power-of-two tables. They do not provide a boundary check, which makes them fast but leaves the user responsible for not writing any values outside the table boundaries.
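A minimal sketch of writing to and reading from a table (table size, indices and values freely chosen): tableiw fills a small table at i-time, and table reads one value back at k-rate.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giTab ftgen 0, 0, 8, -2, 0 ; an empty table with 8 points

instr 1
; write three values into the table at i-time
     tableiw 10, 0, giTab
     tableiw 20, 1, giTab
     tableiw 30, 2, giTab
; read one value back at k-rate and print it twice per second
kVal table   1, giTab
     printk  0.5, kVal
endin
</CsInstruments>
<CsScore>
i 1 0 2
</CsScore>
</CsoundSynthesizer>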
k <- a
downsamp converts an a-rate signal to a k-rate signal, with optional averaging. max_k returns the maximum of an a-rate signal within a certain time span, with different options for the calculation.
a <- k
upsamp converts a k-rate signal to an a-rate signal by simple repetitions. It is the same as the statement asig=ksig. interp converts a k-rate signal to an a-rate signal by interpolation.
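A small sketch of these converters (all values freely chosen): a k-rate random line is converted to a-rate both stepwise (upsamp) and smoothly (interp), and the audio signal is brought back to k-rate with downsamp.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1
kAmp  randomi  0, 0.3, 2    ; a k-rate control signal
aAmp1 upsamp   kAmp         ; stepwise conversion (same as aAmp1 = kAmp)
aAmp2 interp   kAmp         ; smoothed conversion by linear interpolation
aSig  poscil   1, 440, giSine
kSnap downsamp aSig         ; back to k-rate: one value per control cycle
      out      aSig * aAmp2 ; the interpolated version avoids zipper noise
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>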
Formatted Printing
prints lets you print a format string at i-time. The format is similar to C-style format strings. There is no %s format, therefore no string variables can be printed. printf_i is very similar to prints. It also works at init-time. The advantage in comparison to prints is that it can print string variables. On the other hand, you need a trigger and at least one input argument. printks is like prints, but takes k-variables, and as with printk you must specify a time interval between printings. printf is like printf_i, but works at k-rate.
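A short sketch of the two most common cases (format strings and values freely chosen); in these opcodes %n stands for a newline:

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

instr 1
      prints  "instrument %d starts at %f seconds%n", p1, p2
kFreq randomi 400, 800, 1
      printks "current frequency: %f%n", 0.5, kFreq ; print every half second
endin
</CsInstruments>
<CsScore>
i 1 0 3
</CsScore>
</CsoundSynthesizer>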
String Variables
sprintf works like printf_i, but stores the output in a string variable, instead of printing it out. sprintfk is the same for k-rate arguments. strset links any string with a numeric value. strget transforms a strset number back to a string.
Remote Instruments
remoteport defines the port for use with the remote system. insremot will send note events from a source machine to one destination. insglobal will send note events from a source machine to many destinations. midiremot will send midi events from a source machine to one destination. midiglobal will broadcast the midi events to all the machines involved in the remote concert.
Network Audio
socksend sends audio data to other processes using the low-level UDP or TCP protocols. sockrecv receives audio data from other processes using the low-level UDP or TCP protocols.
HUMAN INTERFACES
Widgets
The FLTK Widgets are integrated in Csound. Information and examples can be found here. QuteCsound implements a more modern and easy-to-use system for widgets. The communication between the widgets and Csound is done via invalue (or chnget) and outvalue (or chnset).
Keys
sensekey gets the input of your computer keys.
Mouse
xyin can get the mouse position if your frontend does not provide this sensing otherwise.
WII
wiiconnect reads data from a number of external Nintendo Wiimote controllers. wiidata reads data fields from a number of external Nintendo Wiimote controllers. wiirange sets scaling and range limits for certain Wiimote fields. wiisend sends data to one of a number of external Wii controllers.
P5 Glove
p5gconnect reads data from an external P5 Glove controller. p5gdata reads data fields from an external P5 Glove controller.
Tempo Reading
tempo allows the performance speed of Csound scored events to be controlled from within an orchestra. miditempo returns the current tempo at k-rate, of either the midi file (if available) or the score. tempoval reads the current value of the tempo.
Duration Modifications
ihold causes a finite-duration note to become a 'held' note. xtratim extends the duration of the current instrument instance.
PROGRAM FLOW
init initializes a k- or a-variable (assigns a value to a k- or a-variable which is valid at i-time). igoto jumps to a label at i-time. kgoto jumps to a label at k-time. timout jumps to a label for a given time. It can be used in conjunction with reinit to perform time loops (see the chapter about Control Structures for more information). reinit / rigoto / rireturn force a certain section of code to be reinitialized (i.e. i-rate variables are renewed).
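As a sketch of the timout/reinit time loop mentioned above (frequencies and loop duration freely chosen), a short tone is restarted every half second with a new random frequency:

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1 ; restart a short tone every 0.5 seconds via a timout/reinit loop
loop:
iFreq random  400, 800     ; new random frequency at each reinit pass
      timout  0, 0.5, play ; branch to 'play' for half a second
      reinit  loop         ; afterwards reinitialize from 'loop'
play:
kEnv  transeg 0.2, 0.5, -3, 0
aSig  poscil  kEnv, iFreq, giSine
      out     aSig
endin
</CsInstruments>
<CsScore>
i 1 0 5
</CsScore>
</CsoundSynthesizer>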
EVENT TRIGGERING
event_i / event: Generate an instrument event at i-time (event_i) or at k-time (event). Easy to use, but you cannot send a string to the called instrument. scoreline_i / scoreline: Generate one or more instrument events at i-time (scoreline_i) or at k-time (scoreline). Like event_i/event, but you can send several score lines at once and, unlike event_i/event, you can send strings. On the other hand, you must usually preformat your scoreline string using sprintf. schedkwhen triggers an instrument event at k-time if a certain condition is given. seqtime / seqtime2 can be used to generate a trigger signal according to time values in a function table. timedseq is an event sequencer in which time can be controlled by a time-pointer. Sequence data are stored in a table.
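A minimal sketch of k-time event triggering (all values freely chosen): a master instrument uses metro and event to call a sounding instrument once per second, passing a random frequency as p4.

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1
giSine ftgen 0, 0, 2^10, 10, 1

instr 1 ; master instrument: calls instrument 2 once per second
kTrig metro 1
if kTrig == 1 then
  kFreq random 400, 800
  ;     "i" p1 p2 p3   p4
  event "i", 2, 0, 0.3, kFreq
endif
endin

instr 2 ; a short tone, frequency taken from p4
kEnv transeg 0.2, p3, -3, 0
aSig poscil  kEnv, p4, giSine
     out     aSig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>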
INSTRUMENT SUPERVISION
Instances And Allocation
active returns the number of active instances of an instrument. maxalloc limits the number of allocations (instances) of an instrument. prealloc creates space for instruments but does not run them.
Named Instruments
nstrnum returns the number of a named instrument.
zak
Trigonometric Functions
sin(x), cos(x), tan(x) perform a sine, cosine or tangent function. sinh(x), cosh(x), tanh(x) perform a hyperbolic sine, cosine or tangent function. sininv(x), cosinv(x), taninv(x) and taninv2(x) perform the arcsine, arccosine and arctangent functions.
Logic Operators
&& and || are the symbols for a logical "and" respective "or". Note that you can use here parentheses for defining the precedence, too, for instance: if (ival1 < 10 && ival2 > 5) || (ival1 > 20 && ival2 < 0) then ...
CONVERTERS
MIDI To Frequency
cpsmidi converts a MIDI note number from a triggered instrument to the frequency in Hertz. cpsmidinn does the same for any input values (i- or k-rate). Other opcodes convert to Csound's pitch- or octave-class systems. They can be found here.
Frequency To MIDI
Csound has no built-in opcode for the conversion of a frequency to a midi note number, because this is a rather simple calculation. You can find a User Defined Opcode for rounding to the nearest midi note number, or for the exact translation to a midi note number plus a cent value as fractional part.
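The UDOs referred to above are the reference versions; as a rough sketch of the underlying calculation (midi note number = 69 + 12 * log2(frequency/440)), a hypothetical i-rate opcode could look like this:

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

opcode FreqToMidi, i, i ; hypothetical name, not one of the published UDOs
 iFreq xin
 iMidi = 69 + 12 * (log(iFreq/440) / log(2))
 xout  iMidi
endop

instr 1
iNum FreqToMidi 440
     print      iNum ; prints 69
iNum FreqToMidi 261.626
     print      iNum ; prints approximately 60
endin
</CsInstruments>
<CsScore>
i 1 0 1
</CsScore>
</CsoundSynthesizer>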
Amplitude Converters
ampdb returns the amplitude equivalent of a dB value. ampdb(0) returns 1, ampdb(-6) returns 0.501187, and so on. ampdbfs returns the amplitude equivalent of the dB value, relative to what has been set as 0dbfs (1 is recommended; the default is 32768 = 2^15). So ampdbfs(-6) returns 0.501187 for 0dbfs=1, but 16422.904297 for 0dbfs=32768. dbamp returns the decibel equivalent of an amplitude value, where an amplitude of 1 is the maximum. So dbamp(1) -> 0 and dbamp(0.5) -> -6.020600. dbfsamp returns the decibel equivalent of an amplitude value relative to the 0dbfs setting. So dbfsamp(10) is 20.000002 for 0dbfs=1 but -70.308998 for 0dbfs=32768.
Scaling
Scaling of signals from an input range to an output range, like the "scale" object in Max/MSP, is not implemented in Csound, because it is a rather simple calculation. It is available as User Defined Opcode: Scali (i-rate), Scalk (k-rate) or Scala (a-rate).
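The linked UDOs are the reference versions; as a rough sketch of the underlying calculation, a hypothetical k-rate scaler could look like this (opcode name and all ranges freely chosen):

<CsoundSynthesizer>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

opcode ScaleK, k, kiiii ; hypothetical name, not one of the linked UDOs
 kVal, iInMin, iInMax, iOutMin, iOutMax xin
 kOut = iOutMin + (iOutMax-iOutMin) * (kVal-iInMin) / (iInMax-iInMin)
 xout kOut
endop

instr 1
kCtl  randomi 0, 1, 2               ; a control signal between 0 and 1
kFreq ScaleK  kCtl, 0, 1, 100, 1000 ; scaled to the range 100..1000
      printk  0.5, kFreq
endin
</CsInstruments>
<CsScore>
i 1 0 4
</CsScore>
</CsoundSynthesizer>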
PYTHON OPCODES
pyinit initializes the Python interpreter. pyrun runs a Python statement or block of statements. pyexec executes a script from a file at k-time or i-time, or when a trigger has been received. pycall invokes the specified Python callable at k-time or i-time. pyeval evaluates a generic Python expression and stores the result in a Csound k- or i-variable, with an optional trigger. pyassign assigns the value of the given Csound variable to a Python variable, possibly destroying its previous content.
SYSTEM OPCODES
getcfg returns various Csound configuration settings as a string at init time. system / system_i call an external program via the system call.
VST
vstinit loads a plugin. vstaudio / vstaudiog return a plugin's output. vstmidiout sends midi data to a plugin. vstparamset / vstparamget sends and receives automation data to and from the plugin. vstnote sends a midi note with a definite duration. vstinfo outputs the parameter and program names for a plugin. vstbankload loads an .fxb bank. vstprogset sets the program in a .fxb bank. vstedit opens the GUI editor for the plugin, when available.
60. GLOSSARY
control cycle, control period or k-loop is a pass during the performance of an instrument, in which all k- and a-variables are renewed. The time for one control cycle is measured in samples and determined by the ksmps constant in the orchestra header. If your sample rate is 44100 and your ksmps value is 10, the time for one control cycle is 1/4410 = 0.000227 seconds. See the chapter about Initialization And Performance Pass for more information.
control rate or k-rate (kr) is the number of control cycles per second. It can be calculated as the relationship of the sample rate sr and the number of samples in one control period ksmps. If your sample rate is 44100 and your ksmps value is 10, your control rate is 4410, so you have 4410 control cycles per second.
f-statement or function table statement is a score line which starts with an "f" and generates a function table. See the chapter about function tables for more information. A dummy f-statement is a statement like "f 0 3600" which looks like a function table statement, but instead of generating any table it serves just to keep Csound running for a certain time (here 3600 seconds = 1 hour).
i-time or init-time or i-rate signify the time in which all the variables starting with an "i" get their values. These values are just given once for an instrument call. See the chapter about Initialization And Performance Pass for more information.
k-time is the time during the performance of an instrument, after the initialization. Variables starting with a "k" can alter their values in each ->control cycle. See the chapter about Initialization And Performance Pass for more information.
time stretching can be done in various ways in Csound. See sndwarp, waveset, pvstanal and the Granular Synthesis opcodes. In the frequency domain, you can use the pvs-opcodes pvsfread, pvsdiskin, pvscale, pvshift.
61. LINKS
DOWNLOADS
Csound: http://sourceforge.net/projects/csound/files/
Csound's User Defined Opcodes: http://www.csounds.com/udo/
QuteCsound: http://sourceforge.net/projects/qutecsound/files/
WinXound: http://winxound.codeplex.com
Blue: http://sourceforge.net/projects/bluemusic/files/
COMMUNITY
Csound's info page on sourceforge is a good collection of links and basic infos. csounds.com is the main page for the Csound community, including news, online tutorial, forums and many links. The Csound Journal is a main source for different aspects of working with Csound. The Csound Blog by Jacob Joaquin offers a lot of interesting articles, tutorials, examples and software.
TUTORIALS
A Beginning Tutorial is a short introduction by Barry Vercoe, the "father of Csound". An Instrument Design TOOTorial by Richard Boulanger (1991) is another classic introduction, still very much worth reading. Introduction to Sound Design in Csound, also by Richard Boulanger, is the first chapter of the famous Csound Book (2000).
Virtual Sound by Alessandro Cipriani and Maurizio Giri (2000) A Csound Tutorial by Michael Gogins (2009), one of the main Csound Developers.
VIDEO TUTORIALS
A playlist as overview by Alex Hofmann: http://www.youtube.com/view_play_list?p=3EE3219702D17FD3
QuteCsound
QuteCsound: Where to start? http://www.youtube.com/watch?v=0XcQ3ReqJTM
First instrument: http://www.youtube.com/watch?v=P5OOyFyNaCA
Using MIDI: http://www.youtube.com/watch?v=8zszIN_N3bQ
About configuration: http://www.youtube.com/watch?v=KgYea5s8tFs
Presets tutorial: http://www.youtube.com/watch?v=KKlCTxmzcS0 http://www.youtube.com/watch?v=aES-ZfanF3c
Live Events tutorial: http://www.youtube.com/watch?v=O9WU7DzdUmE http://www.youtube.com/watch?v=Hs3eO7o349k http://www.youtube.com/watch?v=yUMzp6556Kw
New editing features in 0.6.0: http://www.youtube.com/watch?v=Hk1qPlnyv88
EXAMPLE COLLECTIONS
Csound Realtime Examples by Iain McCurdy is one of the most inspiring and up-to-date collections. The Amsterdam Catalog by John-Philipp Gather is particularly interesting because of the adaptation of Jean-Claude Risset's famous "Introductory Catalogue of Computer Synthesized Sounds" from 1969.
BOOKS
The Csound Book (2000) edited by Richard Boulanger is still the compendium for anyone who really wants to go in depth with Csound. Virtual Sound by Alessandro Cipriani and Maurizio Giri (2000)
Signale, Systeme und Klangsynthese by Martin Neukom (2003, German) has many interesting examples in Csound.