Audio Unit Hosting Guide for iOS
2010-09-01
Apple Inc. © 2010 Apple Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple's copyright notice. The Apple logo is a trademark of Apple Inc. Use of the keyboard Apple logo (Option-Shift-K) for commercial purposes without the prior written consent of Apple may constitute trademark infringement and unfair competition in violation of federal and state laws. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers. Every effort has been made to ensure that the information in this document is accurate. Apple is not responsible for typographical errors. Apple Inc. 1 Infinite Loop Cupertino, CA 95014 408-996-1010 Apple, the Apple logo, iPhone, iPod, Mac, Mac OS, Objective-C, and Xcode are trademarks of Apple Inc., registered in the United States and other countries. IOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used under license. Simultaneously published in the United States and Canada.
Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED AS IS, AND YOU, THE READER, ARE
ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state.
INTRODUCTION
iOS provides audio processing plug-ins that support mixing, equalization, format conversion, and realtime input/output for recording, playback, offline rendering, and live conversation such as for VoIP (Voice over Internet Protocol). You can dynamically load and use (that is, host) these powerful and flexible plug-ins, known as audio units, from your iOS application. Audio units usually do their work in the context of an enclosing object called an audio processing graph, as shown in the figure. In this example, your app sends audio to the first audio units in the graph by way of one or more callback functions and exercises individual control over each audio unit. The output of the I/O unit, the last audio unit in this or any audio processing graph, connects directly to the output hardware.
[Figure: An audio processing graph running in an app on an iOS device; app callbacks feed the first audio units in the graph, and the final I/O unit connects to the output hardware.]
At a Glance
Because audio units constitute the lowest programming layer in the iOS audio stack, to use them effectively requires deeper understanding than you need for other iOS audio technologies. Unless you require realtime playback of synthesized sounds, low-latency I/O (input and output), or specific audio unit features, look first at the Media Player, AV Foundation, OpenAL, or Audio Toolbox frameworks. These higher-level technologies employ audio units on your behalf and provide important additional features, as described in Multimedia Programming Guide.
Using audio units directly gives you two key advantages:

- Excellent responsiveness. Because you have access to a realtime priority thread in an audio unit render callback function, your audio code is as close as possible to the metal. Synthetic musical instruments and realtime simultaneous voice I/O benefit the most from using audio units directly.
- Dynamic reconfiguration. The audio processing graph API, built around the AUGraph opaque type, lets you dynamically assemble, reconfigure, and rearrange complex audio processing chains in a thread-safe manner, all while processing audio. This is the only audio API in iOS offering this capability.
An audio unit's life cycle proceeds as follows:

1. At runtime, obtain a reference to the dynamically-linkable library that defines an audio unit you want to use.
2. Instantiate the audio unit.
3. Configure the audio unit as required for its type and to accommodate the intent of your app.
4. Initialize the audio unit to prepare it to handle audio.
5. Start audio flow.
6. Control the audio unit.
7. When finished, deallocate the audio unit.
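For orientation, the following is a minimal, hedged sketch of those steps using the audio unit API and the Remote I/O unit. It compresses calls that later chapters present in full, and it omits all error checking:

AudioComponentDescription ioUnitDescription = {0};
ioUnitDescription.componentType         = kAudioUnitType_Output;
ioUnitDescription.componentSubType      = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent component = AudioComponentFindNext (NULL, &ioUnitDescription); // 1. obtain a reference
AudioUnit ioUnit;
AudioComponentInstanceNew (component, &ioUnit);                               // 2. instantiate

// 3. configure the unit here with AudioUnitSetProperty calls, as required for its type

AudioUnitInitialize (ioUnit);                                                 // 4. initialize
AudioOutputUnitStart (ioUnit);                                                // 5. start audio flow (I/O units only)

// 6. control the unit while audio flows, for example with AudioUnitSetParameter

AudioOutputUnitStop (ioUnit);
AudioUnitUninitialize (ioUnit);
AudioComponentInstanceDispose (ioUnit);                                       // 7. deallocate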
Audio units provide highly useful individual features such as stereo panning, mixing, volume control, and audio level metering. Hosting audio units lets you add such features to your app. To reap these benefits, however, you must gain facility with a set of fundamental concepts including audio data stream formats, render callback functions, and audio unit architecture. Relevant Chapter: Audio Unit Hosting Fundamentals (page 11)
Each design pattern described in this document spells out, among other things:

- How to configure the I/O unit. I/O units have two independent elements: one that accepts audio from the input hardware, and one that sends audio to the output hardware. Each design pattern indicates which element or elements you should enable.
- Where, within the audio processing graph, you must specify audio data stream formats. You must correctly specify formats to support audio flow.
- Where to establish audio unit connections and where to attach your render callback functions. An audio unit connection is a formal construct that propagates a stream format from an output of one audio unit to an input of another audio unit. A render callback lets you feed audio into a graph or manipulate audio at the individual sample level within a graph.
No matter which design pattern you choose, the steps for constructing an audio unit hosting app are basically the same:

1. Configure your application audio session to ensure your app works correctly in the context of the system and device hardware.
2. Construct an audio processing graph. This multistep process makes use of everything you learned in Audio Unit Hosting Fundamentals (page 11).
3. Provide a user interface for controlling the graph's audio units.
Become familiar with these steps so you can apply them to your own projects. Relevant Chapter: Constructing Audio Unit Apps (page 29)
Prerequisites
Before reading this document, it's a good idea to read the section A Little About Digital Audio and Linear PCM in Core Audio Overview. Also, review Core Audio Glossary for terms you may not already be familiar with. To check if your audio needs might be met by a higher-level technology, review Using Audio in Multimedia Programming Guide.
If you have some experience with audio units and just want the specifics for a given type, you can start with Using Specific Audio Units (page 41).
See Also
Essential reference documentation for building an audio unit hosting app includes the following:
- Audio Unit Properties Reference describes the properties you can use to configure each type of audio unit.
- Audio Unit Parameters Reference describes the parameters you can use to control each type of audio unit.
- Audio Unit Component Services Reference describes the API for accessing audio unit parameters and properties, and describes the various audio unit callback functions.
- Audio Component Services Reference describes the API for accessing audio units at runtime and for managing audio unit instances.
- Audio Unit Processing Graph Services Reference describes the API for constructing and manipulating audio processing graphs, which are dynamically reconfigurable audio processing chains.
- Core Audio Data Types Reference describes the data structures and types you need for hosting audio units.
CHAPTER 1

Audio Unit Hosting Fundamentals
All audio technologies in iOS are built on top of audio units, as shown in Figure 1-1. The higher-level technologies shown here (Media Player, AV Foundation, OpenAL, and Audio Toolbox) wrap audio units to provide dedicated and streamlined APIs for specific tasks.

Figure 1-1  Audio frameworks in iOS
[Figure: The Media Player, AV Foundation, OpenAL, and Audio Toolbox frameworks layered above the audio units, the lowest programming layer in the iOS audio stack.]
Direct use of audio units in your project is the correct choice only when you need the very highest degree of control, performance, or flexibility, or when you need a specific feature (such as acoustic echo cancelation) available only by using an audio unit directly. For an overview of iOS audio APIs, and guidance on when to use each one, refer to Multimedia Programming Guide.
The most likely reasons to use audio units directly are when you need:

- Simultaneous audio I/O (input and output) with low latency, such as for a VoIP (Voice over Internet Protocol) application
- Responsive playback of synthesized sounds, such as for musical games or synthesized musical instruments
- Use of a specific audio unit feature such as acoustic echo cancelation, mixing, or tonal equalization
- A processing-chain architecture that lets you assemble audio processing modules into flexible networks. This is the only audio API in iOS offering this capability.
Audio units provided in iOS

    Purpose              Audio units
    Effect               iPod Equalizer
    Mixing               3D Mixer, Multichannel Mixer
    I/O                  Remote I/O, Voice-Processing I/O, Generic Output
    Format conversion    Format Converter

The identifiers you use to specify these audio units programmatically are listed in Identifier Keys for Audio Units (page 46).

Note: The iOS dynamic plug-in architecture does not support third-party audio units. That is, the only audio units available for dynamic loading are those provided by the operating system.
Effect Unit
iOS 4 provides one effect unit, the iPod Equalizer, the same equalizer used by the built-in iPod app. To view the iPod app's user interface for this audio unit, go to Settings > iPod > EQ. When using this audio unit, you must provide your own UI. This audio unit offers a set of preset equalization curves such as Bass Booster, Pop, and Spoken Word.
Mixer Units
iOS provides two mixer units. The 3D Mixer unit is the foundation upon which OpenAL is built. In most cases, if you need the features of the 3D Mixer unit, your best option is to use OpenAL, which provides a higher-level API well suited for game apps. For sample code that shows how to use OpenAL, see the sample code project oalTouch. For sample code that shows how to use the 3D Mixer unit directly, see the project Mixer3DHost. The Multichannel Mixer unit provides mixing for any number of mono or stereo streams, with a stereo output. You can turn each input on or off, set its input gain, and set its stereo panning position. For a demonstration of how to use this audio unit, see the sample code project Audio Mixer (MixerHost).
I/O Units
iOS provides three I/O units. The Remote I/O unit is the most commonly used. It connects to input and output audio hardware and gives you low-latency access to individual incoming and outgoing audio sample values. It provides format conversion between the hardware audio formats and your application audio format, doing so by way of an included Format Converter unit. For sample code that shows how to use the Remote I/O unit, see the sample code projects IOHost and aurioTouch.
The Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation for use in a VoIP or voice-chat application. It also provides automatic gain correction, adjustment of voice-processing quality, and muting. The Generic Output unit does not connect to audio hardware but rather provides a mechanism for sending the output of a processing chain to your application. You would typically use the Generic Output unit for offline audio processing.
To work with audio units directly (configuring and controlling them), use the functions described in Audio Unit Component Services Reference. To create and configure an audio processing graph (a processing chain of audio units), use the functions described in Audio Unit Processing Graph Services Reference.
There is some overlap between the two APIs and you are free to mix and match according to your programming style. The audio unit API and audio processing graph API each provide functions for:
- Obtaining references to the dynamically-linkable libraries that define audio units
- Instantiating audio units
- Interconnecting audio units and attaching render callback functions
- Starting and stopping audio flow
This document provides code examples for using both APIs but focuses on the audio processing graph API. Where there is a choice between the two APIs in your code, use the processing graph API unless you have a specific reason not to. Your code will be more compact, easier to read, and more amenable to supporting dynamic reconfiguration (see Audio Processing Graphs Provide Thread Safety (page 20)).
Listing 1-1  Creating an audio component description to identify an audio unit

AudioComponentDescription ioUnitDescription;

ioUnitDescription.componentType          = kAudioUnitType_Output;
ioUnitDescription.componentSubType       = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags         = 0;
ioUnitDescription.componentFlagsMask     = 0;
This description specifies exactly one audio unit, the Remote I/O unit. The keys for this and other iOS audio units are listed in Identifier Keys for Audio Units (page 46). Note that all iOS audio units use the kAudioUnitManufacturer_Apple key for the componentManufacturer field. To create a wildcard description, set one or more of the type/subtype fields to 0. For example, to match all the I/O units, change Listing 1-1 to use a value of 0 for the componentSubType field.

With a description in hand, you obtain a reference to the library for the specified audio unit (or set of audio units) using either of two APIs. The audio unit API is shown in Listing 1-2.

Listing 1-2  Obtaining an audio unit instance using the audio unit API
AudioComponent foundIoUnitReference = AudioComponentFindNext (
                                          NULL,
                                          &ioUnitDescription
                                      );
AudioUnit ioUnitInstance;
AudioComponentInstanceNew (
    foundIoUnitReference,
    &ioUnitInstance
);
Passing NULL to the first parameter of AudioComponentFindNext tells this function to find the first system audio unit matching the description, using a system-defined ordering. If you instead pass a previously found audio unit reference in this parameter, the function locates the next audio unit matching the description. This usage lets you, for example, obtain references to all of the I/O units by repeatedly calling AudioComponentFindNext. The second parameter to the AudioComponentFindNext call refers to the audio unit description defined in Listing 1-1 (page 13). The result of the AudioComponentFindNext function is a reference to the dynamically-linkable library that defines the audio unit. Pass the reference to the AudioComponentInstanceNew function to instantiate the audio unit, as shown in Listing 1-2 (page 14). You can instead use the audio processing graph API to instantiate an audio unit. Listing 1-3 shows how. Listing 1-3 Obtaining an audio unit instance using the audio processing graph API
// Declare and instantiate an audio processing graph
AUGraph processingGraph;
NewAUGraph (&processingGraph);

// Add an audio unit node to the graph, then instantiate the audio unit
AUNode ioNode;
AUGraphAddNode (
    processingGraph,
    &ioUnitDescription,
    &ioNode
);
AUGraphOpen (processingGraph); // indirectly performs audio unit instantiation

// Obtain a reference to the newly-instantiated I/O unit
AudioUnit ioUnit;
AUGraphNodeInfo (
    processingGraph,
    ioNode,
    NULL,
    &ioUnit
);
This code listing introduces AUNode, an opaque type that represents an audio unit in the context of an audio processing graph. You receive a reference to the new audio unit instance, in the ioUnit parameter, on output of the AUGraphNodeInfo function call. The second parameter to the AUGraphAddNode call refers to the audio unit description defined in Listing 1-1 (page 13). Having obtained an audio unit instance, you can configure it. To do so, you need to learn about two audio unit characteristics, scopes and elements.
A scope is a programmatic context within an audio unit. Although the name global scope might suggest otherwise, these contexts are never nested. You specify the scope you are targeting by using a constant from the Audio Unit Scopes enumeration.

An element is a programmatic context nested within an audio unit scope. When an element is part of an input or output scope, it is analogous to a signal bus in a physical audio device, and for that reason is sometimes called a bus. These two terms, element and bus, refer to exactly the same thing in audio unit programming. This document uses bus when emphasizing signal flow and uses element when emphasizing a specific functional aspect of an audio unit, such as the input and output elements of an I/O unit (see Essential Characteristics of I/O Units (page 18)).

You specify an element (or bus) by its zero-indexed integer value. If setting a property or parameter that applies to a scope as a whole, specify an element value of 0.
Figure 1-2 (page 15) illustrates one common architecture for an audio unit, in which the numbers of elements on input and output are the same. However, various audio units use various architectures. A mixer unit, for example, might have several input elements but a single output element. You can extend what you learn here about scopes and elements to any audio unit, despite these variations in architecture.

The global scope, shown at the bottom of Figure 1-2 (page 15), applies to the audio unit as a whole and is not associated with any particular audio stream. It has exactly one element, namely element 0. Some properties, such as maximum frames per slice (kAudioUnitProperty_MaximumFramesPerSlice), apply only to the global scope.

The input and output scopes participate directly in moving one or more audio streams through the audio unit. As you'd expect, audio enters at the input scope and leaves at the output scope. A property or parameter may apply to an input or output scope as a whole, as is the case for the element count property (kAudioUnitProperty_ElementCount), for example. Other properties and parameters, such as the enable I/O property (kAudioOutputUnitProperty_EnableIO) or the volume parameter (kMultiChannelMixerParam_Volume), apply to a specific element within a scope.
UInt32 busCount = 2;

OSStatus result = AudioUnitSetProperty (
                      mixerUnit,
                      kAudioUnitProperty_ElementCount,   // the property key
                      kAudioUnitScope_Input,             // the scope to set the property on
                      0,                                 // the element to set the property on
                      &busCount,                         // the property value
                      sizeof (busCount)
                  );
Here are a few properties you'll use frequently in audio unit development. Become familiar with each of these by reading its reference documentation and by exploring Apple's audio unit sample code projects such as IOHost and Audio Mixer (MixerHost):
- kAudioUnitProperty_MaximumFramesPerSlice, for specifying the maximum number of frames of audio data an audio unit should be prepared to produce in response to a render call. For most audio units, in most scenarios, you must set this property as described in the reference documentation. If you don't, your audio will stop when the screen locks.
- kAudioUnitProperty_StreamFormat, for specifying the audio stream data format for a particular audio unit input or output bus.

Most property values can be set only when an audio unit is uninitialized. Such properties are not intended to be changed by the user. Some, though, such as the kAudioUnitProperty_PresentPreset property of the iPod EQ unit, and the kAUVoiceIOProperty_MuteOutput property of the Voice-Processing I/O unit, are intended to be changed while playing audio.

To discover a property's availability, access its value, and monitor changes to its value, use the following functions:
- AudioUnitGetPropertyInfo, to discover whether a property is available; if it is, you are given the data size for its value and whether or not you can change the value
- AudioUnitGetProperty and AudioUnitSetProperty, to get or set a property's value
- AudioUnitAddPropertyListener and AudioUnitRemovePropertyListenerWithUserData, to install or remove a callback function for monitoring changes to a property's value
To get or set the value of a parameter, use one of these functions: AudioUnitGetParameter and AudioUnitSetParameter.
To allow users to control an audio unit, give them access to its parameters by way of a user interface. Start by choosing an appropriate class from the UIKit framework to represent the parameter. For example, for an on/off feature, such as the Multichannel Mixer unit's kMultiChannelMixerParam_Enable parameter, you could use a UISwitch object. For a continuously varying feature, such as stereo panning position as provided by the kMultiChannelMixerParam_Pan parameter, you could use a UISlider object.
Convey the value of the UIKit object's current configuration (such as the position of the slider thumb for a UISlider) to the audio unit. Do so by wrapping the AudioUnitSetParameter function in an IBAction method and establishing the required connection in Interface Builder. For sample code illustrating how to do this, see the sample code project Audio Mixer (MixerHost).
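The following is a minimal sketch of such an IBAction method. It is not taken from the MixerHost project; the mixerUnit instance variable and the slider-to-parameter mapping are assumptions for illustration:

// Invoked when the user moves a pan slider wired to this method in Interface Builder.
// The slider's range is assumed to already match the parameter's valid range.
- (IBAction) panValueChanged: (UISlider *) sender {

    OSStatus result = AudioUnitSetParameter (
                          mixerUnit,                     // assumed Multichannel Mixer instance variable
                          kMultiChannelMixerParam_Pan,   // the parameter to change
                          kAudioUnitScope_Input,         // pan applies to a mixer input
                          0,                             // input bus 0
                          (AudioUnitParameterValue) sender.value,
                          0                              // buffer offset of 0 means "apply now"
                      );
    if (result != noErr) { NSLog (@"AudioUnitSetParameter error: %d", (int) result); }
}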
[Figure 1-3: The architecture of an I/O unit. The input element (element 1) connects the device's input hardware, represented by a microphone, to your application; the output element (element 0) connects your application to the output hardware, represented by a loudspeaker.]
Although these two elements are parts of one audio unit, your app treats them largely as independent entities. For example, you employ the enable I/O property (kAudioOutputUnitProperty_EnableIO) to enable or disable each element independently, according to the needs of your app.

Element 1 of an I/O unit connects directly to the audio input hardware on a device, represented in the figure by a microphone. This hardware connection, at the input scope of element 1, is opaque to you. Your first access to audio data entering from the input hardware is at the output scope of element 1. Similarly, element 0 of an I/O unit connects directly to the audio output hardware on a device, represented in Figure 1-3 by the loudspeaker. You can convey audio to the input scope of element 0, but its output scope is opaque.

Working with audio units, you'll often hear the two elements of an I/O unit described not by their numbers but by name:
The input element is element 1 (mnemonic device: the letter I of the word Input has an appearance similar to the number 1)
The output element is element 0 (mnemonic device: the letter O of the word Output has an appearance similar to the number 0)
As you see in Figure 1-3 (page 18), each element itself has an input scope and an output scope. For this reason, describing these parts of an I/O unit may get a bit confusing. For example, you would say that in a simultaneous I/O app, you receive audio from the output scope of the input element and send audio to the input scope of the output element. When you need to, return to this figure. Finally, I/O units are the only audio units capable of starting and stopping the flow of audio in an audio processing graph. In this way, the I/O unit is in charge of the audio flow in your audio unit app.
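As a concrete illustration of the enable I/O property mentioned above, here is a hedged sketch of enabling the input element of a Remote I/O unit. It assumes ioUnit is an already-obtained, not-yet-initialized Remote I/O instance:

UInt32 enableInput = 1;                        // 1 means "enable"
AudioUnitElement inputElement = 1;             // the input element, per the mnemonic above

AudioUnitSetProperty (
    ioUnit,                                // an uninitialized Remote I/O unit instance
    kAudioOutputUnitProperty_EnableIO,     // the property key
    kAudioUnitScope_Input,                 // input scope of...
    inputElement,                          // ...the input element
    &enableInput,
    sizeof (enableInput)
);
// Output (element 0) is enabled by default, so no corresponding call is needed for playback.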
For details on these tasks and on the rest of the audio processing graph life cycle, refer to Constructing Audio Unit Apps (page 29). For a complete description of this rich API, see Audio Unit Processing Graph Services Reference.
Audio Processing Graphs Provide Thread Safety

While audio is flowing, the audio processing graph API lets you safely perform reconfigurations such as:

- Adding or removing audio unit nodes (AUGraphAddNode, AUGraphRemoveNode)
- Adding or removing connections between nodes (AUGraphConnectNodeInput, AUGraphDisconnectNodeInput)
- Connecting a render callback function to an input bus of an audio unit (AUGraphSetNodeInputCallback)
Let's look at an example of reconfiguring a running audio processing graph. Say, for example, you've built a graph that includes a Multichannel Mixer unit and a Remote I/O unit, for mixed playback of two synthesized sounds. You feed the sounds to two input buses of the mixer. The mixer output goes to the output element of the I/O unit and on to the output audio hardware. Figure 1-4 depicts this architecture.

Figure 1-4  A simple audio processing graph for playback
[Figure: The guitar sound and the beats sound feed inputs 0 and 1 of the Multichannel Mixer unit; the mixer output connects to the output element of the Remote I/O unit.]
Now, say the user wants to insert an equalizer into one of the two audio streams. To do that, add an iPod EQ unit between the feed of one of the sounds and the mixer input that it goes to, as shown in Figure 1-5.
[Figure 1-5: The same playback graph with an iPod EQ unit inserted between the beats sound and mixer input 1.]
The steps to accomplish this live reconfiguration are as follows:

1. Disconnect the beats sound callback from input 1 of the mixer unit by calling AUGraphDisconnectNodeInput.
2. Add an audio unit node containing the iPod EQ unit to the graph. Do this by specifying the iPod EQ unit with an AudioComponentDescription structure, then calling AUGraphAddNode. At this point, the iPod EQ unit is instantiated but not initialized. It is owned by the graph but is not yet participating in the audio flow.
3. Configure and initialize the iPod EQ unit. In this example, this entails a few things:
   - Call the AudioUnitGetProperty function to retrieve the stream format (kAudioUnitProperty_StreamFormat) from the mixer input.
   - Call the AudioUnitSetProperty function twice, once to set that stream format on the iPod EQ unit's input and a second time to set it on the output. (For a complete description of how to configure an iPod EQ unit, see Using Effect Units (page 45).)
   - Call the AudioUnitInitialize function to allocate resources for the iPod EQ unit and prepare it to process audio. This function call is not thread-safe, but you can (and must) perform it at this point in the sequence, when the iPod EQ unit is not yet participating actively in the audio processing graph, because you have not yet called the AUGraphUpdate function.
4. Attach the beats sound callback function to the input of the iPod EQ by calling AUGraphSetNodeInputCallback.
In the preceding list, steps 1, 2, and 4 (all of them AUGraph* function calls) were added to the graph's to-do list. Call AUGraphUpdate to execute these pending tasks. On successful return of the AUGraphUpdate function, the graph has been dynamically reconfigured and the iPod EQ is in place and processing audio.
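A hedged sketch of that sequence follows. The node, unit, and callback names (mixerNode, mixerUnit, eqNode, beatsCallbackStruct, and the iPodEQDescription structure) are placeholders rather than names from a sample project; the final connection from the iPod EQ output to mixer input 1 is implied by Figure 1-5:

// 1. Disconnect the beats sound callback from mixer input 1 (queued on the graph's to-do list).
AUGraphDisconnectNodeInput (processingGraph, mixerNode, 1);

// 2. Add a node for the iPod EQ unit (also queued).
AUNode eqNode;
AUGraphAddNode (processingGraph, &iPodEQDescription, &eqNode);

// 3. Configure and initialize the iPod EQ unit immediately (these calls are not queued).
AudioUnit eqUnit;
AUGraphNodeInfo (processingGraph, eqNode, NULL, &eqUnit);

AudioStreamBasicDescription mixerInputFormat;
UInt32 asbdSize = sizeof (mixerInputFormat);
AudioUnitGetProperty (mixerUnit, kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Input,  1, &mixerInputFormat, &asbdSize);
AudioUnitSetProperty (eqUnit,    kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Input,  0, &mixerInputFormat, sizeof (mixerInputFormat));
AudioUnitSetProperty (eqUnit,    kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Output, 0, &mixerInputFormat, sizeof (mixerInputFormat));
AudioUnitInitialize (eqUnit);

// 4. Attach the beats callback to the iPod EQ input and route the EQ output to mixer input 1
//    (both queued), then apply all pending changes while audio continues to flow.
AUGraphSetNodeInputCallback (processingGraph, eqNode, 0, &beatsCallbackStruct);
AUGraphConnectNodeInput (processingGraph, eqNode, 0, mixerNode, 1);

Boolean graphUpdated;
AUGraphUpdate (processingGraph, &graphUpdated);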
[Figure 1-6: The pull mechanism of audio data flow. Starting from a render call issued by the virtual output device, the Remote I/O unit pulls on an effect unit, which in turn pulls on your app's render callback.]
Each request for a set of data is known as a render call or, informally, as a pull. The figure represents render calls as gray control flow arrows. The data requested by a render call is more properly known as a set of audio sample frames (see frame in Core Audio Glossary). In turn, a set of audio sample frames provided in response to a render call is known as a slice. (See slice in Core Audio Glossary.) The code that provides the slice is known as a render callback function, described in Render Callback Functions Feed Audio to Audio Units (page 23). Here is how the pull proceeds in Figure 1-6:

1. After you call the AUGraphStart function, the virtual output device invokes the render callback of the Remote I/O unit's output element. This invocation asks for one slice of processed audio data frames.
2. The render callback function of the Remote I/O unit looks in its input buffers for audio data to process, to satisfy the render call. If there is data waiting to be processed, the Remote I/O unit uses it. Otherwise, and as shown in the figure, it instead invokes the render callback of whatever your app has connected to its input. In this example, the Remote I/O unit's input is connected to an effect unit's output. So, the I/O unit pulls on the effect unit, asking for a slice of audio frames.
3. The effect unit behaves just as the Remote I/O unit did. When it needs audio data, it gets it from its input connection. In this example, the effect unit pulls on your app's render callback function.
4. Your app's render callback function is the final recipient of the pull. It supplies the requested frames to the effect unit.
5. The effect unit processes the slice supplied by your app's render callback. The effect unit then supplies the processed data that were previously requested (in step 2) to the Remote I/O unit.
6. The Remote I/O unit processes the slice provided by the effect unit. The Remote I/O unit then supplies the processed slice originally requested (in step 1) to the virtual output device. This completes one cycle of pull.
static OSStatus MyAURenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
) { /* callback body */ }
The inRefCon parameter points to a programmatic context you specify when attaching the callback to an audio unit input (see Write and Attach Render Callback Functions (page 36)). The purpose of this context is to provide the callback function with any audio input data or state information it needs to calculate the output audio for a given render call. The ioActionFlags parameter lets a callback provide a hint to the audio unit that there is no audio to process. Do this, for example, if your app is a synthetic guitar and the user is not currently playing a note. During a callback invocation for which you want to output silence, use a statement like the following in the body of the callback:
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
When you want to produce silence, you must also explicitly set the buffers pointed at by the ioData parameter to 0. There's more about this in the description for that parameter.
The inTimeStamp parameter represents the time at which the callback was invoked. It contains an AudioTimeStamp structure, whose mSampleTime field is a sample-frame counter. On each invocation of the callback, the value of the mSampleTime field increments by the number in the inNumberFrames parameter. If your app is a sequencer or a drum machine, for example, you can use the mSampleTime value for scheduling sounds.

The inBusNumber parameter indicates the audio unit bus that invoked the callback, allowing you to branch within the callback depending on this value. In addition, when attaching a callback to an audio unit, you can specify a different context (inRefCon) for each bus.

The inNumberFrames parameter indicates the number of audio sample frames that the callback is being asked to provide on the current invocation. You provide those frames to the buffers in the ioData parameter.

The ioData parameter points to the audio data buffers that the callback must fill when it is invoked. The audio you place into these buffers must conform to the audio stream format of the bus that invoked the callback. If you are playing silence for a particular invocation of the callback, explicitly set these buffers to 0, such as by using the memset function.

Figure 1-7 depicts a pair of noninterleaved stereo buffers in an ioData parameter. Use the elements of the figure to visualize the details of ioData buffers that your callback needs to fill.

Figure 1-7  The ioData buffers for a stereo render callback function
[Figure 1-7 shows two noninterleaved buffers, one for the left channel and one for the right channel; each holds inNumberFrames sample values, starting at the time given by inTimeStamp.mSampleTime.]
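Tying the parameter descriptions together, here is a hedged sketch of a callback body that simply renders silence into those buffers; a real callback would instead copy or synthesize inNumberFrames samples per channel from the state passed in through inRefCon:

static OSStatus MyAURenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
) {
    // Nothing to play on this invocation, so hint that the output is silence...
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;

    // ...and also explicitly zero every buffer (one buffer per channel for noninterleaved stereo).
    // memset is declared in <string.h>.
    for (UInt32 bufferIndex = 0; bufferIndex < ioData->mNumberBuffers; ++bufferIndex) {
        memset (ioData->mBuffers[bufferIndex].mData,
                0,
                ioData->mBuffers[bufferIndex].mDataByteSize);
    }
    return noErr;
}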
struct AudioStreamBasicDescription {
    Float64 mSampleRate;
    UInt32  mFormatID;
    UInt32  mFormatFlags;
    UInt32  mBytesPerPacket;
    UInt32  mFramesPerPacket;
    UInt32  mBytesPerFrame;
    UInt32  mChannelsPerFrame;
    UInt32  mBitsPerChannel;
    UInt32  mReserved;
};
typedef struct AudioStreamBasicDescription AudioStreamBasicDescription;
Because the name AudioStreamBasicDescription is long, it's often abbreviated in conversation and documentation as ASBD. To define values for the fields of an ASBD, write code similar to that shown in Listing 1-7.

Listing 1-7  Defining an ASBD for a stereo stream
size_t bytesPerSample = sizeof (AudioUnitSampleType);
AudioStreamBasicDescription stereoStreamFormat = {0};

stereoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket    = bytesPerSample;
stereoStreamFormat.mBytesPerFrame     = bytesPerSample;
stereoStreamFormat.mFramesPerPacket   = 1;
stereoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
stereoStreamFormat.mChannelsPerFrame  = 2;                  // 2 indicates stereo
stereoStreamFormat.mSampleRate        = graphSampleRate;
To start, determine the data type to represent one audio sample value. This example uses the AudioUnitSampleType defined type, the recommended data type for most audio units. In iOS, AudioUnitSampleType is defined to be an 8.24 fixed-point integer. The first line in Listing 1-7 calculates the number of bytes in the type; that number is required when defining some of the field values of an ASBD, as you can see in the listing.

Next, still referring to Listing 1-7, declare a variable of type AudioStreamBasicDescription and initialize its fields to 0 to ensure that no fields contain garbage data. Do not skip this zeroing step; if you do, you are certain to run into trouble later.

Now define the ASBD field values. Specify kAudioFormatLinearPCM for the mFormatID field. Audio units use uncompressed audio data, so this is the correct format identifier to use whenever you work with audio units. Next, for most audio units, specify the kAudioFormatFlagsAudioUnitCanonical metaflag for the mFormatFlags field. This flag is defined in CoreAudio.framework/CoreAudioTypes.h.
This metaflag takes care of specifying all of the layout details for the bits in a linear PCM sample value of type AudioUnitSampleType. Certain audio units employ an atypical audio data format, requiring a different data type for samples and a different set of flags for the mFormatFlags field. For example, the 3D Mixer unit requires the UInt16 data type for its audio sample values and requires the ASBD's mFormatFlags field to be set to kAudioFormatFlagsCanonical. When working with a particular audio unit, be careful to use the correct data format and the correct format flags. (See Using Specific Audio Units (page 41).)

Continuing through Listing 1-7 (page 25), the next four fields further specify the organization and meaning of the bits in a sample frame. Set these fields (mBytesPerPacket, mBytesPerFrame, mFramesPerPacket, and mBitsPerChannel) according to the nature of the audio stream you are using. To learn the meaning of each of these fields, refer to the documentation for the AudioStreamBasicDescription structure. You can see examples of filled-out ASBDs in the sample code projects Audio Mixer (MixerHost) and Mixer3DHost.

Set the ASBD's mChannelsPerFrame field according to the number of channels in the stream: 1 for mono audio, 2 for stereo, and so on. Finally, set the mSampleRate field according to the sample rate that you are using throughout your app. Understanding Where and How to Set Stream Formats (page 26) explains the importance of avoiding sample rate conversions. Configure Your Audio Session (page 34) explains how to ensure that your application's sample rate matches the audio hardware sample rate.

Rather than specify an ASBD field by field as you've seen here, you can use the C++ utility methods provided in the CAStreamBasicDescription.h file (/Developer/Extras/CoreAudio/PublicUtility/). In particular, view the SetAUCanonical and SetCanonical C++ methods. These specify the correct way to derive ASBD field values given three factors:
- Whether the stream is for I/O (SetCanonical) or for audio processing (SetAUCanonical)
- How many channels you want the stream format to represent
- Whether you want the stream format interleaved or noninterleaved
Whether or not you include the CAStreamBasicDescription.h file in your project to use its methods directly, Apple recommends that you study that file to learn the correct way to work with an AudioStreamBasicDescription structure. See Troubleshooting Tips (page 39) for ideas on how to fix problems related to audio data stream formats.
The audio input and output hardware on an iOS device have system-determined audio stream formats. These formats are always uncompressed, in linear PCM format, and interleaved. The system imposes these formats on the outward-facing sides of the I/O unit in an audio processing graph, as depicted in Figure 1-8. Figure 1-8 Where to set audio data stream formats
[Figure 1-8: The hardware imposes its stream formats on the outward-facing sides of the Remote I/O unit's input and output elements. Your application sets its own stream format on the inward-facing sides and on connections within the graph, and sets the application sample rate on the Multichannel Mixer unit's output.]
In the figure, the microphone represents the input audio hardware. The system determines the input hardware's audio stream format and imposes it onto the input scope of the Remote I/O unit's input element. Similarly, the loudspeakers in the figure represent the output audio hardware. The system determines the output hardware's stream format and imposes it onto the output scope of the Remote I/O unit's output element.

Your application is responsible for establishing the audio stream formats on the inward-facing sides of the I/O unit's elements. The I/O unit performs any necessary conversion between your application formats and the hardware formats. Your application is also responsible for setting stream formats wherever else they are required in a graph. In some cases, such as at the output of the Multichannel Mixer unit in Figure 1-8, you need to set only a portion of the format, specifically, the sample rate. Start by Choosing a Design Pattern (page 29) shows you where to set stream formats for various types of audio unit apps. Using Specific Audio Units (page 41) lists the stream format requirements for each iOS audio unit.

A key feature of an audio unit connection, as shown in Figure 1-8 (page 27), is that the connection propagates the audio data stream format from the output of its source audio unit to the input of its destination audio unit. This is a critical point so it bears emphasizing: Stream format propagation takes place by way of an audio unit connection and in one direction only, from the output of a source audio unit to an input of a destination audio unit.

Take advantage of format propagation. It can significantly reduce the amount of code you need to write. For example, when connecting the output of a Multichannel Mixer unit to the Remote I/O unit for playback, you do not need to set the stream format for the I/O unit. It is set appropriately by the connection between the audio units, based on the output stream format of the mixer (see Figure 1-8 (page 27)). Stream format propagation takes place at one particular point in an audio processing graph's life cycle, namely, upon initialization. See Initialize and Start the Audio Processing Graph (page 38).

You have great flexibility in defining your application audio stream formats. However, whenever possible, use the sample rate that the hardware is using. When you do, the I/O unit need not perform sample rate conversion. This minimizes energy usage (an important consideration in a mobile device) and maximizes audio quality. To learn about working with the hardware sample rate, see Configure Your Audio Session (page 34).
CHAPTER 2

Constructing Audio Unit Apps
Now that you understand how audio unit hosting works, as explained in Audio Unit Hosting Fundamentals (page 11), you are well prepared to build the audio unit portion of your app. The main steps are choosing a design pattern and then writing the code to implement that pattern.
Each design pattern described in this chapter:

- Has exactly one I/O unit.
- Uses a single audio stream format throughout the audio processing graph, although there can be variations on that format, such as mono and stereo streams feeding a mixer unit.
- Requires that you set the stream format, or portions of the stream format, at specific locations.
Setting stream formats correctly is essential to establishing audio data flow. Most of these patterns rely on automatic propagation of audio stream formats from source to destination, as provided by an audio unit connection. Take advantage of this propagation when you can because it reduces the amount of code to write and maintain. At the same time, be sure that you understand where it is required for you to set stream formats. For example, you must set the full stream format on the input and output of an iPod EQ unit. Refer to the usage tables in Using Specific Audio Units (page 41) for all iOS audio unit stream format requirements. In most cases, the design patterns in this chapter employ an audio processing graph (of type AUGraph). You could implement any one of these patterns without using a graph, but using one simplifies the code and supports dynamic reconfiguration, as described in Audio Processing Graphs Manage Audio Units (page 19).
[Figure 2-1: The I/O pass-through pattern. The hardware imposes its stream formats on the outward-facing sides of the Remote I/O unit; you set your application stream format on the inward-facing side of the input element, and the connection between the two elements propagates it.]
As you can see in the figure, the audio input hardware imposes its stream format on the outward-facing side of the Remote I/O unit's input element. You, in turn, specify the format that you want to use on the inward-facing side of this element. The audio unit performs format conversion as needed. To avoid unnecessary sample rate conversion, be sure to use the audio hardware sample rate when defining your stream format. The input element is disabled by default, so be sure to enable it; otherwise, audio cannot flow.
[Figure 2-2: Pass-through with a Multichannel Mixer unit inserted between the Remote I/O unit's input and output elements; the application sample rate is set on the mixer output.]
In this pattern, you configure both elements of the Remote I/O unit just as you do in the pass-through pattern. To set up the Multichannel Mixer unit, you must set the sample rate of your stream format on the mixer output, as indicated in Figure 2-2. The mixer's input stream format is established automatically by propagation from the output of the Remote I/O unit's input element, by way of the audio unit connection. Similarly, the stream format for the input scope of the Remote I/O unit's output element is established by the audio unit connection, thanks to propagation from the mixer unit output. In any instance of this pattern (indeed, whenever you use other audio units in addition to an I/O unit) you must set the kAudioUnitProperty_MaximumFramesPerSlice property as described in Audio Unit Properties Reference. As with the pass-through pattern, you need not configure any audio data buffers.
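A hedged sketch of setting that property on a mixer unit follows; 4096 is shown only as an assumed larger-than-default value, so consult Audio Unit Properties Reference for the value appropriate to your case:

UInt32 maximumFramesPerSlice = 4096;   // assumed value; the default is 1024

AudioUnitSetProperty (
    mixerUnit,                                   // any non-I/O unit in the graph
    kAudioUnitProperty_MaximumFramesPerSlice,    // the property key
    kAudioUnitScope_Global,                      // this property applies to the global scope
    0,                                           // the global scope's single element
    &maximumFramesPerSlice,
    sizeof (maximumFramesPerSlice)
);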
[Figure 2-3: I/O with a render callback. Your render callback is attached to the input scope of the Remote I/O unit's output element and pulls fresh samples from the input element.]
As you can see in the figure, this pattern uses both elements of the Remote I/O unit, as in the previous patterns in this chapter. Attach your render callback function to the input scope of the output element. When that element needs another set of audio sample values, it invokes your callback. Your callback, in turn, obtains fresh samples by invoking the render callback function of the Remote I/O unit's input element. Just as for the other I/O patterns, you must explicitly enable input on the Remote I/O unit, because input is disabled by default. And, as for the other I/O patterns, you need not configure any audio data buffers. Notice that when you establish an audio path from one audio unit to another using a render callback function, as in this pattern, the callback takes the place of an audio unit connection.
[Figure 2-4: Output-only with a render callback. Your render callback feeds the input scope of the Remote I/O unit's output element.]
You can use this same pattern to build an app with a more complex audio structure. For example, you might want to generate several sounds, mix them together, and then play them through the device's output hardware. Figure 2-5 shows such a case. Here, the pattern employs an audio processing graph and two additional audio units, a Multichannel Mixer and an iPod EQ.
[Figure 2-5: A more complex output-only app. One render callback feeds mixer input 0 directly; a second render callback feeds an iPod EQ unit whose output connects to mixer input 1; the Multichannel Mixer unit's output connects to the Remote I/O unit's output element.]
In the figure, notice that the iPod EQ requires you to set your full stream format on both input and output. The Multichannel Mixer, on the other hand, needs only the correct sample rate to be set on its output. The full stream format is then propagated by the audio unit connection from the mixer's output to the input scope of the Remote I/O unit's output element. These usage details, and other specifics of using the various iOS audio units, are described in Using Specific Audio Units (page 41). For each of the Multichannel Mixer unit inputs, as you see in Figure 2-5, the full stream format is set. For input 0, you set it explicitly. For input 1, the format is propagated by the audio unit connection from the output of the iPod EQ unit. In general, you must account for the stream-format needs of each audio unit individually.
Constructing the audio unit portion of such an app entails these steps:

1. Configure your audio session.
2. Specify audio units.
3. Create an audio processing graph, then obtain the audio units.
4. Configure the audio units.
5. Connect the audio unit nodes.
6. Provide a user interface.
7. Initialize and then start the audio processing graph.
Next, employ the audio session object to request that the system use your preferred sample rate as the device hardware sample rate, as shown in Listing 2-1. The intent here is to avoid sample rate conversion between the hardware and your app. This maximizes CPU performance and sound quality, and minimizes battery drain. Listing 2-1 Configuring an audio session
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];     // 1

[mySession setPreferredHardwareSampleRate: graphSampleRate       // 2
                                    error: &audioSessionError];

[mySession setCategory: AVAudioSessionCategoryPlayAndRecord      // 3
                 error: &audioSessionError];

[mySession setActive: YES                                        // 4
               error: &audioSessionError];

self.graphSampleRate = [mySession currentHardwareSampleRate];    // 5
The preceding lines do the following:

1. Obtain a reference to the singleton audio session object for your application.
2. Request a hardware sample rate. The system may or may not be able to grant the request, depending on other audio activity on the device.
3. Request the audio session category you want. The play and record category, specified here, supports audio input and output.
4. Request activation of your audio session.
5. After audio session activation, update your own sample rate variable according to the actual sample rate provided by the system.
There's one other hardware characteristic you may want to configure: audio hardware I/O buffer duration. The default duration is about 23 ms at a 44.1 kHz sample rate, equivalent to a slice size of 1,024 samples. If I/O latency is critical in your app, you can request a smaller duration, down to about 0.005 s (equivalent to 256 samples), as shown here:
self.ioBufferDuration = 0.005;
[mySession setPreferredIOBufferDuration: ioBufferDuration
                                  error: &audioSessionError];
For a complete explanation of how to configure and use the audio session object, see Audio Session Programming Guide.
Listing 2-2 shows how to perform these steps for a graph that contains a Remote I/O unit and a Multichannel Mixer unit. It assumes you've already defined an AudioComponentDescription structure for each of these audio units.

Listing 2-2  Building an audio processing graph
AUGraph processingGraph;
NewAUGraph (&processingGraph);

AUNode ioNode;
AUNode mixerNode;

AUGraphAddNode (processingGraph, &ioUnitDesc, &ioNode);
AUGraphAddNode (processingGraph, &mixerDesc, &mixerNode);
The AUGraphAddNode function calls make use of the audio unit specifiers ioUnitDesc and mixerDesc. At this point, the graph is instantiated and owns the nodes that you'll use in your app. To open the graph and instantiate the audio units, call AUGraphOpen:
AUGraphOpen (processingGraph);
Then, obtain references to the audio unit instances by way of the AUGraphNodeInfo function, as shown here:
AudioUnit ioUnit;
AudioUnit mixerUnit;

AUGraphNodeInfo (processingGraph, ioNode, NULL, &ioUnit);
AUGraphNodeInfo (processingGraph, mixerNode, NULL, &mixerUnit);
The ioUnit and mixerUnit variables now hold references to the audio unit instances in the graph, allowing you to configure and then interconnect the audio units.
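For example, at this point you might set your application stream format on a mixer input bus and the application sample rate on the mixer output, as the mixer patterns in this chapter require. The following is a hedged sketch that assumes a stereoStreamFormat ASBD like the one in Listing 1-7 and the graphSampleRate variable from Listing 2-1:

// Set the complete application stream format on mixer input bus 0.
AudioUnitSetProperty (mixerUnit,
                      kAudioUnitProperty_StreamFormat,
                      kAudioUnitScope_Input,
                      0,                          // mixer input bus 0
                      &stereoStreamFormat,
                      sizeof (stereoStreamFormat));

// Set just the sample rate on the mixer output scope.
Float64 sampleRate = graphSampleRate;             // the app-wide sample rate variable
AudioUnitSetProperty (mixerUnit,
                      kAudioUnitProperty_SampleRate,
                      kAudioUnitScope_Output,
                      0,
                      &sampleRate,
                      sizeof (sampleRate));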
callbackStruct.inputProcRefCon = soundStructArray;
You can attach a render callback in a thread-safe manner, even when audio is flowing, by using the audio processing graph API. Listing 2-4 shows how. Listing 2-4 Attaching a render callback in a thread-safe manner
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = &renderCallback;
callbackStruct.inputProcRefCon = soundStructArray;

AUGraphSetNodeInputCallback (
    processingGraph,
    myIONode,
    0,                 // output element
    &callbackStruct
);

// ... some time later
Boolean graphUpdated;
AUGraphUpdate (processingGraph, &graphUpdated);
AudioUnitElement mixerUnitOutputBus  = 0;
AudioUnitElement ioUnitOutputElement = 0;

AUGraphConnectNodeInput (
    processingGraph,
    mixerNode,             // source node
    mixerUnitOutputBus,    // source node bus
    ioNode,                // destination node
    ioUnitOutputElement    // destination node element
);
You can, alternatively, establish and break connections between audio units directly by using the audio unit property mechanism. To do so, use the AudioUnitSetProperty function along with the kAudioUnitProperty_MakeConnection property, as shown in Listing 2-6. This approach requires that you define an AudioUnitConnection structure for each connection to serve as its property value. Listing 2-6 Connecting two audio units directly
AudioUnitElement mixerUnitOutputBus  = 0;
AudioUnitElement ioUnitOutputElement = 0;

AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit    = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber    = ioUnitOutputElement;

AudioUnitSetProperty (
    ioUnitInstance,                     // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    ioUnitOutputElement,                // destination element
    &mixerOutToIoUnitIn,                // connection definition
    sizeof (mixerOutToIoUnitIn)
);
Initializing an audio processing graph by calling the AUGraphInitialize function does the following:

- Initializes the audio units owned by the graph by automatically invoking the AudioUnitInitialize function individually for each one. (If you were to construct a processing chain without using a graph, you would have to explicitly initialize each audio unit in turn.)
- Validates the graph's connections and audio data stream formats.
- Propagates stream formats across audio unit connections.
Listing 2-7 shows how to use AUGraphInitialize. Listing 2-7 Initializing and starting an audio processing graph
OSStatus result = AUGraphInitialize (processingGraph);
// Check for error. On successful initialization, start the graph...
AUGraphStart (processingGraph);

// Some time later
AUGraphStop (processingGraph);
Troubleshooting Tips
Whenever a Core Audio function provides a return value, capture that value and check for success or failure. On failure, make use of Xcode's debugging features as described in Xcode Debugging Guide. If using an Objective-C method in your app, such as for configuring your audio session, take advantage of the error parameter in the same way.

Be aware of dependencies between function calls. For example, you can start an audio processing graph only after you successfully initialize it. Check the return value of AUGraphInitialize. If the function returns successfully, you can start the graph. If it fails, determine what went wrong. Check that all of your audio unit function calls leading up to initialization returned successfully. For an example of how to do this, look at the -configureAndInitializeAudioProcessingGraph method in the sample code project Audio Mixer (MixerHost).

If graph initialization is failing, take advantage of the CAShow function. This function prints out the state of the graph to the Xcode console. The sample code project Audio Mixer (MixerHost) demonstrates this technique as well.

Ensure that you are initializing each of your AudioStreamBasicDescription structures to 0, as follows:
AudioStreamBasicDescription stereoStreamFormat = {0};
Initializing the fields of an ASBD to 0 ensures that no fields contain garbage data. (In the case of declaring a data structure in external storage, for example, as an instance variable in a class declaration, its fields are automatically initialized to 0 and you need not initialize them yourself.)

To print out the field values of an AudioStreamBasicDescription structure to the Xcode console, which can be very useful during development, use code like that shown in Listing 2-8.

Listing 2-8  A utility method to print field values for an AudioStreamBasicDescription structure
- (void) printASBD: (AudioStreamBasicDescription) asbd {

    char formatIDString[5];
    UInt32 formatID = CFSwapInt32HostToBig (asbd.mFormatID);
    bcopy (&formatID, formatIDString, 4);
    formatIDString[4] = '\0';

    NSLog (@"  Sample Rate:         %10.0f",  asbd.mSampleRate);
    NSLog (@"  Format ID:           %10s",    formatIDString);
    NSLog (@"  Format Flags:        %10X",    asbd.mFormatFlags);
    NSLog (@"  Bytes per Packet:    %10d",    asbd.mBytesPerPacket);
    NSLog (@"  Frames per Packet:   %10d",    asbd.mFramesPerPacket);
    NSLog (@"  Bytes per Frame:     %10d",    asbd.mBytesPerFrame);
    NSLog (@"  Channels per Frame:  %10d",    asbd.mChannelsPerFrame);
    NSLog (@"  Bits per Channel:    %10d",    asbd.mBitsPerChannel);
}
This utility method can quickly reveal problems in an ASBD. When defining an ASBD for an audio unit stream format, take care to ensure you are following the "Recommended stream format attributes" and "Stream format notes" in the usage tables in Using Specific Audio Units (page 41). Do not deviate from those recommendations unless you have a specific reason to.
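As a concrete illustration of the return-value and CAShow advice above, here is a minimal hedged sketch; unpacking the OSStatus as a four-character code is a common convention, not code from a sample project:

OSStatus result = AUGraphInitialize (processingGraph);

if (result != noErr) {
    // Many Core Audio error codes are four-character codes packed into an OSStatus.
    char code[5];
    UInt32 swapped = CFSwapInt32HostToBig ((UInt32) result);
    bcopy (&swapped, code, 4);
    code[4] = '\0';
    NSLog (@"AUGraphInitialize failed: %d ('%s')", (int) result, code);

    CAShow (processingGraph);   // print the graph's state to the Xcode console
} else {
    AUGraphStart (processingGraph);
}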
CHAPTER 3

Using Specific Audio Units
Each iOS audio unit has certain things in common with all others and certain things unique to itself. Earlier chapters in this document described the common aspects, among them the need to find the audio unit at runtime, instantiate it, and ensure that its stream formats are set appropriately. This chapter explains the differences among the audio units and provides specifics on how to use them. Later in the chapter, Identifier Keys for Audio Units (page 46) lists the codes you need to locate the dynamically-linkable libraries for each audio unit at runtime.
Remote I/O Unit

Stream format notes: The outward-facing sides of the Remote I/O unit acquire their formats from the audio hardware as follows:

- The input element (element 1) input scope gets its stream format from the currently active audio input hardware.
- The output element (element 0) output scope gets its stream format from the currently active audio output hardware.

Set your application format on the output scope of the input element. The input element performs format conversion between its input and output scopes as needed. Use the hardware sample rate for your application stream format. If the input scope of the output element is fed by an audio unit connection, it acquires its stream format from that connection. If, however, it is fed by a render callback function, set your application format on it.

Parameters: None in iOS.

Properties: See I/O Audio Unit Properties.

Property notes: You never need to set the kAudioUnitProperty_MaximumFramesPerSlice property on this audio unit.
Multichannel Mixer Unit

Stream format notes: On the input scope, manage stream formats as follows:

- If an input bus is fed by an audio unit connection, it acquires its stream format from that connection.
- If an input bus is fed by a render callback function, set your complete application stream format on the bus. Use the same stream format as used for the data provided by the callback.

On the output scope, set just the application sample rate.

Parameters: See Multichannel Mixer Unit Parameters.

Properties: kAudioUnitProperty_MeteringMode.

Property notes: By default, the kAudioUnitProperty_MaximumFramesPerSlice property is set to a value of 1024, which is not sufficient when the screen locks and the display sleeps. If your app plays audio with the screen locked, you must increase the value of this property unless audio input is active. (If audio input is active, you do not need to set a value for the kAudioUnitProperty_MaximumFramesPerSlice property.)
3D Mixer Unit
The 3D Mixer unit (subtype kAudioUnitSubType_3DMixer) controls stereo panning, playback tempo, and gain for each input, and controls other characteristics such as apparent distance to the listener. The output has an audio gain control. To get some idea of what this audio unit can do, consider that OpenAL in iOS is implemented using it. In most cases, if you need the features of the 3D Mixer unit, your best option is to use OpenAL. For sample code that shows how to use OpenAL, see the sample code project oalTouch. For sample code that shows how to use the 3D Mixer unit directly, see the sample code project Mixer3DHost. Table 3-3 provides usage details for this audio unit.

Table 3-3  Using the 3D Mixer unit

Elements: One or more input elements, each of which is mono. One stereo output element.

Recommended stream format attributes: UInt16 sample data type; kAudioFormatFlagsCanonical format flags.

Stream format notes: On the input scope, manage stream formats as follows:

- If an input bus is fed by an audio unit connection, it acquires its stream format from that connection.
- If an input bus is fed by a render callback function, set your complete application stream format on the bus. Use the same stream format as used for the data provided by the callback.

On the output scope, set just the application sample rate.

Parameters: See 3D Mixer Unit Parameters.

Properties: See 3D Mixer Audio Unit Properties. Note, however, that most of these properties are implemented only in the Mac OS X version of this audio unit.

Property notes: By default, the kAudioUnitProperty_MaximumFramesPerSlice property is set to a value of 1024, which is not sufficient when the screen locks and the display sleeps. If your app plays audio with the screen locked, you must increase the value of this property unless audio input is active. (If audio input is active, you do not need to set a value for the kAudioUnitProperty_MaximumFramesPerSlice property.)
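Following the pattern of Listing 1-7, here is a hedged sketch of a mono ASBD suitable for feeding a 3D Mixer input, using the data type and format flags noted above; graphSampleRate is the assumed app-wide sample rate variable used earlier:

size_t bytesPerSample = sizeof (UInt16);            // the 3D Mixer's sample data type
AudioStreamBasicDescription monoStreamFormat = {0};

monoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags       = kAudioFormatFlagsCanonical;
monoStreamFormat.mBytesPerPacket    = bytesPerSample;
monoStreamFormat.mBytesPerFrame     = bytesPerSample;
monoStreamFormat.mFramesPerPacket   = 1;
monoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
monoStreamFormat.mChannelsPerFrame  = 1;             // each 3D Mixer input is mono
monoStreamFormat.mSampleRate        = graphSampleRate;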
iPod Equalizer Unit

Stream format notes: On the input scope, manage stream formats as follows:

- If the input is fed by an audio unit connection, it acquires its stream format from that connection.
- If the input is fed by a render callback function, set your complete application stream format on the bus. Use the same stream format as used for the data provided by the callback.

On the output scope, set the same full stream format that you used for the input.

Parameters: None.

Properties: kAudioUnitProperty_FactoryPresets and kAudioUnitProperty_PresentPreset.

Property notes: The iPod EQ unit provides a set of predefined tonal equalization curves as factory presets. Obtain the array of available EQ settings by accessing the audio unit's kAudioUnitProperty_FactoryPresets property. You can then apply a setting by using it as the value for the kAudioUnitProperty_PresentPreset property. By default, the kAudioUnitProperty_MaximumFramesPerSlice property is set to a value of 1024, which is not sufficient when the screen locks and the display sleeps. If your app plays audio with the screen locked, you must increase the value of this property unless audio input is active. (If audio input is active, you do not need to set a value for the kAudioUnitProperty_MaximumFramesPerSlice property.)
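A hedged sketch of reading the factory presets and applying one follows; eqUnit is an assumed iPod EQ instance, the preset index chosen is arbitrary, and error handling is omitted:

// Retrieve the factory presets array (a CFArrayRef of AUPreset entries).
CFArrayRef factoryPresets = NULL;
UInt32 size = sizeof (factoryPresets);
AudioUnitGetProperty (eqUnit,
                      kAudioUnitProperty_FactoryPresets,
                      kAudioUnitScope_Global,
                      0,
                      &factoryPresets,
                      &size);

// Apply one of the presets (index 2 is an arbitrary choice for illustration).
AUPreset *chosenPreset = (AUPreset *) CFArrayGetValueAtIndex (factoryPresets, 2);
AudioUnitSetProperty (eqUnit,
                      kAudioUnitProperty_PresentPreset,
                      kAudioUnitScope_Global,
                      0,
                      chosenPreset,
                      sizeof (AUPreset));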
Identifier Keys for Audio Units

Table 3-5  Identifier keys for accessing the dynamically-linkable libraries for each iOS audio unit

Converter unit: Supports audio format conversions to or from linear PCM.

iPod Equalizer unit: Provides the features of the built-in iPod app's equalizer.

3D Mixer unit: Supports mixing multiple audio streams, output panning, sample rate conversion, and more.

Multichannel Mixer unit: Supports mixing multiple audio streams to a single stream.

Generic Output unit: Supports converting to and from linear PCM format; can be used to start and stop a graph. Identifier keys: kAudioUnitType_Output, kAudioUnitSubType_GenericOutput, kAudioUnitManufacturer_Apple.

Remote I/O unit: Connects to device hardware for input, output, or simultaneous input and output. Identifier keys: kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple.

Voice Processing I/O unit: Has the characteristics of the I/O unit and adds echo suppression for two-way communication. Identifier keys: kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO, kAudioUnitManufacturer_Apple.
REVISION HISTORY
This table describes the changes to Audio Unit Hosting Guide for iOS.

2010-09-01    Major revision, including addition of a chapter on using specific audio units.
2010-06-06    New document that explains how to use the system-supplied audio processing plug-ins in iOS.