Audio units are a lower-level, lower-latency technology, and also a relatively complex API. Audio queues are higher-level and simpler to use, but have far greater latency.
I have produced a sample project showing the simplest creation of an audio unit and audio queue output so you can compare and contrast the two APIs. Get it on Gitorious here.
The following series of blog entries will describe each technology.
About the project
The project builds a sample iOS application that plays a steady sine wave as long as it is running. If you look at main.mm you'll see a single #define for USE_AUDIO_QUEUE_OUTPUT that can be used to switch between the audio unit and audio queue back ends.
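A switch like that might look roughly as follows (a sketch only; the back end class names here are illustrative, not necessarily those used in the project):

// main.mm -- sketch of the compile-time switch between back ends.
#define USE_AUDIO_QUEUE_OUTPUT 1    // set to 0 to use the audio unit back end instead

id output;
#if USE_AUDIO_QUEUE_OUTPUT
    output = [[AudioQueueOutput alloc] init];   // hypothetical class name
#else
    output = [[AudioUnitOutput alloc] init];    // hypothetical class name
#endif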
There are extra things a well-behaved iOS audio app should do that are not (yet) illustrated by this example project. Notably, your app should use the Audio Sessions API to define what kind of audio session it requires and to handle interruptions to the audio output (e.g. when a phone call is received whilst your app is running).
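For reference, the bare minimum session setup looks roughly like the sketch below (using AVAudioSession; this is not code from the sample project, and interruption handling is left out):

#import <AVFoundation/AVFoundation.h>

// Sketch only: minimal audio session setup for a playback app.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
[session setActive:YES error:&error];
// A well-behaved app would also respond to interruptions (e.g. an incoming
// call) by pausing its audio output and resuming afterwards.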
Make some noise!
Before we work out how to make noise with our iOS devices, we need some audio to start with. To that end, the AudioProducer protocol defines a simple interface from which the audio back ends can pull a stream of samples.
It looks like this:
/// Type of audio sample produced by an AudioProducer
typedef SInt16 Sample;

/// Protocol for objects that produce audio
@protocol AudioProducer

@property (nonatomic) float sampleRate;

/// Fills a buffer with "size" samples.
/// The buffer should be filled in with interleaved stereo samples.
- (void) produceSamples:(Sample *)audioBuffer size:(size_t)size;

@end
That is, we'll be peddling signed 16-bit integer samples, and have a single method that can be called to grab the next block of interleaved stereo audio samples.
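A back end will typically call this once per hardware buffer, something like the sketch below (assuming "size" counts total interleaved samples, so 512 stereo frames means 1024 samples; the producer variable is illustrative):

// Sketch: pulling one buffer's worth of audio from a producer.
id<AudioProducer> producer = sineWave;   // e.g. the SineWave generator
Sample buffer[512 * 2];                  // 512 frames of interleaved stereo
[producer produceSamples:buffer size:512 * 2];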
Given that, I have written a simple SineWave audio generator. It's not the most elegant generator, I'll admit: for performance reasons it uses a resonant filter to approximate a sine wave rather than the maths library's trig functions.
The SineWave interface adopts the AudioProducer protocol, and adds two extra properties - the frequency of the sine wave and the peak (amplitude) of the wave. You can see the interface here and the implementation here.
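For illustration, a resonant-filter sine generator in the same spirit might look like the sketch below. This is not the project's actual SineWave code: the class name, the assumed header name, and the 0.0-1.0 range assumed for peak are my own. The idea is that the recurrence y[n] = 2*cos(w)*y[n-1] - y[n-2] oscillates at angular frequency w = 2*pi*frequency/sampleRate, so each sample needs only a multiply and a subtract instead of a sin() call.

#import <Foundation/Foundation.h>
#import <math.h>
#import "AudioProducer.h"   // assumed header defining Sample and AudioProducer

// Sketch of a resonant-filter sine generator (illustrative, not the real SineWave).
@interface SketchSineWave : NSObject <AudioProducer>
@property (nonatomic) float frequency;   // Hz
@property (nonatomic) float peak;        // 0.0 .. 1.0 (assumed range)
@end

@implementation SketchSineWave {
    double _y1, _y2;   // previous two unit-amplitude outputs
    BOOL   _primed;    // have we started the oscillator yet?
}

@synthesize sampleRate, frequency, peak;

- (void) produceSamples:(Sample *)audioBuffer size:(size_t)size
{
    double w = 2.0 * M_PI * frequency / sampleRate;
    double k = 2.0 * cos(w);

    if (!_primed) {
        // Seed the recurrence with y[-1] = sin(-w) and y[-2] = sin(-2w),
        // so the first computed sample is sin(0).
        _y1 = sin(-w);
        _y2 = sin(-2.0 * w);
        _primed = YES;
    }

    for (size_t i = 0; i + 1 < size; i += 2) {
        double y = k * _y1 - _y2;   // one resonant filter step per frame
        _y2 = _y1;
        _y1 = y;

        Sample s = (Sample)(peak * y * 32767.0);
        audioBuffer[i]     = s;     // left
        audioBuffer[i + 1] = s;     // right (same signal in both channels)
    }
}

@end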
Atomicity
Note that since this is a simplistic example I have made all of these properties nonatomic. However, both audio units and audio queues will pull audio from your application in a background thread (not the main user interface thread).
The audio unit uses a very high priority background thread, as it is a very low latency audio pipeline with little buffering. The audio queue thread is not set as high-priority, as it employs a large amount of buffering.
You must bear these threads in mind when writing an audio generator. Ensure that any parameter that can be changed is updated in a thread-safe way, and make sure that no disasters (e.g. nasty audio glitches) can result if the audio thread interrupts the UI part-way through adjusting values.
What this looks like in practice is different for each application. But this is an important warning to heed.
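By way of illustration only (this pattern is not taken from the sample project), one option is to update related parameters under a lightweight lock so the audio thread always sees a consistent pair; for very low-latency audio unit callbacks a lock-free scheme is usually preferable, since blocking the render thread is itself a glitch risk:

#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

// Sketch only: guard a pair of related parameters so the audio thread can
// never observe a half-updated set.
@interface ParameterStore : NSObject
- (void) setFrequency:(float)frequency peak:(float)peak;     // UI thread
- (void) getFrequency:(float *)frequency peak:(float *)peak; // audio thread
@end

@implementation ParameterStore {
    OSSpinLock _lock;   // zero-initialised ivar == OS_SPINLOCK_INIT
    float _frequency;
    float _peak;
}

- (void) setFrequency:(float)frequency peak:(float)peak
{
    OSSpinLockLock(&_lock);
    _frequency = frequency;
    _peak = peak;
    OSSpinLockUnlock(&_lock);
}

- (void) getFrequency:(float *)frequency peak:(float *)peak
{
    OSSpinLockLock(&_lock);
    *frequency = _frequency;
    *peak = _peak;
    OSSpinLockUnlock(&_lock);
}

@end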
Next time
So, now that we have some audio to play, next time we'll look at how to use the audio queue APIs to play it.