Synthesizers With AVAudioSourceNode
May 23, 2020
With AVAudioSourceNode, Apple introduced an easy way to create simple synthesizers and generate audio in real time. To make the synthesizer we build in this tutorial easy to use, we'll wrap it in a Synthesizer class with a setup function:
import AVFoundation

class Synthesizer {
    // TODO: Variables

    func setup() {
        // TODO: Setup
    }
}
Basic Variables
Before writing the setup function, we'll need to add some basic variables to the class. Let's get started by adding those that will be used for the audio engine:
var audioEngine: AVAudioEngine!
var sourceNode: AVAudioSourceNode!
We'll initialize the audio engine and source node later on in the setup function.
There are some other variables that will be needed to generate a basic sine wave:
var time = Float.zero
var frequencyRamp = Float.zero
var currentFrequency: Float = 200 {
    didSet {
        frequencyRamp = currentFrequency - oldValue
    }
}
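To see what this bookkeeping does on its own, here is a minimal sketch outside of any audio code. It uses a hypothetical FrequencyStore class that mirrors the two properties above; the property observer stores the difference between the old and new frequency, which the render block will later use to ramp smoothly to the new pitch:

```swift
// Hypothetical stand-in for the Synthesizer's frequency properties.
class FrequencyStore {
    var frequencyRamp = Float.zero
    var currentFrequency: Float = 200 {
        didSet {
            // Remember how far we moved, so audio code can ramp instead of jumping
            frequencyRamp = currentFrequency - oldValue
        }
    }
}

let store = FrequencyStore()
store.currentFrequency = 440
// store.frequencyRamp is now 240 (440 - 200)
```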
Setup
Now that we've added all the necessary variables, we can write the setup function. Let's start by initializing the audio engine:
audioEngine = AVAudioEngine()
The next step is to initialize the source node. Since we're not quite ready to add all of the audio generation code, we'll leave it empty for now:
sourceNode = AVAudioSourceNode(renderBlock: { (_, _, frameCount, bufferList) -> OSStatus in
    // TODO: Audio generation
    return noErr
})
Now we can connect the source node to the audio engine:
audioEngine.attach(sourceNode)
audioEngine.connect(sourceNode, to: audioEngine.outputNode, format: nil)
While this would technically work, the sound would be very distorted. To fix this, we need to use the proper format when connecting the source node to the output node. Let's first create that format¹:
let format = audioEngine.outputNode.inputFormat(forBus: 0)
let inputFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                sampleRate: format.sampleRate,
                                channels: 1,
                                interleaved: format.isInterleaved)
Now that we have the proper input format, we can use it to connect the nodes. To do so, replace nil in audioEngine.connect(…) with inputFormat:
audioEngine.connect(sourceNode, to: audioEngine.outputNode, format: inputFormat)
After connecting the nodes, the audio engine will need to be started:
do {
    try audioEngine.start()
} catch {
    print("Error: " + error.localizedDescription)
}
The audio engine has now been fully set up and we're ready to move on to the audio generation in the source node.
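Putting all of the steps together, the finished setup function looks like this (the render block is still empty; we'll fill it in next):

```swift
func setup() {
    audioEngine = AVAudioEngine()

    // Empty source node for now; the audio generation code comes later
    sourceNode = AVAudioSourceNode(renderBlock: { (_, _, frameCount, bufferList) -> OSStatus in
        // TODO: Audio generation
        return noErr
    })

    audioEngine.attach(sourceNode)

    // Derive a mono, non-distorted format from the output node's input format
    let format = audioEngine.outputNode.inputFormat(forBus: 0)
    let inputFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                    sampleRate: format.sampleRate,
                                    channels: 1,
                                    interleaved: format.isInterleaved)

    audioEngine.connect(sourceNode, to: audioEngine.outputNode, format: inputFormat)

    do {
        try audioEngine.start()
    } catch {
        print("Error: " + error.localizedDescription)
    }
}
```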
Audio Generation
I'm not going to describe the details of generating sine waves here since it's not the topic of this article.
All of this code will go inside the source node's render block, replacing // TODO: Audio generation.
First, we'll add some basic variables for the sample generation:
let listPointer = UnsafeMutableAudioBufferListPointer(bufferList)
let rampValue = self.frequencyRamp
let frequency = self.currentFrequency
let period = 1 / frequency
Now we can iterate through the frames and add the generated sample to the buffer:
for frame in 0..<Int(frameCount) {
    // How far we are through the current period, from 0 to 1
    let percentComplete = self.time / period
    // Generate the sine wave sample, ramping toward the new frequency
    let sample = sin(2.0 * .pi * (frequency + rampValue * percentComplete) * self.time)
    self.time += 1 / Float(self.audioEngine.outputNode.inputFormat(forBus: 0).sampleRate)
    self.time = fmod(self.time, period) // keep the phase within one period
    for buffer in listPointer {
        let bufferPointer = UnsafeMutableBufferPointer<Float>(buffer)
        bufferPointer[frame] = sample
    }
}
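To see the sample and phase bookkeeping in isolation, here is a standalone sketch of the same loop without AVFoundation. The 44,100 Hz sample rate and the 512-frame count are assumptions for illustration only:

```swift
import Foundation

// Assumed values for this standalone sketch
let sampleRate: Float = 44_100
let frequency: Float = 200
let period = 1 / frequency

var time = Float.zero
var samples = [Float]()

for _ in 0..<512 {
    // One sine sample at the current phase
    let sample = sin(2.0 * .pi * frequency * time)
    samples.append(sample)
    // Advance by one sample's worth of time, then wrap within one period
    time += 1 / sampleRate
    time = fmod(time, period)
}
```

The fmod call is what keeps time bounded: without it, time would grow without limit and the sine computation would slowly lose floating-point precision.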
If you are looking for a deeper explanation, you should read this article about synthesizers in Swift. It explains the details well, but its implementation is considerably more complex than the one used in this article.
We can now change the frequency of the synthesizer by setting currentFrequency, which we defined earlier.
Usage
Our finished synthesizer class can be used like this:
let synth = Synthesizer()
synth.setup()
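For example, to play a tone and change its pitch while it sounds (this assumes currentFrequency is accessible from outside the class, as defined above):

```swift
let synth = Synthesizer()
synth.setup()
synth.currentFrequency = 440 // the didSet observer updates the ramp for us

// Jump up an octave after two seconds
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    synth.currentFrequency = 880
}
```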
Conclusion
Today we learned how to generate audio and build a simple synthesizer with AVAudioEngine. This can be useful for many projects, such as a frequency-based hearing test. In most cases, you could just generate an audio file in GarageBand or Logic and include it in your app bundle instead, but generating the audio in the app saves space and is generally cleaner. If you want to use the final result, you can download the Synthesizer class from this GitHub Gist.
¹ Add this code above the connection code.