Using SoundAnalysis With a Saved Audio File

May 19, 2020

Classifying sounds in a saved audio file using the new SoundAnalysis framework isn't all that different from performing the classification in real time. It's not hard either, but besides the WWDC session, there isn't much documentation available right now. Here's how to use the API:

Differences

While the real-time approach uses AVAudioEngine, for a saved file we use SNAudioFileAnalyzer, which is initialized with a URL. In your code you'll need to swap out url with the actual URL of your audio file, and model with your Core ML sound classifier. Here's how to set the analyzer up:

do {
    // Create an analyzer that reads directly from the audio file.
    let analyzer = try SNAudioFileAnalyzer(url: url)

    // Wrap the Core ML sound classifier in a classification request.
    let request = try SNClassifySoundRequest(mlModel: model)

    // Note: add(_:withObserver:) can throw, so it needs a try as well.
    try analyzer.add(request, withObserver: self)

    // Process the entire file synchronously.
    analyzer.analyze()
} catch {
    print("Error: \(error.localizedDescription)")
}
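One thing to be aware of: analyze() blocks the calling thread until the whole file has been processed. If that's a problem, SNAudioFileAnalyzer also has an asynchronous variant, analyze(completionHandler:). A minimal sketch, assuming the same analyzer as above:

// Runs the analysis off the calling thread; the Bool tells you
// whether the analyzer reached the end of the file.
analyzer.analyze { didReachEndOfFile in
    print("Finished analyzing: \(didReachEndOfFile)")
}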

Observing

To observe the results, we'll need to conform to the SNResultsObserving protocol¹. Inside the request(_ request: SNRequest, didProduce result: SNResult) function we can get the results like this:

// Only classification requests produce SNClassificationResult objects.
guard let classificationResult = result as? SNClassificationResult else { return }

// The individual classifications, in order of descending confidence.
let results = classificationResult.classifications

Each result (in results) is an SNClassification with a confidence value and an identifier, the class label the model was trained on.
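Put together, a complete observer could look something like the sketch below. ResultsObserver is a made-up name; it subclasses NSObject because SNResultsObserving requires NSObjectProtocol conformance:

import SoundAnalysis

class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let classificationResult = result as? SNClassificationResult else { return }

        // Print the top classification for this stretch of audio.
        guard let best = classificationResult.classifications.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("Analysis failed: \(error.localizedDescription)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("Analysis completed")
    }
}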

Real-Time Analysis

If you are interested in how to run the analysis in real time, you should check out this article, which also covers how to train a model for this kind of classification in Create ML.


¹ Strictly speaking, only request(_ request: SNRequest, didProduce result: SNResult) is required; requestDidComplete(_ request: SNRequest) and request(_ request: SNRequest, didFailWithError error: Error) are optional, but you'll usually want to implement them to know when the analysis finishes or fails.