DAFx paper on Jamoma Audio Graph
I am currently attending DAFx-10, the 13th International Conference on Digital Audio Effects, hosted by IEM in Graz. Yesterday I presented the paper "The Jamoma Audio Graph Layer", written by Tim Place, Nils Peters, and myself. The paper is now available for download, and here is the abstract:
Jamoma Audio Graph is a framework for creating graph structures in which unit generators are connected together to process dynamic multi-channel audio in real-time. These graph structures are particularly well-suited to spatial audio contexts demanding large numbers of audio channels, such as Higher Order Ambisonics, Wave Field Synthesis and microphone arrays for beamforming. This framework forms part of the Jamoma layered architecture for interactive systems, with current implementations of Jamoma Audio Graph targeting the Max/MSP, PureData, Ruby, and AudioUnit environments.
One of the important features of the framework is that it enables work on multichannel audio signals using single patch cords in patching environments such as Max/MSP and PureData. Patches for spatialisation in Max/MSP typically end up rather elaborate, as illustrated above. In contrast, the patch below illustrates how Jamoma Audio Graph can pass multiple audio channels around using only one patch cord.
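The core idea can be sketched as a pull-based graph in which each link between unit generators carries an entire multichannel buffer, so adding channels never adds connections. The following Python sketch uses hypothetical class and method names chosen for illustration only; it is not Jamoma's actual API, which is implemented in C++:

```python
# Hypothetical sketch of a multichannel audio graph (not Jamoma's real API):
# each connection between unit generators carries all channels at once.

class UnitGenerator:
    """A node in the audio graph; pulls audio from its upstream source."""
    def __init__(self, source=None):
        self.source = source  # a single upstream link ("one patch cord")

    def process(self, frames):
        raise NotImplementedError


class Oscillator(UnitGenerator):
    """Stub source producing n_channels of constant-valued samples."""
    def __init__(self, n_channels, value=1.0):
        super().__init__()
        self.n_channels = n_channels
        self.value = value

    def process(self, frames):
        # One multichannel buffer: a list of channels, each a list of samples.
        return [[self.value] * frames for _ in range(self.n_channels)]


class Gain(UnitGenerator):
    """Scales every channel it receives, however many arrive."""
    def __init__(self, source, gain):
        super().__init__(source)
        self.gain = gain

    def process(self, frames):
        channels = self.source.process(frames)  # all channels via one link
        return [[s * self.gain for s in ch] for ch in channels]


# A 16-channel source reaches the gain stage through a single connection;
# the channel count is a property of the signal, not of the patching.
graph = Gain(Oscillator(n_channels=16, value=0.5), gain=2.0)
out = graph.process(frames=4)
```

Because the channel count travels with the buffer, a downstream node such as an Ambisonic decoder can adapt to however many channels it is handed, which is what keeps spatialisation patches compact.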