
Surround recordings as artistic material



Last fall BEK purchased a SoundField microphone. For me the ability to capture spatial information has led to a renewed interest in doing sound and field recordings. I have spent quite a bit of time investigating the work flow required for recording, encoding and decoding, as well as how the recordings might be used artistically in ways that I find meaningful.

Recently AudioFinder was updated to support display and playback of multichannel sound files. For me that goes a long way towards making AudioFinder useful as an audio asset management application.

Also, this spring Harpex was released. Harpex is a parametric decoder of ambisonic signals, and I first heard of it from Natasha Barrett. I am really impressed with the decoding abilities of Harpex; it produces far better spatial definition than the software decoding plugin that comes with the SoundField microphone itself. For real-time generative sound installations Harpex is pretty CPU-intensive, and I am not sure how well it will work live, but for pre-processing of sound it is really good.

In addition, using Harpex as a plugin in AudioFinder lets me audition spatial sound over headphones, which is useful when away from the studio.


In the last few weeks I have also worked on extending Distance-Based Amplitude Panning (DBAP) to support ambisonic sources. DBAP was first developed for the Living Room workshop/installation in Trondheim in 2003, and presented at ICMC 2009:

Most common techniques for spatialization require the listener to be positioned at a “sweet spot” surrounded by loudspeakers. For practical concert, stage, and installation applications such layouts may not be desirable. Distance-based amplitude panning (DBAP) offers an alternative panning-based spatialization method where no assumptions are made concerning the layout of the speaker array nor the position of the listener.

So far DBAP has been used for spatialisation of mono sources.
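The panning law behind mono DBAP can be sketched in a few lines. This is a minimal Python sketch after the ICMC 2009 description, not the Jamoma implementation itself; the parameter names and the spatial blur value are my own. Gains fall off inversely with the distance from the virtual source to each speaker, raised to an exponent derived from the rolloff (typically 6 dB per doubling of distance), and are normalised for constant power.

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0, spatial_blur=0.1):
    """Distance-based amplitude panning gains (sketch).

    source: (x, y) position of the virtual source.
    speakers: list of (x, y) speaker positions -- no assumptions
        about layout, unlike sweet-spot techniques.
    rolloff_db: attenuation per doubling of distance (6 dB typical).
    spatial_blur: keeps distances nonzero when the source
        coincides with a speaker.
    """
    # Rolloff exponent: 6 dB per doubling gives a close to 1.
    a = rolloff_db / (20.0 * math.log10(2.0))
    dists = [math.hypot(source[0] - sx, source[1] - sy)
             for sx, sy in speakers]
    dists = [math.sqrt(d * d + spatial_blur * spatial_blur)
             for d in dists]
    # Gains inversely proportional to distance^a, then normalised
    # so that the summed power is constant (sum of squares == 1).
    inv = [1.0 / d ** a for d in dists]
    k = 1.0 / math.sqrt(sum(g * g for g in inv))
    return [k * g for g in inv]
```

Because the normalisation is power-based, moving the source around a sparse or irregular array keeps the overall level steady while the energy shifts towards the nearest speakers.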

But my recent work on surround recordings, as well as a demonstration of research on source directivity at the SpatDIF meeting at IRCAM a year ago, got me thinking about the possibility of using B-format signals as input to DBAP.

My hope is that this might produce sounding results with a spatial distribution and differentiation that goes beyond panning of mono sources, while also offering possibilities of a sculptural choreographing of sound that might be difficult to achieve through decoding alone.
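One plausible way to combine the two, sketched below, is to aim a virtual cardioid microphone at each speaker's azimuth as seen from the source position, decode the horizontal first-order B-format signal through it, and scale the result by that speaker's DBAP gain. To be clear, this is my own hedged guess at how such a combination could work, not the method of the Max external; the function and its signature are hypothetical.

```python
import math

def bformat_speaker_feeds(W, X, Y, gains, speaker_azimuths):
    """Hypothetical B-format-into-DBAP speaker feeds (sketch).

    W, X, Y: horizontal first-order B-format sample values.
    gains: per-speaker gains, e.g. from a DBAP stage.
    speaker_azimuths: azimuth of each speaker relative to the
        virtual source position, in radians.
    """
    feeds = []
    for g, az in zip(gains, speaker_azimuths):
        # Virtual cardioid pointed at the speaker:
        # 0.5 * (sqrt(2)*W + X*cos(az) + Y*sin(az))
        vm = 0.5 * (math.sqrt(2.0) * W
                    + X * math.cos(az) + Y * math.sin(az))
        feeds.append(g * vm)
    return feeds
```

With a source encoded from the front (W = 1/sqrt(2), X = 1, Y = 0), a speaker ahead of the source receives the full signal while one directly behind receives nothing, so the directional content of the recording survives the distance-based distribution.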

DBAP with B-format sources has been developed as a Max external that will eventually make it into Jamoma, along with a Jamoma module to go with it.

I have tested it out at my studio with seven speakers positioned similarly to the help patch screenshot above, and the initial tests are promising. The effect, as compared to mono DBAP, is subtle, but gets closer towards creating scenographic illusions through sound alone.

Today I’ll be doing a large scale test at Håkonshallen.




Licensed under a Creative Commons Attribution 3.0 Norway License. Web site hosted by BEK.