The latest issue of the Computer Music Journal (MIT Press) includes an article on the Spatial Sound Description Interchange Format (SpatDIF) by Nils Peters, Jan Schacher, and myself, entitled “The Spatial Sound Description Interchange Format: Principles, Specification, and Examples”.
Here’s the abstract of the paper:
SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering, and reproduction of spatial sound. A general overview presents the principles informing the specification, as well as the structure and the terminology of the SpatDIF syntax. Two use cases exemplify SpatDIF’s potential for pre-composed pieces as well as interactive installations, and several prototype implementations that have been developed show its real-life utility.
The full paper can be found here. An earlier version of this manuscript was presented at the 2012 Sound and Music Computing (SMC) conference, where it received a Best Paper Award.
More information on SpatDIF can be found here.
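To give a flavor of the format: a SpatDIF scene is essentially a timed stream of OSC-style address/value pairs built from the core descriptors. The sketch below is purely illustrative — the entity name and values are made up, and the exact descriptor set is defined in the specification:

```
/spatdif/time 0.0
/spatdif/source/voice1/position 2.0 1.0 0.5

/spatdif/time 1.5
/spatdif/source/voice1/position 0.0 1.0 0.5
```

Each time statement stamps the descriptors that follow it, so the same messages can be stored in a file or streamed over a network without changing their meaning.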
De Montfort University (DMU) is currently running a research project that has led to several interesting initiatives dealing with the analysis of electroacoustic music:
The OREMA (Online Repository for Electroacoustic Music Analysis) project is a community-based forum where analysts can post their analyses of electroacoustic music compositions. It gives people with different ideas about analysis a space to discuss why they choose to analyse a piece in a certain way. The aim of the project is to gauge whether a community initiative can aid an analyst’s understanding of a work, whilst helping them conduct an analysis themselves.
eOREMA is a new peer-reviewed, open-access journal devoted to the analysis of electroacoustic music in all of its various forms. The first volume of the eOREMA Journal is now online.
And finally, Pierre Couprie is currently developing EAnalysis, a new software package intended to include as many tools as possible for the analysis of electroacoustic music. It is still in development at the time of writing, but has already incorporated many of the currently available tools applicable to analysis from the listener’s point of view. An expert system will be added in the near future to aid the analyst with respect to sonic patterns of behaviour.
When I revamped this website using Ruby on Rails back in 2009, I didn’t bother implementing a comment feature. The experience with comment and pingback spam on the previous publishing solution had escalated into a spam war that I simply could not win, and in the end I just disabled comments altogether.
Without comments this blog has turned into more of a monologue than I appreciate. When redoing the Jamoma web site earlier this year, we implemented Disqus as a solution for comments and discussions. That has turned out to be a nice addition to the Jamoma web site. So today I’ve spent an hour or so updating my own web site to do the same. From here on, you are welcome to comment on past and future blog posts!
Here’s more from the geek department: today Stian Remvik and I were looking into how to do non-real-time video processing in Max and Jitter. The need for this came up while developing software for the Les -Høyt! project.
Below is a prototype patch illustrating how this can be done. It would of course need to be refined further for prime time (dynamic control of file name, codec, frame rate, matrix size, etc), but the fundamental principle is implemented and functional. Currently the rendered file will be written to the Max application folder.
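For reference, the principle behind the patch can be outlined in pseudocode. This is a sketch of the idea rather than the patch itself; it assumes the Jitter objects of that era (jit.qt.movie for playback, jit.qt.record with its realtime attribute set to 0 for frame-by-frame recording):

```
for each frame n in 0 .. total_frames - 1:
    send "frame n" to jit.qt.movie        ; seek to the exact frame — no clock involved
    bang jit.qt.movie                     ; output that frame as a matrix
    run the matrix through the processing chain
    send the result to jit.qt.record      ; with realtime 0, each matrix becomes one frame
send "stop" to jit.qt.record              ; close the rendered movie file
```

Because the loop is driven by a counter rather than a metro, each frame can take as long as it needs to process — the output file still plays back at the requested frame rate.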
Over the past few days I’ve been looking into how to make JS audio effect plugins for the REAPER DAW. JS is a scripting language that is compiled on the fly and allows you to modify and/or generate audio and MIDI, as well as draw custom vector-based UIs and analysis displays. JS effects are simple text files which, when loaded in REAPER, become full-featured plug-ins.
JS plugins are simple and fast to develop, and I have made a bunch for processing of ambisonic sound field recordings:
- YAW rotation (around the up axis)
- PITCH rotation (around the right axis)
- ROLL rotation (around the front axis)
- 1st order encoding of mono source
- 1st order encoding of stereo source (treated as two mono sources)
- 1st order super stereo encoding (I still need to apply 90-degree phase offsets to the signals in order to complete this one)
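To give a flavor of how compact these effects can be, here is a minimal sketch of the yaw rotation in JS. It assumes a first-order B-format stream with channels ordered W, X, Y, Z and the usual axis convention (X forward, Y left, Z up); it is an illustration of the idea, not the exact code in the repository:

```
desc:B-format yaw rotation (sketch)

slider1:0<-180,180,0.1>Yaw (degrees)

in_pin:W
in_pin:X
in_pin:Y
in_pin:Z
out_pin:W
out_pin:X
out_pin:Y
out_pin:Z

@slider
a = slider1 * $pi / 180;
cs = cos(a);
sn = sin(a);

@sample
x = spl1; y = spl2;
// Rotate about the vertical (Z) axis; W and Z are invariant under yaw.
spl1 = x * cs - y * sn;
spl2 = x * sn + y * cs;
```

The pitch and roll effects follow the same pattern, with the rotation applied to the (X, Z) and (Y, Z) channel pairs respectively.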
I’ve set up a GitHub repository where you can grab the code/plugins and follow further development.