At the end of the month I will be participating in Augmented Spatiality, a public sound art project for the suburb of Hökarängen in Stockholm in which artworks, performances and other events are integrated into the social and spatial processes taking place in the public sphere. I will be doing a temporary sound installation at the Hökarängen metro station, using surround sound played back through multiple speakers distributed throughout the space. For me, sound spatialisation (the act of locating sound in space by distributing differentiated audio to multiple speakers) serves several purposes: it activates relationships between sound and space, and it raises awareness of and sensitivity to the place where the listener is situated. I hope the installation will entice the audience to stay for an extended period of time, contemplating both the work and its site.
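As a minimal illustration of the parenthetical above, the simplest form of spatialisation is panning between two speakers. The sketch below is a generic equal-power panning law in Python; it is not code from the installation itself, and the function and parameter names are my own.

```python
import math

def equal_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Distribute a mono sample across two speakers.

    pan runs from 0.0 (hard left) to 1.0 (hard right). The
    cosine/sine law keeps the summed power constant across the
    arc, so the source does not seem to get quieter mid-way.
    """
    theta = pan * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

# At the centre position both speakers receive the same gain,
# and the summed power (left^2 + right^2) remains 1.0.
left, right = equal_power_pan(1.0, 0.5)
```

Multi-speaker installations generalise this idea, for instance by panning pairwise between the two speakers nearest the intended source position.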
In 1980, the geographer and urban planner Edward Soja coined the term spatiality to refer to the inherently social quality of space. Although language already offered other terms for the spatial, Soja introduced this one to denote space produced by social life, thereby reflecting on the production and organisation of social space in the wake of Henri Lefebvre's earlier work on the topic.
On this basis, Augmented Spatiality has been conceived as a public sound art project for the suburb of Hökarängen in Stockholm in which artworks, performances and other events are integrated into the social and spatial processes taking place in the public sphere. Addressing the formation of social space in the city, the project aims to reflect on the ways in which public art and sound creation are, or are not, assimilated by the networks operating in a specific place.
Augmented Spatiality has grown as a collaborative framework of artists, citizens, institutions and public structures, so that the project and its development may highlight the ongoing cultural, educational, economic and political events in this suburb of Stockholm. Topics ranging from critical walks and gentrification processes to the idea of the local and changes over time in the soundscape and landscape make the project a ground for experimentation, whose results will be experienced mainly through attentive listening in Hökarängen, a suburb whose history and present have in turn shaped the whole process.
The well-known sociologist Saskia Sassen has asked how public space can be created in the city through architecture and the practices of its citizens. In her dissertation she proposes working in modest spaces outside the heart of the city; modest spaces that are open and still permeable to differential processes of acting in the city's development.
Augmented Spatiality takes place in Hökarängen, a district in the borough of Farsta in the southern suburbs of Stockholm municipality. The most representative part of Hökarängen was designed in the 1950s by the Swedish architect David Helldén, influenced by new English ideas about neighbourhood units and community centres. The planning of modern Hökarängen started in 1940, when an urban planning competition was announced: the so-called Gubbängs-investigation, whose premise was to serve as a testing ground for the society of the future. Part of the planning that grew out of these premises was the pedestrian street Hökarängsplan, the first pedestrian street ever planned in Sweden and one of the main venues for the Augmented Spatiality project.
The latest issue of the Computer Music Journal (MIT Press) includes an article on the Spatial Sound Description Interchange Format (SpatDIF) by Nils Peters, Jan Schacher, and myself, entitled “The Spatial Sound Description Interchange Format: Principles, Specification, and Examples”.
Here’s the abstract of the paper:
SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering, and reproduction of spatial sound. A general overview presents the principles informing the specification, as well as the structure and the terminology of the SpatDIF syntax. Two use cases exemplify SpatDIF’s potential for pre-composed pieces as well as interactive installations, and several prototype implementations that have been developed show its real-life utility.
The full paper can be found here. An earlier version of this manuscript was presented at the 2012 Sound and Music Computing (SMC) conference, where it received a Best Paper Award.
More information on SpatDIF can be found here.
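To give a flavour of what a scene description along these lines can look like, here is a rough sketch in OSC-style syntax: a single point source whose position is updated at two points in time. This is my own simplified illustration, not an excerpt from the paper, and the exact descriptor names and namespace should be checked against the specification at spatdif.org.

```
/spatdif/time 0.0
/spatdif/source/voice1/type point
/spatdif/source/voice1/position 1.0 0.5 0.0
/spatdif/time 2.5
/spatdif/source/voice1/position -1.0 0.5 0.0
```

The same descriptors can equally be stored in a file for pre-composed work or streamed between applications in an interactive installation, which is the point of having a shared interchange format.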
De Montfort University (DMU) is currently running a research project that has led to several interesting initiatives dealing with the analysis of electroacoustic music:
The OREMA (Online Repository for Electroacoustic Music Analysis) project is a community-based forum where analysts can post their analyses of electroacoustic music compositions. It gives people with different ideas about analysis a space to discuss why they chose to analyse a piece in a certain way. The aim of the project is to gauge whether a community initiative can aid an analyst's understanding of a work, whilst helping them conduct an analysis themselves.
eOREMA is a new peer-reviewed, open-access journal devoted to the analysis of electroacoustic music in all its various forms. The first volume of the eOREMA Journal is now online.
And finally, Pierre Couprie is developing EAnalysis, a new software package intended to gather as many tools as possible for the analysis of electroacoustic music. EAnalysis is still in development at the time of writing, but it already incorporates many of the currently available tools applicable to analysis from the listener's point of view. An expert system will be added in the near future to aid the analyst with respect to sonic patterns of behaviour.
When I revamped this website using Ruby on Rails back in 2009, I didn't bother implementing comments. On the previous publishing solution, comment and pingback spam had escalated into a war I simply could not win, and in the end I just disabled commenting altogether.
Without comments, this blog has turned into more of a monologue than I would like. When redoing the Jamoma website earlier this year, we implemented Disqus as a solution for comments and discussions, and it has turned out to be a nice addition. So today I've spent an hour or so updating my own website to do the same. From here on, you are welcome to comment on past and future blog posts!
Here’s more from the geek department: today Stian Remvik and I were looking into how to do non-real-time video processing in Max and Jitter. The need came up in the process of developing software for the Les -Høyt! project.
Below is a prototype patch illustrating how this can be done. It would of course need further refinement for prime time (dynamic control of file name, codec, frame rate, matrix size, etc.), but the fundamental principle is implemented and functional. Currently the rendered file is written to the Max application folder.
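For readers who don't use Max, the underlying principle can be sketched in ordinary code: instead of letting a real-time clock drive the processing chain, a plain loop steps through frame indices, processes each frame fully, and collects it before moving on, so rendering speed is bound by processing cost rather than by the playback frame rate. The Python below is only an illustration of that idea, not a translation of the patch; the frame "processing" is a stand-in, and all names are my own.

```python
def process_frame(index: int, width: int, height: int) -> list[list[int]]:
    """Stand-in for the Jitter processing chain: renders a vertical
    bar that moves one cell per frame, one greyscale value per cell."""
    return [[255 if x == index % width else 0 for x in range(width)]
            for y in range(height)]

def render_offline(frame_count: int, width: int = 8, height: int = 4):
    """Drive the processing with a frame counter rather than a clock.

    Each iteration finishes its frame completely before the next one
    starts; nothing waits on wall-clock time, which is what makes the
    rendering non-real-time.
    """
    frames = []
    for i in range(frame_count):
        frames.append(process_frame(i, width, height))
    return frames

# Render one second's worth of frames at a nominal 25 fps.
movie = render_offline(25)
```

In the actual patch the per-frame processing and the file writing are handled by Jitter objects; the point here is only the control flow, where the frame counter replaces the metronome.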