Archive for August 2011
Bergen Barokk: Hans Knut Sveen and Frode Thorsen
The sound installation Lines converging at a distance is still running at Håkon’s Hall. The sound of the installation is based on recordings of the early music ensemble Bergen Barokk.
This coming weekend, as part of Bergen Medieval Music Days, Bergen Barokk will respond to the sound installation with an improvisation at the hall, playing with and against the sound of the installation.
The performance by Frode Thorsen and Hans Knut Sveen will be part of the program for a guided tour of the medieval part of Bergen. If you want to attend, please meet by the entrance of Bryggens Museum this coming Saturday or Sunday at 11:55.
I’m currently collaborating with Bergen Barokk towards a larger joint project, combining electronics and acoustic instruments. For the project various recordings were done during the spring, to be used in the current installation as well as in the further process.
Today we had an inspiring and fun two-hour session at the hall, experimenting with material and the structure of the performance. In spite of having done sound installations for more than 10 years, I had never before tried using them as a sound bed for live improvisations and performances. Signe Lidén, an MA student at the art academy, has for a while been inviting various musicians to play with her drone instruments and sculptures. Based on today’s experience I definitely understand her fascination with this approach.
The above screen shot is proof of concept that it is possible to use AudioUnit plugins in Cubase and Nuendo.
Cubase and Nuendo are themselves only willing to host VST plugins. In comes Plogue Bidule to the rescue!
Bidule is a patching environment (somewhat similar to Max or Reaktor). In addition to the stand-alone application, Bidule also comes in the form of a VST and AudioUnit plugin.
In the Nuendo project above, a Bidule VST plugin has been added. Within the Bidule patch two AudioUnit plugins are hosted: a 4-band compressor courtesy of Apple, and the MasterNeedle plugin from the now discontinued Hipno plugins. There is also an AudioNetSend plugin thrown in for good measure, as I first started investigating this when I wondered how to get audio across from Nuendo to Spectre for advanced spectral monitoring.
When I started working on digital audio in the late 90s, I almost immediately drifted towards working on real-time processing in Max for multi-channel generative sound installations.
Audio production was never part of the syllabus while I was studying composition at the Grieg Academy, and I’m mostly self-taught, although that includes lots of reading of books, manuals and online resources, participating in mailing lists and forums, and other kinds of online and face-to-face exchange.
My workflow seems to differ quite a bit from the workflow common in standard studio productions. I guess that is partly due to my ignorance of how to do stuff “the right way” (assuming there is one), but also because my work is targeted towards a different way of working on and experiencing sound/music.
Still, for some years I have been interested in learning more about standard studio work, including recording, mixing and mastering, in order to see if there are principles and techniques that could extend and enhance my own work.
I have just started reading Mixing secrets for the small studio by Mike Senior, and it is instantly gratifying. I’ve just made a Max patch for listening to low frequency sine tones.
It’s apparent that my studio monitors (a pair of Genelec 8040As) are not really capable of reproducing frequencies below 40 Hz. Furthermore, it is clear that the frequency response in my studio is far from flat up to at least 400 Hz. It might well be worth checking whether this can be improved with some acoustic treatment.
The B&W 602 S3 speakers that I mostly use for installations don’t seem to kick in until I reach 55 Hz, but from there onwards the frequency response seems flatter than the Genelecs’, although attacks are much less pronounced. I suspect that the flatter frequency response might be due to the automatic speaker setup procedure of the Denon receiver, which uses a mic and test signals to adjust delay time, gain level and EQ for each of the speakers. I have not yet found a way of controlling these receiver settings manually.
The patch can be downloaded here.
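For anyone without Max at hand, the same kind of listening test can be done with a plain WAV file per test frequency. The sketch below is not the Max patch from the post, just a minimal stdlib-only Python equivalent (the function name and the chosen frequencies are my own): it renders short mono sine tones at a few low frequencies so you can step through them and hear where your monitors roll off.

```python
import math
import struct
import wave

def write_sine_wav(path, freq_hz, duration_s=5.0, sample_rate=44100, amplitude=0.5):
    """Render a mono 16-bit sine tone to a WAV file for speaker listening tests."""
    n_samples = int(duration_s * sample_rate)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(sample_rate)
        frames = bytearray()
        for n in range(n_samples):
            sample = amplitude * math.sin(2.0 * math.pi * freq_hz * n / sample_rate)
            frames += struct.pack("<h", int(sample * 32767))
        wav.writeframes(bytes(frames))

# Step through the low end to hear where the monitors roll off
for freq in (30, 40, 55, 80, 100):
    write_sine_wav(f"sine_{freq}hz.wav", freq)
```

Keep the amplitude moderate and fade the monitor volume up slowly; sustained sine tones carry a lot of energy compared to ordinary program material.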
For the past 10 years the Granular Toolkit (GTK) by Nathan Wolek has been a welcome and much used extension to MaxMSP for granular synthesis and processing.
Nathan Wolek has just announced that, in order to ensure future usability as well as enable the code to be used in other hosting environments, he has decided to end development of GTK in its current form and instead reimplement it as new granular components in Jamoma.
The source code for the externals in GTK will be open-sourced under a BSD license, so that it remains possible to maintain the current code for projects that depend on it.
This is exciting news for Jamoma, and will certainly boost the feature sets and usability of the DSP and AudioGraph libraries. With the recent open-sourcing and inclusion of Plugtastic (wrapping AudioGraph externals into Audio Unit plugins), it also opens up exciting possibilities for creating granulation plugins in the future.
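For readers who haven’t worked with granulation: the basic idea behind tools like GTK is to scatter many short, windowed slices (“grains”) of a source sound across an output buffer and overlap-add them. The sketch below is a language-neutral illustration of that technique in pure Python, not Wolek’s code, and all names and parameter values in it are my own choices.

```python
import math
import random

def hann(n):
    """Hann window of length n, used to fade each grain in and out."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * i / (n - 1)) for i in range(n)]

def granulate(source, out_len, grain_len=1024, density=200, sample_rate=44100, seed=0):
    """Basic granular processing: copy windowed grains from random positions
    in the source to random positions in the output, overlap-adding them.
    `density` is the approximate number of grains per second of output."""
    rng = random.Random(seed)
    window = hann(grain_len)
    out = [0.0] * out_len
    n_grains = int(density * out_len / sample_rate)
    for _ in range(n_grains):
        src_pos = rng.randrange(0, len(source) - grain_len)
        dst_pos = rng.randrange(0, out_len - grain_len)
        for i in range(grain_len):
            out[dst_pos + i] += source[src_pos + i] * window[i]
    return out

# Granulate one second of a 220 Hz sine into a two-second grain cloud
sr = 44100
tone = [math.sin(2.0 * math.pi * 220 * n / sr) for n in range(sr)]
cloud = granulate(tone, 2 * sr)
```

Real-time implementations like GTK’s externals schedule grains on the fly and expose grain length, pitch and position as controllable parameters, but the windowed overlap-add core is the same.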
For fellow Jamoma developers, recent development has seen a few changes to the repositories:
Jamoma development (and future releases) now require Mac OSX 10.6 or newer. This change was required for compatibility with development on OSX Lion (10.7) and Xcode 4.
In addition, the ObjectiveMax submodule has been moved from Tim Place’s GitHub account to the Jamoma organization at GitHub. Local repositories will need to be updated accordingly. Instructions on how to do so can be found here.
Peder Balke: Månelys (Moonlight), 1870s
I guess, in my mind I have always wanted music to do something to me. Maybe I have always wanted it to do almost the same thing, but to make music do the same thing, you have to keep making different music.
When I first started making music, I was interested in the personalities I could play, the different figures I could be. I lost interest in that. I didn’t want myself to be in the center of the music any more. And so I began experimenting a lot with trying to remove the personality in some ways, for example by making more than one voice, so that it stops being a single figure in the middle of the picture. And I tried singing using nonsensical words, using words backwards, putting strange sounds on my voice, different ways of reducing the importance of the figure in the picture. ’Cause what I started to get interested in was the landscape behind the figure, and I found the figure more and more of a problem.
It’s like with a painting: If you have a picture of a landscape, you look at that, and your eye moves freely over the landscape. If you put a figure in there, even if it’s a tiny little one, it becomes the center of your attention, it is very difficult to ignore. Humans relate to other humans.
With music that became a problem, because I felt that as long as I was in the center of the picture, it made you as the listener outside the picture. If I took myself out of that picture, it left an open field, a sound field of some kind, which invited you in. And I felt that by removing my own personality, as represented by my voice, I opened up the music in a new way. I made a space that people could come into, I made the music much more environmental.