
Blog

iZotope Iris is AudioSculpt turned spectral sampler

2012-07-15

When AudioSculpt was first released by Ircam back in 1994, it offered ground-breaking new possibilities for spectral analysis and processing of sound. Since then the program has been maintained and updated, and is still available on a yearly subscription basis as part of Ircam Forum.

Related software has since been released, such as Spear (development seems to have stalled back in 2009), Raven (mainly geared towards bioacoustic research), MetaSynth and Spectro. Acousmographe from GRM is yet another option that I discovered in the process of writing this post. Adobe Audition also has advanced spectral processing capabilities, and I am sure this list is far from exhaustive.

Anyway, one of the newest kids on the block is iZotope Iris. Iris combines up to three samples that can all be spectrally filtered, and adds an optional synthesised signal (e.g. noise or a “dirty” sine tone) that can be spectrally processed as well. With easy-to-use capabilities for setting in and out points for each sample, looping properties, envelopes, transpositions, time stretching and some common FX processing (distortion, delay, chorus and reverb), Iris offers possibilities for some pretty nifty sound design.


PS: If you know of additional software related to the above, please mail or tweet me, as I would be curious to know.

Best paper award at SMC 2012

2012-07-14

The paper “SpatDIF: Principles, Specification, and Examples” by Nils Peters, Jan C. Schacher and Trond Lossius received one of the two best papers awards at the Sound and Music Computing Conference 2012.

The winners are invited to submit a revised and expanded version for publication in a future issue of The Computer Music Journal.

I’d like to take the opportunity to thank my co-authors on both of the SMC 2012 papers, Nils Peters, Jan Schacher and Tim Place, for their great work on these papers, and for inspiring and stimulating collaboration and friendship over several years.

Papers at SMC 2012

2012-07-12

I have contributed to two papers presented at the Sound and Music Computing Conference, currently taking place in Copenhagen. Today I learned that both are among the ten papers nominated for the conference’s best paper award.


SpatDIF: Principles, specification, and examples
by Nils Peters, Jan Schacher & Trond Lossius


SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering and reproduction of spatial audio. A general overview of the specification is provided, and two use cases are discussed, exemplifying SpatDIF’s potential for file-based pieces as well as real-time streaming of spatial audio information.
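To give a rough idea of what a descriptor-based scene description can look like, here is a minimal sketch in Python. It is purely illustrative: the addresses, units and values are my own for this example and are not taken from the SpatDIF specification itself.

```python
# A hypothetical, simplified spatial scene as time-stamped descriptors,
# loosely inspired by the idea of addressing sources with position data.
# Not the actual SpatDIF namespace; see the specification for that.

scene = [
    # (time in seconds, OSC-style address, value)
    (0.0, "/source/1/position", (0.0, 1.0, 0.0)),   # start in front
    (2.0, "/source/1/position", (1.0, 1.0, 0.0)),   # move to the right
    (4.0, "/source/1/position", (1.0, -1.0, 0.0)),  # continue behind
]

def play(scene):
    """Step through the scene in time order and 'send' each descriptor."""
    for time, address, value in sorted(scene, key=lambda entry: entry[0]):
        print(f"{time:5.2f}s  {address}  {value}")

if __name__ == "__main__":
    play(scene)
```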


An Automated Testing Suite for Computer Music Environments
by Nils Peters, Trond Lossius & Tim Place


Software development benefits from systematic testing with respect to implementation, optimization, and maintenance. Automated testing makes it easy to execute a large number of tests efficiently on a regular basis, leading to faster development and more reliable software. Systematic testing is not widely adopted within the computer music community, where software patches tend to be continuously modified and optimized during a project. Consequently, bugs are often discovered during rehearsal or performance, resulting in literal “show stoppers”. This paper presents a testing environment for computer music systems, first developed for the Jamoma framework and Max. The testing environment works with Max 5 and 6, is independent of any third-party objects, and can be used with non-Jamoma patches as well.
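To illustrate the general idea of assertion-based regression testing, here is a minimal sketch in Python rather than in the Max-based environment the paper describes. The ramp function, test names and tolerances are made up for the example.

```python
import unittest

def ramp(start, stop, steps):
    """Generate a linear ramp from start to stop over a number of steps."""
    if steps < 2:
        return [stop]
    increment = (stop - start) / (steps - 1)
    return [start + i * increment for i in range(steps)]

class RampTest(unittest.TestCase):
    """A small regression test in the spirit of automated patch testing."""

    def test_endpoints(self):
        values = ramp(0.0, 1.0, 11)
        self.assertAlmostEqual(values[0], 0.0)
        self.assertAlmostEqual(values[-1], 1.0)

    def test_linearity(self):
        values = ramp(0.0, 1.0, 11)
        for i, value in enumerate(values):
            self.assertAlmostEqual(value, i / 10.0, places=6)

if __name__ == "__main__":
    unittest.main()
```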

Ramping and dataspace in Jamoma

2012-07-12

Here’s a sneak peek at a new feature that I am currently working on for Jamoma: Support for dataspaces.

In this example the parameter myPos is used to keep track of the position of a point in space. The parameter uses the position dataspace. By default the position is described using Cartesian coordinates (xyz), but spherical coordinates (aed) can be used as an alternative way of describing it.

When ramping, the ramp will happen according to the coordinate system (unit) used to describe the target value. This way the point can move along straight lines as well as, for example, perform spiraling movements.
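To sketch why the choice of unit matters for ramping, here is a rough illustration in Python. This is not Jamoma code; the function names and the azimuth convention are my own for this example. Interpolating in Cartesian coordinates moves the point on a straight line through the middle of the space, while interpolating the same start and target positions in spherical coordinates keeps the distance constant and traces an arc.

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two equal-length tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def aed_to_xyz(azimuth, elevation, distance):
    """Convert spherical coordinates (degrees, 0 azimuth = front) to xyz."""
    az, el = math.radians(azimuth), math.radians(elevation)
    return (distance * math.cos(el) * math.sin(az),
            distance * math.cos(el) * math.cos(az),
            distance * math.sin(el))

start_xyz, target_xyz = (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)
start_aed, target_aed = (90.0, 0.0, 1.0), (-90.0, 0.0, 1.0)  # same points in aed

for step in range(5):
    t = step / 4
    cartesian = lerp(start_xyz, target_xyz, t)                # straight line
    spherical = aed_to_xyz(*lerp(start_aed, target_aed, t))   # arc at constant distance
    print(f"t={t:.2f}  xyz ramp: {tuple(round(v, 2) for v in cartesian)}"
          f"  aed ramp: {tuple(round(v, 2) for v in spherical)}")
```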

I’m currently in the process of testing to ensure that I have everything working, and I expect this feature to be included with the next installer.

Machine learning toolkit for Max

2012-06-11


ml.* is a machine learning toolkit for Max 5+ by Benjamin Day Smith, and apparently the only implementations of Adaptive Resonance Theory, Self-Organizing Maps, and Spatial Encoding neural nets for Max. It is currently available for Mac OS X only, with Windows builds forthcoming. ml.* was presented at NIME 2012, and the paper should become available once the proceedings are published online.
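As a very rough illustration of what a self-organising map does, here is a tiny sketch in Python. It says nothing about the ml.* objects or their interface; it merely shows a one-dimensional map of nodes gradually organising itself to cover random 2D input.

```python
import random

# A tiny one-dimensional self-organising map: a row of nodes whose weight
# vectors gradually adapt to cover the input space. Purely illustrative.

random.seed(1)
nodes = [[random.random(), random.random()] for _ in range(8)]

def nearest(sample):
    """Index of the node whose weights are closest to the sample."""
    return min(range(len(nodes)),
               key=lambda i: sum((w - s) ** 2 for w, s in zip(nodes[i], sample)))

for step in range(500):
    sample = [random.random(), random.random()]
    winner = nearest(sample)
    for i, node in enumerate(nodes):
        # Nodes close to the winner on the map are pulled towards the sample.
        influence = 0.1 / (1 + abs(i - winner))
        for d in range(2):
            node[d] += influence * (sample[d] - node[d])

print([[round(w, 2) for w in node] for node in nodes])
```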


Licensed under a Creative Commons Attribution 3.0 Norway License. Web site hosted by BEK.