
Singing Clouds

  • Published on May 24, 2019

 

By Jérôme Nika, Researcher and Computer Music Designer

Lullaby Experience is an immersive experience that combines the worlds of concerts, installations, theater, and opera. Surrounded by swarms of singing and whispering voices, the audience wanders through a dream-like setting in which they meet a clarinet player, a violinist, a clown, a ballerina… as well as the other musicians of the Ensemble Modern and a host of characters imagined by the director Claus Guth. Pascal Dusapin had wanted to create this project for over a decade. He dreamed of weaving together “singing clouds”: musical masses interwoven from a large number of voices that would nonetheless be intrinsically intimate and minimalist, lullabies and nursery rhymes sung a cappella.

The aim of this collaboration was to create generative processes that could navigate a pool of individual songs in different languages, with different characteristics and qualities, to create “singing clouds” that come to life through a spatialized sound diffusion system controlled by Thierry Coduys. The generative agents at the heart of Lullaby Experience, derived from research carried out during the ANR project DYCI2 and developed in collaboration with Jean Bresson (IRCAM-STMS Musical Representations team), were designed to offer a high level of abstraction when composing the temporal evolution of polyphonic choirs (sometimes dense, sometimes fragile, sometimes static, sometimes changing), as well as to control the balance, or imbalance, between the heterogeneity introduced by the broad range of materials and the several paradigms of homogeneity that could structure these clouds over time.
 
To collect the “pieces of clouds”, Thierry Coduys and Buzzing Light developed a smartphone application that lets anyone who wants to contribute to the project record and submit their lullabies; songs have arrived from France, Germany, Spain, Iraq… As the collection grew, the lullabies were analyzed by Axel Roebel and Nicolas Obin (IRCAM-STMS Sound Analysis and Synthesis team) according to several audio-musical criteria, giving our generative agents a set of “handles” with which to extract events and assemble them into “new stories”.

For Pascal Dusapin, the first imperative was to hear the material without any transformation, so as to preserve the organic character of the voice and to maintain the variety of timbres and energies, the breathing and breaths, and sometimes even a song’s approximate intonation. It was therefore a question not of using these lullabies only as an observation ground on which to learn models, but also of treating them as the elementary building blocks of these structured clouds. Fueled by these analyses, our agents were provided with musical memories on which temporal models were learned, creating a map of the natural and contextual similarities among the elements of the lullabies. Then, drawing from their memories, the agents could instantiate high-level compositional scenarios, based on melodic, harmonic, rhythmic, or timbral criteria, as concrete “singing clouds”. They allowed the composer to compose by defining high-level temporal evolutions, or, one might say, to compose on the scale of the narrative.
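As an illustration only, the principle described above (a labeled musical memory navigated by a high-level scenario, preferring contiguous material so as to preserve the original context of each fragment) can be sketched in a few lines of Python. All names, the toy data, and the navigation heuristic here are assumptions for the sake of the sketch; this is not the actual DYCI2 code or API.

```python
# Minimal sketch of scenario-guided navigation over a labeled "musical
# memory", in the spirit of the generative agents described above.
# Labels stand in for analysis results (e.g. melodic or timbral classes);
# positions stand in for audio segments extracted from the lullabies.
from collections import defaultdict

def build_memory(segments):
    """Index memory segments by their analysis label."""
    index = defaultdict(list)
    for position, label in enumerate(segments):
        index[label].append(position)
    return index

def navigate(segments, index, scenario):
    """For each step of the scenario, prefer the segment that continues
    the previous one contiguously in the memory (keeping the original
    context intact); otherwise jump to any segment with the right label."""
    path, prev = [], None
    for label in scenario:
        candidates = index.get(label, [])
        if not candidates:
            path.append(None)   # no material in memory for this label
            prev = None
            continue
        chosen = next(
            (p for p in candidates if prev is not None and p == prev + 1),
            candidates[0],
        )
        path.append(chosen)
        prev = chosen
    return path

# A toy memory: label sequence of one analyzed lullaby.
memory = ["A", "B", "B", "C", "A", "C"]
idx = build_memory(memory)
# A high-level scenario: the temporal evolution specified by the composer.
print(navigate(memory, idx, ["A", "B", "B", "C"]))  # [0, 1, 2, 3]
```

In this toy run the scenario happens to match a contiguous passage of the memory, so the agent replays it in order; a scenario with no contiguous match would instead be assembled from jumps between similar fragments, which is what lets many such agents weave heterogeneous material into one structured cloud.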

Photos: Lullaby Experience and Pascal Dusapin © Quentin Chevrier

