Soundstack 2018

A free three-day series of workshops & masterclasses on the art and technologies of spatial audio, from Ambisonics to object-based audio, interactivity to synthesis, 5-7 Oct 2018 in London. Apply to attend (any/all days) before 25 Sept here, find out more below, or contact a[dot]mcarthur[at]qmul.ac.uk with questions. This is an intermediate-level event and requires some understanding of the following:

  • Max MSP

  • Pure Data

  • Unity

  • Spatial audio

Friday 5th Oct 2018 @ Call & Response Enclave, Deptford, London SE8 4NT & Saturday 6th – Sunday 7th Oct 2018 @ Queen Mary University of London, Mile End Campus, London E1 4NS

These workshops will introduce you to four artist-engineers working across binaural and multichannel audio, at the cutting edge of 3D and interactive applications for VR, AR, installations and performance. Over three days you will hear about specific software and techniques, as well as the aesthetic potential of working with immersive audio in fixed and real-time settings. You will receive hands-on instruction as well as demonstrations of work. By the end of the workshops you will have a better understanding of how to spatially compose sound, how to use the open-source Heavy compiler to work with Pure Data in environments like Unity, how to build a wearable binaural audio system with head-tracking in Pure Data using the Bela platform, and how to use IRCAM’s ‘Spat’ suite.

FRIDAY 5th October

Spatial aesthetics + performance, Friday 5th Oct, 10am – late, with Tom Slater (workshop) and Alo Allik (performance) @ Call & Response Enclave, Deptford, London SE8 4NT

This one-day workshop will give the sound-curious the opportunity to learn a range of new skills in 3D audio. Working with free software, we use an open workshop model, where participants work towards self-directed projects. Participants will be able to diffuse their own work over C&R’s unique 3D 15-speaker sound system and take part in an informal exhibition at our space. Areas covered include: binaural and Ambisonic recording with 3D microphones, Ambisonic processing and spatialisation, immersive sound installation design, and generative spatialisation techniques.
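To give a flavour of what Ambisonic spatialisation involves under the hood, here is a minimal Python sketch (illustrative only, not workshop material) of first-order B-format encoding in the traditional FuMa convention; the function name and example values are our own.

    # Minimal sketch: encode a mono signal to first-order Ambisonics (B-format),
    # FuMa convention. Angles are in radians; the names here are illustrative.
    import numpy as np

    def encode_bformat(signal, azimuth, elevation):
        w = signal / np.sqrt(2.0)                           # omnidirectional
        x = signal * np.cos(azimuth) * np.cos(elevation)    # front-back
        y = signal * np.sin(azimuth) * np.cos(elevation)    # left-right
        z = signal * np.sin(elevation)                      # up-down
        return w, x, y, z

    # Example: a 440 Hz tone placed 90 degrees to the left, at ear height.
    sr = 44100
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)
    w, x, y, z = encode_bformat(tone, azimuth=np.pi / 2, elevation=0.0)

The four encoded channels can then be rotated, transformed and finally decoded to any loudspeaker layout, which is what makes the format attractive for a system like C&R’s 15-speaker array.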

Tom Slater is an artist and researcher who works with digital media and physical computing to build immersive audiovisual environments. Currently a director of Call & Response and PhD researcher at University College Falmouth, Tom’s creative practice revolves around how sound and image producing technologies affect our understanding of spatial dis/embodiment.

Alo Allik is an Estonian sound artist who has performed his live-coded electronic music and generative computer graphics throughout the world. His aesthetically and geographically restless lifestyle has enabled him to traverse a diverse range of musical worlds, including DJing electronic dance music, live electronic jam sessions, electroacoustic composition, free improvisation and audiovisual performances. Along the way he has forged collaborations with a number of curious and innovative musicians, writers and visual artists, exploring links between technology, creativity and tradition. In recent years he has been an active participant in the Algorave movement, developing a style he describes as noisefunk, which combines traditional rhythm patterns with evolutionary synthesis algorithms. Currently, Alo works as a researcher and lecturer in London, while continuing to perform his music and visuals to audiences worldwide.

SATURDAY 6th October

Bela, Pure Data & head-tracking Saturday 6th Oct, 10am – 1pm with Becky Stewart @ Queen Mary Uni London, Mile End Campus E1 4NS

Learn how to use Bela Mini, an embedded platform for low-latency audio signal processing, to generate interactive binaural audio. You will start by learning how to program Bela using Pure Data, and then how to have the code interact with the physical world using custom sensors. Using paper craft and circuitry, you will get to develop your own creative ideas and start prototyping a wearable musical performance or installation.
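The workshop patches themselves are built graphically in Pure Data, but the core head-tracking idea can be sketched in a few lines of Python (the function name is ours, not part of the Bela API): subtract the sensor’s yaw reading from each source’s world azimuth, so the binaural renderer always receives head-relative angles and sources stay fixed in space as the head turns.

    # Illustrative sketch (hypothetical names, not the Bela or workshop API):
    # keep sources world-fixed by subtracting head yaw before binaural rendering.
    def head_relative_azimuth(source_azimuth_deg, head_yaw_deg):
        rel = source_azimuth_deg - head_yaw_deg
        return (rel + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

    # If the head turns 30 degrees to the right, a source straight ahead
    # should now appear 30 degrees to the left.
    print(head_relative_azimuth(0.0, 30.0))    # -30.0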

Becky Stewart is a lecturer in the School of Electronic Engineering and Computer Science at Queen Mary University of London where she is a member of the Centre for Digital Music and Centre for Intelligent Sensing. In 2011 she co-founded Codasign, an arts technology company that taught children and adults how to use code and electronics in creative projects. Since returning to Queen Mary in 2016, her research has examined the intersection of wearable computing, e-textiles, and spatial audio with a focus on interactive technologies for the performing arts.

https://github.com/theleadingzero/belaonurhead/wiki/Soundstack-2018-Workshop

https://bela.io/

100% organic interactive audio for games and VR using Unity and the Heavy compiler (hvcc)
Saturday 6th Oct, 2pm – 5.30pm with Chris Heinrichs @ Queen Mary Uni London, Mile End Campus E1 4NS

This workshop covers the full workflow of designing, compiling and integrating interactive sound assets for Unity using the newly open-sourced Heavy compiler from the late Enzien Audio. After a brief introduction to computationally generated audio and its use cases, we will go through the process of designing a simple but flexible sound model. We will then use the hvcc Python tool:

https://github.com/enzienaudio/hvcc

https://github.com/enzienaudio

This tool will convert the model into an audio plugin for Unity. Within Unity, we’ll discover some of the interesting things we can accomplish with simple scripts and interactions, and how each instance of our model can be spatialised binaurally in order for us to begin converging on optimal hyperreal sonic sensation.
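For orientation, the compile step is a single command-line invocation; the sketch below wraps it in Python. The patch name and output path are placeholders, and the flags follow the hvcc README of the time (-n patch name, -g generator, -o output directory), so check the repository above for the current interface.

    # Hedged sketch: compile a Pd patch into a Unity native audio plugin with
    # hvcc. Assumes hvcc is on the PATH (at the time, it was run as a Python
    # script from the cloned repository); names and paths are placeholders.
    import subprocess

    subprocess.run(
        ["hvcc", "MyPatch.pd",   # root Pd patch of the sound model
         "-n", "MyPatch",        # name for the generated plugin code
         "-g", "unity",          # target the Unity generator
         "-o", "build"],         # output directory
        check=True,
    )

In Heavy, control parameters are exposed through specially annotated receivers in the patch, which is what enables the per-instance scripting and binaural spatialisation described above.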

Chris Heinrichs is a ‘signal artist’, or technical sound designer, based in London and Malta. He is a former co-director of Enzien Audio and a PhD graduate of Queen Mary (‘Human Expressivity in the Control and Integration of Computationally Generated Audio’). He makes a living designing sonic interaction using computational (aka procedural) audio methods. Artist by night.

Video: ‘PANow – Enzien Audio: Heavy’ from Graham Gatheral on Vimeo.

SUNDAY 7th October

Introduction to Spat, Sunday 7th Oct, 10am – 5pm with Thibaut Carpentier @ Queen Mary Uni London, Mile End Campus E1 4NS

Thibaut will present an introductory workshop on IRCAM Spat. Spat is a real-time spatial audio processor that allows composers, sound artists, performers and sound engineers to control the localisation of sound objects in 3D auditory spaces. The hands-on session will cover the practical implementation and usage of panning techniques (VBAP, Ambisonics, binaural, etc.), reverberation (convolution or parametric), object-based production, spatial authoring, 3D mixing and post-production. Participants are required to be conversant with digital audio tools and fluent in Max programming.
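As background to the panning portion, here is a minimal Python sketch (illustrative only, not Spat’s implementation) of 2D pairwise VBAP: solve for the two loudspeaker gains whose gain-weighted sum of speaker direction vectors points at the source, then normalise for constant power.

    # Illustrative 2D pairwise VBAP sketch (not Spat's implementation).
    import numpy as np

    def vbap_2d(source_deg, spk1_deg, spk2_deg):
        vec = lambda d: np.array([np.cos(np.radians(d)), np.sin(np.radians(d))])
        L = np.column_stack([vec(spk1_deg), vec(spk2_deg)])  # speaker basis
        g = np.linalg.solve(L, vec(source_deg))              # raw gains
        return g / np.linalg.norm(g)                         # constant power

    # Source at 15 degrees, between speakers at -30 and +30 degrees:
    print(vbap_2d(15.0, -30.0, 30.0))   # roughly [0.34, 0.94]

Full 3D VBAP generalises this from loudspeaker pairs to triplets, one of several panning approaches covered in the session.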

Thibaut Carpentier studied acoustics at the Ecole Centrale and signal processing at ENST Télécom Paris, before joining CNRS (French National Center for Scientific Research) as a research engineer. Since 2009, he has been a member of the Acoustics & Cognition team at IRCAM. His work focuses on sound spatialization, artificial reverberation, room acoustics, and computer tools for 3D composition and mixing. He is the lead developer and head of the Spat project as well as the 3D mixing and post-production workstation Panoramix. In 2018, he was awarded the CNRS Cristal medal.