Soundstack 2021

Soundstack is a free event focused on the art and technologies of spatial sound. This year, on Friday 26th March, Soundstack brings you an online workshop on audio-visual point clouds in Unity using photogrammetry, as well as a session dedicated to spatial sound in the browser (something we all now need to do, and that is unlikely to change).

This is an intermediate-level event, and requires some understanding of spatial sound. The sessions will introduce you to artist-engineers working at the cutting edge of spatial sound for VR, AR, installations and performance.

You will hear about specific software and techniques, as well as the aesthetic potential of working with immersive sound in fixed and real-time settings. The workshop offers hands-on instruction, demonstrations of work, and discussion.

Soundstack will help you have a better understanding of how to approach sound as space. Due to online delivery, the usual limitations on numbers don’t apply – so spread the word, and join in.

Schedule (all times GMT)

Session A – Audiovisual Point Clouds workshop (details below)

10:00 – 10:15     Introductions

10:15 – 11:15     3D scanning using your phone 

11:15 – 11:45     Rendering your scan data / video presentations

11:45 – 12:45     Preparing your data for Unity

12:45 – 13:15     Break / video presentations of spatial sound hubs

13:15 – 14:45     Using your data in Unity + Q&A

Register to attend as a passive participant:

https://www.eventbrite.co.uk/e/intersections-2021-soundstack-audiovisual-point-clouds-workshop-tickets-143579643579

OR

Apply to attend as an active participant (details below)

Schedule (all times GMT)

Session B – Spatial sound through the browser

15:00 – 15:15        Introduction to the session

15:15 – 16:00        Part 1 ‘In practice: case studies’ (Assembly 2020 from Call & Response with Tommie Introna from Black Shuck using Google’s Omnitone + Cobi van Tonder from Acoustic Atlas using the Web Audio API)

16:00 – 16:30        Part 2 ‘Ambisonics through the browser’ (IEM’s HOAST with Thomas Deppisch & Nils Meyer-Kahlen + Leslie Gaston-Bird + Envelop’s Earshot with Christopher Willits & Roddy Lindsay)

16:30 – 17:10        Part 3 ‘Web Audio API’ (Queen Mary University of London’s Josh Reiss with Nemisindo + Imperial College London’s Lorenzo Picinali with Pluggy Project + High Fidelity’s Philip Rosedale with Spatial Audio API + Leslie Gaston-Bird)

17:10 – 17:15        Wrap up

Register to attend:

https://www.eventbrite.co.uk/e/intersections-2021-soundstack-spatial-sound-through-the-browser-tickets-142392376431


Session A – Workshop – Audiovisual Point Clouds 10:00 – 14:45

Facilitator: Kathrin Hunze

This workshop will enable participants to make 3D scans of any objects they have access to, using free software and their smartphones. We will then bring these objects into the game engine Unity as photogrammetry data. Finally, participants will be able to manipulate these data to create fully three-dimensional digital artwork, which can be taken in any number of creative directions. You will learn about Unity templates, the use of plugins, audiovisual environments in Unity, and 3D scanning with a mobile phone.

Participation:

You can either watch the session (passive participation) with an opportunity to ask questions, or get feedback from Kathrin as you go (active participation) by applying for a place. Places are limited and will require you to submit a Unity project to demonstrate your basic Unity skills.

Participant requirements:
          • Good base knowledge of Unity
          • Basic C# programming skills
Hardware:
          • Smartphone
          • A second screen for better workflow (recommended, not essential)
          • A computer with access to the internet, and sufficient CPU to run Unity, Zoom, and third-party software in real-time
Software:
          • Unity (version to be announced after the application process)
          • MeshLab
          • CloudCompare
          • Regard3D
          • + your own sound materials
          • (all software is free)


 

Active participant application process (interactive workshop)

To take part in the workshop as an active participant, you need basic experience in Unity. To assess this, we need:

1. A short written statement about you and your experience with Unity (max 1,000 characters)
2. A video recording or photos (combined into one .pdf file, max 10 MB) of your best Unity project
3. Optional: a website link
4. Your email address

Please send all information by email or WeTransfer link by 21 March 2021 to: kh[at]raumperspektive.com

If your application is accepted, you will receive a link with further instructions for the workshop, including links to the software and the Unity version number. If you already have Unity installed, it’s a good idea to install Unity Hub so you can run multiple versions of Unity on the same computer.

 

Register to attend as a passive participant (view-only):

https://www.eventbrite.co.uk/e/intersections-2021-soundstack-audiovisual-point-clouds-workshop-tickets-143579643579

Kathrin Hunze is a media artist and artistic researcher. She studied Sound Design and Communication Design at the Hamburg University of Applied Sciences (2016), is a graduate of the Art and Media degree programme at the Berlin University of the Arts (2019), and a distinguished graduate of the same programme (2020). She has held residencies at the Academy of Applied Arts Vienna (2020) and the Institute for Electronic Music and Acoustics (IEM) (2019). She lectures in Art and Media, Fashion Design, and Computation & Design at the Berlin University of the Arts, and in Computing and the Arts at the Berlin School of Popular Arts. She lives and works in Berlin.

 

Session B – Spatial sound through the browser

The past year has necessitated new ways of working spatially, specifically through the browser. Yet this is a layer of additional practice and learning which can be an obstacle for the creative practitioner. What tools are available to overcome the limitations of defaulting to static binaural renders of spatial sound works? What ecosystems do these tools sit within, or integrate with? Where should we start, and what do we need to know (how much additional time do we need to invest in learning the technologies, can we integrate existing workflows, and what outputs are possible if we work through the browser)? These questions will be tackled during Soundstack’s afternoon session.

Register here:

https://www.eventbrite.co.uk/e/intersections-2021-soundstack-spatial-sound-through-the-browser-tickets-142392376431

Session B – Spatial sound through the browser – speaker bios

Part 1 ‘In practice’: case studies 15:15 – 16:00

Tommie Introna

Tommie collaborates with artists, working predominantly with sound and programming. He is a member of Black Shuck, a co-operative that produces moving image, audio and digital projects. He also works with young people, facilitating peer-led artistic projects.

https://blackshuck.co/projects/ 

https://callandresponse.org.uk/ 

Cobi van Tonder

Dr Cobi van Tonder is a creator, composer, and Marie Skłodowska-Curie Research Fellow at the University of York. Acoustic Atlas is a browser-based platform for virtual acoustic simulations of natural and cultural heritage sites. Many heritage sites are documented in great detail from a visual perspective, but little sonic data exists. Acoustic Atlas is a collaborative archive in progress for acousticians, archaeologists, and sound artists to share sound data as immersive, real-time auralisations.

Listen with headphones (non-Bluetooth is best, to avoid feedback) here:

https://acousticatlas.de/experience/ 

More info:

https://acousticatlas.info/
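At its core, an auralisation of this kind is the real-time convolution of a dry input signal with a measured room impulse response; in the browser this is typically handled by the Web Audio API’s ConvolverNode. As a minimal sketch of the underlying operation (with toy placeholder signals, not Acoustic Atlas data):

```javascript
// Direct-form convolution of a dry signal with a room impulse response (IR).
// Production code runs this block-by-block (e.g. via Web Audio's ConvolverNode);
// the arrays here are illustrative placeholders only.
function convolve(dry, ir) {
  const out = new Float64Array(dry.length + ir.length - 1);
  for (let i = 0; i < dry.length; i++) {
    for (let j = 0; j < ir.length; j++) {
      out[i + j] += dry[i] * ir[j]; // each dry sample triggers a scaled copy of the IR
    }
  }
  return out;
}

// A unit impulse convolved with an IR returns the IR itself:
const ir = [1.0, 0.6, 0.3];          // toy "room" impulse response
const wet = convolve([1, 0, 0], ir); // -> [1.0, 0.6, 0.3, 0, 0]
```

The longer and denser the measured impulse response, the more of the space’s reverberant character the convolution carries into the output.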


Part 2 ‘Ambisonics through the browser’ 16:00 – 16:30

Thomas Deppisch 

Thomas is a PhD student at the Applied Acoustics division of Chalmers University of Technology, working in the realm of spatial audio.

https://github.com/thomasdeppisch 

http://www.ta.chalmers.se/people/thomas-deppisch/ 

Nils Meyer-Kahlen

Nils is a PhD student at the Aalto Acoustics Lab. Since his master’s, his main interest has been spatial audio processing and perception. His current aim is to faithfully reproduce the acoustics of different spaces for mixed reality.

Leslie Gaston-Bird

Leslie (AMPS, MPSE) is a Dante Level-3 Certified audio engineer specializing in 5.1 re-recording mixing (dubbing) and sound editing. She is a former Governor-at-Large for the Audio Engineering Society, and author of the book Women in Audio. She is a member of the Recording Academy (The Grammys®), a member and councilperson of the Association of Motion Picture Sound (AMPS), and member of Motion Picture Sound Editors (MPSE). She has worked for National Public Radio (Washington, D.C.), Colorado Public Radio, the Colorado Symphony Orchestra, Post Modern Company, and was a tenured Associate Professor at the University of Colorado Denver.

https://sites.google.com/view/mixmessiahproductions/about 

Christopher Willits

Christopher Willits is a pioneering electronic musician, producer, educator, and co-founder & director of Envelop, a nonprofit with the mission to unite people through immersive listening experiences. As one of the core artists on the Ghostly International label, Willits’ immersive ambient music has reached millions of listeners and includes collaborations with Ryuichi Sakamoto and Tycho.

https://www.envelop.us/ 

https://www.christopherwillits.com/ 

Roddy Lindsay

Roddy is an entrepreneur and software engineer. He is a co-founder and board member of Envelop, and a co-founder of Hustle. He performs live immersive electronic music as The Ride.

https://www.envelop.us/


Part 3 ‘Web Audio API’ 16:30 – 17:10 

Josh Reiss

Professor, Queen Mary University of London / Co-founder, Nemisindo

Josh Reiss is a Professor with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers, and co-authored the book Intelligent Music Production and the textbook Audio Effects: Theory, Implementation and Application. He is the President-Elect and a Fellow of the Audio Engineering Society (AES). He co-founded the highly successful spin-out company LandR; his second start-up, Tonz, has received investment; and he recently launched a third, Nemisindo.

https://www.eecs.qmul.ac.uk/~josh/ 

Lorenzo Picinali

Lorenzo is a Reader in Audio Experience Design and leads the Audio Experience Design (AXP) research theme within the Dyson School of Design Engineering. He has previously worked in Italy (Università degli Studi di Milano), France (LIMSI-CNRS and IRCAM) and the UK (De Montfort University and Imperial College London) on projects related to 3D binaural sound rendering, interactive applications for visually and hearing-impaired individuals, audiology and hearing aid technology, audio and haptic interaction and, more generally, acoustic virtual and augmented reality. His recent research has focused mainly on the implementation of a binaural spatialisation tool, which also integrates a hearing loss simulation and virtual hearing aids, and on the selection, evaluation and adaptation of Head-Related Transfer Functions.

http://www.imperial.ac.uk/people/l.picinali 

www.imperial.ac.uk/design-engineering-school 

Philip Rosedale

Philip Rosedale is the co-founder and CEO of High Fidelity. The company’s API allows developers to integrate its patented real-time spatial audio (originally developed for immersive VR experiences) into their apps, games, and websites. In 1995, Rosedale created FreeVue, one of the first internet videoconferencing apps, which was acquired by RealNetworks. In 1999 he founded Linden Lab, creators of Second Life, which has become a home for millions of people and has a multi-billion-dollar virtual economy. Philip holds a B.S. in Physics from the University of California, San Diego.

https://www.highfidelity.com/

Leslie Gaston-Bird

(See Leslie’s bio under Part 2 above.)


Spatial sound through the browser – software information

Ambisonics

Organisation: IEM

App: HOAST

HOAST360 is an open-source, higher-order Ambisonics 360° video player with acoustic zoom. It dynamically renders a binaural audio stream from up to fourth-order Ambisonics audio content. Technical details are explained in an AES eBrief.

https://hoast.iem.at/ 

https://github.com/thomasdeppisch/hoast360 

https://www.aes.org/e-lib/browse.cfm?elib=20828 

Organisation: Envelop

App: Earshot

A free and open-source transcoder for live-streaming higher-order Ambisonics. It is based on nginx, MPEG-DASH, and the Opus codec, which supports up to 255 audio channels (or 14th-order Ambisonics).

Earshot comes with an intuitive web application that allows developers to debug and monitor their multichannel audio DASH streams, and easily test different dash.js client settings to optimize their end user experience.

https://www.envelop.us/software 

https://github.com/EnvelopSound/Earshot 
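The channel figures above follow from the standard Ambisonics relationship: a full-sphere order-N stream needs (N + 1)² channels, so Opus’s 255-channel ceiling accommodates up to 14th order (225 channels). A quick check:

```javascript
// Channels required for full-sphere Ambisonics of a given order: (N + 1)^2.
// e.g. 1st order = 4 channels, 4th order = 25, 14th order = 225.
function ambisonicChannels(order) {
  return (order + 1) ** 2;
}

// Highest Ambisonics order that fits within a codec's channel limit.
// Opus allows up to 255 channels, so 14th order (225) fits but 15th (256) does not.
function maxOrderForChannelLimit(limit) {
  let n = 0;
  while (ambisonicChannels(n + 1) <= limit) n++;
  return n;
}

// maxOrderForChannelLimit(255) -> 14
```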

Organisation: Google

App: Omnitone

Omnitone is a JavaScript implementation of an ambisonic decoder that lets you binaurally render an ambisonic recording directly in the browser.

https://googlechrome.github.io/omnitone/#home 

https://opensource.googleblog.com/2016/07/omnitone-spatial-audio-on-web.html

Web Audio API

Organisation: Queen Mary University of London

App: Nemisindo

Nemisindo Ltd is a high-tech start-up, spun out from academic research, offering sound design services based on innovative procedural audio techniques (see https://youtu.be/Jjzvlshr_Go). They recently secured an Epic MegaGrant to provide procedural audio for the Unreal game engine, and their online system offers real-time sound effect synthesis in the browser. The system comprises a multitude of synthesis models, with post-processing tools (audio effects, temporal and spatial placement, etc.) for users to create scenes from scratch. Each model can generate sound in real time, allowing the user to manipulate multiple parameters and shape the sound in different ways.

https://nemisindo.com/ 
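Procedural audio of this kind synthesises effects from parametric models rather than playing back recordings. As a generic illustration only (this is not Nemisindo’s implementation), a parametric noise burst of the kind underlying many impact- and wind-style models might look like:

```javascript
// Generic procedural-audio sketch (NOT Nemisindo's code): a white-noise burst
// shaped by an exponential decay envelope. Changing the parameters (duration,
// decay rate) reshapes the resulting sound, rather than swapping sample files.
function noiseBurst(sampleRate, durationSec, decayRate) {
  const n = Math.floor(sampleRate * durationSec);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const env = Math.exp(-decayRate * (i / sampleRate)); // decaying envelope
    out[i] = (Math.random() * 2 - 1) * env;              // enveloped white noise
  }
  return out;
}

// A 100 ms burst at 48 kHz with a fast decay; in the browser this buffer
// would be copied into an AudioBuffer and played through the Web Audio graph.
const burst = noiseBurst(48000, 0.1, 60);
```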

Organisation: Imperial College London

App: Pluggy / PlugSonic

Pluggy is a web app that allows users to import their own audio files (only MP3 is supported), create soundscapes, and interact with them (it is hosted on Heroku). PlugSonic is a suite of web- and mobile-based applications for the curation and experience of 3D interactive soundscapes and sonic narratives in (and beyond) the cultural heritage context.

Project page https://www.pluggy-project.eu/plugsonic/ 

PlugSonic Soundscape Web: https://pluggy-plugsonic.herokuapp.com/

PlugSonic Sample: http://plugsonic.pluggy.eu/sample

Two short demos:

https://imperialcollegelondon.app.box.com/s/pqusrapf7u31nbu09p437esyfdlez77m

https://imperialcollegelondon.app.box.com/s/pobay28g1ogjfu198fdzhbky1qavkl0x  

Two brief conference papers describing these functionalities, together with those of a web-based audio editor created with the Web Audio API:

https://secure.aes.org/forum/pubs/conferences/?elib=20435 

https://www.mdpi.com/2076-3417/11/4/1540 

Organisation: High Fidelity

App: Spatial Audio API

A real-time spatial audio API for websites, apps, games, etc.

https://www.highfidelity.com/api 

https://www.highfidelity.com/zaru (demo)

“You may be surprised — our API is super simple. We have had people get a simple web app up and running within 15 min! Here’s a link to a number of sample Guides”: https://www.highfidelity.com/api/guides