A free two-day, two-track series of workshops, concerts & masterclasses on the art and technologies of spatial sound.
This year Soundstack is on Friday 8th and Saturday 9th November 2019, at Queen Mary University of London.
Apply to attend here before 21 October. Places are free, limited and popular, so the more effort you put into your application, the better your chances!
This is an intermediate-level event, and requires some understanding of spatial sound.
These workshops will introduce you to seven artist-engineers working at the cutting edge of spatial sound for VR, AR, installations and performance.
You will hear about specific software and techniques, as well as the aesthetic potential of working with immersive sound in fixed and real-time settings.
You will have hands-on instruction, as well as a demonstration of work, discussions and an evening performance.
Soundstack will help you better understand how to approach sound as space.
We will:
premiere the IKO loudspeaker array in the UK
+ use SuperCollider in Unity for audioreactive photogrammetry
+ use the Steno language to build spatial synthesis engines and filters
+ approach spatial sonification and sound interaction design
+ work with the Media & Arts Tech (MAT) programme’s 24-channel speaker system
This year Soundstack will focus on the aesthetics of sound as space, in an attempt to address this often under-considered area. Relative to the technologies with which we can create spatial sound, aesthetic strategies and conceptual frameworks seem undeveloped, scarce, and disparate.
How can this area be moved forward, and by whom? Expect bespoke and specialist workshops, great people and, as always, a free-to-all event.
The invited artists this year are Till Bovermann, Giulia Vismara, Paul Vickers, Kathrin Hunze, Thomas Hack and Gerriet K. Sharma, joined by Soundstack founder Angela McArthur.
Friday 8th November
09:00 – 13:00 Systems∿Encounter with Steno — Spatial edition with Till Bovermann (required: SuperCollider + headphones + laptop)
14:00 – 18:00 Spatial Entities with Giulia Vismara (required: Reaper + laptop + headphones)
10:00 – 18:00 Sonification and sonic interaction design for space with Paul Vickers (no requirements)
Saturday 9th November
10:00 – 17:00 Audioreactive Photogrammetry with Kathrin Hunze & Thomas Hack – Create and control point clouds in Unity with audio signals and OSC (required: Unity (2018.3 or later) + SuperCollider + Regard3D + laptop + mobile phone + headphones + game controller (optional) – see workshop description for more info – all software is free and, with the exception of Unity, can be downloaded during the workshop)
11:00 – 17:00 Projections of a shared Now – Approaching Sound as Space using the IKO loudspeaker with Gerriet Sharma & Angela McArthur (no requirements)
This year, Soundstack is co-located with the ‘Absurd Musical Interfaces‘ hackathon run by Giacomo Lepri from the Augmented Instruments Lab at Queen Mary University of London.
This means you’ll have some shared evening events and a chance to meet more people.
Friday 8th November
09:00 – 13:00
Systems∿Encounter with Steno — Spatial edition with Till Bovermann (required: SuperCollider + headphones + laptop)
Using Steno, a little language for spinning long strings of synths, we will build a pool of spatial synthesis engines and filters to explore the complex interplay of “life-(inter-)connection” synthesis.
Since this workshop relies partly on livecoding techniques in SuperCollider, basic knowledge of software sound synthesis in e.g. Max/MSP, Pd or SuperCollider is recommended but not required.
Friday 8th November
14:00 – 18:00
Spatial Entities with Giulia Vismara (required: Reaper + laptop + headphones)
The recent diffusion of 3D audio technologies shifts the focus from the idea of ‘sound in space’ to that of ‘sound as a space’, in which the body, as an organism, actively participates. In the wake of the complex spatialities resulting from this change of perspective, questions about the perception and concept of space come to the foreground, along with the possibility of new approaches.
During the workshop we will explore and reflect on different ways of creating sonic spaces. Focusing on the emerging interactions between technologies, space, sound and body, we will refine strategies for working with spatial sound. Keywords: spatial imagery, spatial properties, spatialization techniques, spatial aesthetics, binaural, ambisonics. Bring your own sounds!
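Not part of the workshop materials, but as a minimal sketch of one of the spatialization techniques named in the keywords, here is an equal-power stereo panner in Python. The function name and values are our own illustrative choices, not anything specified by the workshop:

```python
import math

def equal_power_pan(sample: float, pan: float) -> tuple:
    """Equal-power stereo panning: pan in [-1, 1], -1 = hard left, 1 = hard right.

    The constant-power law keeps perceived loudness roughly constant as a
    source moves across the stereo field: for a unit-amplitude sample,
    left**2 + right**2 == 1 at every pan position.
    """
    theta = (pan + 1) * math.pi / 4          # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

# A centred source (pan = 0) lands equally in both channels, about -3 dB each.
left, right = equal_power_pan(1.0, 0.0)
```

Running the same sample through a simple linear crossfade instead would dip in loudness at the centre, which is why the cosine/sine law is the usual choice.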
Friday 8th November
10:00 – 18:00
Sonification and sonic interaction design for space with Paul Vickers (no requirements)
Sonification represents (signifies) data through sound (like a Geiger counter for data). However, our understanding of the role played by aesthetics, embodiment, and spatial considerations in our listening to, and perception of, sonification is very limited. These aspects of sonification cannot be addressed except through an interdisciplinary approach.
In this cross-disciplinary workshop we will first introduce the notion of a shared perceptual space (SPS) as a phenomenal and conceptual framework through which sound masses can be perceived by sonification designers, scientists, artists and listeners.
Then we will introduce the concept of the Subject Position and how it might be used to ground sonification design. Finally, we will explore and reflect on how the issues raised above can be used to drive thinking in sonification design and listening.
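To make the "Geiger counter for data" idea concrete, a common starting point is parameter-mapping sonification: each data value is mapped onto a sound parameter such as pitch. The sketch below (our own illustration, not workshop material; the data and frequency bounds are arbitrary) maps a data series onto a two-octave range:

```python
def map_to_frequency(value: float, lo: float, hi: float,
                     f_min: float = 220.0, f_max: float = 880.0) -> float:
    """Parameter-mapping sonification: linearly map a data value in
    [lo, hi] onto a frequency range (here two octaves, A3 to A5)."""
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

# Hypothetical data series; higher readings sound as higher pitches.
data = [12.0, 30.0, 48.0]
freqs = [map_to_frequency(v, min(data), max(data)) for v in data]
# freqs -> [220.0, 550.0, 880.0]
```

The design questions the workshop raises start exactly here: a linear pitch mapping is easy, but whether it is perceptually meaningful, embodied, or aesthetically workable is a separate matter.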
Saturday 9th November
10:00 – 17:00
Audioreactive Photogrammetry with Kathrin Hunze & Thomas Hack (required: Unity (2018.3 or later) + SuperCollider + Regard3D + laptop + mobile phone + headphones + game controller (optional). All software is free and, with the exception of Unity, can be downloaded during the workshop)
In this workshop we will create 3D visual worlds in Unity that react to direct audio input or to OSC messages sent from SuperCollider, in order to investigate the potential of photogrammetry as an artistic tool in the generative arts.
To this end we will create 3D point clouds from 2D images taken with e.g. a mobile phone, import these point clouds into Unity, explore ways to animate them, and then control these animations with audio or OSC input.
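The SuperCollider-to-Unity link described above runs over OSC (Open Sound Control), a simple UDP-friendly message format. As an illustration of what travels over the wire (not workshop material; the `/pointcloud/amp` address is a made-up example), here is a minimal OSC 1.0 message encoder in Python:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC aligns every field to 4 bytes, padding with zero bytes."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying 32-bit float arguments.

    Per the OSC 1.0 spec: the address and type-tag strings are
    NUL-terminated and 4-byte aligned; each float is big-endian IEEE 754.
    """
    addr = osc_pad(address.encode() + b"\x00")
    tags = osc_pad(("," + "f" * len(floats)).encode() + b"\x00")
    args = b"".join(struct.pack(">f", f) for f in floats)
    return addr + tags + args

# e.g. an amplitude value sent to a hypothetical /pointcloud/amp
# receiver in Unity, typically via a UDP socket.
packet = osc_message("/pointcloud/amp", 0.5)
```

In practice SuperCollider's built-in `NetAddr.sendMsg` produces packets of exactly this shape, and an OSC library on the Unity side decodes them back into address + arguments.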
Technology-wise we will focus mostly on Unity, so previous experience with Unity is helpful but not required. We will not provide a thorough introduction to SuperCollider, but since the connection to SuperCollider is only a small part of the workshop, previous experience with it is not an essential prerequisite for successful participation.
NOTE: Please install Unity PRIOR to the workshop, as it is a very large download.
Saturday 9th November
11:00 – 17:00
Projections of a shared Now – Approaching Sound as Space using the IKO loudspeaker with Gerriet K. Sharma & Angela McArthur (no requirements)
In electroacoustic composition, spatial computer music, 3D audio systems and auditory virtual environments, we deal with spatial sound phenomena and spatial dimensions like proliferation, width, height etc. Sound masses as phenomena are perceived by composers, scientists and audiences causing ‘something’ we call a shared perceptual space (SPS).
For the composer, a communicable or self-explanatory composition of plastic sound objects is conceptually, theoretically and practically useful when faced with changing architectural spaces, cultural descriptions of space and spatial perceptions. Agreeing on parameters for such an intersubjective space is key. Where does the composer’s perception in the compositional process overlap with both the engineer’s and the audience’s? How, and from which sides (linguistic, technical, artistic, etc.), can this field be approached?
In two sessions and a recital we will introduce practical and theoretical research results from the past 10 years of working with the IKO, to develop a basic understanding of verbalization and sculpturality in so-called immersive environments. In doing so, we want to foster the use of the term SPS as an aesthetic strategy in a variety of fields.
Gerriet K. Sharma is a composer and sound artist. Over the last 15 years he has been deeply involved in the spatialization of electroacoustic compositions in Ambisonics and Wave Field Synthesis, and their transformation into 3D sound sculptures.
From 2009 to 2015 he was curator of the “signale-graz” concert series for electroacoustic music, algorithmic composition, radio art and performance at the MUMUTH Graz. His works have been presented at venues and events worldwide: compositions for the icosahedral loudspeaker (IKO) and loudspeaker hemisphere were presented at the Darmstadt Summer Courses 2014, the Music Biennale Zagreb 2015, the New York City Electroacoustic Music Festival 2009/16, and the Sound and Music Computing Conference (SMC) Hamburg 2016. He has received numerous awards and grants, including scholarships from the German Academic Exchange Service (DAAD) in 2007 and 2009, and the German Sound Art Award in 2008. In spring 2014 he was composer in residence at ZKM Karlsruhe, Germany. He was senior researcher and composer within the three-year artistic research project “Orchestrating Space by Icosahedral Loudspeaker” (OSIL), funded by the Austrian Science Fund (FWF), which produced 40 publications, over 20 lectures and 12 internationally premiered compositions. He was appointed DAAD Edgar Varèse Guest Professor at the Electronic Music Studio, Audio Communication Group (AK), TU Berlin for the winter semester 2017/18. Since 2019 he has been working on a book on “spatial practices“ as well as new compositions, exhibitions and a lecture series.
Till Bovermann is an artist and scientist working with the sensation of sound and interaction. He studied Computer Science in the Natural Sciences, majoring in Robotics, at Bielefeld University, where he also received his PhD. During his postdoc at the Media Lab of Aalto University he initiated DEIND, a project aimed at designing instruments for people on the autistic spectrum. Till was principal investigator of the 3DMIN project at UdK Berlin. Since 2018 he has worked on the art-science project “rotting sounds” at the University of Applied Arts Vienna.
Till has exhibited his artistic work and performed with his self-built instruments across Europe, including at ZKM Karlsruhe and Queen Mary University of London, and in Berlin, Amsterdam, Athens, Helsinki and Frankfurt. He co-curated the festival “Performing Sound, Playing Technology” at ZKM.
Till has been teaching at various international institutions. Alongside his artistic and academic work, he develops software in and for SuperCollider.
Giulia Vismara is a composer, performer and researcher specializing in electroacoustic music and sound art. She is mostly concerned with the organic nature of sound and the creation of textures in which concrete and synthetic elements combine. Space is the key to her research, the matrix that moulds the music and the sounds she composes. Through her research she investigates what “space” means as a concept in different contexts, such as architecture, technology and composition, and how we can experience it via sound. For this reason she uses different methods and approaches to spatialization and 3D technologies. She is currently finishing her PhD at IUAV (Architecture, City and Design), researching the dynamic relationship between sound, space and body. Giulia has been an artist in residence at La Chambre Blanche (CA), ACA Florida (US), QO2 (BE), Tempo Reale (IT), the Institut für Elektronische Musik und Akustik (AT), Lavanderia a Vapore (IT), Teatro dello Scompiglio (IT) and Spazio K (IT), and her sound works have been presented in Italy, France, Belgium, Germany, Austria, the US and Canada.
Paul Vickers is a computer scientist and a chartered engineer based in Newcastle upon Tyne, UK. His research focuses on data sonification, that is, how sound can be used to communicate data and information. Of particular interest is the role of aesthetics in successful sonification design and how accounts of embodied listening experience can help to explain and understand how people interact with sonic representations of data. Paul finds the space between disciplines to be especially stimulating and has worked with visual artists, sonic artists, and composers as well as with other computer scientists and engineers on a variety of sonification research projects. He is also working on the development of mathematical models to describe the foundations of visualization and sonification, allowing comparisons of visualization and sonification processes at the syntactic, semantic, and pragmatic levels. Paul has even performed a stand-up comedy routine based on his sonification research at the Bright Club comedy event. Paul is Associate Professor of Computer Science and Computational Perceptualisation at Northumbria University in Newcastle.
Kathrin Hunze studied Sound Design and Communication Design at the Hamburg University of Applied Sciences and is a graduate of the Art and Media degree programme at the Berlin University of the Arts. She is currently an artist in residence at the Institute for Electronic Music in Graz and is completing her ‘Meisterschüler’ programme in Art and Media at the Berlin University of the Arts. Her work examines audiovisual media in complex systems in trans- and interdisciplinary contexts and combines analogue and digital technologies as a critical artistic element.
Thomas Hack studied Theoretical Physics and obtained his PhD in Hamburg. After postdoctoral positions in Hamburg, Genoa and Leipzig, he specialised in Data Science and Machine Learning. Together with Kathrin Hunze he pursues generative projects at the interface of science and art.
Contact a[dot]mcarthur[at]qmul.ac.uk with questions.