PSNE caught up with spatial audio pioneers Robin Whittaker and Dave Haydon of TiMax developers Out Board to glean some insights into ongoing and future trends in theatre audio and showcontrol.
The trends we see now in spatial reinforcement and immersive audio differ only subtly from those we’ve always been interested in: realistic panoramas created by effectively rendered and managed localisation and spatialisation. And this is now appreciated no less in orchestral reinforcement than in vocal reinforcement. A wider range of people are showing a greater awareness of what it is and how it can enhance the listening experience. As a result, we are seeing greater interest globally for shows and events to move into the experiential, immersive audio space.
So, traditionally the interest has come from experienced sound designers expanding their boundaries, creating soundscapes for their audience that add more realism as well as dramatic impact; nowadays, however, more mid-level customers are looking to create something experiential for their audience.
People have been sold on the concept of using immersive audio for added wow factor, but often cannot describe what it is they want. We find we need to dissect this new immersive audio paradigm in discussions with creatives, because it means different things to different people, with variations that can often be quite nuanced.
However, once effectively spatialised audio and localisation have been experienced, the benefits for an individual situation or project are quickly realised. Beyond the laudable development of ever more accurate and great-sounding speaker systems over the years, beyond that fidelity, localisation is the obvious thing. A major part of this whole current immersion discussion, in a stage environment, is multiple localisations made effective for an entire audience.
When you start looking closely at why spatialised audio sounds better, you can start to see why more people are interested in these sorts of sound designs. There are multiple mechanisms at work here. In particular, you are avoiding all sorts of masking by keeping sources separated in space, which is closer to how we naturally hear.
This mixing of sources in the acoustic domain, as opposed to electronically, is demonstrably superior in terms of imaging and spatial unmasking to combining them in the electronic domain. Two instruments playing more or less the same note at the same amplitude will combine properly if they are absolutely in phase, but if you mix the two out of phase there will be some destructive interference. Spatially reinforced audio, using strategic delay-matrix management in particular, amplifies audio sympathetically by separating individual elements into a more familiar and intelligible spatial panorama. Placing and exposing sources individually in space minimises phasing and masking, which is central to the conversation as to why the audio sounds better spatially balanced as opposed to mixed electronically.
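The phase arithmetic behind this point can be sketched numerically. The following is a minimal NumPy illustration (the signals, frequency and sample rate are our own hypothetical choices, not from any particular system): two identical tones summed electronically double in amplitude when in phase, but largely cancel when 180 degrees out of phase.

```python
import numpy as np

# Two "instruments" playing the same 440 Hz note, sampled at 48 kHz for 1 second.
sr = 48_000
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 440 * t)                    # instrument 1
b_in_phase = np.sin(2 * np.pi * 440 * t)           # instrument 2, perfectly in phase
b_out_phase = np.sin(2 * np.pi * 440 * t + np.pi)  # instrument 2, 180 degrees out

# Electronic mixing: the signals are summed before reaching the listener.
mix_in = a + b_in_phase    # constructive: peak amplitude roughly doubles
mix_out = a + b_out_phase  # destructive: the tones almost completely cancel

print(np.max(np.abs(mix_in)))   # close to 2.0
print(np.max(np.abs(mix_out)))  # effectively zero
```

Acoustic (spatial) mixing avoids this single-point summation: each source reaches the listener's two ears along its own path, so the comb-filter-like cancellation above never happens at a single mix bus.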
In the old days when PAs were smaller and cruder, that’s pretty much what used to happen anyway. You would come to hear each instrument amplified by its own amplification system. When PAs started getting bigger and bigger and more and more powerful, and monitor engineers went to in-ear systems with minimal on-stage sound, everything the audience started to hear was then mixed to a pair of stereo loudspeakers.
There’s also been growth in interest in a hybrid spatial and immersive experience. For Alan Ayckbourn’s The Divide at the Old Vic, sound designed by Bobby Aitken, the emphasis was on strong but unobtrusive spatial reinforcement for the actors’ mics, to communicate the wide range of pathos and bathos in the piece. Complementary to this was live instrumentation with woodwind, strings, piano and electronics mixed with a 40-piece choir, all located upstage, which was then balanced against the acting voices’ variable dynamics. The dystopian fantasy nature of the piece lent itself considerably to immersive creativity, and a series of enhanced wide surround imaging objects were created to dynamically spread separate choir sections and spatial reverbs, making enough space to ‘lift the cast slightly to hear the nuances of their voices’ without having to use gain to help them cut through.
Further into the immersive domain, The Young Vic’s Life of Galileo was dominated by a massive video-mapped projection dome above the stage and audience, with high-energy electronica from The Chemical Brothers’ Tom Rowlands, and the performance spread around amongst the audience. All the seating areas were mapped with their own matching surround zones so the music and effects could be intimately experienced by everyone, as well as larger spatial zones for the huge universe and sunburst tableaux that played out on the overhead dome. Not something you could readily throw a spatialisation algorithm at, but not untypical of the sort of stagings you’re seeing these days.
A strong trend now is tracking, where interest seems to have snowballed. Fortunately, we’re now able to build on the experience and successes of our already well-established system, with a brand new tracker system coming out this year that provides a quantum leap in resolution, affordability and application diversity.
And finally, spatial reverb and acoustic enhancement are definitely on the menu now, with some interesting implementations emerging, especially those using convolutions of real spaces. We’re responding with a whole new next-generation implementation with tunable, adjustable environments, which we expect to bring to the theatre and venue communities this year.