SPATIAL AUDIO - SOURCE-ORIENTED REINFORCEMENT AND IMMERSIVE AUDIO

 

Someone once said, “show me a stereo chicken and I’ll reproduce it in stereo”.  The point is the world’s not stereo, or mono; it’s more “omnio”, or something... We actually experience sounds all around us, often moving, and usually related to what we see, smell and touch.

 

TiMax evolutionary spatial audio reinforces and reproduces sound sympathetically, reducing distraction and increasing focus for an audience by precisely matching what we hear to what we see.  TiMax also fully immerses audiences through dynamically managed 3D layered soundscapes and spatial acoustics.

 

Now for the nerdy bits

- An evolving knowledge base about the science and art behind TiMax spatial reinforcement and immersive audio techniques and implementation.

 

it's no mystery

True spatial reinforcement, achieved using the magic of Source-Oriented Reinforcement (SOR – more on this later), makes amplified voices and instruments appear to come from exactly where the performers are on stage.  This is not the same as panning.

With conventional level panning, the sound appears to come from the speakers somewhere roughly between left and right, but only for people sitting near the centre line of the audience.  In general anybody sitting off this centre line will perceive the sound to come from the speaker channel nearest to them.

 

precedence

This happens because we localise to what we hear first more than what we hear loudest.  We are programmed to do this as part of primitive survival mechanisms, and we all do it within similar parameters.

This is called Precedence, or the Haas Effect after the scientist who quantified it, and it works within a 6-8dB level window: the first arrival (i.e. the actor’s voice) can reach us up to 6-8dB quieter than the second arrival (i.e. the loudspeaker) and we’ll still localise to it.  This means we can actively apply this localisation effect and still achieve useful amplification.

We localise even to a 1ms early arrival, all the way up to about 25ms, beyond which our brain separates the two arrivals out into an echo.  Between 1ms and about 10ms arrival time differences there will be varying coloration caused by comb-filtering artefacts.
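The figures above can be played with numerically.  The sketch below is illustrative Python only — not anything from TiMax — and the threshold values and classification labels are simplifications of the prose above:

```python
# Illustrative sketch: classify how a listener perceives two arrivals of
# the same sound, using the precedence-effect figures quoted above
# (localise up to ~25 ms; comb-filtering colouration below ~10 ms;
# a roughly 6-8 dB level window for the effect to hold).

SPEED_OF_SOUND = 343.0  # metres per second, at ~20 degrees C

def arrival_delay_ms(distance_m: float) -> float:
    """Time (ms) for sound to travel the given distance."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def perception(delay_ms: float, level_diff_db: float) -> str:
    """Rough perceptual outcome for a second arrival delayed by delay_ms
    and louder by level_diff_db relative to the first arrival."""
    if delay_ms > 25.0:
        return "echo"                   # brain separates the two arrivals
    if level_diff_db > 8.0:
        return "localise to louder"     # outside the 6-8 dB Haas window
    if delay_ms <= 10.0:
        return "localise to first (some comb-filtering)"
    return "localise to first"

# A speaker feed arriving 15 ms after the actor's voice and 6 dB louder:
print(perception(15.0, 6.0))   # localise to first (the actor)
```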

 

control time or it will control us

It’s easy to see how all the arrival time differences between loudspeakers, performers and seating positions cause widely different panoramic perceptions across the audience.  You only have to move 1ft (0.3m) to create a differential delay of 1ms, causing significant image shift; literally, the person sitting next to you can get a different panoramic impression to you.  This makes some directors distrust and avoid using radio mics for their actors.
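The 1ft/1ms figure follows directly from the speed of sound (~343 m/s, so roughly 1ms per 0.34m of path).  A quick sketch with an assumed L/R geometry, purely for illustration:

```python
# Illustrative sketch: how the left/right arrival-time difference changes
# between two adjacent seats.  Geometry is assumed, not from any venue.

import math

SPEED_OF_SOUND = 343.0  # m/s

def delay_diff_ms(seat, left_spk, right_spk):
    """Arrival-time difference (ms) between two speakers at a seat.
    Negative means the right speaker arrives first."""
    d_left = math.dist(seat, left_spk)
    d_right = math.dist(seat, right_spk)
    return (d_right - d_left) / SPEED_OF_SOUND * 1000.0

left, right = (-4.0, 0.0), (4.0, 0.0)   # speaker positions, metres
seat_a = (0.0, 10.0)                     # on the centre line
seat_b = (0.3, 10.0)                     # one seat (~0.3 m) to the right

print(round(delay_diff_ms(seat_a, left, right), 2))  # 0.0 on-axis
print(round(delay_diff_ms(seat_b, left, right), 2))  # about -0.65: right leads
```

One seat’s width of movement is already most of a millisecond of differential delay, which is why neighbouring listeners can hear different images.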

 

stress factor

If there is continuous conflict between our eyes and ears this causes confusion.  It is distracting to hear an actor’s voice coming from a speaker on the left when we see them on the right side of the stage.  With multiple actors, the audience is further stressed by the effort of trying to discern who’s saying what.

This reduces intelligibility, dramatic impact and overall immersion in a performance.  Directors will say it prevents the “willing suspension of disbelief” they are aiming for.  The same can be true of a mono or stereo PA mix conflicting with the natural acoustic panorama of an orchestra or choir, which a director would prefer to enhance not overpower.  Localisation by way of spatial reinforcement is essential to satisfy these fundamental objectives.

 

flatline delays

For many years delays have been applied to the actor’s mic at the desk, electronically pushing the PA back so the natural acoustic voice takes precedence.  This flatline delay technique can involve MIDI Cues to change delay presets as actors move around.  In recent years it has been possible to automate it using stage tracking systems such as TiMax Tracker, and to include some cross-stage level panning.
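As a sketch of the idea — with an assumed geometry and a hypothetical precedence margin, not a TiMax recipe — the flatline delay can be sized from distances alone:

```python
# Illustrative sketch: size a single "flatline" delay on the actor's mic
# so that, at one reference seat, the acoustic voice arrives first and
# the PA follows just behind it within the Haas window.

import math

SPEED_OF_SOUND = 343.0  # m/s

def flatline_delay_ms(actor, speaker, seat, margin_ms=5.0):
    """Delay to add to the mic feed so the PA arrives margin_ms after
    the actor's natural acoustic arrival at the given seat."""
    acoustic_ms = math.dist(actor, seat) / SPEED_OF_SOUND * 1000.0
    pa_ms = math.dist(speaker, seat) / SPEED_OF_SOUND * 1000.0
    return max(0.0, acoustic_ms - pa_ms + margin_ms)

actor = (0.0, 0.0)        # centre stage
speaker = (5.0, 1.0)      # PA hang, stage right
seat = (0.0, 12.0)        # reference seat on the centre line

print(round(flatline_delay_ms(actor, speaker, seat), 1))  # ~4.8 ms
```

The catch, as the next sections explain, is that this single delay is only correct at the reference seat.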

 

no free lunch

This can be quickly and simply set up in a dynamic dsp processor such as the TiMax SoundHub, but it has some compromises.  The centre-line-only limitation above will still apply, and with wide-dispersion crossfiring L/R systems there are likely to be problems with comb-filtering and echo outside the centre-line audience sweetspot.

 

comb-filter hell

With a single flatline delay on a mic being fed to a wide crossfiring L/R system, it’s hard to set a delay time that works for every seat in the house, as all of them are being covered by both of the crossfiring speakers.  Consequently, when the actor moves you may hear phasing artefacts as their acoustic source interacts with these flatline delays.  One theatre sound engineer has described this as comb-filter hell.
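The comb-filter frequencies are easy to predict: summing a sound with a delayed copy of itself puts nulls at odd multiples of 1/(2 × delay).  A minimal sketch:

```python
# Illustrative sketch: the notch (null) frequencies produced when a sound
# sums with a delayed copy of itself - the comb-filtering described above.

def comb_notches_hz(delay_ms: float, count: int = 4):
    """First few null frequencies (Hz) for a given inter-arrival delay."""
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# A 1 ms difference between crossfiring speakers puts notches right
# through the vocal range:
print(comb_notches_hz(1.0))   # [500.0, 1500.0, 2500.0, 3500.0]
```

Because the delay difference changes seat by seat and as the actor moves, the notch pattern sweeps audibly — which is exactly the “phasing” artefact described above.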

 

echo bitch

The converse of this is hearing echoes due to arrival time differences going beyond the echo perception threshold of around 25ms.  A wide-dispersion long-throw system covering most of the room from two separate directions will often span far more than this limit (i.e. ~25ft / 8m of path difference).

 

The Matrix revealed

Fortunately help is at hand with the TiMax SoundHub dynamic delay-matrix, which allows all speakers to have different delay times with respect to each mic.  Coupled with radially-arrayed speaker systems, this is true Source-Oriented Reinforcement (SOR).  TiMax has pioneered this technique since the 1990s with the industry’s first ever audio imaging objects, in the form of level/delay Image Definitions which can be recalled onto each mic manually or under automation control.  TiMax SoundHub has since refined and evolved this technique using dynamically morphing delay-matrix transitions, recalled manually using its built-in Cue List, often via MIDI triggers from the desk or QLab, or automatically via real-time stage tracking.
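A heavily simplified sketch of the underlying geometry — illustrative only, not the TiMax algorithm, and real Image Definitions also carry levels and are tuned by ear — is that each speaker’s delay for a given mic can be derived from the performer-to-speaker distance, so the whole array behaves as if the sound had propagated acoustically from the performer’s position:

```python
# Illustrative sketch: build a per-mic, per-speaker delay matrix from
# stage geometry, so every speaker feed mimics acoustic propagation
# outward from the performer's position.

import math

SPEED_OF_SOUND = 343.0  # m/s

def delay_matrix_ms(sources, speakers):
    """delays[mic][speaker] = propagation time (ms) from that mic's
    on-stage position to that speaker."""
    return [
        [math.dist(src, spk) / SPEED_OF_SOUND * 1000.0 for spk in speakers]
        for src in sources
    ]

# Two performer positions and a radial array of four speakers (metres):
sources = [(-2.0, 0.0), (2.0, 0.0)]
speakers = [(-6.0, 2.0), (-2.0, 3.0), (2.0, 3.0), (6.0, 2.0)]

for mic, row in enumerate(delay_matrix_ms(sources, speakers)):
    print(f"mic {mic}: " + ", ".join(f"{d:5.1f} ms" for d in row))
```

The key contrast with the flatline approach: every mic gets its own row of delays, one per speaker, instead of a single delay fed to every output.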

 

keeping Track(er)

The icing on the cake comes in the form of the market-leading TiMax Tracker stage tracking system, which follows the actor, performer or presenter around the stage and tells the TiMax SoundHub to morph seamlessly between the appropriate level/delay Image Definition objects.

TiMax Tracker’s multi-viewpoint and multi-layer data redundancy topology was evolved to address limitations encountered in first-generation, single-viewpoint systems which the TiMax team helped develop in earlier years.

However, more than half of the many shows using TiMax worldwide have manual cueing of actors’ mic localisations.  The TiMax software lends itself equally to this approach for more tightly choreographed productions, as well as to the fully tracked setups which become more essential for busier or more free-form shows.
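The morphing idea can be sketched with hypothetical preset objects (not the TiMax API): interpolate per-speaker level/delay pairs between two image positions as the tracker — or a cue — moves the performer from one to the other.

```python
# Illustrative sketch: morph between two level/delay presets, one pair
# of (level dB, delay ms) values per speaker.  Hypothetical data shapes,
# not TiMax Image Definition objects.

def morph(preset_a, preset_b, t):
    """Linearly interpolate per-speaker (level_db, delay_ms) pairs.
    t = 0.0 gives preset_a, t = 1.0 gives preset_b."""
    return [
        (la + (lb - la) * t, da + (db - da) * t)
        for (la, da), (lb, db) in zip(preset_a, preset_b)
    ]

# (level dB, delay ms) per speaker for stage-left and stage-right images:
stage_left  = [(0.0, 2.0), (-6.0, 10.0)]
stage_right = [(-6.0, 10.0), (0.0, 2.0)]

print(morph(stage_left, stage_right, 0.5))  # [(-3.0, 6.0), (-3.0, 6.0)]
```

A real system has to slew delay changes carefully (or crossfade parallel delay taps) to avoid pitch artefacts while the delay moves; straight linear interpolation is only the simplest possible illustration of the concept.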

 

The New Normal

These evolutionary TiMax spatial reinforcement techniques have now been adopted by leading award-winning sound designers for successful London West End and New York Broadway shows, as well as across Europe, Russia and Asia, including highly challenging large-scale outdoor productions.

Theatres and venues across the UK, Scandinavia, USA, China, Japan, S.Korea and beyond have chosen the TiMax route to bring their sound reinforcement up to what so many now regard as The New Normal - an immersive, fully spatialised approach that eclipses conventional mono/stereo/LCR in the eyes (ears?) of audiences and producers alike - fully engaging and immersive Spatial Reinforcement using SOR.

 

In the Zone

SOR doesn’t just work by applying clever morphing algorithms, however.  The subjectivity required to get 3D audio spatialisation and localisation just right is important, and a major strength of TiMax is how it adapts to a huge range of stages and audience configurations.  It handles with ease the often compromised speaker locations sound designers are left with after the scenery, lighting, video, wigs etc. departments have had their say (just kidding about the wigs...).

The TiMax dimensionally-aware StageSpace tool intelligently auto-renders matrix delay settings for stage Zone objects, based on CAD import of the performance space.  However, leading sound designers know that it’s the one-or-two-millisecond and dB tweaks by ear that make the spatial imaging really work, with appropriate management of delay changes to match the action.

The highly evolved TiMax SoundHub software and firmware does this with precision and versatility.

Its task-based workflow and ergonomics are informed by in-depth experience of work done on premium shows by TiMax developers Out Board, as well as direct market feedback from some of the best sound designers in the business.

 

first-wavefront reinforcement

For large stages, especially open air, and where vocals have to get above a pit band, TiMax manages cross-firing on-stage speakers to apply variable first-wavefront reinforcement.  This creates a strong zero-time anchor on-stage to support the actors’ voices, which follows them about under Cue or Tracker control, allowing the operator to push their levels more while still maintaining localisation.

 

3D spatial audio rendering

Now the fun bit.  Immersive, multi-dimensional object-based soundscape creation and control are both highly effective and straightforward in TiMax.  Spatial Image Definition objects are built as instructions that the sound designer can then use to place or dynamically move sounds around a space.  These objects know which speakers to use and which parameters to apply to them to make this work over an entire audience not just a central sweetspot.

 

dynamic movement

Sound effects, atmospheres and certain music stems can be made more immersive, realistic, emotive and impactful by applying subtle or extreme movement in multiple dimensions.  TiMax makes it quick and easy to achieve simple or complex dynamic soundscape creation using its integrated TimeLine showcontrol and PanSpace rendering tools.
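As a toy illustration of dynamic movement — not PanSpace itself, and with an assumed speaker layout and rolloff law — a sound object can be swept along a path while simple distance-based gains are derived for each speaker at every step:

```python
# Illustrative sketch: sweep a sound object across a space and derive
# normalised, distance-based gains per speaker at each step.  This is
# the basic idea behind dynamic soundscape movement, greatly simplified.

import math

def speaker_gains(pos, speakers, rolloff=1.0):
    """Per-speaker gain, inversely related to distance, normalised to
    sum to 1.0 so overall level stays roughly constant as it moves."""
    raw = [1.0 / (rolloff + math.dist(pos, spk)) for spk in speakers]
    total = sum(raw)
    return [g / total for g in raw]

speakers = [(-5.0, 0.0), (0.0, 5.0), (5.0, 0.0)]  # a simple L/C/R ring

# Move the object left-to-right across the space in five steps:
for step in range(5):
    x = -4.0 + 2.0 * step            # -4, -2, 0, 2, 4 metres
    gains = speaker_gains((x, 1.0), speakers)
    print(f"x={x:+.0f}: " + ", ".join(f"{g:.2f}" for g in gains))
```

A real renderer also manages inter-speaker delays and smoothing over time, but even this crude gain law shows the image handing off from speaker to speaker as the object moves.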

 

show-in-a-box integration and playback

Effective use of immersive spatial audio in performance, presentation and experience worlds requires comprehensive showcontrol interactivity for scheduling and synchronisation with other media and events.  TiMax fully integrates PanSpace object-based spatial rendering and TimeLine audio showcontrol triggers and events management.

A versatile bundle of onboard GPIO, MIDI, MTC, Date/Time and XML controls allows TiMax to seamlessly interact with mixing desk automation and the likes of QLab, Ableton, Medialon, Crestron, AMX plus D3 and other video servers, as well as our own TiMax Portal customisable touch-screen and showcontrol resources.
Coupled with on-board dsp, mix-automation and playback from 16 up to 64 tracks, TiMax SoundHub truly represents a single show-in-a-box package which can operate standalone with no computers or other peripherals attached.

 

 


Out Board (Sheriff Technology Ltd)

Unit 4, Church Meadows

Haslingfield Road

Barrington

Cambridgeshire

CB22 7RG

United Kingdom

Copyright ©Out Board (Sheriff Technology Ltd.) 2017

Website by basslinedesign build 1.3.3 December 2017