source-oriented reinforcement - A True Story


An evolving knowledge base about the science and art behind audio localisation, its applications, and how to do it right. Check in regularly for more instalments.

it’s no mystery
First thing to know about source-oriented reinforcement (SOR) is that it’s not panning. Audio localisation created using SOR makes the amplified sound actually appear to come from where the performers are on stage.

With panning, the sound usually appears to come from the speakers, merely biased to relate roughly to a performer’s position on stage. Most of us are also aware that level panning only really works for people sitting near the centre line of the audience. In general, anybody sitting much off this centre line will mostly perceive the sound to come from whichever stereo speaker they’re nearest to.

precedence
This happens because our ear-brain combo localises to the sound we hear first, not necessarily the loudest. We are all programmed to do this as part of our primitive survival mechanisms, and we all do it within similar parameters. We will localise to an early arrival of as little as 1ms, all the way up to about 25ms, beyond which the brain stops integrating the two arrivals and separates them out into an echo. Between arrival time differences of about 1ms and 10ms there will also be varying colouration caused by phasing artefacts.

This localisation effect, called the precedence effect or Haas effect after the scientist who quantified it, works within a 6-8dB level window. This means the first arrival can be up to 6-8dB quieter than the second arrival and we’ll still localise to it. This is handy, as it means we can actively apply this localisation effect and achieve useful amplification (more on this later).

If we don’t control these different arrivals they will control us. All the various natural delay offsets between the loudspeakers, performers and the different seat positions cause widely different panoramic perceptions across the audience. You only have to move 1ft (0.3m) to create a differential delay of about 1ms, causing significant image shift.
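To put rough numbers on this, here is a minimal Python sketch of the precedence window described above, assuming a speed of sound of 343m/s. The function names are invented for illustration, and the thresholds are the approximate figures quoted in this article, not hard psychoacoustic constants.

    SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C

    def delay_ms(path_difference_m):
        """Differential delay created by a path-length difference."""
        return path_difference_m / SPEED_OF_SOUND * 1000.0

    def precedence_verdict(gap_ms, early_minus_late_db):
        """Rough classification of how two arrivals of the same sound
        will be perceived, using the approximate thresholds above."""
        if gap_ms > 25.0:
            return "later arrival heard as a discrete echo"
        if gap_ms < 1.0:
            return "arrivals effectively fuse; level differences steer the image"
        if early_minus_late_db < -8.0:
            return "first arrival too quiet; the louder arrival wins"
        if gap_ms <= 10.0:
            return "localise to first arrival, with comb-filter colouration"
        return "localise to first arrival"

    # Moving 1ft (0.3m) off-axis already creates ~1ms of differential delay:
    print(delay_ms(0.3))                            # ~0.87 ms
    print(precedence_verdict(delay_ms(2.0), -6.0))  # first arrival wins despite being 6dB down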

stress factor
Continuous conflict between what we see and what we hear causes confusion. It is distracting to see an actor on the right side of the stage but hear his voice coming from a speaker on the left. Spread this across multiple actors and you will have an audience stressed by the effort of trying to discern who’s saying what.

This reduces intelligibility, dramatic impact and overall immersion in a performance. Directors will say it prevents the “willing suspension of disbelief” they are trying to achieve. The same can be true of a mono or stereo PA mix conflicting with the natural acoustic panorama of an orchestra or choir, which a director would prefer to enhance, not overpower. Localisation is essential to satisfy these prime objectives.

flatline delays
One trick used to improve localisation is to apply different delays to the actor’s mic at the desk as they move around the stage, effectively pushing the PA back in time to make the natural acoustic voice precedent. This flatline delay technique has been, and still is, often done by stepping through MIDI cues to change delay presets. More recently it has become possible to automate it using performer tracking systems such as TiMax Tracker, adding some cross-stage level panning as well.
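As a hedged illustration of what each of those delay presets is doing, the sketch below computes the mic-channel delay needed to let the actor’s natural acoustic voice arrive just ahead of the PA at one chosen reference seat. The coordinates, the 3ms precedence margin and the function names are all hypothetical; this is not the TiMax algorithm, just the underlying arithmetic.

    import math

    SPEED_OF_SOUND = 343.0  # metres per second

    def travel_ms(src, dst):
        """Acoustic travel time between two (x, y) positions in metres."""
        return math.dist(src, dst) / SPEED_OF_SOUND * 1000.0

    def flatline_delay_ms(actor, speaker, seat, margin_ms=3.0):
        """Delay to add to the actor's mic feed so their natural voice
        reaches the reference seat ahead of the PA (hypothetical margin)."""
        natural = travel_ms(actor, seat)      # direct acoustic path from actor
        amplified = travel_ms(speaker, seat)  # PA path (electronics assumed instant)
        return max(0.0, natural - amplified) + margin_ms

    # Stage-left actor, stage-right hang, a seat near that hang (all invented):
    print(flatline_delay_ms(actor=(-3.0, 0.0), speaker=(4.0, 0.0), seat=(4.0, 8.0)))  # ~10.7 ms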

no free lunch
This is a simple and quick mode to set up in a level/delay matrix such as the TiMax2 SoundHub, but it comes with some significant potential compromises. The centre-line-only limitation above will still apply, and with very wide-dispersion crossfiring L/R systems such as line arrays there are likely to be problems with comb-filtering and echo as you stray off the centre-line audience sweetspot.
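For context, a level/delay matrix in this sense is just a grid of crosspoints, each applying its own gain and delay from one input to one output, with presets recalling snapshots of those crosspoints. A minimal sketch of the idea, with invented names and values (this is not the SoundHub’s actual API):

    from dataclasses import dataclass

    @dataclass
    class Crosspoint:
        level_db: float   # gain applied on this input-to-output path
        delay_ms: float   # delay applied on the same path

    # One actor's mic feeding an L/R pair; figures invented for illustration.
    preset_downstage_left = {
        ("actor_1", "PA_left"):  Crosspoint(level_db=-2.0, delay_ms=18.0),
        ("actor_1", "PA_right"): Crosspoint(level_db=-8.0, delay_ms=24.0),
    }

    # A flatline cue list is simply a sequence of such snapshots, swapped in
    # via MIDI cues or tracker data as the actor moves.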

comb-filter hell
When you apply a flatline delay offset to a mic and then feed it into a wide crossfiring L/R system, it is virtually impossible to set a delay time that works for every seat in the house when every seat is being covered by both of the crossfiring speakers. Consequently, as the actor moves you often hear phasing artefacts at certain points, where his acoustic voice interacts with these flatline delays. One theatre sound engineer has described this as comb-filter hell.
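The arithmetic behind that complaint: two roughly equal-level copies of the same signal offset by t seconds cancel at odd multiples of 1/(2t), and t is different at every seat for every actor position, so no single flatline delay can avoid the notches everywhere. A small sketch with hypothetical offsets:

    def comb_notches_hz(offset_ms, count=4):
        """First few cancellation frequencies for two roughly equal-level
        arrivals offset by offset_ms (notches at odd multiples of 1/(2t))."""
        t = offset_ms / 1000.0
        return [(2 * k + 1) / (2 * t) for k in range(count)]

    # A 2ms residual offset between voice and PA notches the vocal range;
    # at 5ms the notches crowd even closer together:
    print(comb_notches_hz(2.0))   # [250.0, 750.0, 1250.0, 1750.0] Hz
    print(comb_notches_hz(5.0))   # [100.0, 300.0, 500.0, 700.0] Hz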

echo bitch
The converse of this is the occurrence of echoes due to differences in arrival times going beyond the human echo perception threshold of about 25ms. By the time a wide-dispersion long-throw system has covered most of the room from two separate directions, it is spanning far more than the distance equivalent of this limit (~25ft / 8m).

Therefore a flatline delay which works perfectly to localise an actor for certain seats in one part of the auditorium can cause echoes to be heard in other sections of the audience. This is most likely to occur for audience seats and stage positions furthest from the centre line, and unfortunately can be more pronounced in the more expensive seats nearest the stage.
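A back-of-envelope check of that geometry, assuming 343m/s and two crossfiring hangs 12m apart: path differences at off-centre seats easily exceed the ~8m that corresponds to the 25ms echo threshold. All coordinates below are invented for illustration.

    import math

    SPEED_OF_SOUND = 343.0  # metres per second

    def arrival_gap_ms(seat, speaker_a, speaker_b):
        """Difference in arrival times at a seat from two speaker positions."""
        gap_m = abs(math.dist(seat, speaker_a) - math.dist(seat, speaker_b))
        return gap_m / SPEED_OF_SOUND * 1000.0

    LEFT, RIGHT = (-6.0, 0.0), (6.0, 0.0)   # crossfiring hangs, 12m apart

    for seat in [(0.0, 10.0), (-5.0, 6.0), (-8.0, 4.0)]:
        gap = arrival_gap_ms(seat, LEFT, RIGHT)
        flag = "ECHO RISK" if gap > 25.0 else "ok"
        print(f"seat {seat}: gap {gap:4.1f} ms  {flag}")

    # Output: the centre seat shows ~0ms; the off-centre seat nearest the
    # stage exceeds 25ms, matching the 'expensive seats' problem above.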

The Matrix revealed

To be continued…

first-wavefront reinforcement

To be continued…

more…

To be continued…


©Out Board 2011