Conrad Fletcher MIPS describes the technical challenges of relaying a live stage performance via satellite to a cinema audience around the world.
You would have thought that a live transmission to cinemas in 5.1 would be broadly similar to a live surround-sound TV transmission, and indeed that’s what I thought when I first started researching the medium some four years ago. In fact there is a fundamental difference in approach which, although initially unsettling for those of us weaned on PPMs, can end up providing more creative freedom. Furthermore, this way of doing things is infiltrating the broadcast industry and is possibly about to change TV forever. Of course, I’m referring to loudness metering… but more of that later.
I have now sound-supervised many National Theatre “NT Live” transmissions, the latest being a relay of Frankenstein from the Olivier theatre on the South Bank. Directed by Danny Boyle and scored by Underworld, it was notable for two reasons: firstly, both the director and the band were particularly interested in and involved with the sound of the show; and secondly, the two leading actors swapped roles after each performance, giving us almost two completely different shows to work with.
Inevitably, there are some compromises when translating one medium into another. At first it can seem virtually impossible to faithfully re-create the immediacy of a stage performance happening in front of you in a cinema presentation delivered via satellite. So, after a lot of thought, the first thing I did was define a few rules that I thought might minimise any distractions. I wanted the cinema audience to be as connected to the stage as possible, so I panned the theatre audience mics only to the sides and back of the 5.1 sound space, reinforcing the feeling that the cinema audience were part of the show rather than decoupled from it. I also wanted to get perspective into the speech balance, to give a presentation similar to that of a film or TV drama obtained using booms, but with – as far as possible – no mics visible to the audience.
Since the live link was to cinemas, I wanted to make use of the extended dynamics that such an environment can support, emulating within reason the full dynamics of the stage performance. The cinema format offers virtually unlimited headroom (well, a maximum of 105dBA per speaker isn’t bad!), and the first stage of the process was to go and see a production. Once in the auditorium I couldn’t help noticing the completely naked actor for the first 15 minutes of the show… so nowhere to hide a radio-mic then! At this point I tried to enjoy the show as a punter rather than be too analytical – this experience became the blueprint for the transmission. There’s so much going on that it’s easy to get lost in technical minutiae and miss large parts of the show!
After that first review of the show, all departments would normally gather for a production meeting to go through the technical aspects of the performance. However, since it was difficult to find a single date that everyone could attend, we planned this particular show via email. Bowtie Television provided vision and presentation facilities, and we used a separate truck for the audio, with a Studer Vista 8 console and all the other familiar bits of audio gear in it.
The theatre’s in-house PA system is now fully digital, with a DiGiCo SD7 front-of-house desk and MADI signal routing. As we use MADI for all our interconnects I thought it would be easy for us to run a fibre to FOH and pick up any lines we needed – but here lay the first hurdle! For our digital audio system to be synchronous with the in-house system, we all had to be clocked from the same master generator, which for the performance was going to be provided by the Bowtie TV truck. However, the cable run was well over 200m, and when we stuffed the video reference down one end of the cable pretty much nothing came out of the other! Added to that were the in-house team’s worries that an intermittent video reference might take the whole show down. The solution was a relatively new piece of gear: a MADI sample rate converter made by Direct Out Technologies (www.directout.eu). This device enabled FOH and the TV side of things to run completely independently of each other, avoiding any risk of the “domino effect.”
As we needed around 80 lines from the auditorium, one of our own Studer stage-boxes was rigged at the FOH position, bringing back audience mics and redundant feeds in case of problems with the MADI feed. The whole cast were to be radio-miked, but a number of Sennheiser MKH416 and Crown PCC boundary layer mics were hidden in and around the stage to capture ambience and anyone not wearing a radio-mic. These ambience mics were mixed with the direct radio-mic signals where possible to give the required perspective – the radio-mics having been time-aligned with the float mics to help.
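The time-alignment arithmetic is straightforward: the float and boundary mics are further from the actor than his radio-mic, so the radio-mic is delayed by the extra acoustic path before the two are mixed. A minimal sketch of the sum (the figures here are illustrative, not the actual delays used on the show):

```python
# Illustrative sketch: delaying a close radio-mic to line up with a
# more distant float mic. Figures are assumptions, not show values.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
SAMPLE_RATE = 48000     # Hz, the usual broadcast rate

def alignment_delay_samples(extra_path_m: float) -> int:
    """Delay (in samples) to add to the radio-mic so it arrives in
    step with a float mic whose acoustic path is extra_path_m longer."""
    return round(extra_path_m / SPEED_OF_SOUND * SAMPLE_RATE)

print(alignment_delay_samples(5.0))  # 5m extra path -> 700 samples (~14.6ms)
```

Without this delay the same voice arrives twice a few milliseconds apart, and the comb-filtering colours the sound as the two are mixed.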
We used two audience mic arrays. The first provided ambience and applause and comprised two front Sennheiser MKH416s and three slung Neumann KM140s at the rear. The second captured tighter laughter with four KM140s on arms rigged from the balcony lighting truss.
The first rehearsal is normally the first time we actually hear anything through the system, so everything is recorded direct to multitrack straight after the mic amps. This approach allows us to flip between mic amps and recorded signals with exactly the same gain structure and console signal processing, enabling us to rehearse over and over again. I’ve found that there’s no substitute for rehearsing a show at least a dozen times – it’s basically a live, three-hour film dub and we need to have as much of the script committed to memory as possible before the performance. To record the show we used two systems: a Merging Pyramix DAW with two MADI cards, and a Pro Tools HD Native rig with the new HD MADI interface and an Avid Mojo SDI converter which we’re testing. The Pro Tools DAW system recorded 64 tracks of 24-bit audio plus an SD video stream to help with rehearsals. Actually, it was most of a video stream as there’s a 2GB file size limit which we hit – we’re still investigating this as it just stopped working near the end of the show. Thank heavens for duplicate recording! In the meantime, if anyone knows a workaround for this, I’d be glad to hear it!
The sound crew is fairly large, and the two radio-mic wranglers have possibly the hardest job in the show. Not only do they have to convince reticent actors to wear the mics, but they also have to hide them sufficiently well that they can’t be seen in a 30ft-wide, high-definition close-up, and without any rustling or thumping giving the game away. In practice it’s almost impossible to eliminate noise completely, particularly when the lead insists on banging his chest (and mic) at regular intervals. I find it helps to shout very loudly at the screen whenever this happens (!), but I also put a compressor across his channel with the fastest attack time possible and a 0.2ms look-ahead to minimise the risk of speaker cones popping out in cinemas all over the world. Wherever possible we try to hide the mics in the actors’ hair, or pre-mic costumes, capes, hats and, occasionally, the props. When we did Hamlet we were continually switching between four different radio-mics for one of the cast.
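The look-ahead trick is simply to act on the audio a fraction early: the gain computer peeks a short window into the future, so gain reduction is already in place when a chest-thump arrives. A much-simplified sketch of the idea (this is not the console’s actual dynamics algorithm, and the threshold and window figures are illustrative; at 48kHz, 0.2ms is about 10 samples):

```python
# Simplified look-ahead limiting sketch - an assumption for illustration,
# not the actual desk processing.
def lookahead_limit(samples, threshold=0.5, lookahead=10):
    """Scale each sample down so no peak within the look-ahead
    window exceeds the threshold."""
    out = []
    for i, s in enumerate(samples):
        # Peek ahead: loudest sample in the next `lookahead` samples.
        peak = max(abs(x) for x in samples[i:i + lookahead + 1])
        gain = min(1.0, threshold / peak) if peak > 0 else 1.0
        out.append(s * gain)
    return out
```

A real limiter would smooth the gain changes to avoid distortion, but the principle – reduce gain *before* the transient, not after – is the same.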
Anyway, back to the crew: there’s also the FOH mixer and a stage assistant. The Bowtie TV crew consists of a truck guarantee engineer handling comms, Dolby encoding and final monitoring, plus two floor sound engineers. In the sound truck there are three people: myself; Paul Stannering, who hand-mixes the radio-mics; and Ollie Nesham, who records the stems and acts as guarantee engineer for the truck. I match the sound perspective to the shot by using various combinations of stage mics with the radio-mics, and vary the speech, music and effects levels and panning for dramatic effect.
After the rehearsal we all decamped to a cinema in central London to watch the recording, warts and all. This is where I have learned that no two cinemas sound the same – sometimes not even remotely similar! In one cinema a performance can sound too loud while in another the same performance can sound too quiet, and it’s amazing how much difference a reproduction level change of 3 or 4dB can make to the impact of the show. Some cinemas also sound very dull, and one extremely famous cinema that I won’t name hasn’t had a working LFE speaker for over a year. Anyway, Danny Boyle was there with Rick Smith from Underworld, and commented that he felt the soundscape wasn’t very involving and was rather front-heavy. A load of notes later, I met up with Rick to massage the audio mix and go over any problems we could foresee for the transmission.
When we first started doing this sort of live relay I did a lot of research on sound levels and EQ. Now for a TV broadcaster, levels are easy – you peak to PPM 6 on both legs. However, when mixing for cinema you pretty much ignore the metering and just mix until it sounds right for you. The theory is that your monitoring environment is calibrated to be the same as that of the ‘standard’ cinema environment, and so what you hear is exactly what the audience will hear. In fact, via the dreaded Dialnorm speech is normalised to an RMS figure of -31dBFS, giving a consistent loudness whatever film is showing. This is the standard being transferred to the world of TV, hopefully putting an end to having to turn the volume down during the adverts and giving mixers more dynamic range to play with. Perhaps an article for another day…
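As a rough illustration of how that Dialnorm reference works, the correction is just the difference between the measured dialogue level and -31dBFS. A hypothetical sketch (in practice the measurement comes from a loudness meter, not a single RMS figure):

```python
# Hypothetical sketch of dialogue normalisation arithmetic - an
# illustration of the principle, not a real Dialnorm implementation.
DIALNORM_REF_DBFS = -31.0  # cinema dialogue reference level (RMS)

def dialogue_gain_db(measured_dialogue_dbfs: float) -> float:
    """Gain (dB) to apply so dialogue sits at the -31dBFS reference."""
    return DIALNORM_REF_DBFS - measured_dialogue_dbfs

print(dialogue_gain_db(-27.0))  # dialogue 4dB hot -> apply -4.0dB
```

The point is that every programme’s dialogue lands at the same reproduced level, whatever the programme’s overall dynamics.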
The speaker level calibration for the film industry is slightly different to our familiar TV standards in that they align -20dBFS RMS pink noise to 85dBC for the front three channels and 3dB lower (82dBC) for the rears. In a TV mix room all five channels would be aligned to the same level. [This anomaly is because of the way the rear channel speakers were aligned with the old matrixed ProLogic surround system – Ed]. A complex band-limited measurement for the LFE contribution sees its level end up around 91dBC. However, the first problem with this alignment protocol in an OB truck is that it’s massively too loud! As the speakers are so close to the sound balancer the monitoring reference level needs to be between 3 and 8dB quieter than the standard film reference level to make sure the mix heard in the control room translates well to the screen. We actually ended up with 80dBC for the fronts, after taking a mix to the Pinewood Dolby Premier dubbing theatre during testing.
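The alignment arithmetic above can be sketched as follows; the 5dB trim shown matches our eventual 80dBC front-channel figure (which would put the rears at 77dBC), and the channel labels are just illustrative:

```python
# Sketch of the monitoring calibration arithmetic described above.
# -20dBFS RMS pink noise is reproduced at each channel's reference SPL.
FILM_REF_DBC = {"L": 85, "C": 85, "R": 85, "Ls": 82, "Rs": 82}

def truck_targets(trim_db: float) -> dict:
    """Per-channel SPL targets after pulling the near-field truck
    monitoring down by trim_db relative to film reference."""
    return {ch: spl - trim_db for ch, spl in FILM_REF_DBC.items()}

# A 5dB trim gives 80dBC fronts and 77dBC rears.
print(truck_targets(5.0))
```

The trim preserves the 3dB front/rear offset, so the inter-channel balance heard in the truck still matches what the cinema reproduces.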
In a cinema there are limited controls to adjust the sound reproduction, centred on the Dolby volume knob. It’s normally supposed to be calibrated to a volume of ‘7’, but can be turned up or down by the projectionist, losing or gaining 3dB per step. What appears to happen is that the knob is set to 7 until someone complains that the latest blockbuster is too loud, whereupon it’s turned down and left until the next person complains it’s too loud or quiet. We are working on a solution to this inconsistency by transmitting a calibrated segment of speech, FX or music during the satellite tests before the main event. This should enable the projectionist to adjust the volume in the cinema to be at least roughly correct.
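Taking the figure of 3dB per step quoted above, the level error introduced by a mis-set fader is easy to estimate. A hypothetical sketch:

```python
# Hypothetical sketch: playback level error from a mis-set Dolby fader,
# assuming the 3dB-per-step figure quoted in the text.
REFERENCE_SETTING = 7.0  # the calibrated position

def playback_offset_db(fader_setting: float, db_per_step: float = 3.0) -> float:
    """How far (dB) the cinema's playback is from calibrated reference."""
    return (fader_setting - REFERENCE_SETTING) * db_per_step

print(playback_offset_db(6.0))  # one step down -> -3.0dB
```

Given that 3 or 4dB is enough to change the impact of the show, even a one-step error matters – hence the calibrated test segment before the event.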
The second problem we have found is that the subjective effect of the X-curve equalisation varies enormously between theatres. The X-curve is basically an equalisation applied at the cinema, just before the power amplifiers, that is supposed to mitigate the subjective sound character of a speaker in a large auditorium. There are two variations of the X-curve, intended for auditoria greater than, and less than, 150 cubic metres. It involves a gentle roll-off of either 3dB or 1.5dB per octave above 2kHz, with an optional bass roll-off. Unfortunately, because of the wide variation in cinema size and wall absorption, this appears to be a pretty hit-or-miss solution. Also, the standard way of calibrating a cinema involves taking an RTA measurement approximately two-thirds of the way back from the front, which yields, at best, the response of the room at that particular point rather than any meaningful test of the whole system.
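The X-curve target – flat to 2kHz, then rolling off at 3dB per octave (1.5dB per octave for the small-room variant) – can be sketched as:

```python
import math

# Sketch of the X-curve target response, from the figures in the text:
# flat to 2kHz, then a constant per-octave roll-off above.
def x_curve_db(freq_hz: float, rolloff_per_octave: float = 3.0) -> float:
    """Target response in dB relative to the mid-band level."""
    if freq_hz <= 2000.0:
        return 0.0
    octaves_above = math.log2(freq_hz / 2000.0)
    return -rolloff_per_octave * octaves_above

print(x_curve_db(8000.0))  # two octaves above 2kHz -> -6.0dB
```

By 16kHz the large-room curve is 9dB down, which goes some way to explaining why some rooms sound so dull when the curve doesn’t suit them.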
So after much listening at Pinewood and many other cinemas across the UK we have evolved a set of mastering tools for our live cinema transmission – one set each for speech and music transmissions which help to address these issues.
Anyway, back to the rehearsal, where we were joined by Rick Smith, who gave us the detail on the motivation for the various sound effects heard throughout the performance so that I could start noting down the level and panning for each effect. I also put small amounts of delay on a lot of the lines to encourage a sense of depth and involvement, and I used our TC System 6000 and Lexicon 960 to help with the surround effects. If we had been able to post-produce the mix I would have used the sensational VSP surround processor in the Studer desk, as this basically creates a virtual room in which you can place any mono source, generating varying pre-delay and reverb in each of the five surround channels as the pan control is moved. With this technology it’s possible to make a close radio-mic appear to recede – meaning no need for float and hidden mics! Our transmission was entirely live, however, so not only could we not use VSP, but we couldn’t use any console automation or snapshots either, as the performances vary enormously and unpredictably between rehearsals and transmissions.
One of the initial sound effects was a humming noise for the first few minutes, which is actually a modified sample of the ambient noise in the auditorium to hide the running noise of a particular lighting effect – a huge mirror suspended above the stage and front of the audience. It has 1200 dimmers behind it, feeding 1200 incandescent light-bulbs of various sizes hanging from it. When it was at full brightness it felt like a three-bar electric heater over your head! Plans to reproduce this in cinemas were (un)fortunately shelved at an early stage.
We had around 22 effects lines coming into us, with each line containing a mixture of effects and music, depending on the cue, and all at different levels. So, a lot to do – which is why we rehearsed for two days before the second camera rehearsal. For this, Benedict Cumberbatch and Jonny Lee Miller swapped roles, so we had a completely different performance to contend with too. Added to this, Jonny was losing his voice, which meant I couldn’t rehearse the mic perspective properly, so we were getting a little nervous!
After this rehearsal recording we had a second cinema viewing (at a different cinema to the first review), and I discovered that the surround channels were almost inaudible! So, back to the truck for a little tweaking and another two days of rehearsal. This particular show was unusual in that we were recording the matinee performance for transmission a week later, and then the lead actors changed roles and we transmitted the evening performance live to 22 countries around the world. Emma Freud was the compère for both shows, presenting from the auditorium with a cabled Neumann KMS105 mic. I would have preferred something invisible in keeping with the performance, but in practice this was impractical because members of the audience were permitted to ring a huge bell suspended above them. Also, the audience can make a surprising amount of noise when finding their seats after visiting the interval drinks bar!
After the show, the usual practice is a sigh of relief that it all worked as intended, a quick de-rig, and promptly to the bar to discuss the next production – in this case The Cherry Orchard in June. The series has been gaining in popularity with each production and details of each show can be found at www.NTlive.com if you would like to experience it for yourself.
by Mike Felton MIPS
I had been intrigued by the concept of these live relays to cinemas, hence I had badgered Conrad into writing an article about the process. Coincidentally, I had received a rave review of this production from my daughter, who was lucky enough to have seen it “in the flesh” at the National. Tickets were by then sold out, so I jumped at the chance of experiencing the relayed version at our local bijou Curzon Richmond.
I spoke to Conrad on the afternoon that I was due to see the relay and was rather surprised to learn from him that we would not be seeing a live transmission on this occasion but a recording of a matinee. This was a bit disappointing but explained why the version I saw had Benedict Cumberbatch as Dr F and not the “reversed roles” version that was advertised for that night’s live show.
As you will know from Conrad’s article, he had a very tricky job in covering the action, particularly the monster, who spends the first quarter of an hour writhing around the floor virtually naked. This specific challenge, I surmised, was solved with the help of a large “scar” that looked like the crimped edge of a Cornish pasty(!) running backwards over the head of the actor. This, I presume, provided the cable route from the mic in what would have been the “hairline” position had he not been bald! I never did work out where the transmitter was…
Overall I thought the audio perspectives achieved by the contributions from the static mics worked very well; soon one’s suspension of disbelief was secure, the content took over and one watched the play quite naturally.
Some technical problems
Maybe I went on an unlucky night, but we had a few reception problems. I was disappointed to find that the lip-sync was out, and unfortunately in the worst way, i.e. sound leading. Conrad tells me that they transmit sync checks in the afternoon, to which all the cinemas are supposed to line up, but in this case something must have changed between the test and transmission. The other, less annoying, problem was a small dropout that occurred approximately every 30-40 secs throughout the whole transmission. Most worrying, however, was a complete loss of sound towards the end of the play which lasted maybe 5 secs. The return was preceded by a (mercifully!) short burst of full-scale shash (that’s loud!). As I say, maybe I was unlucky in my choice of night.
From an aesthetic point of view I thought that the surrounds could have been a bit louder, but that and the above problems underline the tricky variables involved in this sort of operation: although in theory the presentation setup is calibrated, as the originator you can never be sure, it seems…
Overall it is a very worthwhile undertaking allowing as it does the possibility of seeing productions that for reasons of location or sold-out bookings would not be possible otherwise. Congratulations to Conrad for pulling it off!