I recorded this project together with Tillman Hopf. Learn more about it here.
Recording session at Optimist Studios, Los Angeles. With Zaire Black, vōx, Amy Keys, Morgan Sorne, Grayson, Addie Hamilton, Sekou Andrews, Jens Kuross, Shana Halligan, FreQ Nasty, Robot Koch, Chihsuan Yang, and 50+ superb musicians and singers. Album release August 2020 on Clouds Hill / Project C.
In this post I’d like to run through what I learned about the setup for a modern arena tour, and some principles which may be useful if you find yourself building a setup for a tour or designing a sound system.
I’ll be looking at everything through the lens of an audio engineer tasked with the setup and functioning of the electrical and audio systems as well as the musical equipment, but not the on-site P.A. or rigging.
Perhaps by the end of the article you’ll find that the setup for a modern arena tour is more like building an integrated studio environment than the classic model of a mock stage whose gear gets transported from venue to venue.
The equipment came in one semi truck. Our task was to set up the audio, electrical, and wireless components of a practice space which would roughly mirror the setup for the live tour.
Electricity . . .
comes first and is often overlooked in the world of audio. It’s important to get right because it’s the bedrock upon which everything in the production depends.
When setting up power it helps to visualize where the band will be and then anticipate the location of audio racks, stage boxes, guitar rigs and whatever will need power.
In our case, we needed power primarily for
The mixing desk (SSL 500) + wireless receivers/transmitters
The Bass station
The piano station
The guitar amps + pedalboards (which came later)
Electrical power should almost always come from one source (one wall socket). This avoids ground loops, which can cause hum and noise on audio lines. A single wall socket can supply about 3 kW (230 V × 13 A), which is sufficient for a home studio or a small concert.
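To make that power math concrete, here’s a minimal sketch of the kind of budget check you might do before plugging everything into one socket. The per-station wattages are hypothetical placeholders of my own, not measured draws from this production.

```python
# Rough power-budget check for a single-socket setup.
# The per-station wattages below are illustrative assumptions.

def available_watts(volts: float, amps: float) -> float:
    """Maximum continuous power a socket can deliver (P = V * I)."""
    return volts * amps

socket_w = available_watts(230, 13)  # ~2990 W from a 230 V / 13 A socket

stations = {
    "mixing desk + wireless racks": 800,   # hypothetical draw
    "bass station": 400,                   # hypothetical draw
    "piano station": 450,                  # hypothetical draw
    "guitar amps + pedalboards": 600,      # hypothetical draw
}

total_w = sum(stations.values())
print(f"capacity {socket_w:.0f} W, planned {total_w} W, "
      f"headroom {socket_w - total_w:.0f} W")
```

If the headroom comes out thin, that’s the signal to ask the venue for a second circuit for lights or catering rather than sharing the audio source.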
Larger productions, like this one, require industrial sockets, power distribution units and power conditioners which ensure even flowing power and protection against surges.
At this location we used a 16 A wall socket, which we connected to a Connex power distribution unit using 5-pin IEC 60309 industrial cables.
This unit ensures steady voltage in the case of a surge and has built in breakers to protect audio-electrical equipment.
Surges occur when an additional power unit is plugged into the same source used for audio. The demand for extra power can create a spike in voltage which runs through the whole circuit and can damage sensitive equipment. That’s why it’s important to ask at every venue whether the same source is being used for anything else during the concert.
Once the electrical source is established and trusted- it’s necessary to distribute the power across the stage so that every musician and technician has easy access to a socket.
Our Connex unit receives one plug from the wall and provides 5 outputs from the back which we divided across the room to these electrical boxes.
These boxes provide six outlets, which are then linked to additional power strips.
If you can help it, it’s better to connect power strips in ‘star’ configurations rather than a daisy chain to keep the earth paths short.
So now that every position has electricity, we need to provide easily accessible audio inputs.
It’s always been helpful for me to understand audio signal flow like tributaries of a river flowing together and eventually leading to the ocean.
In this analogy the ocean is our SSL 500 digital mixer, and the first step of the signal chain, whether it be Vox, Gtr DI, or Keys, would be like the melting snow in the mountaintops which feeds the audio streams.
Let’s take a Vocal Mic as an example of the original source of our audio and follow it through our chain back to the mixer:
At the beginning, a vocalist produces a longitudinal pressure wave with her vocal cords, which sympathetically vibrates a coil of wire inside the microphone transducer, translating the variation in sound pressure into an electrical signal. This electrical signal then runs down an XLR cable into our first 12-input stage box. This is our first small stream. The audio signal then travels through an audio multicore cable, aka a snake (aka Harting cables), toward the next stage of the chain.
The snakes allow 12 channels of audio to travel along one cable. These snakes then flow into the three larger SSL Live I/O ML 32.32 (32-preamp, 32-line-out) 6U racks. Just before the ML 32.32 the snake goes into an adapter which splits the multicore cable back into 12 XLRs, which are then plugged into the SSL rack-mount units:
These 3 stageboxes (32 analogue inputs each) are connected to one another via MADI cables. MADI is a protocol capable of carrying up to 64 audio channels on one thin cable. Although the connectors themselves are awful in terms of physical connectivity, the function is powerful.
The stageboxes are then fed into SSL’s Blacklight II MADI concentrator, which connects everything to the SSL 500.
As you can see, the whole show basically runs into these two SSL Blacklight cables, which carry our 98 channels of input to the console (we’ve arrived at the ocean, if you’re still with me).
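The channel arithmetic of the chain above can be sketched in a few lines. The 64-channel figure is the AES10 MADI standard at 48 kHz; the rest mirrors the numbers in this setup.

```python
import math

# Channel arithmetic for the input chain described above:
# 12-channel snakes feed 32-input stageboxes, which are merged
# over MADI (AES10: up to 64 channels per link at 48 kHz).

SNAKE_CHANNELS = 12
STAGEBOX_INPUTS = 32
MADI_CHANNELS = 64

num_stageboxes = 3
total_inputs = num_stageboxes * STAGEBOX_INPUTS       # 96 analogue inputs
snakes_per_box = STAGEBOX_INPUTS // SNAKE_CHANNELS    # 2 full snakes per box
madi_links = math.ceil(total_inputs / MADI_CHANNELS)  # links needed for 96 ch

print(total_inputs, snakes_per_box, madi_links)  # 96 2 2
```

Which is why two Blacklight runs comfortably swallow the whole show’s inputs.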
Here are the first 43 channels of our input list. Drums take up two 12 input stage boxes.
Audio summary:
The most important thing to understand about audio is signal flow. Once the signal enters the console, that’s only the start. There is enormous potential for routing within a console, and it’s important to visualize what path the signal takes no matter how you internally route it.
I think now is a good opportunity to mention something quickly about gain staging that is simple but often overlooked. When a mic input arrives at the console you need a preamp to boost it to line level, because the components of the console are built to receive audio at that higher voltage. That means the input gain should bring the signal to around 0 dB VU, i.e. -18/-20 dBFS RMS. This is not peak level we’re looking at on our meters, and it’s important to be aware of what metering we’re actually using. If the level needs to go down, the path should be routed to a group fader which is then reduced. Reducing the input gain would bring the signal below the optimal voltage for the mixer, which means that none of the compressors, gates, etc. will function optimally down the line. This is important! And really deserves its own article. . .
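That rule boils down to simple decibel arithmetic. A small sketch, with function names of my own:

```python
import math

def dbfs(linear: float) -> float:
    """Level of a linear amplitude relative to digital full scale (1.0)."""
    return 20 * math.log10(abs(linear))

def gain_to_target(measured_dbfs: float, target_dbfs: float = -18.0) -> float:
    """Preamp gain (dB) needed to land a measured RMS level on target."""
    return target_dbfs - measured_dbfs

# A mic signal averaging -42 dBFS RMS needs +24 dB of preamp gain
# to sit at the -18 dBFS sweet spot:
print(gain_to_target(-42))            # 24.0
print(round(dbfs(10 ** (-18 / 20))))  # -18
```

The point being: the correction happens at the preamp or the group fader, never by starving the input gain.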
Wireless . . .

is crucial for delivering the band’s in-ear monitoring and also provides inputs for the lead vocals and several guitars and basses.
Wireless can be tricky for an engineer who’s used to plugging in cables to feel secure about a signal connection. But really, feeling comfortable with any technology comes from learning more about it.
Wireless systems are made up of receivers, transmitters, switches and antennae. Transmitters get signal from the console and send it to bodypack receivers for in-ear monitoring. Bodypack transmitters get input from, e.g., a guitar or a vocal mic, then send it to receivers positioned near the console. Antennae broadcast the signal from transmitter to bodypack receivers over selected frequency bands. Switches group multiple transmitters or receivers together.
For the in-ear monitoring, we used an extended network of Shure PSM1000 P10T dual wireless stereo transmitters connected with a switch and monitored using Wireless Workbench 6. The units are daisy-chained via Ethernet, and you can also daisy-chain the power (audio manufacturers pay attention: this is awesome). For antennas we used PA805Z2-RSMA passive directional antennas.
Wireless devices transmit over Radio Frequency, and each new venue has a different frequency profile of what’s usable and unusable bandwidth.
Before each setup you have to capture the specifics of the radio-frequency environment by taking a scan.
This lets you assign your devices frequencies which are not otherwise in use by TV or radio stations, and keeps you operating within legal frequencies.
We used Wireless Workbench 6 to analyze the current RF in the area, calculate the usable frequencies, and then deploy them to our inventory, which creates a beautiful light show on the devices that makes all wireless engineers happy. If you don’t have a linked network of devices, WW6 generates a report in PDF form, which you then have to assign to your devices manually.
The actual bodypacks are themselves scanners, but they can’t scan the whole frequency range. Therefore we used an AXT600 spectrum manager to scan the whole range, which is, by the way, 470 to 952 megahertz (MHz).
It’s important to scan for frequencies before turning on RF on the receivers, so you’re not picking up your own transmitter frequencies. If you’re not using something like Wireless Workbench 6, with Shure you can press scan on a receiver, hold it in front of the transmitter, and press the sync button to wirelessly sync the pair. Then press enter, deploy the frequencies, and turn on the RF switch.
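As a minimal illustration of what the coordination step does, here’s a sketch that picks channel frequencies clear of scanned occupied bands across the 470–952 MHz range. The scan data, guard band, and channel spacing are made-up values, and a real tool like Wireless Workbench also avoids intermodulation products between transmitters, which this toy version ignores.

```python
# Pick wireless channel frequencies that avoid occupied spectrum.
# OCCUPIED_MHZ is a hypothetical scan result; real coordination
# also avoids intermodulation products between transmitters.

OCCUPIED_MHZ = [(470.0, 478.0), (510.0, 518.0), (600.0, 700.0)]
GUARD_MHZ = 0.5    # clearance kept around each occupied band

def is_clear(freq_mhz: float) -> bool:
    return all(not (lo - GUARD_MHZ <= freq_mhz <= hi + GUARD_MHZ)
               for lo, hi in OCCUPIED_MHZ)

def coordinate(count: int, start_khz=470_000, stop_khz=952_000,
               step_khz=100, spacing_khz=400) -> list[float]:
    """Walk the UHF range in 100 kHz steps, keeping channels apart."""
    chosen: list[int] = []
    for f in range(start_khz, stop_khz + 1, step_khz):
        if is_clear(f / 1000) and all(abs(f - c) >= spacing_khz for c in chosen):
            chosen.append(f)
            if len(chosen) == count:
                break
    return [c / 1000 for c in chosen]

print(coordinate(4))  # [478.6, 479.0, 479.4, 479.8]
```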
A note on radio frequency: keep Wi-Fi routers at least 10 meters away from any 2.4 GHz wireless audio gear, since they share the same band.
As an aside, as a monitor engineer I really like the idea of having my stereo output go to my own wireless headphones and then working with an iPad to build monitor mixes from the stage. This is honestly the best way to work as a monitor engineer; I can’t think of anything better. I think it’s also super important to have a good crowd mix to send into the in-ears.
A bit of the musicians’ gear, the fun part!
Bass station: On the second day two system techs came to set up the instruments. There is a station for the bassist, whose keys include a Moog Little Phatty analogue synthesizer, a Philips Philicorda, and a Sledge polyphonic synthesizer. The Nord Stage 2 runs through MainStage 2 into a local Apollo 8. Pedals include an Electro-Harmonix Mel9 Tape Replay Machine and a Deluxe Memory Boy analogue delay by Electro-Harmonix, plus Aguilar Tone Hammer and TLC Compressor pedals, a Boss v2, and a PolyTune.
The basses are numerous, but the most often used are a Höfner bass and a Fender Jazz Bass.
The piano used was a Yamaha CP70. This is an acoustic piano built with pickups that offer a very distinctive sound, ‘somewhere between a hammer hitting a nail and a piano.’ It was built in the mid ’70s in response to the difficulty of transporting acoustic pianos on stage.
To get more resonance from the frame of the piano this CP70 was supplemented with contact mics from a Swiss manufacturer called Schertler.
The pickups and contact mics are mixed in a local Apollo with a bit of EQ and compression and then go through two Moog mono pedals as stereo filter devices. The signal then goes to a Strymon reverb and delay which receive MIDI data, allowing patches to change from song to song, or from section to section of a song.
The PlayAUDIO12 is used for MainStage and has 12 quarter-inch outputs, which is perfect for Ableton and generally anything live involving a computer. It also has a USB host port where you can power up to 7 USB devices, and two USB inputs for redundancy between two machines. With a MIDI controller you can press play on two machines and have the playback perfectly synced. (It really seems to me that the PlayAUDIO12 is the best, and really the first, audio interface for live performance.)
We extended the PlayAUDIO12 with a MIDI hub for everything else on stage that needs MIDI. For example, the pedalboard for the guitar player, and the drummer’s pad and trigger sounds, all of which automatically move on as the track plays. All of these MIDI satellites are connected on stage to the PlayAUDIO12 via Ethernet and act as one system.
The keyboard player uses TouchOSC on an iPhone, a MIDI software controller, connected with MIDI Lightning cables. Yes, these exist: a MIDI cable which goes into an iPhone adapter.
I would have liked to stay longer to see the guitar, drum and microphone setup (my favorite), but I had only one chance to catch a ride back to Berlin from our remote location close to the Polish border.
As an overview I can say that there is a beauty and an ease to the way in which certain large scale productions function. It can actually be much harder at smaller levels.
For example, when I worked front of house at a theater I was responsible for wireless, stage setup, mic’ing, monitor sound, front of house, and lights, all while being a host for the band. As you can imagine, this can be quite challenging with a technically advanced production.
On major arena tours there is usually a person for every job: a wireless engineer, two systems techs for the instrumental setup, an engineer for the technical setup plus an additional engineer (me), a front of house engineer, and a monitor engineer, and that’s just for the setup. The actual concert will have a much bigger production crew.
There is so much processing and routing involved at every station, so much more to talk about, but I think I’ll wait for the next setup to go into more specific aspects of the production.
Recording a full-sized harp is most similar to recording a grand piano. The harp is basically a piano turned on its side, set vertical, and stripped of its hammers and housing.
For this session I recorded a custom built chromatic harp without the pedals that you’d expect to find on a concert harp.
We first tried to record in a small acoustically treated room, but found that we had some mild low-frequency resonances and not enough reflective space for the harp to really speak. It was too boxed in for an instrument meant to be played in an orchestral setting.
(As an aside, the main recording room would have been perfect: not too big, not too small, not too dry, not too reflective. But it had strange interference from a newly constructed cell tower visible outside the studio windows. The noise itself was odd and didn’t sound like electrical hum. I searched the room, checking heating, air conditioning, and electricity, before being told that the whole building was suffering from the weird signals on recordings.)
So, we ended up recording in the kitchen.
This space was large and bright enough to give the harp room to unfold its sound.
I used a microphone setup similar to what Noah Georgeson described on Paul Tingen’s blog about recording Joanna Newsom’s ‘Divers’.
I’ll start with the most important mics.
Two Schoeps small-diaphragm cardioid condenser microphones on either side of the strings. I was worried about phase at first, but when rotated to a kind of reverse ORTF pattern (180-degree angle) they sounded quite phase coherent. This particular pair beautifully picked up the precise sound and intensity of fingers plucking the strings. Used alone they might be too sensitive, but mixed within the context of the other mics they work really well.
The Gefell M930 wandered around a bit before finding a home at the base of the harp in front of the soundboard. We were searching for a defined low end, and the RE20 on the lowest hole at the back of the soundboard was not cutting it, so the Gefell M930 was originally placed at the back of the soundboard to supplement the sound. But it wasn’t adding definition, so I put it at the base of the harp pointing toward the strings, and it sounded great. Next time it would be wise to have another Gefell on the opposite side of the soundboard. The Gefell picked up an amazing amount of body and the warm, beefy character of the harp.
These three mics alone carried a lot of the weight in the recording and in the gain staging and leveling process they were the most prioritized. They also mixed well together.
The next most useful mics were a stereo pair of Coles 4038s in ORTF set about a harp’s distance from the harp. This is your basic listener position. They are great room mics and could be used by themselves for a ‘natural’ and warm-sounding recording.
(As a side note, speaking of what is ‘natural’. . . Sometimes what sounds most realistic or natural on a recording actually has some hyper-realistic qualities in terms of mic placement or post-processing.
Taking the harp as an example- the instrument is rather complex, and in my opinion, like a drum set or piano, requires multiple mics to replicate the rich impression it makes on us when we’re listening to it in person.
The idea that putting two microphones like these Coles 4038s up in ORTF a few meters away from the instrument to ‘mimic’ the two ears of a listener is the most ‘naturalistic’ way to record doesn’t seem quite accurate to me. When you listen to an instrument in person you have the ability to process the character of a sound in a much more complex way than two microphones can. Your ears can wander from the sound of finger plucking to bass resonance to the wind in the trees outside.
Listening to a two-microphone recording of the same instrument/performance/location doesn’t yield the same sensuality of listening that being there in person does: the way our ears can wander and focus, much like the way we see.
This richness can be approximated when many microphones, each with its own snapshot of the sound, are used to create what I think is a more ‘natural’ experience of the recording.
Comparing a microphone to a camera is one way of understanding this ‘framing’ that a microphone does. In the same way that a camera frames a shot, a microphone frames the sound of a particular source. But neither the camera nor the microphone effectively mimics the experience of seeing an image or listening to an instrument in person. Instead, what gets closer visually is when multiple cameras merge their images to create a virtual-reality feel. This combining of images is kinda normal in the audio world, where many microphones take many different snapshots at once which are then blended or mixed together to form one impression of the instrument.
Although the microphones are doing something that our ears cannot, that is, picking up the sound image from multiple locations at once, as I said earlier the microphone is actually limited in the way it listens. Thus combining microphones and mixing them together is a more naturalistic approach to recording, mimicking the sensual, complex character of embodied listening. )
By the way, why should the listener/audience perspective be more privileged than the player’s perspective anyway? The player is sure to hear an extreme amount of intricacy in the sound, including very close finger plucking. Isn’t that then ‘natural’ to the player?
What matches nicely, every time, with close room mics are far room mics. The Coles are my close rooms and the far room is a Neumann TLM 103. I guess I like the close room + far room combo from recording drums. I find that having early and later reflections is always super fun while mixing. If I were recording in a huge space I’d like to have mics progressively placed at distances to catch all the major reflections: 10 ms, 20 ms, 50 ms, 100 ms, 250 ms, 500 ms, 1 s, etc. Overkill, or the most amazing concert hall recording ever? Here’s the TLM 103 catching the far-room feel.
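Those delay targets translate directly into placement distances via the speed of sound (roughly 343 m/s at room temperature); a quick sketch of the arithmetic:

```python
# Convert a desired early-reflection arrival delay into the extra
# path length a room mic needs relative to the close mics.

SPEED_OF_SOUND_M_S = 343.0  # at roughly room temperature

def distance_for_delay(delay_ms: float) -> float:
    """Extra distance (meters) that delays arrival by delay_ms."""
    return SPEED_OF_SOUND_M_S * delay_ms / 1000

for ms in (10, 20, 50, 100, 250):
    print(f"{ms:>4} ms -> {distance_for_delay(ms):.2f} m")
# 10 ms is ~3.4 m, but 250 ms already needs ~86 m of path, so beyond
# the first few taps you'd really be capturing the hall's own reverb.
```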
Next is the Electro-Voice RE20, the only dynamic mic used in this recording. Which is slightly sad given that I love dynamic mics, but what we were going for was solo harp in high definition, and dynamic mics on the strings, for instance, weren’t needed since the harp doesn’t have to compete with anything else in the mix but vocals. The RE20 does some solid work in the low-mid range, adding a very characteristic, solid tone. It’s positioned at the lowest hole on the soundboard, at a distance which felt good to the hairs on the back of my hand: not too much bass, not too little. It was leveled rather low on this recording because the Gefell was really killing it on the detailed low end. Still, if this recording were for a rock or pop purpose the RE20 could be very useful.
The last mic used was another Neumann TLM 103. It was positioned to receive the air coming from the top hole on the back of the soundboard. It seems far away in the picture, but once Hans (the musician and composer) tilted the harp back to play, it was rather close. This Neumann was first used as a room mic before the Coles showed up. It sounded great in the small room and the main room, but for some reason really sucked in the kitchen/living room where we ended up, so it got stuck in the back. It actually doesn’t sound bad here, but it doesn’t mix too well with the other mics. It kinda ruins the perceptual image of the sound of the harp: what’s coming out of the back hole is just a lot of everything, high, mid, and low end. All that sound isn’t spaced out like it is on the front of the harp, and it muddies the stereo image. Not that we were necessarily going for a precise stereo image, like you do with a piano running from bass notes in the left ear to treble in the right. So, was this mic superfluous for this recording? Yes. Should I have just taken it down? Yes, most likely. But hey, whoever is doing the mixing may find it useful in ways I wasn’t able to.
Here’s an overall shot of the close mics.
The session was recorded in Logic Pro X instead of my usual Pro Tools. The first song ended up using the maximum number of tracks (286 or so), even with Logic’s cycle record function. That’s a lot of takes and a lot of editing!