In the Q&A after the Walmort Portrait Studio livestream the other night, we got a lot of questions about how we were mixing things during the show.  Charles Shriner and I use a lot of instruments, both hardware and iPad-based, and I think people were really curious about how we were herding all those animals.  As Charles pointed out to me years ago, live improv performances are ultimately really all about mixing, and the more I thought about it, the more I realized that the whole question needed a longer – and no doubt more complex – answer.  This blog post tries to be that.

It breaks down into a couple-three different questions, actually:

  • How do you do gain staging for performance?
  • How do you run a sound check when the performance is being streamed?
  • How do you control the levels of multiple instruments during performance?

So let’s take those in order.

Signal cleanup and gain staging

For me, gain staging starts with signal cleanup. This generally isn’t an issue with iPad instruments; they’re digital, and dirty power doesn’t seem to affect the iPad at all so far as hum or noise go.  Other instruments have… problems.  The OP-1 is notorious for this, particularly for high-pitched buzz, and I find that the 1010Music Blackbox sample player can also be problematic.  And then, of course, there are cables, which have a way of spontaneously developing, um, issues. 

For the OP-1, I use a noise filter made by Pyle.  It does a good job under virtually any circumstances.  For the Blackbox, I use an SNI ground-loop noise isolator that works well (and is not very expensive).

With those egregious offenders eliminated, the remaining instruments can generally be handled using standard gain-staging procedures.  Each of us (Charles and I) goes through that process individually before we try to sound-check together.  Essentially, I pretend that I’m doing a sound check for an imaginary band made up of all the different instruments I’m going to play.  The idea is to get rid of noise and to set the gain for each instrument so that it isn’t going to clip and the whole ensemble is balanced.  I also note any patches or presets that are likely to be problematic in performance because they’re exceptionally loud, exceptionally soft, or exceptionally twitchy when it comes to output levels (Sugar Bytes’ Factory instrument for the iPad, which I otherwise love, is exceptionally tough to wrangle for this purpose).  There’s a very good tutorial from Pro Audio Files – it’s quite helpful, and makes some good points about how, yes, you really do have to worry about gain staging even in an all-digital environment.  Another good article on the subject is here, and I strongly recommend you read both.
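
If it helps to see the arithmetic behind that process, here’s a minimal sketch – not something from my actual workflow, just an illustration – of the headroom math: measure each instrument’s peak and compute the trim needed to land it at a target peak level.  The -12 dBFS target is an assumption for the example, not a magic number.

    import numpy as np

    def peak_dbfs(samples):
        """Peak level of a float audio buffer (-1.0..1.0), in dBFS."""
        peak = np.max(np.abs(samples))
        return float("-inf") if peak == 0 else 20 * np.log10(peak)

    def trim_db(samples, target_peak=-12.0):
        """Gain change (in dB) that puts the loudest peak at the target."""
        return target_peak - peak_dbfs(samples)

    # A 440 Hz test tone peaking around -3 dBFS needs roughly -9 dB of trim.
    tone = 0.7 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 48000))
    print(f"peak {peak_dbfs(tone):.1f} dBFS -> trim {trim_db(tone):+.1f} dB")

Do that per instrument and the “imaginary band” ends up with consistent headroom before anyone touches a fader.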

Sound-checking for streaming performance

This ends up being a two-stage process.  The first goal is to achieve balance between players.  The second is to ensure that the signal being delivered via the stream sounds good for the audience, in terms of level, spatialization, etc.

I should add that what I’m about to say is oriented toward live performance via JamKazam (JK) when you’re using a separate piece of hardware to do the streaming, as opposed to streaming from within JK itself.  Because we use real-time, music-responsive motion graphics, we rely on an ATEM Mini Pro for video mixing and as the streaming engine.  I’m not sure what you might run into streaming directly from JK… but I’d be careful, because JK is pretty opaque when it comes to level control.

JK is notorious for presenting a different balance to each player.  It used to do that whether you told it to or not, but they seem to have eliminated some of the really puzzling balance bugs from the software.  It also offers each player control over what they hear from whom – similar to a multi-player set of monitor mixes – so somebody has to be the designated shot-caller for soundcheck.  Usually with our shows it’s me, because I’m the last stop in the chain before the signal goes out to the stream.  In effect, I’m doing what a front-of-house mix engineer would do in a sound check for a live show, because – just like that FOH engineer – I’m in the best position to hear what the audience is going to hear.

And just like a live-show soundcheck, the object of the exercise is to achieve balance between players.  There are about ten thousand articles out there about this process; I don’t have a particular favorite.  Things I look out for are clipping from deep bass material, harsh high-midrange frequencies, and the balance between players.  Most of the time this amounts to asking people to play the loudest stuff they’re going to play, and then something at mid-level, just to make sure the gain’s OK – and maybe asking people to change some EQ or gain on individual instruments that are misbehaving in one way or another.  I suppose that if I were really fussy I’d insert some processing between JamKazam and the outgoing stream – an EQ, compressor, and limiter, most likely.  That could be done in hardware or software.  But so far it hasn’t been necessary.
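
For the software version of that insert chain, here’s a rough sketch of the shape I’d reach for, using Spotify’s pedalboard library.  To be clear, this isn’t something we actually run – it processes a captured file offline rather than acting as a live insert, and the filename and all the parameter values are placeholders – but the processor order (EQ, then compressor, then limiter) is the point.

    from pedalboard import Pedalboard, HighpassFilter, Compressor, Limiter
    from pedalboard.io import AudioFile

    # EQ -> compressor -> limiter, in that order, as described above.
    board = Pedalboard([
        HighpassFilter(cutoff_frequency_hz=30),    # tame deep-bass rumble
        Compressor(threshold_db=-18, ratio=2.5,
                   attack_ms=5, release_ms=100),   # gentle glue
        Limiter(threshold_db=-1.0),                # catch stray peaks
    ])

    with AudioFile("soundcheck_capture.wav") as f:  # hypothetical capture
        audio = f.read(f.frames)
        sample_rate = f.samplerate

    processed = board(audio, sample_rate)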

The final step is to make sure that the outgoing stream has its levels set correctly.  I generally check for distortion or clipping on the meters that are part of the ATEM’s streaming engine, and then give a listen to what’s actually going out via Twitch or whatever the ultimate delivery system is going to be.  I realize that I’m making that last step sound like an afterthought.  It isn’t – but there’s a problem.  You can’t make changes that respond to what you’re hearing in the stream in real time, because the stream is inevitably delayed by 10 to 20 seconds before it comes back to you.  So you tweak and wait a short while and listen, and tweak and wait a short while and listen, and tweak and….  This is workable if (and only if) all you’re doing is setting the overall level correctly (allowing for both loud and soft passages in performance).  Given the lag, trying to do anything more complicated than that would take forever – so get all of the balance work out of the way before you check the stream levels.

Mixing stuff during the show

Let’s remember what Charles said: live improv is mostly about mixing in real time.  Things that have real, physical knobs make this easy – the knobs are all right there, or at least the important ones for performance are.  The first time I ever played a live set (not that long ago – it was 2016), I found out that, from that perspective, the iPad is a nightmare.  You really only have access to one instrument at a time, and switching back and forth is clumsy and problematic.  A hosting program like AUM helps to solve this problem, and for a while I got along well with that approach.  Then I discovered – again at Mr. Shriner’s recommendation – the MIDI Fighter Twister.

It’s a miraculous device.  Not the only such miraculous device – others, like Joe Wall, have recommended similar pieces of hardware, like the Faderfox, that have more knobs and possibly finer control – but the Twister is simple to use in live performance without having to remember too much about what’s assigned where.  It has 16 physical knobs that can be configured in 4 banks, giving you a total of 64 things to twist.  With the addition of a software MIDI-mangler, you could also get each knob to do something different when it’s pushed in, for a total of 128 controls.  All of its pretty lights are also configurable, which is a help when you’re trying to remember what each knob does, or to see whether it’s doing that or not.
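
If you’re wondering how those 64 rotations map to MIDI, here’s a toy sketch of the arithmetic.  The layout shown is the Twister’s commonly cited factory default (rotation as CC 0–63 on one channel, pushes on another) – check your own unit in the Midi Fighter Utility, since all of this is configurable and I’m going from memory.

    def twister_cc(bank, knob):
        """Bank 1-4 and knob 1-16 (left-to-right, top-to-bottom) -> CC 0-63."""
        assert 1 <= bank <= 4 and 1 <= knob <= 16
        return (bank - 1) * 16 + (knob - 1)

    print(twister_cc(1, 1))   # 0  -- bank 1, top-left knob
    print(twister_cc(4, 16))  # 63 -- bank 4, bottom-right knob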

I generally set mine up so that each row represents a different instrument.  The leftmost knob is used for overall gain control (either via AUM’s MIDI-learn feature, or by sending CC directly to a hardware instrument).  The remaining three knobs in each row are used to control whatever I think contributes most to the “playability” of that particular instrument.  One of those is almost always the filter, which I put at the end of the row for consistency.  With Sugar Bytes’ Factory, I use the middle two knobs for the tweak and dice-roll controls; for Shoom I use all three to set the levels of the individual presets, and so on and so forth.  And if I’m using the Blackbox, one whole bank is devoted to setting the level for each of the samples in the preset.
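
To make the row idea concrete, here’s a hypothetical sketch of what one of those mappings might look like expressed in code, using the mido library.  The port name, the row assignments, and the CC numbers are all invented for the example – in practice the Twister itself sends the CCs and AUM’s MIDI-learn does the binding, so no script is involved.

    import mido  # pip install mido python-rtmidi

    # Invented row layout in the spirit of the post: leftmost knob = gain,
    # rightmost = filter.  Rows of four knobs, numbered left to right.
    ROWS = {
        "Factory": {"gain": 0, "tweak": 1, "dice": 2, "filter": 3},
        "Shoom":   {"gain": 4, "preset_a": 5, "preset_b": 6, "preset_c": 7},
    }

    # Port name is platform-specific and hypothetical (macOS IAC bus shown).
    with mido.open_output("IAC Driver Bus 1") as port:
        # Pull Factory's overall gain down to about 75% (CC values run 0-127).
        port.send(mido.Message("control_change",
                               control=ROWS["Factory"]["gain"], value=96))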

All that sounds terribly complicated, but I find that it’s not at all difficult to remember during a live show (well, at least not if I use a cheat card to remind me which instrument is in each row).  And of course configurations can be saved in both AUM and the Twister.

I think that’s about it – feel free to respond with comments and questions.

1 thought on “Mixing (and gain staging) for livestreams”

  1. Are you having noise issues with the Blackbox when powered from the factory power supply, or when powered over the USB bus?  For some reason, over USB it has a massive hum.

