My live show on Friday was followed by a Q-and-A period where audience members asked me various things about how the music and visuals are done. This post responds to some of those questions, and contains links to pretty much anything you’d want to know about the construction of the show. In fact, probably more than you want to know….

I wasn’t satisfied with the answers I gave to some of the questions that followed my show last night, so I thought I’d say a little more here.  Questions answered in the order received:

Mr. KF of Lawrenceville, NJ asks:  What’s that doodad in the upper left corner of your rig?

That, Mr. KF, is a DJ Tech Tools MidiFighter Twister.  It has 16 knobs, and internal electronics that allow you to program them in 4 banks, thus yielding 64 separate controllers.  Each knob also incorporates a push switch, separately programmable.  You can also use the knobs in “push while turning” mode to control even more stuff, but because of some shortsightedness in the design you need external MIDI-mangling software to make that work well. Programming the unit is easy and fast from a hosting program that runs under OS X or Windoze.  You also have control over all the indicator LEDs, which is helpful in remembering what’s what.

My usual way of setting things up is to have each horizontal row of 4 knobs controlling one instrument.  The left-most knob is always master gain for the instrument.   What the other 3 do depends on which controls I think make that particular instrument the most playable in a live setting.   For example, in SoundScaper, the other three knobs control the volume of the three individual oscillators.  In Shoom, I use them to control the volume of the three internal synths.  In Factory, one runs the “Tweak” modulation-morpher, one runs the “Roll the dice” modulation-morpher, and the third is assigned to filter cutoff.

As Mr. Charles Shriner told me when he was convincing me that I could not go on living without a Twister, a lot of live performance is live mixing, and this thing can be programmed to be one hell of a mixer.  That’s especially important on the iPad, where switching between apps is inconvenient and gets in the way of adjusting (say) the mix of sounds coming from different apps.  Effectively, the Twister lets you “reach past the screen” and adjust instruments that are playing but not currently visible on the touchscreen.
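For the curious, the row-per-instrument scheme above can be sketched as a lookup table.  This is only an illustration, not my actual configuration: the CC numbering (knobs 0–15 per bank, banks offset by 16) and the instrument/parameter names in the table are assumptions I’m making for the example.

```python
# Hypothetical sketch of a row-per-instrument Twister layout.
# Assumes knobs send CC 0-15 in bank 1, CC 16-31 in bank 2, and so on;
# the real CC assignments depend on how the unit is programmed.

KNOBS_PER_ROW = 4
ROWS_PER_BANK = 4

# One instrument per row; the left-most knob is always master gain.
LAYOUT = {
    0: ("SoundScaper", ["master gain", "osc 1 level", "osc 2 level", "osc 3 level"]),
    1: ("Shoom",       ["master gain", "synth 1 level", "synth 2 level", "synth 3 level"]),
    2: ("Factory",     ["master gain", "Tweak morph", "Roll the dice", "filter cutoff"]),
}

def resolve(cc: int):
    """Map an incoming CC number to (instrument, parameter), or None if unassigned."""
    bank, within_bank = divmod(cc, KNOBS_PER_ROW * ROWS_PER_BANK)
    row, knob = divmod(within_bank, KNOBS_PER_ROW)
    if bank != 0 or row not in LAYOUT:
        return None
    instrument, params = LAYOUT[row]
    return instrument, params[knob]
```

So under these assumed CC numbers, CC 0 would hit SoundScaper’s master gain and CC 11 Factory’s filter cutoff; the point is just that a fixed, predictable grid is what makes the thing playable without looking at it.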

Mr. JLQ of Somewhere Very Hot in Texas asks: How do you mix the visuals with the music?

I gave a very superficial answer to this last night, responding to it mostly as a technical question.  This blog post has all the gory details of the tech, although the rig I used last night was a little simpler in that only one audio-responsive graphics program was involved (unfortunately, I still needed OBS for titling).  There’s also information about other aspects of livestreaming here.

But actually I think the more important thing to talk about is the parts that aren’t technical, which are entirely about 1) choice of material and 2) the human brain.   The video “show reel” — the stuff that isn’t either a camera showing me playing, or something squiggly responding to music — runs throughout, and is brought in and out of view by the macro programming in the switcher.  For this show, I picked about 20 experimental films found via YouTube and did very little work on them — mostly taking sections from the middle of each and arranging them in a sequence that I liked.  The main thing is to pick stuff that’s “plotless” — that is, where there’s no story that a viewer is going to (even subliminally) expect to be completed somehow.   In this particular case, I got lucky — one of the segments, the morphing faces, was timed pretty close to the tempo of the set, and looked very much like I’d planned it.  I hadn’t — as I said, I got lucky… but there’s a sense in which you make your own luck by making good choices of material.

I think that one of the advantages of the “layered” approach to video is that it encourages the viewer’s mind to do something it wants to do anyway.  Brains are really good at creating patterns out of nothing.  I’m sure you’ve all heard about the experiments where people hear voices or distant conversation in pure white noise, just because the brain needs to find patterns.  And because the audio-responsive graphics also encourage that pattern-making behavior, the whole thing works better than it has any right to.  As a result you end up with something that looks intentional, but wasn’t — though it was done with a pretty good appreciation for how the probabilities were likely to fall.

Mr. JLQ also asks:  How do you hook all that stuff up?

There’s a pretty complete description of that here, preceded by a longwinded discussion of how the case was constructed.  There are a couple of things about the rig I’m not completely happy with at the moment.  I think the best plan, though, is to pile up a list of bugs and fixes and then spend a day rewiring at some point down the road.   It plays very well, but I can see some likely problems brewing: it’s not yet really convenient to integrate with other MIDI gear, and occasionally the iPad decides to run down its charge despite being plugged into the charger.

Thanks for watching, everybody.

X