Last night, in the very good company of Karl Fury and Rob Snyder, I did a live-streamed show that was — from the audience’s point of view — less than successful. There was a fair amount of video stuttering and re-buffering, and I’m sorry that (at least) some people had a frustrating experience.

It’s worth taking a minute to consider why, and to examine some alternatives. The fact is that no live performer can accurately predict what will happen when a live stream goes up. There are too many variables (we’ll get to those in a minute).

Why? Much of what goes on depends on “network weather” — effectively, competition for bandwidth, server and router resources, and other infrastructure at every point between the streamer and each member of the audience. Remember, the fundamental technology of the Internet is designed for graceful failure rather than guaranteed, on-time delivery — it routes around damage or bottlenecks, but performance will almost certainly degrade.

If you’re Netflix or YouTube, you figure on that and build caching resources near big concentrations of your clients. If you’re me, your options are much more limited. Pretty much all you can do is guarantee that your stuff is getting to the streaming service intact, no matter how much competition for bandwidth and network resources there is at the time. And keep in mind that the network weather is quite variable: there is far more competition for resources at 8 PM on a Saturday night than there is at 2 AM on a Tuesday.
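If you want to see just how variable that weather is on your own connection, one low-effort way is to log your upload speed at different times of day over a week. Here is a minimal sketch in Python; it assumes the third-party speedtest-cli package (which drives the same Ookla infrastructure as speedtest.net) is installed, and the file name and one-hour interval are just placeholders.

```python
# Sketch: sample upload bandwidth periodically and log it to a CSV, so you
# can compare "network weather" at, say, 8 PM Saturday vs. 2 AM Tuesday.
# Assumes: pip install speedtest-cli  (provides the `speedtest` module).
import csv
import time
from datetime import datetime

import speedtest


def upload_mbps():
    """Run one Ookla test and return the measured upload speed in Mbps."""
    st = speedtest.Speedtest()
    st.get_best_server()              # pick the nearest/fastest test server
    return st.upload() / 1_000_000    # bits per second -> Mbps


def log_samples(path="upload_log.csv", interval_sec=3600, samples=24):
    """Append one reading per interval so the variation over time is visible."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             round(upload_mbps(), 2)])
            f.flush()
            time.sleep(interval_sec)


if __name__ == "__main__":
    log_samples()
```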

Before getting down to cases, it’s worth noting that the ability to analyze a problem during a live performance depends crucially on someone having their hands free to do diagnostics. That is certainly not something a streamer can do while a performance is running (this is why live television has camera directors, engineers, and a host of other people running the show). It also depends on the streamer having access to the diagnostic tools needed to see what’s going on (as opposed to relying on serial trial-and-error). If the streamer has been handed a streaming key by a third party, and has no access to the streaming platform’s diagnostics, they have no way to see what is going on from any perspective that would actually be useful in solving the problem. Again, all they can do is test in advance under conditions as nearly identical as they can arrange, and then scale back in a way that leaves some wiggle room (one of the resources linked below offers good guidance for that). Most often, that means an unnecessarily conservative approach, unless the streamer wants to risk problems at performance time. Neither is an appealing alternative.

Now, as to the problem I actually had — in fact, it seems there were two. The ATEM Mini Pro that I use for switching and stream encoding has a small output buffer. The idea is that if outbound bandwidth is constricted, the buffer fills with already-compressed stream data until things ease up, at which point it trickles the data out again. You can think of it as being similar to an overflow basin, or the expansion tank in a heating system. But the ATEM’s buffer has two problems: first, it isn’t very big, and second, it has a bug. The “not very big” part comes, I think, from design specs drawn up before the pandemic brought about the current extreme traffic levels on the net. The bug is this: the buffer does not properly release its data once bandwidth becomes available again — it seems to just drop it on the floor until the unit is power-cycled. This problem has been admired widely in the ATEM user forums, but the manufacturer’s answer seems to be the same one the old British Navy manual of seamanship offered to sailors caught in a storm off a lee shore: “Never find yourself in this position”.
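To make that failure mode concrete, here is a toy model in Python of how a small output buffer like this behaves. To be clear, this is not ATEM firmware or anything derived from it; it is just an illustration of the behavior described above, with a flag that mimics the reported bug.

```python
# Toy model (NOT actual ATEM firmware) of a small stream-output buffer:
# it absorbs compressed data when outbound bandwidth dips and drains it
# when bandwidth recovers. The `buggy` flag mimics the reported failure
# mode, where a buffer that has filled up never releases its backlog
# until the unit is power-cycled.
from collections import deque


class OutputBuffer:
    def __init__(self, capacity_kbits, buggy=False):
        self.capacity = capacity_kbits
        self.chunks = deque()        # queued, already-compressed chunks (kbits)
        self.queued = 0
        self.dropped = 0
        self.buggy = buggy
        self.stuck = False           # set once a buggy buffer has overflowed

    def push(self, chunk_kbits):
        """Encoder hands the buffer one chunk of compressed data."""
        if self.stuck or self.queued + chunk_kbits > self.capacity:
            self.dropped += chunk_kbits          # no room: data is lost
            if self.buggy:
                self.stuck = True                # the reported bug: never recovers
        else:
            self.chunks.append(chunk_kbits)
            self.queued += chunk_kbits

    def drain(self, available_kbits):
        """Send as much backlog as this tick's bandwidth allows."""
        if self.stuck:
            return 0                             # bug: backlog never released
        sent = 0
        while self.chunks and sent + self.chunks[0] <= available_kbits:
            sent += self.chunks.popleft()
        self.queued -= sent
        return sent


if __name__ == "__main__":
    # Simulate a healthy buffer vs. a buggy one through a mid-show bandwidth dip.
    for buggy in (False, True):
        buf = OutputBuffer(capacity_kbits=4000, buggy=buggy)
        for tick in range(20):
            buf.push(1000)                               # 1000 kbits per tick
            bandwidth = 500 if 5 <= tick < 10 else 2000  # temporary congestion
            buf.drain(bandwidth)
        print(f"buggy={buggy}: dropped {buf.dropped} kbits, "
              f"{buf.queued} kbits still queued")
```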

So it behooves you to figure out just how much bandwidth you’re likely to have (and need) at the time of your performance. That, of course, is impossible to predict accurately. But here’s the approach I’ll use next time, which I think has the best chance of success. It is important to run these tests on the same day of the week, and at the same time of day, as your show. The steps assume you’re using restream.io, but you could run similar tests with other providers.

  • Run a bandwidth test using the Ookla speed test at https://www.speedtest.net/ . Write down the upload speed it finds.
  • Run a second bandwidth test, specifically for restream.io, using the instructions here: https://support.restream.io/en/articles/780137-speed-test-for-restream . It is probably best to test using their “default” server at the top of the left-hand column (especially if you’re using an ATEM, because that’s what it picks). Again, you’re interested in the upload speed.
  • If there’s radical disagreement between the two tests, believe the lesser of the two.
  • Configure your streaming software using restream.io’s guidelines here, being sure to read to the end of the article: https://restream.io/blog/what-is-a-good-upload-speed-for-streaming/ . If you’re using a service other than restream.io, you’ll also find usable guidelines in this same article. (There’s a rough sketch of the bitrate arithmetic just after this list.)
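For what it’s worth, the arithmetic behind that last step is simple enough to sketch in a few lines of Python. The 50% headroom factor and the audio allowance below are common rules of thumb, not figures taken from restream.io’s article, so treat this as a rough starting point and defer to their guidelines.

```python
# Rough sketch: turn a measured upload speed into a conservative video
# bitrate. The 50% headroom and 160 kbps audio allowance are assumptions
# (common rules of thumb), not restream.io's published figures.
def suggested_video_bitrate_kbps(upload_mbps, headroom=0.5, audio_kbps=160):
    """Leave roughly half the measured upload free, then subtract audio."""
    usable_kbps = upload_mbps * 1000 * headroom
    return max(0, int(usable_kbps - audio_kbps))


if __name__ == "__main__":
    measured = 8.0  # Mbps: the lesser of your two upload tests (hypothetical number)
    print(f"Try a video bitrate of about {suggested_video_bitrate_kbps(measured)} kbps")
```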

That’s really going to be the best you can do in advance. As I’ve written elsewhere, it helps to have a spotter to tell you what’s going on during the performance. Unfortunately, though, this particular torpedo can’t be steered very well after it’s launched.
