How to record decent conference videos – without breaking the bank!

August 6, 2020

This being the year of COVID-19, many of us are recording videos for online conferences.

While I cannot claim to be a professional video producer, far from it, I did work for a video studio during my studies (a long time ago – Betacam days, yes) and I have been doing audio recording and remixing on and off since then, moving from analog tools to the incredibly powerful software and cheap but decent microphones of today.

And feeling like a time traveler while doing that… but I digress. Let’s see if I can provide some useful advice about recording videos for conference talks.

As an example here’s a video that I recorded for the 2020 FOSS Backstage conference in March 2020.

It’s far from perfect: the image quality and especially the lighting are not fantastic, and I forgot to turn off the camera’s autofocus, which leads to a few “where’s that focus?” sequences.

On the other hand, I think the sound is good, loud and clear, which is key to delivering the message efficiently and keeping your audience engaged.

Let’s dig into what I consider the key elements of the recording and production process.

I’ll start with a few basic principles and then describe how I created the above video, as a concrete example.

You and your message

That should be obvious, but as always the key is the content. I think that’s even more true here, as you’re not getting audience feedback while delivering your talk, so you won’t be able to adjust for bored (or over-excited) listeners.

Taking time to review your script, ideally with others, will help you get to the point and skip the boring parts.

Also, you have to be entertaining – not to the point of distracting from the topic, but enough to keep your audience engaged. Speaking or acting classes will help with that, and again I think that’s even more important when recording your talk in advance. It’s a bit hard when you’re alone in your room recording this: you’ll need some mental projection to imagine a happy audience and smile at them. Or get an actual audience if you can; even a small one will help.

It’s the audio!

Remember, it’s a conference talk, so audio is THE thing that you need to get right. Less-than-perfect visuals will work, up to a point, but audio that’s not intelligible or just tiring to listen to will scare your audience away.

There are many ways to fail in the audio department. A bad-sounding or poorly placed microphone, unwanted microphone noises (popping on “P” sounds, hitting the microphone, clothing or wind noise, etc.), background noise and a low sound level are the most common problems.

Taking time to get the audio right will help you deliver your message efficiently. When you’re starting out, experimenting is key. Learning about audio normalization, compression and equalization will help; there are lots of tutorials about those techniques on the Web.
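
If you like seeing concepts as code, here’s a deliberately naive Python sketch of what normalization and compression do. It assumes numpy and a mono track stored as floating-point samples between -1 and 1; real compressors also smooth their gain changes over time with attack and release envelopes, which this toy version skips.

    import numpy as np

    def normalize(samples, target_peak=0.9):
        # Scale the whole track so its loudest sample reaches target_peak.
        peak = np.max(np.abs(samples))
        return samples * (target_peak / peak) if peak > 0 else samples

    def compress(samples, threshold=0.5, ratio=4.0):
        # Toy compressor: whatever sticks out above the threshold is
        # reduced by the given ratio, sample by sample.
        out = samples.copy()
        loud = np.abs(out) > threshold
        excess = np.abs(out[loud]) - threshold
        out[loud] = np.sign(out[loud]) * (threshold + excess / ratio)
        return out

    # Compress first to tame the peaks, then normalize the result.
    processed = normalize(compress(np.random.uniform(-1, 1, 48000)))

Equalization is the same kind of idea applied per frequency band: boost what helps intelligibility, cut what doesn’t. Note the order above: compressing first tames the peaks, so the subsequent normalization can raise the average level, which is what our ears perceive as loudness.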

Are recordings more boring than live talks?

I think so, as one misses the interaction between the speaker(s) and audience.

As a result I recommend making recordings shorter and more to the point than a live talk would be. I often get bored watching talk recordings from conferences: without the interaction and liveliness of a room full of people, they are often way less entertaining.

Lighting and filming

I know much less about filming than audio recording, so I’ll avoid dispensing cheap knowledge here.

One thing that I do know, however, is how important lighting is to getting a good image, especially when using basic filming equipment. As mentioned, the lighting is poor in my example video, which leads to a somewhat “sad” picture, even after adding some corrections at the editing stage.

It still works, especially as a big part of the video is slides, which don’t have this problem, but next time I’ll pay more attention and try to get a nicer image. That shouldn’t take a lot of extra work, and I have enough lamps around the house to avoid buying additional equipment for now. Or I’ll wait for a sunny day and choose the right place to record!

Tools, and how I created the above video

The question of which tools to use often comes up, and my usual answer is that whatever does the job is fine.

However, describing how I created the above video might help you understand the process better, and how to best spend your time on the important parts.

For filming I used a basic compact camera, not even connected to my computer, to record the video. Getting a decent tripod helps, and don’t forget to disable the autofocus – which is exactly what I forgot to do in this case.

I used my basic USB headset microphone, which I know sounds decent – not beautiful, but ok. The positioning of the microphone is critical to getting a good sound, so you might want to experiment with that. A microphone that’s close to your mouth will cancel most ambient noises and unwanted resonances from the room, which is good. If, like me, you live close to a military airport, you might need to choose your recording day according to their flying schedule to avoid loud background noises. I do have better microphones, including a Madonna-style headset that I use when playing live, but it was broken on that day. The cheap USB thingy worked perfectly well with some processing downstream.

I used QuickTime’s audio recording on my MacBook to record the audio separately from the video, and clapped my hands at the beginning to be able to easily sync audio and video afterwards. Digital audio and video should stay in sync for a few minutes without problems even if recorded on separate devices, and if not, it’s relatively easy to fix with modern software by shifting tracks, especially with a reference such as a film-style clap.
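
If you’d rather compute that shift than nudge tracks by eye, a few lines of Python are enough. This is only a sketch under assumptions of my own: hypothetical file names, mono WAV exports at the same sample rate, and a clap that really is the first loud transient in both recordings.

    import numpy as np
    from scipy.io import wavfile

    def clap_position(path, threshold=0.5):
        # Return the sample rate and the index of the first sample that
        # exceeds the threshold - hopefully the clap.
        rate, data = wavfile.read(path)
        data = np.abs(data.astype(np.float64))
        peak = data.max()
        if peak > 0:
            data /= peak
        return rate, int(np.argmax(data > threshold))

    rate, camera = clap_position("camera_audio.wav")  # placeholder names
    _, separate = clap_position("usb_mic_audio.wav")
    print("shift the separate track by %+.3f seconds" % ((camera - separate) / rate))

In practice, visually lining up the clap spikes in the editing tool works just as well.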

For video editing, the simplest tool that works might be good enough, and maybe quicker to use than more elaborate tools. For this video I used Adobe Premiere Pro (hey, you know who I work for!), which is extremely powerful but requires some learning. After importing the video from my camera, keeping its audio only to check the sync with the separately recorded audio track, I started by laying down the slides against the audio and then added my talking head, resizing and placing it in a way that keeps the focus on the slides while making the movie more lively.

The slides were added as individual images at the video editing stage, after exporting those from my slides document. I didn’t use any screen recording software for that.

After editing the video I processed the audio with Logic Pro, my usual audio production software, which I still often use instead of Adobe Audition – another great choice. I’ve been using Logic for years, virtually since the early 90s, when I was using Notator, created by the same team, on an Atari 1040, so my motivation for switching is very low. Did I tell you about this time travel feeling?

Both tools can do much more than what we need here; any tool that can do audio normalization, EQ and compression should do. Actual audio editing is not needed at this stage, as it happened along with the video editing, in Premiere Pro in my case.

For audio processing I extracted the audio track from the edited video, processed it, and then re-imported it in place of the original track created during editing. I applied some EQ to make the sound clearer and remove unnecessary low-frequency energy, followed by an audio compressor to get a louder sound, and finally normalized the level to get a track that sounds loud, at a level similar to other online tracks. I might have added a touch of short reverb as well to beef up the sound, but if that’s the case it’s subtle – I cannot tell now by just listening to it.
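
I did all of this in Logic Pro, but for the curious, here’s roughly the same chain sketched in Python with the pydub library – not what I actually used, the file names are placeholders, and the cutoff, threshold and ratio values are just starting points to tune by ear.

    from pydub import AudioSegment
    from pydub.effects import high_pass_filter, compress_dynamic_range, normalize

    voice = AudioSegment.from_file("talk_audio.wav")
    voice = high_pass_filter(voice, cutoff=100)  # EQ: cut low-frequency rumble
    voice = compress_dynamic_range(voice, threshold=-20.0, ratio=4.0)
    voice = normalize(voice, headroom=1.0)       # peaks about 1 dB below full scale
    voice.export("talk_audio_processed.wav", format="wav")

The order matters: compressing before normalizing raises the average level without clipping the peaks.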

The editing and audio processing took me about four hours, which is not much compared to having to travel to Berlin and back to deliver that talk.

Listening to the audio on different speakers, including crappy ones if possible, is required to verify that your audio will sound good for everybody, whatever equipment they use. Let your ears be your guide in getting a sound that emphasizes clarity and perceived loudness, without obvious processing artifacts. An audio track that sounds good at a low volume is usually a good indicator of sound quality.

Coda

I hope this is useful advice if you have to record a conference talk video, and I’m happy to answer any questions in the comments below!

To validate your recording and editing process and tools, start by recording a short segment and verify that everything works, to avoid wasting time on recordings that are not usable.

I’ll have to record more videos soon, for the upcoming ApacheCon @Home 2020 and adaptTo 2020 conferences, and the plan is to use a green screen to better integrate my talking head with the slides. I’ll add links to those videos here when they are ready!


JCR in 15 minutes

November 3, 2009

We had a great NoSQL meeting yesterday evening colocated with ApacheCon. Thanks Jukka for organizing!

I was in track B for the second part, and found it very interesting to compare three different approaches to non-relational content storage: MarkLogic server, JCR and Pier Fumagalli’s Lucene+DAV technique.

I also quite liked Steve Yen’s “horseless carriage” way of looking at NoSQL. Defining things by what they are, as opposed to what they are not, sounds like a good idea.

I gave a short talk about JCR; find the slides below. Of course, as usual, they’re not as good as when I’m there to talk about them ;-)


Back from a great IKS project meeting

May 29, 2009

I’m on my way back from Salzburg where the Salzburg Research team organized a great meeting for the IKS project. Flawless organization as usual, thanks and congrats!

Today’s requirements workshop featured an impressive collection of very powerful brains (and nice people to hold them ;-), including, besides the usual IKS suspects, representatives from more than twenty CMS communities and companies.

I was a bit worried at first that IKS, being mostly in a requirements definition phase, didn’t have much to show to those people, but today’s brainstorming went very well, and the results exceeded my expectations.

The most important result for me is agreeing to set up a prototype semantically enhanced search engine that will use metadata and RDFa embedded in web pages to index content. This will provide the IKS community with a testbed for semantically enhanced websites, and allow us to demonstrate the usefulness of embedded semantic information by making full use of it for searching instead of just enhancing the display of search results. The extracted data might also be very useful for our academic partners to run experiments on real-life data that we’re familiar with. We might not have to write lots of code to set up such a search engine, but it’s important to have our own thing that people can also run behind firewalls, if needed to run experiments on private data.

The second result that I’m excited about is agreeing to work together on a prototype of a semantic rich text content editor, where you’ll get functions like insert person or insert company besides the usual insert link and insert image functions. This will allow us to start making our customers more aware of the importance of semantic markup, in a way that’s not too different from what they’re doing now.

Last but not least in my list of results-that-got-me-excited-about-all-this is agreeing on the creation of a list of simple user stories that demonstrate what IKS is about, in a very simple and understandable way, while allowing us to define use cases and features that might be challenging to implement today.

More complete information about the meeting should be available on the IKS project blog in the next few days; make sure to subscribe to that. For now, Bergie (who suggested the semantic editor project) has been taking notes on Quaiku if you’re eager to learn more.

To take part in (or just follow) these projects, subscribe to the IKS mailing list, which is going to be our communications hub.

Hope to see you there – in a week from now, as next week is my cycling-in-France/offline holiday. Looking forward to getting more familiar with the 29er before the next, more off-road trip in a few weeks.


CMIS could be the MIDI interface of content management…

April 28, 2009

MIDI – the Musical Instrument Digital Interface – was created back in 1982 by a consortium of musical equipment manufacturers including, if I remember correctly, Roland, Yamaha, Sequential Circuits, Korg, Oberheim (I’ve got a Matrix 6 to sell BTW ;-), maybe Ensoniq (did they exist already?) and others. Companies that were fiercely competing in the market, individualistic industry leaders who agreed to get together to create a bigger market for their instruments and equipment.

My diploma work as an electronics engineer was about MIDI, in 1983 – I created a MIDI output interface that could be retrofitted into accordions. The spec was not final at the time (or at least I could not get a final version – that was before the web, of course); all I had in terms of specs were a few magazine articles, a Yamaha DX7 and one of the first Korg synths to have MIDI. Both synths had slightly different implementations, and some compatibility problems, as can be expected from an early and not yet widespread spec.

What’s happening with CMIS today sounds quite similar: competing vendors finally agreeing on an interoperability spec, even if it’s limited to a lowest common denominator. If this works as with MIDI, we’re in for some exciting times – the few years after 1982 saw a boom in MIDI-related electronic instruments and systems, as suddenly all kinds of equipment from different companies could talk together.

MIDI had serious shortcomings: a slow transmission rate, serial transmission meaning each note in a thick chord is delayed by nearly one millisecond, and somewhat limited data ranges for some real-time controllers. But the basic idea was great: let’s get something done that allows our instruments to talk together in a usable fashion, even if it’s not perfect. MIDI has survived until today, 27 years later, which is quite amazing for such a standard. It’s been tweaked, workarounds (including hardware extensions) have been used to adapt it to evolving needs, and it often travels via USB or other fast channels today, but it’s still here, and its impact on the music equipment industry is still visible.

I must admit that I was quite disappointed with the CMIS spec when I first looked at it, especially due to the so-called REST bindings, which aren’t too RESTful. And CMIS seems to consider a “document” as the unit of content, whereas JCR converts like myself prefer to work at a more atomic level. And don’t tell me that hierarchies are a bad thing in managing content – you might want to ignore them in some cases, but micro-trees are a great way of organizing atoms of content.

Nevertheless, seeing the enthusiasm around the soon-to-be-incubating Apache Chemistry project (that link should work in a few days, how’s that for buzz building?) made me think about MIDI, and how amazing it was at the time that “commercial enemies” could get together to do something that finally benefitted the whole industry.

I still don’t understand why WebDAV can’t do the job if this is about documents, and I still prefer JCR for actual work with content (considering that everything is content), but I’m starting to think that CMIS might make a big difference. It will need a test suite for that, of course – software engineers know that interoperability without test suites can’t work – and this week’s CMIS plugfest is a good step in this direction. I’ll be around on Thursday, looking forward to it!


Oracle buys MySQL (as part of Sun) – a great time to have another look at content repositories!

April 20, 2009

Lots of noise (and some gems) about the acquisition of Sun by Oracle on Twitter today. But contrary to Oracle’s content servers, Twitter seems to be holding up quite well.

I half-jokingly added my own noise saying that now’s a good time for people worried about MySQL’s future to switch to JCR, and Bergie agrees!

Rereading this post, what follows sounds a bit like marketingspeak, but it’s not – just enthusiasm!

We’ve been discussing the similarities between Midgard and JCR earlier this year with him and also with Jukka, and it’s amazing to see how close the models of Midgard and JCR are. With their TYPO3CR, the TYPO3 folks also agree that the JCR model is extremely well suited for content storage and manipulation. Midgard2 doesn’t use the JCR APIs, but as mentioned above the concepts are very similar.

Having made the move myself from wire-some-object-relational-stuff-on-top-of-sql-and-suffer-forever to JCR as an API that’s designed from the ground up to work with granular content, including observation, unstructured nodes and many other nice features, I’m not going back!

If you’re working with content (and yes, everything is content anyway), and started wondering about the future of MySQL today, now might be a good time to take another look at JCR. Apache Jackrabbit has been making huge progress in the last two years with respect to performance and reliability, and Apache Sling makes it much easier than before to get started with JCR, mostly due to its HTTP/JSON storage and query interface which takes the J out of JCR.

Never had so many (meaningful) replies and retweets on Twitter before today – but I started by wondering why CMIS wants to reinvent WebDAV, so no wonder. We’ll save that one for another time I guess.


Open Source Collaboration Tools are Good for You – refreshed and live tomorrow!

April 1, 2009

I have refreshed and slightly expanded this presentation for tomorrow at OpenExpo in Bern – the main addition is a discussion of the fear of making mistakes in public.

Talking to attendees at ApacheCon last week showed that people often struggle to introduce these tools and this open way of working in their companies. It seems that this fear can be an important blocking factor, and that people are rarely explicitly aware of it.

(See how serious I am? This is April 1st and I’m not even making lame jokes!)

Update: the video is available on YouTube as part of the OpenExpo channel.


Tales from the OSGi trenches

March 25, 2009

My Tales from the OSGi trenches presentation today at ApacheCon went well, timing was surprisingly good given that I gave this talk for the first time.

People can certainly relate to the issues that we’ve been facing with OSGi, and the realization is that the large majority of them can be traced to a lack of developer education, documentation and examples.

Things will get better, but my conclusions page already has a lot more smileys than monsters!


Ready for ApacheCon Europe 2009

March 21, 2009


I’ll be giving three talks next week at ApacheCon, on OSGi, Apache Sling and Open Source collaboration tools.

Ruwan Linton’s OSGi talk, which is scheduled after mine on Wednesday, also presents practical experiences with OSGi. I’m looking forward to comparing our experiences, and people should probably attend both talks to get the whole picture.

I’m also very much looking forward to meeting new people and old friends there, including the Jackrabbit/Sling folks at Tuesday’s JCR/Jackrabbit/Sling meetup.

Before that I’ll be in Rome for a meeting of the IKS project, talking about requirements and use cases for semantically enhanced CMSes. Looks like a packed but very interesting week ahead – lots of context switches though ;-)

Update: forgot to mention Carsten Ziegeler’s Embrace OSGi – A Developer’s Quickstart presentation, which comes right before mine – attending that one will also help put mine in context, as I won’t cover the basics of OSGi.


The CMS vendor meme

March 18, 2009

Yesterday my colleague Michael Marth launched a cool CMS vendor meme, challenging other vendors to self-evaluate their products according to the we-get-it checklist suggested by Kas Thomas.

Many vendors have already responded. Except those who don’t know about Twitter or blogs, of course. You don’t want to buy from them anyway ;-)

To help people find pages related to this meme around the web, I suggest adding the string 9c56d0fcf93175d70e1c9b9d188167cf to such pages, so that a Google query can find them all.

As I said on the dev.day.com post, this number is the md5 of some great software; the first person to tell me which file that is gets a free beer or equivalent beverage!


Looking for use cases for a semantically enhanced CMS

March 10, 2009

Day is participating as an industrial partner in the Interactive Knowledge Stack (IKS) project, which aims to provide an open source technology platform for semantically enhanced content management systems.

We are starting to collect use cases for a semantically enhanced CMS – although I’m not 100% sure what “semantically enhanced” means (and I assume it means different things to different people), I have started with use cases like the following:

When I drop an image of a house in my content, the system allows me to see images of similar houses, and pages that talk about houses.

When I start writing a new piece of content, the system optionally shows me similar content that’s already in the repository, even if written in other languages.

The system allows me to formulate queries like “recent pages that talk about houses to rent in the French part of Switzerland”.

If you have additional ideas for such use cases, or examples of systems that provide such features, I’m all ears!