SSL/TLS certificates with Let’s Encrypt

February 4, 2021

This is mostly a notes-to-self post, about something that I rarely do and that has become much easier than I remembered.

I recently created a website for a community project in my spare time, using React. That was a lot of fun, and it also showed me how much things have evolved since I last did any substantial front-end work, a long time ago.

But this post is not about React, it’s about creating and checking the necessary SSL/TLS certificate. The hosting service does not create wildcard certificates by default and I needed that for “historical” reasons.

So here we go – these are just my recipes to remember for next time.

Creating a certificate using certbot

Here’s the command that I used to generate a Let’s Encrypt certificate using certbot.

This required setting up a DNS TXT record for validation and waiting for it to become active by checking the domain’s DNS servers: dig NS to find them, then dig @thednsserver.xx TXT <domain> to verify that the TXT record is live.
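Spelled out, the verification looks roughly like this. Note that example.org and ns1.example.net are placeholders, and the snippet only builds and prints the dig commands rather than running them, since they need live DNS. One detail worth remembering: certbot’s dns-01 challenge puts the TXT record under the _acme-challenge prefix.

```shell
# Placeholders - substitute your own domain and one of its name servers
# (taken from the output of the first command).
DOMAIN=example.org
NS=ns1.example.net

# certbot's dns-01 challenge expects the TXT record at _acme-challenge.<domain>
FIND_NS="dig +short NS $DOMAIN"
CHECK_TXT="dig +short @$NS TXT _acme-challenge.$DOMAIN"

echo "$FIND_NS"
echo "$CHECK_TXT"
```

Repeat the second command against each authoritative name server until they all return the expected TXT value, then let certbot proceed with the validation.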

sudo certbot certonly --manual --preferred-challenges dns -d * -d
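For the record, the full shape of the command looks like this, with a placeholder domain. The snippet just prints the command instead of running it, since the real thing needs root and live DNS.

```shell
# Placeholder domain - substitute your own.
DOMAIN=example.org

# Request a certificate covering both the wildcard and the bare domain;
# --manual with the dns challenge makes certbot print the TXT record to create.
# When running this for real, keep the wildcard quoted ("*.$DOMAIN") so the
# shell does not expand it.
CMD="sudo certbot certonly --manual --preferred-challenges dns -d *.$DOMAIN -d $DOMAIN"
echo "$CMD"
```

Passing both -d options matters: the wildcard covers the subdomains, and the second entry covers the bare domain itself, which the wildcard does not.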

Checking the certificate’s Subject Alternative Names

Once the certificate is installed, and before sending traffic to your website, you’ll need to check that it includes the correct Subject Alternative Names if multiple subdomain names point to it.

Running the command below produces a text dump of the certificate, and grepping that for DNS lists those names:

openssl s_client -showcerts -connect < /dev/null | openssl x509 -text | grep DNS
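To try the extraction offline, without a live server, one can generate a throwaway self-signed certificate with a couple of SAN entries and run the same x509 | grep step on it. The domain names here are placeholders, and the -addext option needs OpenSSL 1.1.1 or later:

```shell
# Create a throwaway self-signed certificate with two SAN entries.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/san-demo.key -out /tmp/san-demo.crt -days 1 \
  -subj "/CN=example.org" \
  -addext "subjectAltName=DNS:example.org,DNS:www.example.org"

# Same extraction as against a live server, minus the s_client part.
SANS=$(openssl x509 -in /tmp/san-demo.crt -noout -text | grep DNS)
echo "$SANS"
```

The output should be a single indented line listing DNS:example.org and DNS:www.example.org – with a real wildcard certificate you’d expect to see the *.domain entry there too.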

which outputs the list of DNS names included in the certificate.

That’s it! At this point the certificate is installed and verified, and we can send traffic to the website. Just remember to renew the certificate before it expires, either automatically if your setup allows it, or manually using commands similar to the ones shown here.

How to record decent conference videos – without breaking the bank!

August 6, 2020

This being the year of COVID-19, many of us are recording videos for online conferences.

While I cannot claim to be a professional video producer – far from it – I did work for a video studio during my studies (a long time ago, in the Betacam days) and have been doing audio recording and mixing on and off since then, moving from analog tools to today’s incredibly powerful software and cheap but decent microphones.

And I feel like a time traveler while doing that… but I digress. Let’s see if I can provide some useful advice about recording videos for conference talks.

As an example here’s a video that I recorded for the 2020 FOSS Backstage conference in March 2020.

It’s far from perfect: the image quality and especially the lighting are not fantastic, and I forgot to turn off the camera’s autofocus, which leads to a few “where’s that focus?” sequences.

On the other hand I think the sound is good, loud and clear, which is key in efficiently delivering the message and keeping your audience engaged.

Let’s dig into what I consider the key elements of the recording and production process.

I’ll start with a few basic principles and then describe how I created the above video, as a concrete example.

You and your message

That should be obvious, but as always the key is the content. I think that’s even more true as you’re not getting audience feedback while delivering your talk, so you won’t be able to adjust for bored (or over-excited) listeners.

Taking the time to review your script, ideally with others, will help you get to the point and skip the boring parts.

Also, you have to be entertaining – not to the point of distracting from the topic, but sufficiently to keep your audience engaged. Speaking or acting classes will help with that, and again I think that’s even more important when recording your talk in advance. It’s a bit hard when you’re alone in your room recording this; you’ll need some mental projection to imagine a happy audience and smile at them. Or get an actual audience if you can – even a small one will help.

It’s the audio!

Remember, it’s a conference talk, so audio is THE thing that you need to get right. Less-than-perfect visuals will work, up to a point, but audio that’s not intelligible or just tiring to listen to will scare your audience away.

There are many ways to fail in the audio department. A bad-sounding or poorly placed microphone, unwanted microphone noises (popping on “P” sounds, hitting the microphone, clothing or wind noise, etc.), background noise and low volume are the most common problems.

Taking time to get the audio right will help deliver your message efficiently. When you’re starting up, experimenting is key. Learning about audio normalization, compression and equalization will help, there are lots of tutorials about those techniques on the Web.

Are recordings more boring than live talks?

I think so, as one misses the interaction between the speaker(s) and audience.

As a result I recommend recordings to be shorter and more to the point than a live talk would be. I often get bored watching talk recordings from conferences, as without the interactions and liveliness of a room full of people they are often way less entertaining.

Lighting and filming

I know much less about filming than audio recording, so I’ll avoid dispensing cheap knowledge here.

One thing that I know however is how important lighting is to getting a good image, especially when using basic filming equipment. As mentioned, the lighting is poor in my example video. That leads to a somewhat “sad” picture, even after adding some corrections at the editing stage.

It still works, especially as a big part of the video is slides which don’t have this problem, but next time I’ll pay more attention and try to get a nicer image. That shouldn’t take a lot of extra work and I have enough lamps around the house to avoid buying additional equipment for now. Or I’ll wait for a sunny day and choose the right place to record!

Tools, and how I created the above video

The question of which tools to use often comes up, and my usual answer is that whatever does the job is fine.

However, describing how I created the above video might help you understand the process better, and how to best spend your time on the important parts.

For filming I used a basic compact camera, not even connected to my computer. Getting a decent tripod helps, and don’t forget to disable the autofocus – I forgot to in this case.

I used my basic USB headset microphone which I know sounds decent – not beautiful, but ok. The positioning of the microphone is critical to getting a good sound, you might want to experiment with that. A microphone that’s close to your mouth will cancel most ambient noises and unwanted resonances from the room which is good. If like me you live close to a military airport you might need to choose your recording day according to their flying schedule to avoid loud background noises. I do have better microphones including a Madonna-style headset that I use when playing live, but it was broken on that day. The cheap USB thingy worked perfectly ok with some processing downstream.

I used Quicktime Audio on my Macbook to record the audio separately from the video, and did a clap with my hands at the beginning to be able to easily sync audio and video afterwards. Digital audio and video should stay in sync for a few minutes without problems even if recorded on separate devices, and if not it’s relatively easy to fix with modern software by shifting things, especially with references such as a film-style clap (that you can do with your hands).

For video editing, the simplest tool that works might be good enough, and maybe quicker to use than more elaborate tools. For this video, I used Adobe Premiere Pro (hey, you know who I work for!) which is extremely powerful but requires some learning. After importing the video from my camera, keeping its audio only to check the sync with the separately recorded audio track, I started by laying down the slides to the audio and then added my talking head, resizing and placing it in a way that keeps the focus on the slides while making the movie more lively.

The slides were added as individual images at the video editing stage, after exporting those from my slides document. I didn’t use any screen recording software for that.

After editing the video I processed the audio with Logic Pro, my usual audio production software, which I still often use instead of Adobe Audition, another great choice. I’ve been using Logic for years – virtually since the early 90s, when I was using Notator, which the same team created and ran on an Atari 1040 – so my motivation for switching is very low. Did I tell you about this time travel feeling?

Both tools can do much more than what we need here, any tool that can do audio normalization, EQ and compression should do. Actual audio editing is not needed at this stage as it happened along with the video editing, in Premiere Pro in my case.

For audio processing I extracted the audio track from the edited video, processed it and then re-imported it in place of the original track created during editing. I applied some EQ to make the sound clearer and remove unnecessary low-frequency energy, followed by an audio compressor to get a louder sound, and finally normalized the level to get a track that sounds loud, at a similar level to other online tracks. I might have added a touch of short reverb as well to beef up the sound, but if that’s the case it’s subtle – I cannot tell now by just listening to it.

The editing and audio processing took me about four hours, which is not much compared to having to travel to Berlin and back to deliver that talk.

Listening to the audio on different speakers, including crappy ones if possible, is required to verify that your audio will sound good for everybody, whatever equipment they use. Let your ears be your guide in getting a sound that emphasizes clarity and perceived loudness, without obvious processing artifacts. An audio track that sounds good at a low volume is usually a good indicator of sound quality.


I hope this is useful advice if you have to record a conference talk video, and I’m happy to answer any questions in the comments below!

To validate your recording and editing process and tools, start by recording a short segment and verifying that everything works, to avoid wasting time on recordings that are not usable.

I’ll have to record more videos soon, for the upcoming ApacheCon @Home 2020 and adaptTo 2020 conferences, and the plan is to use a green screen to better integrate my talking head with the slides. I’ll add links to those videos here when they are ready!

Update: here’s the ApacheCon talk recording that I made later, using a green screen processed live with OBS, the Open Broadcaster Software. I had a bit of trouble lighting the green screen, which causes some visual artifacts, but I think it still works well. Someone mentioned that in the first video posted here I was often looking away from the audience; this new video is much better from that point of view – thanks Joerg for your comment!

Apache: lean and mean, durable, fun!

May 19, 2017

Here’s another blog post of mine that was initially published by Computerworld UK.

My current Fiat Punto Sport is the second Diesel car that I own, and I love those engines. Very smooth yet quite powerful acceleration, good fuel savings, a discount on state taxes thanks to low pollution, and it’s very reliable and durable. And fun to drive. How often does Grandma go “wow” when you put the throttle down in your car? That happens here, and that Grandma is not usually a car freak.

Diesel engines used to be boring, but they have made incredible progress in the last few years – while staying true to their basic principles of simplicity, robustness and reliability.

The recent noise about the Apache Software Foundation (ASF) moving to Git, or not, made me think that the ASF might well be the (turbocharged, like my car) Diesel engine of open source. And that might be a good thing.

The ASF’s best practices are geared towards project sustainability, and building communities around our projects. That might not be as flashy as creating a cool new project in three days, but sometimes you need to build something durable, and you need to be able to provide your users with some reassurances that that will be the case – or that they can take over cleanly if not.

In a similar way to a high tech Diesel engine that’s built to last and operate smoothly, I think the ASF is well suited for projects that have a long term vision. We often encourage projects that want to join the ASF via its Incubator to first create a small community and release some initial code, at some other place, before joining the Foundation. That’s one way to help those projects prove that they are doing something viable, and it’s also clearly faster to get some people together and just commit some code to one of the many available code sharing services, than following the ASF’s rules for releases, voting etc.

A Japanese 4-cylinder 600cc gasoline-powered sports bike might be more exciting than my Punto on a closed track, but I don’t like driving those in day-to-day traffic or on long trips. Too brutal, requires way too much attention. There’s space for both that and my car’s high tech Diesel engine, and I like both styles actually, depending on the context.

Open Source communities are not one-size-fits-all: there’s space for different types of communities, and by exposing each community’s positive aspects, instead of trying to get them to fight each other, we might just grow the collective pie and live happily ever after (there’s a not-so-hidden message to sensationalistic bloggers in that last paragraph).

I’m very happy with the ASF being the turbocharged Diesel engine of Open Source – it does have to stay on its toes to make sure it doesn’t turn into a boring old-style Diesel, but there’s no need to rush evolution. There’s space for different styles.

Shared neurons and the Shadok’s First Law of Failure

January 30, 2017

This blog post of mine was originally published by Computerworld UK in 2010.

French-speaking fortysomethings might remember the Shadoks, a two-minute animated TV series that aired on ORTF when I was a kid.

Those silly (and funny) creatures need to get to Earth, as their own planet, on the left of the sky, is falling to pieces. They spend their time pumping on bicycle-like contraptions to produce Cosmogol 999 fuel to power their rocket towards Earth, while respected Professor Shadoko keeps them motivated with his motto: “the more you fail, the more likely you are to eventually succeed”. So they keep failing, every single time, hoping that statistics will prove them right some day.

A researcher colleague recently asked me to demonstrate that the work done by Apache communities is more than the sum of the individual capabilities of its members. I think the answer is yes, and that how we cope with failure has a lot to do with it.

Demonstrating this with hard numbers would be difficult, but having been active in Apache projects for the last ten years I can think of a number of examples where what we sometimes call “shared neurons” make all the difference.

It usually starts with a random thought – an often half-baked idea, that might be totally unrealistic or impossible to implement, but is nevertheless exposed to the community as is, and often starts a fruitful discussion. People bounce off each other’s ideas, in a written brainstorm that’s slower than if you were talking face to face, but sometimes more efficient as people have more time to think about their contributions to the discussion.

Brainstorming in the same room is faster, but brainstorming with a worldwide group of specialists – that’s much more powerful than Cosmogol 999, even if it has to happen in writing. But sometimes that turns into a Shadok fuel-producing exercise that fails to produce useful results.

Some Apache (and other) projects use those random thoughts as a first-class design and architecture tool, marking such email discussions with [RT] in their subject lines. This serves as a warning that the discussion is not necessarily about something feasible, it’s more about putting people’s neurons together to shape and refine potentially useful ideas. And sometimes those ideas are just too silly or unrealistic to go anywhere – yet the corresponding discussions stay archived forever on our mailing lists.

Some people are afraid of exposing their unfinished ideas to their communities so early, and having them archived for posterity whatever the outcome. Your idea might turn out to be a stupid one once people nicely demonstrate why it won’t work, or someone might point you to that part of code that does exactly what you were dreaming about, and that you had forgotten about. Silly you.

In my opinion, being ready to accept such failures makes you a much more efficient contributor to open communities. Although you don’t need to fail as consistently as the Shadoks, being ready to fall on your face once in a while allows you to contribute your wildest ideas to the discussion. Combined with respect for others’ crazy ideas, and with the ability to explain nicely when people are wrong, this is a recipe for efficient sharing of neurons, where the sum is definitely greater than the parts.

I’m seeing this neuron sharing in action on our Apache mailing lists all the time, and I think accepting the risk of being wrong when exposing one’s ideas on our public lists makes a big difference. We’re not Shadoks after all – our failure stats are way too low.

Continuous Deployment with Apache Sling

September 2, 2014

Today I had the pleasure of attending the Master’s thesis defense of our intern Artyom Stetsenko, titled Continuous Deployment of Apache Sling Applications.

Coaching Artyom for this project has been a pleasure: he did a great job and worked independently while listening very well to our advice. He got an excellent, well-deserved mark for his thesis – partly thanks to an excellent no-bullets presentation!

I have uploaded Artyom’s thesis paper here, with his permission. The code is available at As the name indicates that’s experimental code, but the resulting Sling-based cluster with automated atomic deployment is functional. Just push an updated crankstart file to the Git repository and the cluster is updated atomically and without downtime.

For me the main goal was to see how we can improve Apache Sling’s support of modern operations, with continuous deployment, immutable instances etc. I’m continuing my explorations with a Docker-based Sling cluster, the main goal being to create simple clustered environments that allow us to play with these things.

Update: I forgot to mention that my Docker cluster prototype is the basis for my upcoming talk at adaptTo() 2014 on September 23rd in Berlin. The talk’s title is “Apache Sling and devops – the next frontier” and I’ll talk about how Sling can be made more operations-friendly.

Generating hard to guess content URLs in Sling

October 27, 2010

In RESTful apps, it is often useful to create hard to guess URLs, as a simple privacy device.

Here’s a self-explaining example (with hardcoded parameters) of how to do that in Sling.

After installing this component, an HTTP POST to a node named ‘foo’ creates a child node with a somewhat long hex string as its name, instead of the usual simple names generated by Sling.

package foo;

import java.util.Random;

import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.servlets.post.NodeNameGenerator;

/** Example that generates hard-to-guess node names in Sling,
 * for nodes added under nodes named 'foo'.
 * To test, build and install a bundle that includes this component,
 * and run
 * <pre>
 *   curl -X MKCOL http://admin:admin@localhost:4502/foo
 *   curl -F title=bar http://admin:admin@localhost:4502/foo/
 * </pre>
 * The second curl call should return something like
 * <pre>
 *   Content created /foo/dd712dd234637bb9a9a3b3a10221eb1f
 * </pre>
 * which is the path of the created node.
 */
@Component
@Service
public class FooNodeNameGenerator implements NodeNameGenerator {
    private static final Random random = new Random(System.currentTimeMillis());

    /** @inheritDoc */
    public String getNodeName(
            SlingHttpServletRequest request,
            String parentPath,
            boolean requirePrefix,
            NodeNameGenerator defaultNng) {
        if(parentPath.endsWith("/foo")) {
            // Concatenate the hex values of two random longs
            // to build a hard-to-guess name
            final StringBuilder name = new StringBuilder();
            for(int i=0; i < 2; i++) {
                name.append(Long.toHexString(random.nextLong()));
            }
            return name.toString();
        }
        return null;
    }
}
My best (and worst) ApacheCon so far – thanks and random thoughts

November 8, 2008

ApacheCon US 2008 in New Orleans has been my best ApacheCon so far! Brain dump ahead.

Great socializing and networking, which should lead to exciting new projects in the near future.

My two talks were well received as far as I can tell.

Nice emerging buzz about Sling. Free polo shirts helped – well done Carsten! We have to do something about our docs and examples so as to not scare people away, in the meantime I’m sure the Sling folks will be happy to offer guidance on how best to structure things if you want to try it.

Big thanks to everybody involved in this very successful event – it is impossible to mention everybody here, so I’ll just pick and choose: Shane Curcuru who did a great job as the conference lead. The Stone Circle team for making everything happen in a flawless way.

Thanks to President-elect Obama for allowing me to spend a historic day in the US.

Thanks to Glen Daniels for finding the jam session places – the first one was not too hot from the musical point of view (drumming on a beer keg only goes so far), but interesting to visit – ever seen a laundromat in a bar?

Thanks to the folks (sponsors IIUC) who organized the parade on Thursday evening. Walking in the streets following the great Rebirth Brass Band with cops opening the way on their nice Harleys was awesome.

Thanks to the New Orleans people for being so nice and friendly, even though I sometimes had a hard time understanding them. I’ll work on that.

For some reason, swimming in the outdoor pool at 7:30AM on a foggy day made us suspect in the eyes of the security guards. Normal hotel customers don’t do that I guess, but who said Swiss Apache folk are normal hotel guests?

Thanks to the local musicians for playing so well.

On Tuesday a few of us escaped from the barcamp (which was great, but one cannot do everything I guess) for a swamp tour with Cajun encounters. Definitely recommended, our guide was a genuine Cajun, born in the area and with lots of stories to tell. I didn’t bring a camera, but photos should be available on Flickr.

That’s for the “best ApacheCon” part – the “worst ApacheCon” is about jetlag taking almost a week to go away, and leaving me in a miserable state on Tuesday when I had a lot of work to do to finish preparing my talks and for the Big Release.

For some reason, many of us had a hard time with jetlag this week. That might be related to the food (deep fried everything) or the lack of fresh air and sunlight. Not that it wasn’t sunny, but sunset around 6PM means one does not see much natural light if following the conference sessions.

See you in Amsterdam in March for ApacheCon EU 2009!

(long post hey – that’s what you get for locking me in planes for about 10 hours ;-)

No LIFT – but thanks nouvo for the video

February 8, 2008

No LIFT for me once again this year, due to a timing collision with The Important Phase of That Project.

But thanks to nouvo, many videos are available to catch up with. It’s not the same thing though; there are many people who I’d have loved to meet there.

Google Groups, could you talk to your Gmail cousin a bit more?

February 8, 2008

Even though Gmail bothers us with its Sender header, Google Groups’s set of virtual gray matter is apparently too small to figure out that

From: "Bertrand Delacretaz" <>

comes from

Can’t those guys talk together?

Going Solo – Lausanne, May 16th, 2008

January 22, 2008

(no I’m not going back to freelancing ;-)

Stephanie Booth announces the Going Solo conference, for freelancers and small business owners of the internet industry, in (our beautiful) Lausanne on May 16th.

More info at!