SSL/TLS certificates with Let’s Encrypt

February 4, 2021

This is mostly a notes-to-self post about something that I rarely do, and that has become much easier than I remembered.

I recently created a website for a community project in my spare time, using React. That was a lot of fun, and it also showed me how much things have evolved since I last did any substantial front-end work, a long time ago.

But this post is not about React, it’s about creating and checking the necessary SSL/TLS certificate. The hosting service does not create wildcard certificates by default and I needed that for “historical” reasons.

So here we go, these are just my recipes to remember for next time.

Creating a certificate using certbot

Here’s the command that I used to generate a Let’s Encrypt certificate using certbot.

This required setting up a DNS TXT record for validation, and waiting for it to be active by checking the domain’s DNS servers: use dig NS <domain> to find them and then dig @thednsserver.xx TXT <domain> to verify that the TXT records are live.

sudo certbot certonly --manual --preferred-challenges dns -d "*.<domain>" -d <domain>

Checking the certificate’s Subject Alternative Names

Once the certificate is installed on your website, and before sending traffic to it, you’ll need to check that it includes the correct Subject Alternative Names, especially if multiple subdomain names point to it.

Running the below command produces a text dump of the certificate, and grepping that for DNS lists those names:

openssl s_client -showcerts -connect <domain>:443 < /dev/null | openssl x509 -text | grep DNS

which outputs the list of DNS names that the certificate covers, one DNS: entry per name.
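If you prefer scripting that check, here’s a minimal sketch using only Python’s standard library `ssl` module – my own addition, not part of the original recipe. It extracts the DNS entries from the certificate dictionary that `ssl.SSLSocket.getpeercert()` returns:

```python
import socket
import ssl

def dns_sans(cert):
    """Extract the DNS names from a certificate dict, as returned
    by ssl.SSLSocket.getpeercert()."""
    return [value for kind, value in cert.get("subjectAltName", ())
            if kind == "DNS"]

def fetch_cert(hostname, port=443):
    """Connect to the server over TLS and return its validated
    certificate as a dict."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()
```

With the wildcard certificate created above, `dns_sans(fetch_cert("<domain>"))` should list both the apex domain and the wildcard entry.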

That’s it! At this point the certificate is installed and verified, and we can send traffic to the website. Just remember to renew the certificate before it expires – either automatically if your setup allows it, or manually using commands similar to what we saw here.
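Since forgetting the renewal is the classic failure mode, here’s a small sketch (again my own addition, standard library only) that computes the days left before expiry. The date format is the `notAfter` format found in the `getpeercert()` output, which `ssl.cert_time_to_seconds` knows how to parse:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate expires.

    not_after uses the notAfter format from
    ssl.SSLSocket.getpeercert(), e.g. 'May  9 12:00:00 2021 GMT'.
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    current = time.time() if now is None else now
    return (expiry - current) / 86400.0
```

Let’s Encrypt certificates are valid for 90 days, so alerting when this drops below a couple of weeks leaves a comfortable margin.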

How to record decent conference videos – without breaking the bank!

August 6, 2020

This being the year of COVID-19, many of us are recording videos for online conferences.

While I am far from being a professional video producer, I did work for a video studio during my studies (a long time ago – Betacam days, yes) and have been doing audio recording and remixing on and off since then, moving from analog tools to the incredibly powerful software and cheap but decent microphones of today.

It feels like time traveling while doing that… but I digress. Let’s see if I can provide some useful advice about recording videos for conference talks.

As an example here’s a video that I recorded for the 2020 FOSS Backstage conference in March 2020.

It’s far from perfect: the image quality and especially the lighting are not fantastic, and I forgot to turn off the autofocus on the camera, which leads to a few “where’s that focus?” sequences.

On the other hand I think the sound is good, loud and clear, which is key to efficiently delivering the message and keeping your audience engaged.

Let’s dig into what I consider the key elements of the recording and production process.

I’ll start with a few basic principles and then describe how I created the above video, as a concrete example.

You and your message

That should be obvious, but as always the key is the content. I think that’s even more true as you’re not getting audience feedback while delivering your talk, so you won’t be able to adjust for bored (or over-excited) listeners.

Taking the time to review your script, ideally with others, will help you get to the point and skip the boring parts.

Also, you have to be entertaining – not to the point of distracting from the topic, but sufficiently to keep your audience engaged. Speaking or acting classes will help with that, and again I think that’s even more important when recording your talk in advance. It’s a bit hard when you’re alone in your room recording this; you’ll need some mental projection to imagine a happy audience and smile at them. Or get an actual audience if you can – even a small one will help.

It’s the audio!

Remember, it’s a conference talk, so audio is THE thing that you need to get right. Less-than-perfect visuals will work, up to a point, but audio that’s not intelligible or just tiring to listen to will scare your audience away.

There are many ways to fail in the audio department. A bad-sounding or poorly placed microphone, unwanted microphone noises (popping on “P” sounds, hitting it, clothing or wind noise, etc.), background noise and low sound volume are the most common problems.

Taking time to get the audio right will help deliver your message efficiently. When you’re starting out, experimenting is key. Learning about audio normalization, compression and equalization will help; there are lots of tutorials about those techniques on the Web.
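To make one of those techniques concrete, here’s a toy sketch of peak normalization – my own illustration, not taken from any specific tool – applied to float audio samples:

```python
def normalize_peak(samples, target_peak=0.9):
    """Scale samples so that the loudest one reaches target_peak.

    samples are floats in [-1.0, 1.0], as in typical float PCM audio.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence stays silence
    gain = target_peak / peak
    return [s * gain for s in samples]
```

Real tools go further: loudness normalization (targeting a perceived level, e.g. LUFS) usually gives better results than raw peak scaling, but the idea of applying a single computed gain is the same.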

Are recordings more boring than live talks?

I think so, as one misses the interaction between the speaker(s) and audience.

As a result I recommend recordings to be shorter and more to the point than a live talk would be. I often get bored watching talk recordings from conferences, as without the interactions and liveliness of a room full of people they are often way less entertaining.

Lighting and filming

I know much less about filming than audio recording, so I’ll avoid dispensing cheap knowledge here.

One thing that I know however is how important lighting is to getting a good image, especially when using basic filming equipment. As mentioned, the lighting is poor in my example video. That leads to a somewhat “sad” picture, even after adding some corrections at the editing stage.

It still works, especially as a big part of the video is slides which don’t have this problem, but next time I’ll pay more attention and try to get a nicer image. That shouldn’t take a lot of extra work and I have enough lamps around the house to avoid buying additional equipment for now. Or I’ll wait for a sunny day and choose the right place to record!

Tools, and how I created the above video

The question of which tools to use often comes up, and my usual answer is that whatever does the job is fine.

However, describing how I created the above video might help you understand the process better, and how to best spend your time on the important parts.

For filming I used a basic compact camera, not even connected to my computer, to record the video. Getting a decent tripod helps, and don’t forget to disable the autofocus, which I forgot to do in this case.

I used my basic USB headset microphone which I know sounds decent – not beautiful, but ok. The positioning of the microphone is critical to getting a good sound, you might want to experiment with that. A microphone that’s close to your mouth will cancel most ambient noises and unwanted resonances from the room which is good. If like me you live close to a military airport you might need to choose your recording day according to their flying schedule to avoid loud background noises. I do have better microphones including a Madonna-style headset that I use when playing live, but it was broken on that day. The cheap USB thingy worked perfectly ok with some processing downstream.

I used QuickTime’s audio recording on my MacBook to record the audio separately from the video, and did a clap with my hands at the beginning to be able to easily sync audio and video afterwards. Digital audio and video should stay in sync for a few minutes without problems even if recorded on separate devices, and if not, it’s relatively easy to fix with modern software by shifting things, especially with a reference such as a film-style clap (that you can do with your hands).

For video editing, the simplest tool that works might be good enough, and maybe quicker to use than more elaborate tools. For this video, I used Adobe Premiere Pro (hey, you know who I work for!) which is extremely powerful but requires some learning. After importing the video from my camera, keeping its audio only to check the sync with the separately recorded audio track, I started by laying down the slides to the audio and then added my talking head, resizing and placing it in a way that keeps the focus on the slides while making the movie more lively.

The slides were added as individual images at the video editing stage, after exporting those from my slides document. I didn’t use any screen recording software for that.

After editing the video I processed the audio with Logic Pro, my usual audio production software that I still often use instead of Adobe Audition, which would be another great choice. I’ve been using Logic for years, virtually since the early 90s, when I was using Notator, created by the same team, on an Atari 1040 – so my motivation for switching is very low. Did I tell you about this time travel feeling?

Both tools can do much more than what we need here, any tool that can do audio normalization, EQ and compression should do. Actual audio editing is not needed at this stage as it happened along with the video editing, in Premiere Pro in my case.

For audio processing I extracted the audio track from the edited video, processed it and then re-imported it to replace the original track created during editing. I applied some EQ to make the sound clearer and remove unnecessary low-frequency energy, followed by an audio compressor to get a louder sound, and finally normalized the level to get a track that sounds loud, at a similar level to other online tracks. I might have added a touch of short reverb as well to beef up the sound, but if that’s the case it’s subtle; I cannot tell now by just listening to it.

The editing and audio processing took me about four hours, which is not much compared to having to travel to Berlin and back to deliver that talk.

Listening to the audio on different speakers, including crappy ones if possible, is required to verify that your audio will sound good for everybody, whatever equipment they use. Let your ears be your guide in getting a sound that emphasizes clarity and perceived loudness, without obvious processing artifacts. An audio track that sounds good at a low volume is usually a good indicator of sound quality.


I hope this is useful advice if you have to record a conference talk video, and I’m happy to answer any questions in the comments below!

To validate your recording and editing process and tools, you should start by recording a short segment to verify that everything works, to avoid wasting time on recordings that are not useful.

I’ll have to record more videos soon, for the upcoming ApacheCon @Home 2020 and adaptTo 2020 conferences, and the plan is to use a green screen to better integrate my talking head with the slides. I’ll add links to those videos here when they are ready!

Update: here’s the ApacheCon talk recording that I recorded later, using a green screen processed live using OBS, the Open Broadcaster Software. I had a bit of trouble lighting the green screen, which causes some visual artifacts but I think that still works well. Someone mentioned that in the first video posted here I was often looking away from the audience, this new video is much better from that point of view – thanks Joerg for your comment!

Rules For Revolutionaries (2000 edition)

April 7, 2020

The below message is from 2000 but I think it still applies to open source collaboration in the 21st century.

I wasn’t part of the Apache Jakarta story myself but have heard of that a few times ;-)

It was written by James Duncan Davidson, I don’t take any credit for what follows – I just copied it back here from a public archive, to make it easier to find it next time I need to quote it.

Date: Thu, 13 Jan 2000 15:46:41 -0800
Subject: RESET: Proposal for Revolutionaries and Evolutionaries
From: James Duncan Davidson

Ok, the logical place for this is general@jakarta, but I’m including tomcat-dev@jakarta so that the people who are there and not on general can see it. Please do not discuss on tomcat-dev, please only discuss on general.

In a closed source project where you’ve got a set team, you make decisions about where the entire team goes and somebody takes the lead of deciding what gets done when. In the discussions about Craig’s long term plan, this metric was applied by several of us in thoughts about where to go next.

After pondering this for a while, it’s (re)become obvious to me that there’s no way that anybody can expect an open source organization to work the same way that a team in a corporate setting can. Ok, so this is pretty freaking obvious, but I’ve been watching people that are not from Sun and who have been doing open source for a while talking and proposing things that come from this line of thought as well. Its not just people from Sun or people from any particular entity.

So — in any software development project there is a natural tension between revolution and evolution. In a closed source environment, you make the call at any particular time on whether you are in revolutionay mode or evolutionare mode. For example, JSDK was in evolutionary mode for years.
Then in Nov 98, We made a decision to go revolutionary. Of course, at the time the project team was composed of 1 person — me, so it was an easy decision. After that revolution was over in Jan 99, Tomcat was in evolutionary mode getting JSP bolted in and working with J2EE. We (Sun
folks) could do that because that was what suited the goals best at the time.

However, Open source is chaotic. With its special magic comes a different reality. This is:

1) People work on their own time (even people paid by a company can be considered to be working on their own time in this situtation as each company is going to have different cycles and things they want)

2) People work on what they want to. If you are working on your own time, you are going to do what you want or you do something else.

3) Some people are evolutionaries, other are revolutionaries, and some are both at different times.

4) Both approaches are important and need to be cultured.

5) You really can’t afford to alienate any part of your developer community. Innovation can come from anywhere.

To allow this to happen, to allow revolutionaries to co-exist with evolutionaries, I’m proposing the following as official Jakarta policy:

1) Any committer has the right to go start a revolution. They can establish a branch or seperate whiteboard directory in which to go experiment with new code seperate from the main trunk. The only responsibility a committer has when they do this is to inform the developer group what their intent is, to keep the group updated on their progress, and allowing others who want to help out to do so. The committer, and the group of people who he/she has a attracted are free to take any approaches they want too free of interference.

2) When a revolution is ready for prime time, the committer proposes a merge to the -dev list. At that time, the overall community evaluates whether or not the code is ready to become part of, or to potentially replace the, trunk. Suggestions may be made, changes may be required. Once all issues have been taken care of and the merge is approved, the new code becomes the trunk.

3) A revolution branch is unversioned. It doesn’t have any official version standing. This allows several parallel tracks of development to occur with the final authority of what eventually ends up on the trunk laying with the entire community of committers.

4) The trunk is the official versioned line of the project. All evolutionary minded people are welcome to work on it to improve it. Evolutionary work is important and should not stop as it is always unclear when any particular revolution will be ready for prime time or whether it will be officially accepted.

What does this mean?

In practice, this means that Craig and Hans and anybody else that wants to run with that revolution is welcome to do so. The only change is that it’s not called — it’s the RED branch or GOOGLE branch or whatever they want to call it.

Whenever Craig (or anybody else working on that codebase) wants to bring stuff into the trunk, they propose it here and we evaluate it on it’s merits.

If somebody disagrees with Craigs approach (for the sake of argument here), they are free to create a BLUE whiteboard and work out what they think is a good solution. At that point, the community will have to evaluate both approaches. But since this is a populist society, with such a structure it is hoped that it becomes clear which is the preferred approach by the community by their participation and voting. Or maybe the best solution is something in the middle and the two parties work together to merge.

Irregardless, the point is to allow solutions to happen without being stalled out in the formative stages.

An important point is that no one revolution is declared to be the official .next until it’s ready to be accepted for that.

There is the side effect that we could potentially end up with too many revolutions happening, but I’d rather rely upon the natural inclination of developers to gravitate towards one solution to control this than to try to control it through any policy statement.

When would this be official?

Well, if this is well recieved, we’d want to word it up and make it a bylaw (with approval by the PMC — this is one of the areas in which the PMC has authority). Hopefully soon.

Comments? Suggestions?

James Davidson duncan@
Java + XML / Portable Code + Portable Data !try; do()

It’s pretty quiet here these days…

January 7, 2020

I haven’t published here in a while, but I’ve been writing and presenting elsewhere:

My biography on this blog is mostly current, please have a look if you need more info!

A personal blog is less relevant for me today, I hope you enjoy those other channels.

Would you hire an open source developer?

January 3, 2018

This blog post of mine was initially published by Computerworld UK in 2010.

As open source comes of age and becomes mainstream, more and more job postings include “open source skills” in their requirements.

But do you really want to hire someone who spends their time exchanging flames with members of their own community in public forums? Someone who greets newcomers with “I have forwarded your question to /dev/null, thanks” and other RTFM answers?

Luckily, open source communities are not just about being rude and unwelcoming to strangers. Most of them are not like that at all, and the skills you learn in an open source community can make a big difference in a corporate environment as well.

One very important skill that you learn or improve in an open source community is to express yourself clearly in written form. The mailing lists or forums that we use are very limited compared to in-person communications, and extra care is required to get your message through. Being concise and complete, disagreeing respectfully, avoiding personal attacks and coping with what you perceive as personal attacks are all extremely useful skills on the job. Useful skills for your whole life actually.

Once you master asynchronous written discussions as a way to build group consensus, doing the same in a face to face meeting can be much easier. But the basic skills are the same, so what you learn in an open source community definitely helps.

Travel improves the mind, and although being active in open source can help one travel more, even without traveling you’ll be exposed to people from different cultures, different opinions, people who communicate in their second or third language, and that helps “improve your mind” by making you more tolerant and understanding of people who think differently.

Not to mention people who perceive what you say in a different way than you expected – this happens all the time in our communities, due in part to the weak communications channels that we have to use. So you learn to be extra careful with jokes and sneaky comments, which might work when combined with the right body language, but can cause big misunderstandings on our mailing lists. Like when you travel to places with a different culture.

Resilience to criticism and self-confidence is also something that you’ll often develop in an open source community. Even if not rude, criticism in public can hurt your ego at first. After a while you just get used to it, take care of fixing your actual mistakes if any, and start ignoring unwarranted negative comments. You learn to avoid feeding the troll, as we say. Once your work starts to produce useful results that are visible to the whole community, you don’t really care if someone thinks you’re not doing a good job.

The technical benefits of working in open source communities are also extremely valuable. Being exposed to the work and way of thinking of many extremely bright developers, and quite a few geniuses, definitely helps you raise the bar on what you consider good software. I remember how my listening skills improved when I attended a full-time music school for one year in my youth: just listening to great teachers and fellow students play made me unconsciously raise the bar on what I consider good music.

Open source communities, by exposing you to good and clever software, can have the same effect. And being exposed to people who are much better than you at certain things (which is bound to happen for anybody in an open source project) also helps make you more humble and realistic about your strengths and weaknesses. Like in soccer, the team is most efficient when all players are very clear about their own and other players’ strengths and weaknesses.

You’ll know to whom you should pass or not pass the ball in a given situation.

To summarise, actively participating in a balanced open source community will make you a better communicator, a more resilient and self-confident person, improve your technical skills and make you humbler and more realistic about your strengths and weaknesses.

Great software is like a great music teacher

January 3, 2018

This blog post of mine was initially published by Computerworld UK in 2010.

I’m amazed at how many so-called “enterprise software systems” do not embrace the Web model in 2010, making them way harder and much less fun to use than they should be.

I have recently started making parallels between this and music teachers, and the analogy seems to work. Don’t ask where the parallel comes from…weird connections in my brain I guess.

Say you want to learn to play the guitar. Someone recommended Joe, who’s teaching in his downtown studio.

You get there almost on time. Traffic. You find Joe’s studio and here he is, dressed in a neat and simple casual outfit. Smiling at you.

Joe: Hey welcome! So you wanna learn to play?

You: Yes. I brought my guitar, got it from my uncle. It’s a bit worn out as you can see.

Joe: I see…well, you might want to get a better one if you continue past the first few lessons, but for now that will do! Do you have something that you would like to play to get started?

You: “Smoke on the water”, of course. The opening line.

Joe: Let’s try that then, I’ll show you! Just plug your guitar in this amplifier, and let me setup some nice effects so you get a cool sound.

Joe plays the first few bars a few times, shows you how that works and you give it a try. Ten minutes later you start sounding half-decent and you’re having loads of fun playing together with Joe.

Joe: Okay, you’re doing good! I’ll show you my rough course plan so you know what’s up next. I’m quite flexible when it comes to the curriculum – as long as you’re having fun and progressing we’ll be fine.

It’s easy to imagine the bad teacher version of this story:

  • Unwelcoming
  • Complains because you’re three minutes late.
  • Wears a boring old-fashioned suit, and not willing to let you play that crappy old guitar.
  • Boring you with tons of scales before you can start playing a song.
  • Not giving you an overview of what comes next.
  • Not ready to compromise on His Mighty Standard Teaching Program.
  • Making you feel stupid about how bad a player you are.

Bad software is like that bad teacher:

  • Hard to get started with.
  • Requires tons of specific client software of just the right version.
  • Requires you to enter loads of useless information before doing anything useful or fun.
  • Not willing to let you explore and make your own mistakes, and making sure you feel stupid when mistakes occur.

The Web model is the way to go, of course.

  • Ubiquitous access.
  • Welcoming to various types of client software.
  • Easy to point to by way of permanent URLs.
  • Doing its best (fail whales anyone?) to keep you informed and avoid making you feel stupid when something goes wrong.
  • Letting you explore its universe with simple web-based navigation, and rewarding your efforts with new discoveries.

This is 2010, and this is the Web. Don’t let any useless software stand between you and the information and services that you need.

Open source is done. Welcome to Open Development!

December 14, 2017

I originally published this article on SD Times, republishing it to keep it around for posterity…

If you’re looking at embracing open source today, you might be a bit late to the game. Using open-source software is mainstream now, and being involved in open-source projects is nothing to write home about either. Everybody does it, we know how it works, its value is proven.

But what’s next? Sharing source code openly is a given in open-source projects, but in the end it’s only about sharing lines of text. The real long-term power of successful open-source projects lies in how their communities operate, and that’s where open development comes in.

Shared communications channels. Meritocracy. Commit early, commit often. Back your work by issues in a shared tracker. Archive all discussions, decisions and issues about your project, and make that searchable. All simple principles that, when combined, make a huge difference to the efficiency of our corporate projects.

But, of course, the chaotic meritocracies of open-source projects won’t work for corporate teams, right? Such teams require a chain of command with strictly defined roles. Corporate meritocracy? You must be kidding.

I’m not kidding, actually: Open development works very well in corporate settings, and from my experience in both very small and fairly large organizations, much better than old-fashioned top-to-bottom chains of command and information segregation principles. Empower your engineers, trust them, make everything transparent so that mistakes can be caught early, and make sure the project’s flow of information is continuous and archived. Big rewards are just around the corner—if you dive in, that is.

What’s open development?
Open development starts by using shared digital channels to communicate between project members, as opposed to one-to-one e-mails and meetings. If your team’s e-mail clients are their knowledge base, that will go away with them when they leave, and it’s impossible for new project members to acquire that knowledge easily.

A centralized channel, like a mailing list, allows team members to be aware of everything that’s going on. A busy mailing list requires discipline, but the rewards are huge in terms of spreading knowledge around, avoiding duplicate work and providing a way for newcomers to get a feel for the project by reviewing the discussion archives. At the Apache Software Foundation, we even declare that “If it didn’t happen on the dev list, it didn’t happen,” which is a way of saying that whatever is worth saying must be made available to all team members. No more discrepancies in what information team members get; it’s all in there.

The next step is sharing all your code openly, all the time, with all stakeholders. Not just in a static way, but as a continuous flow of commits that can tell you how fast your software is evolving and where it’s going, in real time.

Software developers will sometimes tell you that they cannot show their code because it’s not finished. But code is never finished, and it’s not always beautiful, so who cares? Sharing code early and continuously brings huge benefits in terms of peer reviews, learning from others, and creating a sense of belonging among team members. It’s not “my” code anymore, it’s “our” code. I’m happy when someone sees a way to improve it and just does it, sometimes without even asking for permission, because the fix is obvious. One less bug, quality goes up, and “shared neurons in the air” as we sometimes say: all big benefits to a team’s efficiency and cohesion.

Openly sharing the descriptions and resolutions of issues is equally important and helps optimize usage of a team’s skills, especially in a crisis. As in a well-managed open-source project, every code change is backed by an issue in the tracker, so you end up with one Web page per issue, which tells the full history of why the change was made, how, when, and by whom. Invaluable information when you need to revisit the issue later, maybe much later when whoever wrote that code is gone.

Corporate projects too often skip this step because their developers are co-located and can just ask their colleague next door directly. By doing that, they lose an easy opportunity to create a living knowledgebase of their projects, without much effort from the developers. It’s not much work to write a few lines of explanation in an issue tracker when an issue is resolved, and, with good integration, rich links will be created between the issue tracker and the corresponding source code, creating a web of valuable information.

The dreaded “When can we ship?” question is also much easier to answer based on a dynamic list of specific issues and corresponding metadata than by asking around the office, or worse, having boring status meetings.

The last critical tool in our open development setup is self-service archives of all that information. Archived mailing lists, resolved issues that stay around in the tracker, source-code control history, and log messages, once made searchable, make project knowledge available in self-service to all team members. Here as well, forget about access control and leave everything open. You want your engineers to be curious when they need to, and to find at least basic information about everything that’s going on by themselves, without having to bother their colleagues with meetings or tons of questions. Given sufficient self-service information, adding more competent people to a project does increase productivity, as people can largely get up to speed on their own.

While all this openness may seem chaotic and noisy to the corporate managers of yesterday, that’s how open-source projects work. The simple fact that loose groups of software developers with no common boss consistently produce some of the best software around should open your eyes. This works.

Status meetings are a waste of time and money

November 23, 2017

Last Monday I presented on Asynchronous Decision Making at the (excellent) FOSS Backstage Micro Summit in Berlin and there were some questions about me saying that status meetings are a waste of time.

My opinion on status meetings hasn’t changed in a long time and I’m very happy to see Jason Fried loudly confirm it in his “status meetings are the scourge” post.

Quoting him:

How would you feel if you had to regularly expense $1200 so you could “tell a few teammates something”. Think that would go over well?

If your team shape allows you to run status meetings, you should first reflect on their actual cost. And if you still want to run them after that I suggest:

a) Requiring people to briefly report their status in writing before the meeting, asynchronously

b) Requiring people to read other people’s status before the meeting, asynchronously

c) Choosing a maximum of 3 items to discuss in your meeting, based on those reports, and timeboxing those topics during the meeting

d) If you don’t get enough items to deserve interrupting your whole team right now: cancel that meeting! Or maybe limit it to managers to avoid interrupting the Makers.

That’s just the essentials, Jason Fried has more detailed suggestions in a similar spirit, make sure to read his post!

I like the 3P format for brief written status reports:

– Progress: what concrete, measurable progress has been made since the last report

– Problems: what’s blocking you from progressing

– Perspectives: what are your plans for the next period

If this post (and Jason’s) help you save tons of money by eliminating useless meetings, feel free to make a donation to a good cause ;-)

Large Mailing Lists Survival Guide

November 10, 2017

I initially published this blog post on the Adobe Open Development blog, but that’s been archived so I’m re-publishing it here for convenience. Thanks to Adobe for allowing me to spend a portion of my time reflecting on such topics!

Here’s a “survival guide” that we use at Adobe to help our colleagues make sense of our busy Open Development mailing lists.

To send or not to send

TS1. Before asking a question, search the list archives (and issue trackers, etc.) as the answer might already be in there.

TS2. Keep noise low – consider how many people will receive your message. If you’re just saying “thank you” to one or two people, do it off-list.

TS3. Does your message really belong in an email? If it’s information that’s supposed to last and that you expect future coworkers to know, a wiki, website or blog post is a much better place. Just send the URL of that wiki/website page in email then. Writing more than 3-4 paragraphs is often a sign of something that does not belong in email. And email is where information goes to die!

TS4. Do not cross-post. If you really think other lists might be interested, let them know about your message by sending them its URL, but do not copy the message itself. Cross-posting tends to create split discussions that are impossible to follow.

TS5. Don’t be shy – if a mailing list exists it is meant to be used, so any on-topic question with a sensible subject line is welcome, assuming you do your homework as explained in this guide.

Starting new discussion threads

ST0. To start a new topic, do not reply to an existing message – use the “new message” function of your email client to create a new discussion thread.

ST1. When starting a new discussion thread, include at least one [TAG] in the subject line that points to the product, technology or topic that your message is about.

ST2. The goal of the subject line is to help people decide whether they want or need to read your message. If that’s not sufficient, use your first sentence as a summary to clarify. On busy lists, spending time on crafting a good subject line avoids wasting other people’s time, and gives you a much better chance of getting an answer.

ST3. One question/topic per thread please, it’s much easier to follow, helps people notice your question and makes much more sense in the archives later.

Writing your message

WR1. The shorter your message or reply, the more likely people are to read it.

WR2. Speak in URLs – if something has a URL or unique identifier, point to it. Don’t say “the about page on the new website” or “the memory problem”; point to the actual URL instead, or to SLING-9362 if that’s a well-known shortcut to the URL of the actual issue in your tracker. This helps avoid misunderstandings, and creates valuable archives with a rich web of links.

WR3. If you’re describing a bug, include all the necessary information to reproduce the problem, while being as concise as you can: what did you do, what did you expect, what did you get, what was your environment.

WR4. No large attachments or stack traces: open a bug with that info or put it somewhere where people can download it and include only the URL in your message.

WR5. Be direct – ask the root question instead of a derived one. If you need to buy food at 4AM, for example, and you think Burrito Unlimited might be open, asking “where can I buy food now” is more likely to get a helpful response than asking where the nearest Burrito Unlimited is. This is also known as an “XY problem”: you ask about Y when what you really need is X.

Replying and quoting

RQ1. Quote sparingly and precisely. Opinions differ on this, but the lazy top-quoting that’s unfortunately the default in many email clients tends to make discussions shallow, as it’s impossible to know precisely what people are replying to. So many years later, Usenet-style quoting still rules on busy lists and for complex discussions.

RQ2. Give sufficient context for people to follow your thoughts. Closely related to precise quoting.

RQ3. Remember WR1 – the more concise message usually wins.

Filtering tips

Here are some tips for coping with many emails from multiple high-traffic lists. The goal is to let you quickly skim the lists’ traffic and find the few threads that are actually relevant to you.

F1. Have a folder for each list, and a filtering rule that moves the relevant messages there.

F2. To make sure you quickly see responses to threads you take part in, have a special rule for emails/threads that include you. This is a rule that you can have across all your emails: it could be a special “to me” folder, or a different inbox.

F3. To surface the topics that interest you, set up a rule based on keywords in the subject line; the same works for people. This is especially effective if people consistently use the ST1 subject line tags mentioned above.
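The ST1 tagging convention is what makes rules like F1 and F3 easy to automate. Here’s a minimal Python sketch of the classification step such a filter would perform; the tag names and folder paths are made up for illustration, and a real setup would wire this into your mail client’s or server’s filtering rules:

```python
import re

# Hypothetical mapping of ST1-style subject tags to mail folders.
TAG_FOLDERS = {
    "SLING": "lists/sling",
    "OAK": "lists/oak",
}

# Matches the first [TAG] token in a subject line.
TAG_RE = re.compile(r"\[([A-Z0-9-]+)\]")

def target_folder(subject, default="lists/misc"):
    """Return the folder for a message, based on its first [TAG]."""
    match = TAG_RE.search(subject)
    if match and match.group(1) in TAG_FOLDERS:
        return TAG_FOLDERS[match.group(1)]
    return default

print(target_folder("[SLING] Memory problem in servlet resolution"))  # lists/sling
print(target_folder("Hello world"))                                   # lists/misc
```

Untagged or unknown-tag messages fall through to a catch-all folder, which keeps your per-project folders skimmable.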

Apache: lean and mean, durable, fun!

May 19, 2017

Here’s another blog post of mine that was initially published by Computerworld UK.

My current Fiat Punto Sport is the second Diesel car that I own, and I love those engines. Very smooth yet quite powerful acceleration, good fuel savings, a discount on state taxes thanks to low pollution, and it’s very reliable and durable. And fun to drive. How often does Grandma go “wow” when you put the throttle down in your car? That happens here, and that Grandma is not usually a car freak.

Diesel engines used to be boring, but they have made incredible progress in the last few years – while staying true to their basic principles of simplicity, robustness and reliability.

The recent noise about the Apache Software Foundation (ASF) moving to Git, or not, made me think that the ASF might well be the (turbocharged, like my car) Diesel engine of open source. And that might be a good thing.

The ASF’s best practices are geared towards project sustainability, and building communities around our projects. That might not be as flashy as creating a cool new project in three days, but sometimes you need to build something durable, and you need to be able to provide your users with some reassurances that that will be the case – or that they can take over cleanly if not.

In a similar way to a high tech Diesel engine that’s built to last and operate smoothly, I think the ASF is well suited for projects that have a long term vision. We often encourage projects that want to join the ASF via its Incubator to first create a small community and release some initial code, at some other place, before joining the Foundation. That’s one way to help those projects prove that they are doing something viable, and it’s also clearly faster to get some people together and just commit some code to one of the many available code sharing services, than following the ASF’s rules for releases, voting etc.

In a similar way to a high tech Diesel engine that’s built to last and operate smoothly, I think the ASF is well suited for projects that have a long term vision. We often encourage projects that want to join the ASF via its Incubator to first create a small community and release some initial code somewhere else before joining the Foundation. That’s one way for those projects to prove that they are doing something viable, and it’s also clearly faster to get some people together and just commit some code to one of the many available code-sharing services than to follow the ASF’s rules for releases, voting and so on.

A Japanese 4-cylinder 600cc gasoline-powered sports bike might be more exciting than my Punto on a closed track, but I don’t like riding those in day-to-day traffic or on long trips. Too brutal, requires way too much attention. There’s space for both that and my car’s high tech Diesel engine, and I actually like both styles, depending on the context.

Open Source communities are not one-size-fits-all: there’s space for different types of communities, and by exposing each community’s positive aspects, instead of trying to get them to fight each other, we might just grow the collective pie and live happily ever after (there’s a not-so-hidden message to sensationalistic bloggers in that last paragraph).

I’m very happy with the ASF being the turbocharged Diesel engine of Open Source – it does have to stay on its toes to make sure it doesn’t turn into a boring old-style Diesel, but there’s no need to rush evolution. There’s space for different styles.