Open Source Communities are like gardens, says Leslie Hawthorn

June 4, 2012

Great keynote by Leslie Hawthorn at Berlin Buzzwords today – here are my quick notes.

As with a garden, you need to cultivate your community, it doesn’t just happen.

You need to define the landscape – where you want the garden to grow. For a community this means clearly defining goals and non-goals.

The landscape needs to be regularly assessed – are we welcoming to newbies? Can new plants grow here?

Clear the paths. Leslie mentions the example of rejecting code patches based on the coding style – if you haven’t published a style guide, that’s not the best way to welcome new contributors. Make sure all paths to success are clear for your community members.

A garden needs various types of plants – invite various types of people. Not just vague calls for help – specific, individual calls based on their skills usually work much better.

Pluck out the weeds, early. Do not let poisonous people take over.

Nurture your seedlings. Take care of newbies, help them grow.

Like companion planting, pairing contributors with the right mentors can make a big difference.

Know when to prune. Old-timers leaving a project is not necessarily a problem, things change.

Leslie cites OpenMRS as an example of successful community management – I’ll have to have a look but I already like the “how to be a casual committer” paragraph in their different types of OpenMRS developers page.

All in all, very interesting parallels. Being quite clueless about gardening, I never thought of looking at our communities from this angle, but it makes perfect sense. Broadening one’s view – that’s what a keynote should be about!


RESTful services as intelligent websites

March 27, 2012

Thinking of RESTful services as intelligent websites helps drive our architecture decisions towards simplicity, discoverability and maximum use of the HTTP protocol. That should help us design services that are easier to use, debug, evolve and maintain.

In this post I’ll try to address a few important aspects of RESTful services, considering them as intelligent websites. We might need more formal specifications in some cases, but this is hopefully a good start.

Discoverability and service homepage

Having a descriptive homepage for your service helps search engines and people discover it, and the base service URL should “never” change. Service-specific subdomains come to mind.

The service homepage includes all the necessary information, such as the service owner’s contact information, links to source code, etc.

News about service updates lives right on the service homepage, ideally as full content for the most recent news, but at least as links.

The key idea is that I shouldn’t have to tell you more than the service homepage’s URL for you to be able to use the service.

Even if your service is a company-internal one that’s not meant to become public, having a decent-looking homepage, or at least one that’s well organized and easy to read, won’t hurt.

HATEOAS

In my pragmatic view Hypermedia as the Engine of Application State basically means links tell you where to go next.

In a website meant for humans, the meaning of a link is often expressed by logical groups: navigation links at the top left, “more info” links in a smaller font at the bottom of a page, etc.

For machines, adding rel attributes to <link> elements (in HTML, or the equivalents in other formats) tells us what we can do next. A client should be able to first try a link with non-destructive results, and get a response that supplies details about how the specified interaction is supposed to work. If those details are too involved to be expressed in a simple HTTP request/response (which should be a “maybe this is too complex” warning), links to explanations can be provided in the HTML content of our intelligent website.
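
As a sketch of what this looks like from the client side, here’s a minimal Python example that discovers where to go next from the rel attributes of a service homepage. The service URL and the “search” relation are hypothetical:

from html.parser import HTMLParser
from urllib.request import urlopen

class LinkRelParser(HTMLParser):
    # collects rel -> href mappings from <link> elements
    def __init__(self):
        super().__init__()
        self.links = {}
    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if "rel" in a and "href" in a:
                self.links[a["rel"]] = a["href"]

parser = LinkRelParser()
with urlopen("https://service.example.com/") as response:
    parser.feed(response.read().decode("utf-8"))

# follow the advertised relation instead of a hardcoded URL
print(parser.links.get("search"))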

Self-explaining services

The service documentation, if any is needed, is also provided on the service homepage, or linked from it.

Service interactions are designed to be as obvious as possible to minimize the need for documentation, and readable automated tests (hosted on the service website of course, or linked from it) can help document the details.

HTML forms describe service parameters

HTML forms are the best way to document the service parameters: provide a form that humans can use to play with the service, with enough information, such as lists and ranges of values, so that users can figure out by themselves what the service expects.

The idea is that a human user will play with your service from the HTML form, then move on to implementing a client in the language of their choice.
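
To sketch that move from form to code, here’s what the resulting Python client might look like – the /convert endpoint and its parameters are hypothetical:

from urllib.parse import urlencode
from urllib.request import urlopen

# the same parameters that the HTML form exposes
params = urlencode({"value": "42", "unit": "celsius"}).encode("utf-8")

# POST them exactly as the form would
with urlopen("https://service.example.com/convert", data=params) as response:
    print(response.status, response.read().decode("utf-8"))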

The action attribute of <form> elements also contributes to HATEOAS – intelligent clients can discover that if needed, and it’s obvious for human users.

And of course, speaking of parameters, be liberal in what you accept, and conservative in what you send.

Speaking in URLs

Like humans, RESTful services need to be precise when they speak about something important.

If your service response says invalid input format, for example, it’s not hard to include in that response a URL that points to a description of the correct input format. That makes all the difference between a frustrating and a useful error message, and is part of HATEOAS as well.
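
Here’s a minimal sketch of that idea as a Flask handler – the /convert endpoint and the documentation URL are assumptions:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/convert", methods=["POST"])
def convert():
    if "value" not in request.form:
        # the documentation URL turns a frustrating error into a useful one
        return jsonify(
            error="invalid input format",
            documentation="https://service.example.com/docs/input-format",
        ), 400
    return jsonify(result=request.form["value"])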

Web-friendly responses

Readable response formats will help people make sense of your service. The HTTP protocol provides standard ways of compressing responses to avoid using more bandwidth than needed, so optimizing the response format for wire efficiency does not make much sense unless you’re really expecting huge traffic. And even if you need an optimized binary response format, there’s probably a way to make that optional.

HTTP status codes

Thou shalt not abuse HTTP status codes – or you might be suddenly transformed into a teapot.

Returning a 200 OK status code with content that describes an error is a no-no: if something went wrong, the HTTP status code needs to express that.
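
That discipline is what lets clients stay simple, as in this minimal Python sketch (hypothetical URL) which trusts the status code instead of scanning the body for error markers:

import urllib.error
from urllib.request import urlopen

try:
    with urlopen("https://service.example.com/convert") as response:
        print("success:", response.read().decode("utf-8"))
except urllib.error.HTTPError as e:
    # the status code alone says that something went wrong – the body
    # (ideally with a URL in it, see above) explains the details
    print("failed with HTTP", e.code, e.reason)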

Security

Website authentication and authorization mechanisms and secure connections work for machine clients as well, no need to reinvent that wheel.

HTTP sessions are a bad idea in a RESTful context of course, state is driven by hypertext as discussed above.

Character encodings

The issues are the same for human users as for machine clients: you need to play by the HTTP protocol rules when it comes to character encodings, and using UTF-8 as the default is usually the best option.
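
As a minimal Flask sketch (the /greeting resource is hypothetical), declaring the charset explicitly instead of relying on defaults:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/greeting")
def greeting():
    body = "Grüezi, 世界!"
    # explicit charset in the Content-Type header, no client-side guessing
    return Response(body.encode("utf-8"),
                    content_type="text/plain; charset=UTF-8")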

Stable service URLs

As with good websites, once a service has been published at a given URL, it should continue working in the same way “forever”.

A substantially different new version of the service should get its own different URL – at least a different path containing a version number, or maybe a new subdomain name.

Long-running jobs

Regardless of human or machine clients, you usually don’t want HTTP requests to last more than a few seconds. Long-running jobs should initially create a new resource that describes the job’s progress and lets the client know when the output is available. We recently had an interesting discussion about this in Apache Stanbol, regarding long-running jobs for semantic reasoners.
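
Here’s a rough sketch of that pattern from the client side, assuming a hypothetical service where creating a job returns a Location header pointing at a JSON job resource with state and output fields:

import json
import time
from urllib.request import Request, urlopen

# create the job – the service answers with a Location header
# pointing at the new job resource
create = Request("https://service.example.com/jobs",
                 data=b"input=something", method="POST")
with urlopen(create) as response:
    job_url = response.headers["Location"]

# poll the job resource until the output is available
while True:
    with urlopen(job_url) as response:
        job = json.loads(response.read().decode("utf-8"))
    if job["state"] == "done":
        print("output available at", job["output"])
        break
    time.sleep(2)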

Service metadata

Non-trivial services often have interesting metadata to provide, which can have both static and dynamic parts, like configuration values and usage statistics for example.

Here again, the website view helps: that metadata is just made accessible from the service homepage via links, ideally with both human (a href) and machine (link rel) variants.
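
As a sketch, the homepage could advertise that metadata as follows – shown here as a Flask route, with hypothetical rel values and paths:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def homepage():
    # machine-readable (link rel) and human-readable (a href) variants
    return """<html><head>
<link rel="service-config" href="/metadata/config">
<link rel="service-stats" href="/metadata/stats">
</head><body>
<a href="/metadata/config">Configuration</a>
<a href="/metadata/stats">Usage statistics</a>
</body></html>"""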

Coda

Designing your RESTful service as an intelligent website should help people make good use of it, and will probably also help you make its interactions simpler and clearer.

If your service can’t be presented as a website, it might mean that your interaction design is not optimal, or not really RESTful.

I’m happy to see these opinions challenged, of course, if someone has any counter-examples.

Update: I should have mentioned that this post, and especially the “intelligent websites” concept, was inspired by conversations with Roy Fielding, in the early days of Apache Sling. I haven’t asked him to review it, so this doesn’t mean he would endorse the whole thing…I’m giving credit for inspiration but still liable for any mistakes ;-)

Update #2: See also my REST reading list, a regularly updated collection of links to articles that I’ve found useful in understanding and explaining REST.


Stefano Mazzocchi’s Busy List Pattern

December 6, 2011

Since becoming part of a larger company, I’ve been hearing people say “we need a new mailing list for that” way too often.

People are sometimes afraid of busy mailing lists, but in terms of fostering open collaboration and communities, a busy list is excellent.

Another problem is people writing 1-to-1 emails instead of writing to the list, in an attempt to avoid making the list even noisier. If your message is on-topic, with a well-written subject line, and as concise as possible, it’s not noise and definitely belongs on the list.

Stefano Mazzocchi briefly explains this in his ApacheCon 2006 slides titled all you wanted to know about Open Development community building but didn’t know who to ask. I haven’t found that pattern described in a more accessible way than among Stefano’s 303 PDF slides so far, so here’s a summary that I hope is faithful to the original idea.

The Busy List Pattern

Here’s my summary of that part of Stefano’s slides (starting at page 200 in his PDF file), with some additional comments of mine.

The pattern starts with somebody suggesting that the mailing list is too noisy and should be split in multiple ones.

Restaurants and night clubs, however, know that packed rooms help marketing…what’s more boring than being alone in a restaurant?

But we’re not a bar…we can’t go on getting 200 messages on that list every day, can we?

Actually we can…if everybody who posts to the list is extra careful about how they post, a list with 200 or more messages per day is perfectly manageable – but only if the subject lines are very carefully chosen and evolved (see below), and only if people take care to maximize clarity and avoid wasting other people’s time.

We should strive to keep our lists as packed as possible. It’s hard to set a limit, but before splitting a list you might ask if people are using it efficiently. First try to improve the signal to noise ratio, and if that really fails you might consider splitting the list.

If you really need to split the list, do it by audience and not by topic – a consistent audience will lead to an interesting list, whereas scattering topics all around makes cross-topic discussions painful.

Careful with those subject lines

The subject line of messages makes all the difference between a noisy list and a useful one.

Choose sensible subject lines that allow people to decide if they want to read your message or not.

Reread your subject line before sending – does it really express what your message is about, and does it contain any relevant call to action?

Change those subject lines when the thread changes topics.

Address one topic only per thread.

Use [MARKERS] in subject lines to tag messages.

Simple rules like this will boost your list’s efficiency tremendously – there’s more good stuff like this in Stefano’s slides, make sure to have a look!


MANIFEST.MF must be the first resource in a jar file – here’s how to fix broken jars

November 15, 2011

Some tools, like the Sling OSGi Installer, require the MANIFEST.MF to be the first file in a jar file, or they won’t find it.

That’s the case when using the java.util.jar.JarInputStream class to read a jar’s manifest, for example.

The manifest is where OSGi bundle headers are found, for example, so not having it in the right place makes the jar unusable as a bundle.

I won’t discuss here whether this requirement is part of the jar file spec (it would make sense, as this makes sure you can read it quickly even from a huge jar file), but anyway there are many cases where this is required.
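
Before fixing anything, here’s a quick way to check whether a given jar is affected, sketched in Python – the jar tool normally writes the META-INF/ directory entry first, immediately followed by MANIFEST.MF, hence the check on the first two entries:

import sys
import zipfile

# entry order is preserved when reading the archive
entries = [e.filename for e in zipfile.ZipFile(sys.argv[1]).infolist()]
if "META-INF/MANIFEST.MF" in entries[:2]:
    print("manifest is at the start of the jar, looks fine")
else:
    print("manifest is not at the start of the jar, fix needed")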

To fix a jar where this is not the case, you need to unpack the jar and recreate it, as in this example, starting from a broken.jar in the current directory:

$ mkdir foo
$ cd foo
$ jar xvf ../broken.jar
$ mv META-INF/MANIFEST.MF /tmp/mymanifest
$ jar cvfm ../fixed.jar /tmp/mymanifest .

That’s it – creating the new jar with the jar utility puts the MANIFEST.MF in the right place.


Quick notes from Mark O’Neill’s Transfer Summit 2011 talk

September 8, 2011

Here are my quick on-the-spot notes on Mark O’Neill’s (@marxculture) excellent presentation about how a government’s IT can innovate.

Citing Andrew Savory: “never thought I’d see a govt. spokesperson be entertaining and informative”. Reminds me of my excellent experiences working for the Swiss Parliament services in the late nineties.

Here are my unedited notes – slides should be available soon. URLs and emphasis added by myself.

UK gov spends 20 billion pounds a year on IT, with 20 top suppliers.

Mark’s got a 20MB mailbox to manage this.

Different velocities: technology, business, society.

His role: track and align to those velocities.

Velocity of change in his government IT world is “very different” from what it is on the outside. The gap is increasing.

The more you diverge from the velocity of the market, the higher your costs are.

The options are: do the same, do nothing or ask a different question.

The challenge: no money – although they’re spending 20 billion pounds a year…

The government is currently driven by the policy cycle: problem -> draw up a policy -> interpret and implement policy -> monitor and evaluate. The customer is not part of this picture. Need an innovation model.

He shows a picture of a washing machine for dogs: there is an e-petition about this, the e-petition site (http://epetitions.direct.gov.uk/) got 16k such petitions in 4 weeks, 3.8 million visitors, 20 million page views, 1.5 million signatures. The e-petition site was built in 6 weeks with 6 people, cost 80 thousand pounds including security costs. The system will be released as open source shortly.

The future: small projects are developed by his office, up to 3 months with contracted companies; anything longer than 3 months needs to ask different questions.

20 billion pounds a year should make for one of the most dynamic, diverse and entrepreneurial markets in the world. What needs to be done to achieve that?

Leaner procurement mechanisms. Use open standards and open source. Look at the processes, check how much paperwork and overhead is actually necessary.

The challenge to new projects includes two questions: what is it that you want to do, and why do you want to do it that way?

Decompose the project in smaller units of work – also gives more chance for SMEs to jump in.

Three layers: infrastructure, app, support. Procurement is currently often based on a complete silo – moving to smaller units can help reuse and sharing.

Rethink the approach to make it possible to buy an excellent application from an SME that does not provide infrastructure or services.

Working on G-Cloud, private onshore cloud for UK government services. (http://www.cloudbook.net/directories/gov-clouds/uk-government-cio-council).

Talking of “agile” and “cloud” is a bit like talking about “magic” these days ;-)

Conversation always trumps process.

Two key differences between agile and waterfall.

First difference: with waterfall you talk to developers, they go away for six months, come back with something that you don’t want. Agile: you talk all the time.

Second difference: with waterfall, you might not be around anymore when problems arise. With agile, you could be fired after three months…

Tools are hard. Discuss, share, build, learn. You cannot deliver success without using efficient tools – take your cue from open development teams in the outside world.

Need to build mechanisms to learn, share and discuss what the team is doing.

Success factors:

1) Be the most dynamic, diverse and entrepreneurial market in the world.

2) IT should just work.

3) Reuse, dialogue, agility, ownership should be part of the day-to-day business.


Turning 42, and why I love my job

August 24, 2011

I’ll be turning 42 in a few months (counting in base 12 of course) and it feels like a good time to reflect on what it is that makes my job enjoyable. My father was a carpenter, and both my brothers started with that as their first job, so I’m kind of the disruptive element of the family (I didn’t say black sheep, ok?).

So, why did I choose to work with cold electronics (my first degree) and computers instead of working with a noble and beautiful living thing like wood?

After some thinking I came up with four key elements.

The first key element is creating cool things. Note that I don’t say creating cool software: I realized that for me the creative process is more important than what exactly is being created. Coolness is obviously a subjective measurement, so it’s hard to define precisely. Lean and maintainable software that people find useful definitely falls into that category for me.

Next is working with bright and fun people. Being active in the Apache Software Foundation, and joining Day in 2007, made me realize how stimulating it is to work with people who impress you every day with their technical and other skills. People who are fun to work with help keep some distance from the Big Problems At Work. Technical and other problems are bound to happen in any job, and that’s when your colleagues’ attitudes make all the difference. Software and work are not always the most important things in life.

Using efficient and fun tools comes next – in my previous life as an independent software developer and architect I sometimes had to put up with lame environments and tools at customer sites, and that can be depressing when you’re aiming for quality and efficiency. My first grade math teacher kept saying that good craftsmen use good tools, and she was right!

The fourth element is keeping a good work-life balance. I tend to engage myself 100% in my work, but for that to happen I need to be able to engage myself 100% in other things at regular intervals. This often means disconnected holidays “away from the grid”. I also decided long ago to never work on Sundays, unless there’s really no other way, which is rare. This has helped me keep my sanity during those phases when the rest of the week is totally crazy.

The fun thing is that those four elements would totally apply to being a carpenter…and I actually did enjoy helping at my father’s shop during school holidays when I was a kid. I’m not planning on going back though – now that my son has learned carpentry as well, he makes fun of me every time I try!


How to fix your project collaboration model?

August 5, 2011

I’ve been studying the collaboration processes of the Apache Software Foundation (ASF) for a while [1], by observing and practicing the successful collaboration model that we use in ASF projects.

Taking the opposite angle, I’ve been reflecting lately on how to fix a broken collaboration model. How do you help teams move from an “I have no idea what my colleagues are doing, and I get way too much email” model to the efficient self-service information flow that we see in ASF and other open source projects?

As I see it, the success of the Apache collaboration model is based on six guiding principles:

  • If it didn’t happen on the dev list, it didn’t happen.
  • Code speaks louder than words.
  • Whatever you’re working on, it must be backed by an issue in the tracker.
  • If your file is not in subversion it doesn’t exist.
  • Every important piece of information has a permanent URL.
  • Email is where information goes to die.

Some of those might need to be slightly tweaked for your particular context, but I believe you can apply most of them to all collaborative projects, as opposed to just software development.

The context of an ASF project is a loose group of software developers, architects, testers, release managers and users (with many people playing several of these roles) working remotely, with no central decision authority. There’s no manager in an ASF project, no money exchanged and no common corporate culture. Decisions are made by consensus, by a group that grows organically as new members are elected based on their merit with respect to that particular project.

This may sound like a recipe for failure, yet many of our projects are very successful in delivering great software consistently. How does that work?

Let’s describe our six principles in more detail, so that you can see if they apply to your own project.

If it didn’t happen on the dev list, it didn’t happen.

In an ASF project, all decisions are made on a single developers mailing list.

Backchannel discussions are inevitable: someone will meet a coworker at the coffee machine, people attend conferences, talk on IRC, etc. There’s nothing wrong with that, but the rule is that as soon as a discussion turns into something that has impact on the project, it must come back to the dev list. The goal is for all project participants to have the same information from a single source.

As those lists are archived, you can then go back to check why and how a particular decision was made.

Creating consensus using email only can be hard; on the other hand, it forces you to clarify your ideas and express them in an understandable way, which in the case of software often promotes cleaner architectures and designs (but see also code speaks louder than words below).

Email etiquette (solid and evolving subject lines, concise writing, precise quoting etc.) plays a big role in making this work, and that’s something people have to learn.

In an ASF project, experienced members will often help beginners improve their list communications skills, by gently steering them towards more efficient writing and messaging styles. And your message might just be ignored if the subject line says “URGENT: need help!” or “Question”, which can be a good beginner’s lesson.

Top-posting is usually frowned upon on ASF lists, as that often leads to superficial discussions compared to the much more precise inline quoting.

Email is where information goes to die.

Aren’t we contradicting the previous principle here?

Not really – the dev list is for the flow of discussions and decisions, not for important information that people need to refer to in their work.

Even with good and multiple email archives like the ones we have for ASF projects, finding a specific piece of information in email is painful. That might be good enough for going back to a discussion to get more context about what happened, and to go back to decisions (marked by [VOTE] in their subject lines at the ASF) from time to time, but definitely not for the day-to-day information that you need in such a project. That’s where the issue tracker, discussed below, comes into play.

Code speaks louder than words.

No one told me that when I started working in ASF projects, but after some time trying to argue about software architecture and how my ideas would improve our project’s software, I realized that I was just wasting my time.

The best way to express a software architecture or an idea that will improve a software component is to implement it and show the code to your fellow project members.

That code might be just a rough and ugly prototype in any suitable programming language, that’s not a problem – as long as it expresses your ideas, you will often spend much less time getting that idea across when it’s backed by a concrete piece of code.

Whatever you’re working on, it must be backed by an issue in the tracker.

This might be the most important of our guiding principles – on a desert island, I think I’d be ready to work with just an issue tracker as my collaboration channel.

Many people think that issue trackers are only for bugs: you create an issue when you have run your software and found something that doesn’t work.

Although that was the original intention, I believe (and I’ve seen this fly in many different contexts) that an issue tracker can be used to back all the work that’s being done in a software or other project. Software design, implementation, test servers and continuous integration setups and maintenance, hardware upgrades, password reset requests, milestones like demos and sprints (as issues, not just dates)…and bugs of course: managing all project tasks in a single tracker makes a huge difference.

When used as your coordination backbone, a good issue tracker (we currently use JIRA and Bugzilla at the ASF) will help answer a lot of questions about your project’s status, such as

  • Who’s working on what?
  • What did Joe Developer do last month?
  • Where do we stand?
  • What prevents X from being implemented?
  • Why did we implement X in this way back in 2001?

For this to work, project members need to update their issues often, and break them down into smaller issues as needed, so that the tracker later tells the story of how the task went from A to B.

No need for big literary statements, but instead of sending an email to the dev list saying that X is ready to be used, just update the corresponding issue in the tracker. A good tracker will send events when that happens, to which people can subscribe. Together with the source code control system events (commits etc.) this creates a live activity stream for your project. Making extensive use of the tracker will help provide a complete picture of what’s happening, in that stream.

I’m also a big fan of using issue dependencies to organize issues in trees that can be used to keep track of what needs to be done to reach the project’s goals (aka “planning”, but in a more organic form).

As an example, here’s the dependency tree for bug 3383 of the cygwin project. That’s not an ASF project, which shows that we’re not the only ones to use this technique.

That tree starts with “the FRYSK project” as an umbrella issue, which is broken down into issues that represent the different project areas, and so on until it reaches the actual things that need to be implemented or fixed. Combined with a tracker’s reporting functions, such a tree helps tremendously in answering the “where do we stand?” question, and in reshuffling priorities quickly in a crisis. You can also create umbrella issues for sprints or releases, to express what needs to be done to get there.

If your file is not in subversion it doesn’t exist.

The goal here is to avoid relevant files floating around – put everything in subversion, or in whatever source code control system you’re using.

If some files are really too large for that, make sure they have permanent URLs, and document those in the issue tracker or in your source code.

Every important piece of information has a permanent URL.

If you cannot point to a piece of information with a URL, it’s probably not worth much. Or you’re using the wrong system to store that information.

I also like to speak in URLs as much as possible. Saying SLING-505 (which is an agreed upon shortcut for https://issues.apache.org/jira/browse/SLING-505 in an ASF project) is so much more precise than referring to the background servlets feature.

Conclusion

Reviewing your collaboration model against the above principles should help find out where it can be improved. The most common problems that I’ve seen in the various organizations that I worked for in my career are the “split brain” problem, where testers, customers or managers use different tracking systems, and the “way too much email” problem where people send important information (or code…aaaaarghhhh) around in email as opposed to taking advantage of a tracker.

Does that sound like you? If yes you might want to have a closer look at how successful open projects work, and take inspiration from them.

[1] See previous posts: What makes Apache tick? and Open Source collaboration tools are good for you!.

