It’s just a Web server – a plea for simplicity

June 16, 2014

I’m currently working on my keynote for next week’s Connect – Web Experience 2014 conference in Basel and very much looking forward to it! Last year’s conference was excellent, and this year’s schedule looks very exciting.

My keynote is about the value of simplicity in software – including a few tales from the trenches.

We like to think of what we build with AEM as large enterprise systems, with complex requirements. Intricate workflows. Rocket science.

However, when you think about it, our systems are “just” HTTP request processors that manipulate atomic pieces of content in a content repository.

What if you wanted to manage the Whole World Wide Web with a single system? The architecture of that 4WCMS (a Whole World Wide Web CMS) might be quite similar to what Apache Sling provides for AEM: mostly independent dynamic HTTP request processors, selected by path and resource type, that render and/or process resources from a huge tree of content.
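To make that concrete, here is roughly what such a request processor looks like in Sling: a servlet selected by resource type rather than by a hardcoded path. This is a minimal sketch – the demo/page resource type is made up for the example, and a real rendering script would of course do much more:

    import java.io.IOException;
    import javax.servlet.Servlet;
    import org.apache.sling.api.SlingHttpServletRequest;
    import org.apache.sling.api.SlingHttpServletResponse;
    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
    import org.osgi.service.component.annotations.Component;

    // Sling resolves the incoming URL to a Resource in the content tree,
    // then selects this servlet because the resource's type matches.
    @Component(service = Servlet.class, property = {
        "sling.servlet.resourceTypes=demo/page",  // hypothetical resource type
        "sling.servlet.extensions=html",
        "sling.servlet.methods=GET"
    })
    public class DemoPageServlet extends SlingSafeMethodsServlet {
        @Override
        protected void doGet(SlingHttpServletRequest request,
                             SlingHttpServletResponse response) throws IOException {
            Resource resource = request.getResource();
            response.setContentType("text/html;charset=UTF-8");
            response.getWriter().printf("<h1>%s</h1>%n", resource.getPath());
        }
    }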

If our architecture works for that 4WCMS, the systems we are actually working on are just peanuts in comparison. Managing a single site, or just a small federation of a hundred thousand sites? Easy. Yes, I’m being provocative – it’s a keynote!

The inherently scalable architecture of the Web, combined with the natural decoupling that HTTP and REST (and OSGi) provide, should allow us to keep our systems simple, transparent, robust and adaptable. Yet, much too often, we fall into the “enterprisey” trap and start designing complex machinery where something simple would do – if only someone had the guts to challenge the status quo.

I have a few examples in mind, from past projects, where simplicity provided huge wins, even though it required convincing customers who had different, usually more complex ideas. My goal is to demonstrate how valuable simplicity is, and how expensive it can be to create initially. Like the story of those 28 lines of code that took three months to create, in 1999, and still live happily in production with Zero Bugs.

We shouldn’t give up – creating simple things is a lot of work, but the rewards are huge.

To paraphrase Blaise Pascal, if your system is complicated it usually means you didn’t work hard enough to make it simpler. Or maybe you have a really complex problem, but that’s not very common. And maybe that complex problem is the wrong one to solve anyway.

I hope to see you next week in Basel and until then – keep things simple!


Sony Bridge for Mac, a perfect example of second-system effect #fail!

April 3, 2014

I’m sorry to be ranting here instead of writing about more creative things, but here’s a perfect example of the “second-system effect” – replacing simple software that works perfectly well with something supposedly more powerful…that doesn’t work.

My Sony Xperia V phone used to upgrade over the air. Get a notification, wait until you have a good Internet connection, start the upgrade, wait a few minutes and you’re done. Who needs more than that?

Sony apparently thinks that’s too simple. Or maybe they have other reasons to start forcing users to use their brand new shiny Sony Bridge desktop software for updates. Reminds me of my first Samsung Galaxy, years ago…I hope Samsung’s over that by now – but I digress.

There doesn’t seem to be another option to upgrade my phone this time, so I play nice, download and install the latest Sony Bridge V3.7.1(3710) software tool, connect my phone to my computer and start the bridge tool. Oh, by the way: how do you upgrade if you don’t have a computer? What’s with “mobile first”? But I digress.

And here’s the first “fun” result: the tool refuses to upgrade my phone now because the battery charge is less than 80%. Oh my.

IT IS PLUGGED IN, STUPID! How could you talk to it via USB otherwise? Fail. Sony Bridge, I’m starting to hate you. No, really.

Ok, I’m a good boy. I wait until my battery reaches 80% and try again. Hurray! I can get to the upgrade screen this time.

The tool says “Initializing” and “Talking to Update Engine”.

And it stays there.

And time passes.

Forty-seven minutes now. 47 MINUTES and exactly nothing else happens. No feedback. No progress. No error messages. Did you guys take UX 101 in school? I guess not.

Sony Bridge #fail screenshot

Forget the Cancel button…it doesn’t work.

I tried twice. Had to delete some secret cache files in between the two attempts, because the second time around the tool claimed my phone was up to date – which it isn’t, as the phone itself tells me.

The result of this brilliant software evolution? I am unable to upgrade my phone. That will make it useless soon, I guess, and I didn’t pay a sizeable amount of money to have a non-upgradeable phone.

So here we have a perfect example of what software folks should NEVER do: replace a perfectly working simple system (over-the-air updates) with a new shiny thing that DOESN’T WORK and makes me YELL here, which is quite unusual. Ok, maybe you don’t care about me yelling, but if you’re Sony I would expect you to be more clever than that.

So, Sony, I guess I’ll ask for a refund for this now useless phone. What do you think?

But of course, the best fix by far would be to just re-enable over-the-air updates. Those used to work very well.

Update: I was finally able to upgrade my phone by running the Sony Bridge train wre^H^H^H^H^H^H^H^H^H software on an old MacBook. But still…why?


RESTful services as intelligent websites

March 27, 2012

Thinking of RESTful services as intelligent websites helps drive our architecture decisions towards simplicity, discoverability and maximum use of the HTTP protocol. That should help us design services that are easier to use, debug, evolve and maintain.

In this post I’ll try to address a few important aspects of RESTful services, considering them as intelligent websites. We might need more formal specifications in some cases, but this is hopefully a good start.

Discoverability and service homepage

Having a descriptive homepage for your service helps search engines and people discover it, and the base service URL should “never” change. Service-specific subdomains come to mind.

The service homepage includes all the necessary information, such as the service owner’s contact information, links to source code, etc.

News about service updates lives right on the service homepage, ideally as full content for the most recent items, but at least as links.

The key idea is that I shouldn’t have to tell you more than the service homepage’s URL for you to be able to use the service.

Even if your service is a company-internal one that’s not meant to become public, having a decent-looking homepage, or at least one that’s well organized and easy to read, won’t hurt.

HATEOAS

In my pragmatic view Hypermedia as the Engine of Application State basically means links tell you where to go next.

In a website meant for humans, the meaning of a link is often expressed by logical groups: navigation links at the top left, “more info” links in a smaller font at the bottom of a page, etc.

For machines, adding rel attributes to <link> elements (in HTML, or the equivalents in other formats) tells us what we can do next. A client should be able to first try a link with non-destructive results, and get a response that supplies details about how the specified interaction is supposed to work. If those details are too involved to be expressed in a simple HTTP request/response (which should be a “maybe this is too complex” warning), links to explanations can be provided in the HTML content of our intelligent website.
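As a sketch of what that means for a client: the only hardcoded URL is the service homepage, and the next step is discovered from a rel attribute. This uses the jsoup HTML parser, and both the homepage URL and the “status” rel value are invented for the example:

    import java.io.IOException;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class LinkFollower {
        public static void main(String[] args) throws IOException {
            // The homepage URL is the only thing the client needs to know
            Document home = Jsoup.connect("https://service.example.com/").get();

            // Discover where to go next from the rel attribute,
            // instead of hardcoding the endpoint
            Element statusLink = home.selectFirst("link[rel=status]");
            if (statusLink == null) {
                throw new IllegalStateException("service does not advertise a status link");
            }
            System.out.println("Next step: " + statusLink.attr("abs:href"));
        }
    }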

Self-explaining services

The service documentation, if any is needed, is also provided on the service homepage, or linked from it.

Service interactions are designed to be as obvious as possible to minimize the need for documentation, and readable automated tests (hosted on the service website of course, or linked from it) can help document the details.

HTML forms describe service parameters

HTML forms are the best way to document the service parameters: provide a form that humans can use to play with the service, with enough information, such as lists and ranges of values, for users to figure out by themselves what the service expects.

The idea is that a human user will play with your service from the HTML form, then move on to implementing a client in the language of their choice.

The action attribute of <form> elements also contributes to HATEOAS – intelligent clients can discover that if needed, and it’s obvious for human users.
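Here is a minimal sketch of such a form-serving homepage, using the JDK’s built-in HTTP server – the /find endpoint and its q and format parameters are invented for the example:

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import com.sun.net.httpserver.HttpServer;

    public class FormHomepage {
        // The form documents the parameter names, the allowed values and,
        // via the action attribute, the target URL - all in one place
        private static final String FORM =
            "<form method='GET' action='/find'>"
          + "<label>Query: <input name='q' placeholder='free text'></label>"
          + "<label>Format: <select name='format'>"
          + "<option>html</option><option>json</option>"
          + "</select></label>"
          + "<input type='submit' value='Search'>"
          + "</form>";

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = FORM.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }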

And of course, speaking of parameters, be liberal in what you accept, and conservative in what you send.

Speaking in URLs

Like humans, RESTful services need to be precise when they speak about something important.

If your service response says invalid input format, for example, it’s not hard to include in that response a URL that points to a description of the correct input format. That makes all the difference between a frustrating and a useful error message, and is part of HATEOAS as well.
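A sketch of what that can look like on the wire, again using the JDK’s built-in HTTP server – the documentation URL is made up, and note the status code, which brings us to the next section:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import com.sun.net.httpserver.HttpExchange;

    public class InputErrors {
        // Reject bad input with a status code that says so, plus a URL
        // that tells the client exactly where the format is documented
        static void sendInvalidInput(HttpExchange exchange) throws IOException {
            String body = "{ \"error\": \"invalid input format\","
                + " \"see\": \"https://service.example.com/docs/input-format\" }";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json; charset=utf-8");
            exchange.sendResponseHeaders(400, bytes.length);  // not 200!
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        }
    }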

Web-friendly responses

Readable response formats will help people make sense of your service. The HTTP protocol provides standard ways of compressing responses to avoid using more bandwidth than needed, so optimizing the response format for wire efficiency does not make much sense unless you’re really expecting huge traffic. And even if you do need an optimized binary response format, there’s probably a way to make that optional.

HTTP status codes

Thou shalt not abuse HTTP status codes – or you might be suddenly transformed into a teapot.

Returning a 200 OK status code with content that describes an error is a no-no: if something went wrong, the HTTP status code needs to express that.

Security

Website authentication and authorization mechanisms and secure connections work for machine clients as well, no need to reinvent that wheel.

HTTP sessions are a bad idea in a RESTful context of course, state is driven by hypertext as discussed above.

Character encodings

The issues are the same for human users as for machine clients: you need to play by the HTTP protocol rules when it comes to character encodings, and using UTF-8 as the default is usually the best option.

Stable service URLs

As with good websites, once a service has been published at a given URL, it should continue working in the same way “forever”.

A substantially different new version of the service should get its own different URL – at least a different path containing a version number, or maybe a new subdomain name.

Long-running jobs

Regardless of human or machine clients, you usually don’t want HTTP requests to last more than a few seconds. Long-running jobs should initially create a new resource that describes the job’s progress and lets the client know when the output is available. We’ve had an interesting discussion about this in Apache Stanbol recently, about long-running jobs for semantic reasoners.
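Here is a minimal sketch of that pattern: POST starts the work and immediately returns 202 Accepted plus a Location header pointing to a job resource that the client can poll. The /jobs path, the in-memory job map and the fake 30-second work are all invented for illustration – this is not the actual Stanbol implementation:

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import com.sun.net.httpserver.HttpServer;

    public class JobService {
        private static final Map<String, String> JOBS = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/jobs", exchange -> {
                if ("POST".equals(exchange.getRequestMethod())) {
                    // Create a new job resource instead of blocking the request
                    String id = UUID.randomUUID().toString();
                    JOBS.put(id, "RUNNING");
                    new Thread(() -> {  // stand-in for the real long-running work
                        try { Thread.sleep(30_000); } catch (InterruptedException ignored) { }
                        JOBS.put(id, "DONE");
                    }).start();
                    exchange.getResponseHeaders().set("Location", "/jobs/" + id);
                    exchange.sendResponseHeaders(202, -1);  // 202 Accepted, no body
                    exchange.close();
                } else {
                    // GET /jobs/{id} lets the client poll the job's progress
                    String path = exchange.getRequestURI().getPath();
                    String id = path.startsWith("/jobs/") ? path.substring("/jobs/".length()) : "";
                    byte[] body = JOBS.getOrDefault(id, "UNKNOWN").getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().set("Content-Type", "text/plain; charset=utf-8");
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(body);
                    }
                }
            });
            server.start();
        }
    }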

Service metadata

Non-trivial services often have interesting metadata to provide, which can have both static and dynamic parts, like configuration values and usage statistics for example.

Here again, the website view helps: that metadata is just made accessible from the service homepage via links, ideally with both human (<a href>) and machine (<link rel>) variants.

Coda

Designing your RESTful service as an intelligent website should help people make good use of it, and will probably also help you make its interactions simpler and clearer.

If your service can’t be presented as a website, it might mean that your interaction design is not optimal, or not really RESTful.

I’m happy to see these opinions challenged, of course, if someone has any counter-examples.

Update: I should have mentioned that this post, and especially the “intelligent websites” concept, was inspired by conversations with Roy Fielding, in the early days of Apache Sling. I haven’t asked him to review it, so this doesn’t mean he would endorse the whole thing…I’m giving credit for inspiration but still liable for any mistakes ;-)

Update #2: See also my REST reading list, a regularly updated collection of links to articles that I’ve found useful in understanding and explaining REST.


Turning 42, and why I love my job

August 24, 2011

I’ll be turning 42 in a few months (counting in base 12 of course) and it feels like a good time to reflect on what it is that makes my job enjoyable. My father was a carpenter, and both my brothers started with that as their first job, so I’m kind of the disruptive element of the family (I didn’t say black sheep, ok?).

So, why did I choose to work with cold electronics (my first degree) and computers instead of working with a noble and beautiful living thing like wood?

After some thinking I came up with four key elements.

The first key element is creating cool things. Note that I don’t say creating cool software: I realized that for me the creative process is more important than what exactly is being created. Coolness is obviously a subjective measurement, so it’s hard to define precisely. Lean and maintainable software that people find useful definitely falls into that category for me.

Next is working with bright and fun people. Being active in the Apache Software Foundation, and joining Day in 2007, made me realize how stimulating it is to work with people who impress you every day with their technical and other skills. People who are fun to work with help keep some distance from the Big Problems At Work. Technical and other problems are bound to happen in any job, and that’s when your colleagues’ attitudes make all the difference. Software and work are not always the most important things in life.

Using efficient and fun tools comes next – in my previous life as an independent software developer and architect I sometimes had to put up with lame environments and tools at customer sites, and that can be depressing when you’re aiming for quality and efficiency. My first grade math teacher kept saying that good craftsmen use good tools, and she was right!

The fourth element is keeping a good work-life balance. I tend to engage myself 100% in my work, but for that to happen I need to be able to engage myself 100% in other things at regular intervals. This often means disconnected holidays “away from the grid”. I also decided long ago to never work on Sundays, unless there’s really no other way, which is rare. This has helped me keep my sanity during those phases when the rest of the week is totally crazy.

The fun thing is that those four elements would totally apply to being a carpenter…and I actually did enjoy helping at my father’s shop during school holidays when I was a kid. I’m not planning on going back though – now that my son learned carpentry as well, he’s making fun of me every time I try!


Pragmatic validation metrics for third-party software components

October 22, 2010

Earlier this week at the IKS general assembly I was asked to present a set of industrial validation metrics for the open source software components that IKS is producing.

Being my pragmatic self, I decided to avoid any academic/abstract stuff and focus on concrete metrics that help us provide value-adding solutions to our customers in the long term.

Here’s the result, for a hypothetical FOO software component.

Metrics are numbered VMx to make it clear what we’ll be arguing about when it comes to evaluating IKS software.

VM1: Do I understand what FOO is?
VM2: Does FOO add value to my product?
VM3: Is that added value demonstrable/sellable to my customers?
VM4: Can I easily run FOO alongside or inside my product?
VM5: Is the impact of FOO on runtime infrastructure requirements acceptable?
VM6: How good is the FOO API when it comes to integrating with my product?
VM7: Is FOO robust and functional enough to be used in production at the enterprise level?
VM8: Is the FOO test suite good enough as a functionality and non-regression “quality gate”?
VM9: Is the FOO licence (both copyright and patents) acceptable to me?
VM10: Can I participate in FOO’s development and influence it in a fair and balanced way?
VM11: Do I know who I should talk to for support and future development of FOO?
VM12: Am I confident that FOO is still going to be available and maintained once the IKS funding period is over?

VM1 can be surprisingly hard to fulfill when working on researchy/experimental stuff ;-)

Suggestions for improvements are welcome in this post’s comments, as usual.

Thanks to Alex Conconi who contributed VM11.


Adobe, Day and Open Source: a dream and a nightmare

July 30, 2010

What does the acquisition of Day by Adobe mean for Day’s open source activities? Some people are disappointed by the lack of comments about this in the official announcements to date.

Thankfully, Erik Larson, senior director of product management and strategy at Adobe, commented on Glyn Moody’s blog post quite early in the frenzy of tweets and blog posts that followed yesterday’s announcement.

Quoting him:

…we are very excited for Day’s considerable “open source savvy” to expand Adobe’s already significant open source efforts and expertise. That is a strategic benefit of the combination of the two companies. I have personally learned a lot from David Nuscheler and his team in the past few months as we put the deal together.

Not bad for a start, but we’re engineers, right? Used to considering the worst case, to make sure we’re prepared for it.

Me, I’m an engineer but also an optimist, and I’m used to starting with the ideal, happy case when analyzing situations. It helps focus my efforts on a worthy goal.

So let’s do this and dream about the best and worst cases. This is absolutely 100% totally my own dreams, I’m not speaking for anyone here, not wearing any hat. Just dreamin’, y’know?

The Dream

This is late 2011.

The last few months have more than confirmed that Day’s acquisition by Adobe, one year ago, happened for strategic reasons: a big part of the deal was filling up gaps in Adobe’s enterprise offering, but Day’s open source know-how and network have brought a lot of value as well.

Day folks have played an important role in expanding the open development culture inside Adobe; Photoshop will probably never be fully open source, but moving more key components of the Adobe technology stack to open source, and most importantly open development, has paid off nicely. In terms of reaching out to developers and customers, in getting much better feedback at all levels, and in terms of software quality of course. It’s those eyeballs.

The Apache Software Foundation’s Incubator has been quite busy in the last few months. The new platinum sponsor enjoys a fruitful relationship with the foundation.

With JCR moving to their core, Adobe’s enterprise applications are starting to reach a new level of flexibility. Customers are enthusiastic about being able to access their data via simple and standards-based interfaces. Enterprise-level mashups, anyone?

JCR is not just that minor content repository API pushed by that small Swiss software vendor anymore: being adopted by a major player has made a huge difference in terms of market recognition (I’m sure my friends at Hippo, Jahia and Sakai, among others, will love that one). The added resources have also helped improve the implementations, and people love the book!

With this, Apache Jackrabbit and Apache Sling have reached new levels of community participation and quality. Although quite a few new committers are from Adobe, a number of other companies have also pushed their developers to participate more, due to the increased market visibility of JCR.

Adobe’s additional resources, used wisely to take advantage of the Day team’s strengths, have enabled them to fully realize the CQ5 vision. Everything is content, really.

As in all fairy tales, the former Day team and Adobe live happily ever after. (Editor’s note: this is not Disney, can we strike that one please?)

The Nightmare

This is late 2011, and I can hear the programmers complaining in their bland cubicles.

Aaarrggghhhhh.

The few Day folks who still work at Adobe did try to convince their management to continue on the open source and open development track. No luck – you can’t argue with a US company making 4 billion a year, can you?

CQ5 customers are too busy converting their websites to native PDF (this is about documents, right?) to realize what’s going on. The most desperate just switched to DrooplaPress, the newest kid on the LISP-based CMSes block. That won’t help business much but at least it’s fun to work with. If you love parentheses, that is.

Adobe’s competitors who really jumped on the open source and open development train are gone for good; it is too late to catch up. You should have sold your shares a year ago.

Luckily, Apache Jackrabbit and Apache Sling are still alive, and increased involvement of the “Benelux Gang” (ex-Day folks spread over a few Benelux content management companies) in those projects means there’s still hope.

You wake up wondering why you didn’t accept that job at the local fast food. Computers are so boring.

Coda

I know life is more complicated than dreams sometimes, but I like dreams much better than nightmares, and I’m a chronic optimist. So you can easily guess which scenario I’m going to work towards!

I’ll keep you posted about what really happens next. Once I wake up, that is.

Just dreamin’, y’know?

Related reading

Open Source at Adobe by my colleague and fellow Apache Member Jukka Zitting.

Open innovation in software means Open Source, a recent post of mine.

See also my collected links related to the announcement at http://delicious.com/bdelacretaz/adobeday.


Open innovation in software means open source

July 2, 2010

Here’s a “reprint” of an article that I wrote recently for the H, to introduce my talk at TransferSummit last week.

According to Henry Chesbrough[1], Open Innovation consists of using external ideas as well as internal ideas, and internal and external paths to market, to advance a company’s technology.

Software architects and developers are usually not short of ideas, but which of those ideas are the really good ones? How do you select the winning options and avoid wasting energy and money on the useless ones?

Feedback is the key to separating the wheat from the chaff. Fast, good-quality feedback is required to steer any fast vehicle or piece of sports equipment, and it works the same in software: without a performant feedback loop, you’re bound to fall on your face – or at least to be slower than your competitors on the road to success.

Innovation is not invention – it’s about value

In a recent blog post on the subject, Christian Verstraete, CTO at HP, rightly notes that innovation is not invention. Whereas the value of a new invention might be unknown, the goal of innovation is to produce value, often from existing ideas.

The output of our feedback loop must then be a measurement of value – and what better value for a software product than happy stakeholders? Other developers adopting your ideas, field testers happy with performance, experts suggesting internal changes which will make them feel good about your software’s structure. That kind of feedback is invaluable in steering your innovative software product in the right direction, quickly.

How fast is your feedback loop?

If you have to wait months to get that high-quality feedback, as you might in a corporate setting, your pace of innovation will be accordingly slow.

In the old world of committees, meetings and reports, things move at the speed of overstuffed schedules and overdue reports – slowly. In the new world of agile open source projects, fast and asynchronous Internet-based communication channels are your friends, helping people work at their own pace and on their own schedule, while collectively creating value quickly.

Open source organizations like the Apache Software Foundation provide standardised tools and best practices to foster efficient communications amongst project members. Shared source code repositories generate events to which project members can subscribe, to be informed immediately of any changes in modules that they’re interested in. Web-based issue trackers also use events and subscriptions to make it easy to collaborate efficiently on specific tasks, without requiring the simultaneous online presence of collaborators. Mailing lists also allow asynchronous discussions and decisions, while making all the resulting information available in self-service to new project members.

It is these shared, event-based and asynchronous communications channels that build the quick feedback loop that is key to software innovation. It is not uncommon for a software developer to receive feedback on a piece of code that they wrote, from the other end of the world, just a few minutes after committing that code to the project’s public code repository. Compared to a written problem report coming “from above” a few weeks later, when the developer has moved on to a different module, the value of that fast feedback is very high. It can feel a bit like a bunch of field experts looking over your shoulder while you’re working – scary but extremely efficient.

How good are your feedback “sensors”?

Fast feedback won’t help if it’s of low quality, and fortunately open source projects can also help a lot here. Successful projects can help bring together the best minds in the industry, to collectively solve a problem that benefits all of them. The Apache HTTP server project is one of the best examples, with many CTO-level contributors including a few that were involved in defining the protocols and the shape of today’s Web. If software developers (God forbid) were sold between companies the way soccer players are transferred between teams, we’d see millions of dollars flowing around.

Open source projects are very probably the best way to efficiently bring software experts together today. Industry associations and interest groups might fulfill that role in other industries, but developers like to express themselves in code, and open source projects are where that happens today.

You could of course hire experts to give feedback on your software inside your company, but only a handful of companies have enough money to bring in the level and number of experts that we are talking about – and that might well turn out to be much slower than the open source way of working.

What’s the right type of project?

Creating or joining an open source project that helps your business and attracts a community of experts is not that easy: the open source project space is somewhat crowded today, and those experts are busy people.

Judging from the Apache Software Foundation’s achievements in the last ten years, infrastructure projects have by far the highest success rate. If you can reduce (part of) your problem to a generalised software infrastructure that appeals to a wide range of software developers, those experts will see value in joining the project. Apache Hadoop is another very successful example of software architects and developers from different companies joining forces to solve a hard problem (large scale distributed computing) in a way that can benefit a whole industry. On a smaller scale, Apache Jackrabbit, one of the projects in which my employer is very active, brings together many experts from the content management world, to solve the problem of storing, searching and retrieving multimedia content efficiently. Those types of software modules are used as central infrastructure components in systems that share a similar architecture, while offering very different services to their end users.

Projects closer to the user interface level are often harder to manage in an open group, partly because they are often more specific to the exact problem that they solve, and also because it is often hard for people coming from different companies and cultural backgrounds to agree on the colour of the proverbial bike shed. An infrastructure software project can be well defined by an industry specification (such as JCR in Jackrabbit’s case), and/or by automated test suites. These are usually much easier to agree on than user interface mock-ups.

Where next?

I hope to have convinced you that open source projects provide the best feedback loop for innovative software. As a next step, I would recommend getting involved in open source projects that matter to you. There are many ways to contribute, from reporting bugs in a useful way, to writing tutorials, contributing new modules or extensions, or simply reporting on your use of the software in various environments.

Contributing, in any small or big way, to a successful open source project is the best way to see this high-quality feedback loop in action. You might also try to use the open source ways of working inside your company, to create or improve your own high-quality “innovation feedback loop”.

I cannot pretend to have the definitive answer to the “how do you select and execute the right ideas to innovate?” question. When it comes to software, however, the fast and high-quality feedback loop that open source projects provide is, in my opinion, the best selection tool.

[1] Chesbrough, H.W. (2003). Open Innovation: The new imperative for creating and profiting from technology. Boston: Harvard Business School Press

