Sony Bridge for Mac, a perfect example of second-system effect #fail!

April 3, 2014

I’m sorry to be ranting here instead of writing about more creative things, but here’s a perfect example of the “second-system effect” – replacing simple software that works perfectly well with something supposedly more powerful…that doesn’t work.

My Sony Xperia V phone used to upgrade over the air. Get a notification, wait until you have a good Internet connection, start the upgrade, wait a few minutes and you’re done. Who needs more than that?

Sony apparently thinks that’s too simple. Or maybe they have other reasons to start forcing users to use their brand new shiny Sony Bridge desktop software for updates. Reminds me of my first Samsung Galaxy, years ago…I hope Samsung’s over that by now – but I digress.

There doesn’t seem to be another option to upgrade my phone this time, so I play nice, download and install the latest Sony Bridge V3.7.1(3710) software tool, connect my phone to my computer and start the bridge tool. Oh, by the way: how do you upgrade if you don’t have a computer? What’s with “mobile first”? But I digress.

And here’s the first “fun” result: the tool refuses to upgrade my phone now because the battery charge is less than 80%. Oh my.

IT IS PLUGGED IN, STUPID! How could you talk to it via USB otherwise? Fail. Sony Bridge, I’m starting to hate you. No, really.

Ok, I’m a good boy. I wait until my battery reaches 80% and try again. Hurray! I can get to the upgrade screen this time.

The tool says “Initializing” and “Talking to Update Engine”.

And it stays there.

And time passes.

Forty-seven minutes now. 47 MINUTES and exactly nothing else happens. No feedback. No progress. No error messages. Did you guys take UX 101 in school? I guess not.

Sony Bridge #fail screenshot

Forget the Cancel button…it doesn’t work.

I tried twice, and had to delete some secret cache files between the two attempts, because the second time around the tool was telling me that my phone is up to date. It’s not, as the phone itself tells me.

The result of this brilliant software evolution? I am unable to upgrade my phone. That will make it useless soon, I guess, and I didn’t pay a sizeable amount of money to have a non-upgradeable phone.

So here we have a perfect example of what software folks should NEVER do: replace a perfectly working simple system (over-the-air updates) with a shiny new thing that DOESN’T WORK and makes me YELL here, which is quite unusual. Ok, maybe you don’t care about me yelling, but if you’re Sony I would expect you to be more clever than that.

So, Sony, I guess I’ll ask for a refund for this now useless phone. What do you think?

But of course, the best by far is just to re-enable over-the-air updates. That used to work very well.

Update: I was finally able to upgrade my phone by running the Sony Bridge train wre^H^H^H^H^H^H^H^H^H software on an old MacBook. But still…why?


Simple startup profiling of OSGi applications

October 18, 2013

Prompted by a colleague’s question about profiling the startup time of Apache Sling systems, I vaguely remembered writing something a while ago, but couldn’t find it immediately.

Funny how one’s memory works – it took me a while to find it again, but I did indeed write an OSGi events recorder + timeline utility in SLING-1109.

That utility has since moved to the Apache Felix webconsole events plugin bundle, which is included in the Sling launchpad and thus was right here under my eyes.

Here’s how you can use that webconsole plugin to get a simple timeline of an OSGi system’s startup:

  1. Install the org.apache.felix.webconsole.plugins.event bundle in an OSGi app where the Apache Felix Webconsole is active.
  2. Set that bundle to a low start level, say 1 or 2, so that it’s activated early and captures as many startup events as possible.
  3. Configure the events recorder at /system/console/configMgr/org.apache.felix.webconsole.plugins.event.internal.PluginServlet, setting the number of events to capture high enough to record your app’s startup.
  4. Start your app.
  5. Look at the startup timeline at /system/console/events.

This provides a simple graphical timeline, as shown on the screenshot below, that’s especially useful in detecting outlier bundles or services that take a long time to start up.

Webconsole Events plugin screenshot
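Even a crude version of such a timeline is useful for spotting outliers. As a self-contained illustration, a few lines of awk turn (bundle, startup-milliseconds) pairs into a text timeline; the bundle names and timings below are made up, and the real plugin of course records actual OSGi events rather than reading such pairs:

```shell
# Crude text timeline from made-up (bundle, startup-milliseconds) pairs;
# one '#' per 200ms, so slow bundles stand out immediately.
printf '%s\n' \
  'org.example.fast 120' \
  'org.example.medium 800' \
  'org.example.slow 4500' |
awk '{ bar = ""; for (i = 0; i < $2 / 200; i++) bar = bar "#";
       printf "%-20s %6dms %s\n", $1, $2, bar }'
```

The long bar for the slow bundle jumps out immediately, which is exactly the kind of signal the graphical timeline gives you at a glance.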


jq: sed, grep and awk for JSON

September 26, 2013

From the very useful tools department: today I stumbled on jq, via Jeroen Janssen’s 7 command-line tools for data science blog post.

As the tagline says, jq is like sed, grep and awk for JSON: a command-line filter that lets you format, select and output JSON data.

As an example, here’s how you can list all the OSGi bundles from your Sling instance together with their state. The raw bundles.json input looks like this:

{
  "data": [
    {
      "category": "",
      "symbolicName": "org.apache.felix.framework",
      "version": "4.2.0",
      "state": "Active",
      "stateRaw": 32,
      "fragment": false,
      "name": "System Bundle",
      "id": 0
    },
    {
      "category": "",
      "symbolicName": "org.apache.aries.jmx.api",
      "version": "0.3.0",
…

And here's the curl + jq command:

$ curl -s -u admin:admin http://localhost:8080/system/console/bundles.json | \
jq '.data | .[] | .symbolicName + " " + .state ' | sort

"derby Active"
"groovy-all Active"
"jcl.over.slf4j Active"
"log4j.over.slf4j Active"
"org.apache.aries.jmx.api Active"
"org.apache.aries.jmx.core Active"
...

Neat, isn’t it?
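The same pipeline extends naturally to filtering. For example, to list only the bundles that are not Active, you can add a select step; the sample input below is made up, in the same shape as bundles.json:

```shell
# Filter out Active bundles from a bundles.json-shaped document
# (inline sample data, so this runs without a Sling instance):
echo '{"data":[{"symbolicName":"a.bundle","state":"Active"},{"symbolicName":"b.bundle","state":"Resolved"}]}' |
jq -r '.data[] | select(.state != "Active") | .symbolicName + " " + .state'
# prints: b.bundle Resolved
```

The -r flag outputs raw strings instead of JSON-quoted ones, which is handy when the next step in the pipeline is sort or grep.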

See jq’s tutorial and manual for more details.


The Five Wisdoms of Open Development

February 15, 2013

Preparing my Open Development in the Enterprise talk for ApacheCon NA 2013 in Portland the week after, I’ve tried to capture the basic wisdoms of day-to-day work in an Open Development environment:

If it didn’t happen on the dev list, it didn’t happen.

Whatever you’re working on, it must be backed by an issue in the tracker.

If it’s not in the source code control system, it doesn’t exist.

If it’s important, it needs a permanent URL.

What happened while you were away? Check the activity stream and archives.

As usual, following recipes without understanding what they’re about won’t help – I use those wisdoms just as a reminder of how things are supposed to happen, or more precisely how people are supposed to communicate.


Ten years of blogging!

November 9, 2012

Although I had marked the date in my calendar, a great ApacheCon kept me busy and I missed Tuesday’s date for announcing that it’s been ten years now since I started blogging.

Not that you’d care…though it’s fun to see that my interests and professional activities haven’t changed that much in ten years. Less blogging of course, as over time this blog has morphed into an outlet for more permanent/longer content, which is naturally less frequent.

The first few posts at http://grep.codeconsult.ch/2002/11/ are not earth-shattering – that was a start though.

See you on Twitter for now, as that’s where my noise has mostly moved, and we’ll see if this blog is still around in ten years!


Open Source Communities are like gardens, says Leslie Hawthorn

June 4, 2012

Great keynote by Leslie Hawthorn at Berlin Buzzwords today – here are my quick notes.

As with a garden, you need to cultivate your community, it doesn’t just happen.

You need to define the landscape – where you want the garden to grow. For a community this means clearly defining goals and non-goals.

The landscape needs to be regularly assessed – are we welcoming to newbies? Can new plants grow here?

Clear the paths. Leslie mentions the example of rejecting code patches based on the coding style – if you haven’t published a style guide, that’s not the best way to welcome new contributors. Make sure all paths to success are clear for your community members.

A garden needs various types of plants – invite various types of people. Not just vague calls for help – specific, individual calls based on their skills usually work much better.

Pluck out the weeds, early. Do not let poisonous people take over.

Nurture your seedlings. Take care of newbies, help them grow.

Like companion planting, pairing contributors with the right mentors can make a big difference.

Know when to prune. Old-timers leaving a project is not necessarily a problem, things change.

Leslie cites OpenMRS as an example of successful community management – I’ll have to have a look but I already like the “how to be a casual committer” paragraph in their different types of OpenMRS developers page.

All in all, very interesting parallels. Being quite clueless about gardening, I never thought of looking at our communities from this angle, but it makes perfect sense. Broadening one’s view – that’s what a keynote should be about!


RESTful services as intelligent websites

March 27, 2012

Thinking of RESTful services as intelligent websites helps drive our architecture decisions towards simplicity, discoverability and maximum use of the HTTP protocol. That should help us design services that are easier to use, debug, evolve and maintain.

In this post I’ll try to address a few important aspects of RESTful services, considering them as intelligent websites. We might need more formal specifications in some cases, but this is hopefully a good start.

Discoverability and service homepage

Having a descriptive homepage for your service helps search engines and people discover it, and the base service URL should “never” change. Service-specific subdomains come to mind.

The service homepage includes all the necessary information such as service owner’s contact information, links to source code, etc.

News about service updates live right here on the service homepage, ideally as full content for the most recent news, but at least as links.

The key idea is that I shouldn’t have to tell you more than the service homepage’s URL for you to be able to use the service.

Even if your service is a company-internal one that’s not meant to become public, having a decent-looking homepage, or at least one that’s well organized and readable, won’t hurt.

HATEOAS

In my pragmatic view Hypermedia as the Engine of Application State basically means links tell you where to go next.

In a website meant for humans, the meaning of a link is often expressed by logical groups: navigation links at the top left, “more info” links in a smaller font at the bottom of a page, etc.

For machines, adding rel attributes to <link> elements (in HTML, or the equivalents in other formats) tells us what we can do next. A client should be able to first try a link with non-destructive results, and get a response that supplies details about how the specified interaction is supposed to work. If those details are too involved to be expressed in a simple HTTP request/response (which should be a “maybe this is too complex” warning), links to explanations can be provided in the HTML content of our intelligent website.
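As a small self-contained sketch of that idea, here’s a client picking its next step from a link’s rel attribute instead of a hardcoded path. The HTML, rel names and paths below are all invented, and the shell parsing is deliberately naive:

```shell
# A made-up service homepage with machine-readable link rels:
cat <<'EOF' > /tmp/service-home.html
<html><head>
<link rel="create-item" href="/items"/>
<link rel="search" href="/search"/>
</head><body>Human-readable service description goes here.</body></html>
EOF

# "Where do I create an item?" - follow the rel, not a hardcoded path:
grep 'rel="create-item"' /tmp/service-home.html | sed 's/.*href="\([^"]*\)".*/\1/'
# prints: /items
```

If the service later moves item creation to a different URL, clients that follow the rel keep working; clients that hardcoded /items break. That’s the point of links driving application state.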

Self-explaining services

The service documentation, if any is needed, is also provided on the service homepage, or linked from it.

Service interactions are designed to be as obvious as possible to minimize the need for documentation, and readable automated tests (hosted on the service website of course, or linked from it) can help document the details.

HTML forms describe service parameters

HTML forms are the best way to document service parameters: provide a form that humans can use to play with the service, with enough information such as lists and ranges of values so that users can figure out by themselves what the service expects.

The idea is that a human user will play with your service from the HTML form, then move on to implementing a client in the language of their choice.
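Here’s a minimal sketch of such a form; the service, path and parameter names are all invented:

```shell
# Print a minimal, invented HTML form documenting a service's parameters.
# The select element shows the human (and the reader) the allowed values.
cat <<'EOF'
<form method="POST" action="/convert">
  Temperature: <input name="value" type="number"/>
  <select name="unit">
    <option>celsius</option>
    <option>fahrenheit</option>
  </select>
  <input type="submit" value="Convert"/>
</form>
EOF
```

A form like this tells a human everything needed to write the equivalent curl call: the method, the target URL, the parameter names and the valid values.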

The action attribute of <form> elements also contributes to HATEOAS – intelligent clients can discover that if needed, and it’s obvious for human users.

And of course, speaking of parameters, be liberal in what you accept, and conservative in what you send.

Speaking in URLs

Like humans, RESTful services need to be precise when they speak about something important.

If your service response says invalid input format for example, it’s not hard to include in that response a URL that points to a description of the correct input format. That makes all the difference between a frustrating and a useful error message, and is part of HATEOAS as well.
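For example, here’s what such an error response could look like, with the documentation link trivially extractable by a client. The field names and the URL are made up:

```shell
# An invented error response that "speaks in URLs":
echo '{"error":"invalid input format","see":"https://service.example.com/docs/input-format"}' |
jq -r '.see'
# prints: https://service.example.com/docs/input-format
```

A client, or a frustrated human reading raw responses, can follow that link directly instead of hunting for documentation.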

Web-friendly responses

Readable response formats will help people make sense of your service. The HTTP protocol provides standard ways of compressing responses to avoid using more bandwidth than needed, so optimizing the response format for wire efficiency does not make much sense unless you’re really expecting huge traffic. And even if you need an optimized binary response format, there’s probably a way to make that optional.

HTTP status codes

Thou shalt not abuse HTTP status codes – or you might be suddenly transformed into a teapot.

Returning a 200 OK status code with content that describes an error is a no-no: if something went wrong, the HTTP status code needs to express that.

Security

Website authentication and authorization mechanisms and secure connections work for machine clients as well, no need to reinvent that wheel.

HTTP sessions are a bad idea in a RESTful context of course, state is driven by hypertext as discussed above.

Character encodings

The issues are the same for human users as for machine clients: you need to play by the HTTP protocol rules when it comes to character encodings, and using UTF-8 as the default is usually the best option.

Stable service URLs

As with good websites, once a service has been published at a given URL, it should continue working in the same way “forever”.

A substantially different new version of the service should get its own different URL – at least a different path containing a version number, or maybe a new subdomain name.

Long-running jobs

Regardless of human or machine clients, you usually don’t want HTTP requests to last more than a few seconds. Long-running jobs should initially create a new resource that describes the job’s progress and lets the client know when the output is available. We’ve had an interesting discussion about this in Apache Stanbol recently, about long-running jobs for semantic reasoners.
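The pattern can be sketched as follows. To keep the sketch self-contained, the job resource is simulated with a local file; a real client would poll the job URL returned by the initial POST, and all the names here are invented:

```shell
# The job resource is simulated with a local file; a real client would
# poll the job URL returned by the initial POST that created the job.
state=/tmp/job-state.txt
echo RUNNING > "$state"
( sleep 1; echo DONE > "$state" ) &   # stand-in for the server finishing the job
while [ "$(cat "$state")" != "DONE" ]; do
  sleep 0.2   # a real client would honor the service's suggested poll interval
done
echo "job done, now GET the output resource"
```

The key point is that the initial request returns immediately with a job resource, and the state lives in that resource rather than in a long-held HTTP connection.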

Service metadata

Non-trivial services often have interesting metadata to provide, which can have both static and dynamic parts, like configuration values and usage statistics for example.

Here again, the website view helps: that metadata is just made accessible from the service homepage via links, ideally with both human (a href) and machine (link rel) variants.

Coda

Designing your RESTful service as an intelligent website should help people make good use of it, and will probably also help you make its interactions simpler and clearer.

If your service can’t be presented as a website, it might mean that your interaction design is not optimal, or not really RESTful.

I’m happy to see these opinions challenged, of course, if someone has any counter-examples.

Update: I should have mentioned that this post, and especially the “intelligent websites” concept, was inspired by conversations with Roy Fielding, in the early days of Apache Sling. I haven’t asked him to review it, so this doesn’t mean he would endorse the whole thing…I’m giving credit for inspiration but still liable for any mistakes ;-)

Update #2: See also my REST reading list, a regularly updated collection of links to articles that I’ve found useful in understanding and explaining REST.

