$ curl -s http://simile.mit.edu/crowbar/test.html
Hi lame crawler
Using crowbar as a proxy, the page is rendered with the Gecko engine (invoked via a simple XULRunner app), exactly as a client browser would render it (it is, of course, the same engine Firefox uses):
$ curl -s --data "url=http://simile.mit.edu/crowbar/test.html" http://127.0.0.1:10000/ | xml fo -s 2
<!-- (stuff added by crowbar for its test page omitted)... -->
Note the use of XMLStarlet to format the resulting document: since it's a DOM dump from Gecko, the output is well-formed in all cases.
The only thing missing seems to be the encoding declaration in the XML output: crawling http://www.perdu.com for example (one of my favorite references on the Web) didn't produce parsable XML, as the output is serialized using the iso-8859-1 encoding (at least here on my Mac OS X system) but the XML declaration doesn't say so.
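Until that's fixed, one possible workaround (my own sketch, not something crowbar provides) is to re-encode the output to UTF-8 before parsing, since a parser assumes UTF-8 when an XML document carries no encoding declaration:

```shell
# Re-encode iso-8859-1 output to UTF-8 so that XML parsers, which assume
# UTF-8 when no declaration is present, accept it.
# The printf line merely stands in for crowbar's output here.
printf '<p>d\351j\340 vu</p>' > latin1.xml   # "déjà vu" in iso-8859-1
iconv -f iso-8859-1 -t utf-8 latin1.xml > utf8.xml
cat utf8.xml
```

In a real pipeline you would just slot `iconv -f iso-8859-1 -t utf-8` between the `curl` call and the XML tool.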
The code is at http://simile.mit.edu/repository/crowbar/trunk/ and can be installed directly under XULRunner.
Update: the brand new Crowbar web site has more info.