Pragmatic validation metrics for third-party software components
Earlier this week at the IKS general assembly I was asked to present a set of industrial validation metrics for the open source software components that IKS is producing.
Being my pragmatic self, I decided to avoid any academic/abstract stuff and focus on concrete metrics that help us provide value-adding solutions to our customers in the long term.
Here's the result, for a hypothetical FOO software component.
Metrics are numbered VMx to make it clear what we'll be arguing about when it comes to evaluating IKS software.
VM1 - Do I understand what FOO is?
VM2 - Does FOO add value to my product?
VM3 - Is that added value demonstrable/sellable to my customers?
VM4 - Can I easily run FOO alongside or inside my product?
VM5 - Is the impact of FOO on runtime infrastructure requirements acceptable?
VM6 - How good is the FOO API when it comes to integrating with my product?
VM7 - Is FOO robust and functional enough to be used in production at the enterprise level?
VM8 - Is the FOO test suite good enough as a functionality and non-regression "quality gate"?
VM9 - Is the FOO licence (both copyright and patents) acceptable to me?
VM10 - Can I participate in FOO's development and influence it in a fair and balanced way?
VM11 - Do I know who I should talk to for support and future development of FOO?
VM12 - Am I confident that FOO is still going to be available and maintained once the IKS funding period is over?
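As a small illustration (not part of the original checklist), recording yes/no answers to these metrics and listing the ones that still fail is trivial to script; the answer values below are hypothetical assumptions for the imaginary FOO component:

```python
def failing_metrics(answers):
    """Return the VM identifiers whose answer is 'no', in numeric order."""
    return [vm for vm in sorted(answers, key=lambda v: int(v[2:]))
            if not answers[vm]]

# Illustrative answers for a hypothetical FOO component (assumptions):
answers = {"VM1": True, "VM2": True, "VM11": False, "VM12": False}
print(failing_metrics(answers))  # prints ['VM11', 'VM12']
```

Nothing fancy, but it makes evaluation discussions concrete: each VMx either passes or goes on the to-do list.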
VM1 can be surprisingly hard to fulfill when working on researchy/experimental stuff ;-)
Suggestions for improvements are welcome in this post's comments, as usual.
Thanks to Alex Conconi who contributed VM11.