The "Chance of all tests passing" at the CPAN dependencies and test results checker is not a scientific measure - but it is great in making it visible that installing from CPAN is still a game of chance with quite big odds of failure. The linked above Catalyst dependencies page quotes the 'probability' of installation failure as 22%, and that is for the most advertised Perl web framework. The important thing in that picture is how this relatively big failure chance comes from the individual small failure chances of the Catalyst prerequisites.
So what can we do? One thing I would propose is to stop using optional features and dependencies, because they really get in the way of automated installs. We can also simplify the dependency graphs of some of the more important modules and gather better statistics about failure rates and their origins, but probably the most effective remedy is simply to be more vigilant about test failures in our own modules and in their prerequisites. The important lesson here is that a 1% failure rate seems like nothing important - but it can make our module unfit to become a prerequisite for a more complex library.
Sunday, August 30, 2009
3 comments:
It looks like most of the high probability of overall failure is due to List::MoreUtils and Tree::Simple. The first has had some relatively recent problems that have put it at the top of the FAIL 100 list. The NA reports for Tree::Simple look like some sort of testing problem and may or may not reflect issues with Tree::Simple itself.
I think the broader lesson here is that a large dependency tree means that one or two small, even transient problems could make Catalyst hard to install and create a negative first impression.
The checking on cpandeps isn't entirely sane - it calculates across all platforms, so if you have one platform where half a dozen deps fail and five platforms where none do, you get the same result as one dep failing on all six platforms, if memory serves.
I spoke to Dave Cantrell about this, and he's aware that the algorithm could be more clever but doesn't have time to actually produce a better one - 'patches welcome' very much applies.
-- mst
Hi mst - I did not want to single Catalyst out, but I needed a concrete example. I know the measure is not perfect (I did link to an explanation from Dave). I've just checked Jifty - and that one has a 15% chance of success. Catalyst with 78% is then a comparatively bright spot in the picture.