Saturday, August 7, 2010

A Non-Tragedy of a Commons

“Hardin famously asks us to imagine a green held in common by a village of peasants, who graze their cattle there. But grazing degrades the commons, tearing up grass and leaving muddy patches, which re-grow their cover only slowly. If there is no agreed-upon (and enforced!) policy to allocate grazing rights that prevents overgrazing, all parties' incentives push them to run as many cattle as quickly as possible, trying to extract maximum value before the commons degrades into a sea of mud.

Most people have an intuitive model of cooperative behavior that goes much like this. The tragedy of the commons actually stems from two linked problems, one of overuse and another of underprovision. On the demand side, the commons situation encourages a race to the bottom by overuse (what economists call a congested-public-good problem). On the supply side, the commons rewards free-rider behavior, removing or diminishing incentives for individual actors to invest in developing more pasturage.

The tragedy of the commons predicts only three possible outcomes. One is the sea of mud. Another is for some actor with coercive power to enforce an allocation policy on behalf of the village (the communist solution). The third is for the commons to break up as village members fence off bits they can defend and manage sustainably (the property-rights solution).

When people reflexively apply this model to open-source cooperation, they expect it to be unstable with a short half-life. Since there's no obvious way to enforce an allocation policy for programmer time over the Internet, this model leads straight to a prediction that the commons will break up, with various bits of software being taken closed-source and a rapidly decreasing amount of work being fed back into the communal pool.

In fact, it is empirically clear that the trend is opposite to this. The trend in breadth and volume of open-source development can be measured by submissions per day at Metalab and SourceForge (the leading Linux source sites) or announcements per day at freshmeat.net (a site dedicated to advertising new software releases). Volume on both is steadily and rapidly increasing. Clearly there is some critical way in which the "Tragedy of the Commons" model fails to capture what is actually going on.

Part of the answer certainly lies in the fact that using software does not decrease its value. Indeed, widespread use of open-source software tends to increase its value, as users fold in their own fixes and features (code patches). In this inverse commons, the grass grows taller when it's grazed upon.

That this public good cannot be degraded by overuse takes care of half of Hardin's tragedy, the congested-public-goods problem. It doesn't explain why open source doesn't suffer from underprovision. Why don't people who know the open-source community exists universally exhibit free-rider behavior, waiting for others to do the work they need, or (if they do the work themselves) not bothering to contribute the work back into the commons?

Part of the answer lies in the fact that people don't merely need solutions, they need solutions on time. It's seldom possible to predict when someone else will finish a given piece of needed work. If the payoff from fixing a bug or adding a feature is sufficient to any potential contributor, that person will dive in and do it (at which point the fact that everyone else is a free rider becomes irrelevant).

Another part of the answer lies in the fact that the putative market value of small patches to a common source base is hard to capture. Suppose I write a fix for an irritating bug, and suppose many people realize the fix has money value; how do I collect from all those people? Conventional payment systems have high enough overheads to make this a real problem for the sorts of micropayments that would usually be appropriate.

It may be more to the point that this value is not merely hard to capture; in the general case it's hard to even assign. As a thought experiment, let us suppose that the Internet came equipped with the theoretically ideal micropayment system: secure, universally accessible, zero-overhead. Now let's say you have written a patch labeled "Miscellaneous Fixes to the Linux Kernel". How do you know what price to ask? How would a potential buyer, not having seen the patch yet, know what is reasonable to pay for it?

What we have here is almost like a funhouse-mirror image of F. A. Hayek's 'calculation problem': it would take a superbeing, both able to evaluate the functional worth of patches and trusted to set prices accordingly, to lubricate trade.

Unfortunately, there's a serious superbeing shortage, so patch author J. Random Hacker is left with two choices: sit on the patch, or throw it into the pool for free.

Sitting on the patch gains nothing. Indeed, it incurs a future cost: the effort involved in re-merging the patch into the source base in each new release. So the payoff from this choice is actually negative (and multiplied by the rapid release tempo characteristic of open-source projects).

To put it more positively, the contributor gains by passing the maintenance overhead of the patch to the source-code owners and the rest of the project group. He also gains because others will improve on his work in the future. Finally, because he won't have to maintain the patch himself, he will be able to spend more time on other and larger customizations to suit his needs. The same arguments that favor opening source for entire packages apply to patches as well.

Throwing the patch in the pool may gain nothing, or it may encourage reciprocal effort from others that will address some of J. Random's problems in the future. This choice, apparently altruistic, is actually optimally selfish in a game-theoretic sense.”

The Cathedral and the Bazaar, Eric Raymond, O'Reilly, 2001, pp. 126-127
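
The payoff comparison in the closing paragraphs can be made concrete with a toy calculation. The sketch below is not from Raymond's text; the function names and the hour figures are illustrative assumptions, chosen only to show why, under the reasoning above, throwing a patch into the pool dominates sitting on it.

# A minimal sketch of the contribute-vs-hoard payoff argument quoted above.
# All names and numbers here are hypothetical illustrations, not Raymond's.

def hoard_payoff(merge_cost_per_release: float, releases: int) -> float:
    """Sitting on a private patch gains nothing, and the re-merge cost
    recurs with every upstream release."""
    return -merge_cost_per_release * releases

def contribute_payoff(maintenance_saved: float,
                      expected_improvements: float) -> float:
    """Contributing moves maintenance upstream and may attract
    improvements from others; the worst case is simply zero gain."""
    return maintenance_saved + expected_improvements

if __name__ == "__main__":
    # Hypothetical figures: 2 hours of re-merge work per release over
    # 12 releases, versus 20 hours of maintenance avoided and perhaps
    # 5 hours' worth of improvements contributed back by others.
    print("hoard:     ", hoard_payoff(2.0, 12))         # -24.0 hours
    print("contribute:", contribute_payoff(20.0, 5.0))  #  25.0 hours
    # Even with expected_improvements = 0, contributing dominates hoarding,
    # which is the sense in which the 'altruistic' choice is optimally selfish.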
