May the source be with you,
but remember the KISS principle ;-)

Softpanorama: (slightly skeptical) Open Source Software Educational Society

Computer-science-related content (under the Open Content License) for self-study. An online resource for students, with a special emphasis on unfashionable topics like algorithms, scripting languages, computer security, extremely useful but undervalued tools (Orthodox file managers, folding XEDIT-style editors), the importance of using the best available computer books for self-study (especially books by Donald Knuth), open source software development problems, and the problem of information overload.


Support USENIX: the only computer professional society that opens its magazine and proceedings to the public after one year


An Annotated Webliography on Open Source Software Development Problems

News

General

Classic Dilemmas

Status Competition

Brooks' Law and Open Source

OSS organization and leadership

Gift Economy and Anarcho-Communism

Motivation

Annotated Chronicle | FAQ | OSS page | Early Critique of CatB | ESR Interviews | ESR Quotes | Marxism, Anarchism, Libertarianism, Etc.

 

This Webliography contains the most relevant papers from my working document Problems of Open Source Software: (slightly skeptical) Annotated Chronicle.

Nikolai Bezroukov


News

Chapter 1 Introduction

Table 1. Characteristics of selected open-source projects.

Project    Size (KLOC) [1]    Application Domain
-------    ---------------    --------------------
Apache            100         HTTP server
Linux             800         Operating system
INN               150         NNTP server
BIND              150         DNS server
KDE               250         Desktop environment
GNOME             150         Desktop environment
Mozilla          1500         Web browser
Perl              150         Programming language
Python            160         Programming language
Samba             150         SMB server

 

Open-Source Software Development

Even if the current attention given to open-source software by the media and commercial software companies proves to be only a fad and it doesn't make the jump into mainstream acceptance, I believe that open-source software development is here to stay. While it may not be the right development model for every software project, it is hard to argue against its effectiveness in the case of Linux or Apache. James Sanders [21] makes a good point that as software companies struggle to meet the often conflicting requirements of developing software faster while making it more robust, they might be forced to embrace alternative development models. Using Linux as an example, open source offers a development model that "creates a faster, leaner, and more reliable product than mainstream competitors, and does so essentially for free" (p. 89). In addition, we can expect that new open-source projects will continue to be created and to bring forward innovative software ideas that would be too large for the resources of a single enthusiast, but that generate enough interest in the community of Internet programmers to carry their development forward.

What we should provide to these developers are the tools and knowledge to better manage such projects, reduce the entry barrier for those who want to join the development, and eliminate unnecessary manual work. Clearly, better configuration management (CM) tools are necessary --- the drawbacks of CVS are well known (e.g., [12]), and they become only more obvious when the collaborating programmers are so widely dispersed and loosely coordinated. Distributed CM systems, such as NUCM [25], would be a better fit, for example by keeping parts of the development tree on servers belonging to a subgroup of developers that has temporarily branched off to work on a new feature (or that, like Netscape in the case of Mozilla, maintains a proprietary version of the software). Announced and proposed changes could be better managed through a mechanism like the one described by Narayanaswamy and Goldman [14] instead of a mailing list like the one used for the Apache Web server. Similarly, reviewing and testing patches would arguably be much easier if they were accessed through, for example, a fine-grained revision control mechanism like Historian [1], rather than as email messages thrown in together with the rest of the development-related discussion.
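
To make the email-based workflow concrete, here is a minimal sketch -- purely hypothetical, not any project's actual tooling -- of the manual work a maintainer faces when patches arrive interleaved with discussion on a mailing list: every message must be scanned for a diff, saved, and applied by hand. The mbox path and the assumption that patches appear as inline unified diffs are illustrative only.

    import mailbox
    import subprocess
    import tempfile

    # Hypothetical sketch: sift inline patches out of a mailing-list archive.
    # Patches arrive as plain-text diffs mixed into ordinary discussion email,
    # so the maintainer must find each one, save it, and apply it by hand.

    def extract_patch(body):
        """Return the unified-diff portion of a message body, if any."""
        lines = body.splitlines()
        for i, line in enumerate(lines):
            if (line.startswith("--- ")
                    and i + 1 < len(lines)
                    and lines[i + 1].startswith("+++ ")):
                return "\n".join(lines[i:]) + "\n"
        return None  # ordinary discussion, no patch found

    def apply_patches(mbox_path, source_dir):
        for msg in mailbox.mbox(mbox_path):
            raw = msg.get_payload(decode=True)
            if raw is None:  # skip multipart containers
                continue
            patch = extract_patch(raw.decode("utf-8", errors="replace"))
            if patch is None:
                continue
            with tempfile.NamedTemporaryFile("w", suffix=".diff",
                                             delete=False) as f:
                f.write(patch)
            # 'patch -p1' applies the diff; a human still has to review it.
            subprocess.run(["patch", "-p1", "-d", source_dir, "-i", f.name])

A fine-grained revision control mechanism would replace exactly this fragile scan-and-apply step with structured, reviewable change sets.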

Lastly, effective knowledge management is all the more crucial in open-source development, where most developers won't ever see each other face-to-face and discussion is for practical reasons almost exclusively online and asynchronous. Furthermore, it should be made easier for the new developers to join in and get acquainted with the design decisions and the evolution of the code. Currently, this usually means going through the archives of the project's mailing list. A possibly better approach is suggested by Lougher and Rodden [11], where maintenance-related discussion is (literally) centred on the affected code; it could conceivably be extended to support the full development cycle and include an overview of the temporal change of the program as well.
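
As a rough illustration of that idea (the class and method names below are invented for this sketch, not taken from Lougher and Rodden's system), one can imagine every discussion thread keyed to a file and line range, so that a newcomer reading the code can pull up the relevant design discussion in place instead of digging through mailing-list archives:

    from dataclasses import dataclass, field

    # Hypothetical sketch: discussion threads anchored to regions of source
    # code, so that design rationale lives next to the code it explains.

    @dataclass
    class Comment:
        author: str
        text: str

    @dataclass
    class Thread:
        file: str
        first_line: int
        last_line: int
        comments: list = field(default_factory=list)

    class CodeDiscussion:
        def __init__(self):
            self.threads = []

        def start_thread(self, file, first_line, last_line, author, text):
            thread = Thread(file, first_line, last_line,
                            [Comment(author, text)])
            self.threads.append(thread)
            return thread

        def threads_for(self, file, line):
            """All discussion touching a given line of a given file."""
            return [t for t in self.threads
                    if t.file == file and t.first_line <= line <= t.last_line]

    d = CodeDiscussion()
    d.start_thread("driver.c", 120, 145, "maintainer",
                   "Retry loop added; the old code lost mail on timeout.")
    print(d.threads_for("driver.c", 130))  # rationale, found in place

Extending such a structure over the project's history would also give the temporal overview of the program's evolution mentioned above.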

Through all of this, we should keep in mind the specificities of open-source software development that make it different from, for example, Internet-based development for a more monolithic entity, such as a commercial software company. In particular, open-source software development is extremely fluid: it depends on contributions from volunteers, whose commitments, available time, or interests may easily change. Enacting a traditional software process under these circumstances is much more difficult; it could also be counterproductive, if the contributors perceive that they are giving away their freedom. What is needed are the tools and techniques that are flexible, complement the tools currently in use, and support the existing open-source development model almost transparently.

 

Open Source Projects Manage Themselves? Dream On

Much has been written about the open source method of software development. By far one of the most tantalizing statements about open source development is that these projects manage themselves. Gone are layers of do-nothing managers with bloated bureaucracies and interminable development schedules. In their place is a new paradigm of self-organizing software developers with no overhead and high efficiency.

The dust jacket for Eric Raymond's open source manifesto The Cathedral and the Bazaar makes this statement clearly. It says: "…the development of the Linux operating system by a loose confederation of thousands of programmers -- without central project management or control -- turns on its head everything we thought we knew about software project management. … It [open source] suggested a whole new way of doing business, and the possibility of unprecedented shifts in the power structures of the computer industry." This is not just marketing hype on a book cover; Raymond expands the point inside: "… the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches … out of which a coherent and stable system could seemingly emerge only by a succession of miracles." Other open source adherents make similar statements in trumpeting the virtues of open source programming.

There is one problem with the statement that open source projects manage themselves: it is not true. This article shows open source projects are about as far as you can get from self-organizing. In fact, these projects use strong central control, which is crucial to their success. As evidence, I examine Raymond's fetchmail project (which is the basis of The Cathedral and the Bazaar) and Linus Torvalds's work with Linux. This article describes a clearer way to understand what happens on successful open source projects and suggests limits on the growth of the open source method.

(Note: This article addresses issues raised in the essay titled The Cathedral and the Bazaar. The essay also is included in a book with the same title, which contains other essays as well.)

Free Software/Free Science

Over the last few years, as the Open Source/Free Software movement has become a constant in the business and technology press, generating conferences, spawning academic investigations and business ventures alike, one single question seems to have beguiled nearly everyone: "how do you make money with free software?"

If the question isn't answered with a business plan, it is inevitably directed towards some notion of "reputation". The answer goes: Free Software programmers do what they love, for whatever reason, and if they do it well enough they gain a reputation for being a good coder, or at least a loud one. Throughout the discussions, reputation functions as a kind of metaphorical substitute for money - it can spill over into real economies, be converted via better jobs or consulting gigs, or be used to make decisions about software projects or influence other coders. Like money, it is a form of remuneration for work done, where the work done is measured solely by the individual, each person setting his or her own price for creating something. Unlike money, however, it is also often seen as a kind of property. Reputation is communicated by naming, and the names that count are those of software projects and the people who contribute to them. This sits uneasily beside the knowledge that free software is in fact a kind of real (or legal) property (i.e. copyrighted intellectual property). The existence of free software relies on intellectual property and licensing law (Kelty, forthcoming; Lessig, 1999).

In considering the issue, most commentators seem to have been led rather directly to similar questions about the sciences. After all, this economy of reputation sounds extraordinarily familiar to most participants [1]. In particular, two claims are often made: 1) that free software is somehow 'like' science, and therefore good; and 2) that free software is - like science - a well-functioning 'gift economy' (a form of meta-market with its own currency) and that the currency of payment in this economy is reputation. These claims usually serve the purpose of countering the assumption that nothing good can come of a system where individuals are not paid to produce. The assumption it hides is that science is naturally and essentially an open process - one in which truth always prevails.

The balance of this paper examines these claims, first through a brief tour of some works in the history and social study of science that have encountered remarkably similar problems, and second by comparing the two realms with respect to their "currencies" and "intellectual property" both metaphorical and actual.

The Fading Altruism of Open Source Development

If this hypothesis is supported by future research, a reinterpretation of the entire history of the free software movement will be necessary. For this analysis suggests a starkly different logic to open source development than is contained in most of the popular literature on the subject. In particular, it offers support for the hypothesis that early open source work thrived because its development took place in an immature and publicly-subsidized market. While academics and researchers were no doubt driven by a desire to "scratch an itch" and perform work they found particularly stimulating, it is significant that they essentially performed labor for which there was very little immediate private-market demand. Is free software truly free? It may be something for which developed countries have already paid: through early funding for academic research and development, and support for public research at times when the market for certain types of software was immature. It is hardly accidental that early "hacker" communities emerged at organizations with the resources and will to subsidize long-term development.

And although the role the United States government played in supporting early development in the American hardware-computing industry is generally recognized, its indirect contribution (and that of foreign governments) to the success of open source projects is usually completely overlooked in arguments which center on hacker ethics [43]. Part of this doubtless stems from the ideological biases of many commentators on this issue, whose political preferences have tended towards the libertarian. By portraying programmers as motivated primarily by cultural factors, early theorists could not only treat early developers as counterculture heroes in the cypherpunk tradition, but also paint the history of the open source movement as an all-American story of Jeffersonian independence. In one typical early interview, for instance, suggestions that open source projects might be explicable as "project[s] subsidized by government or academic" bodies were dismissed offhand by the interviewer. This rejection of classical economic logic is all the more remarkable given the overwhelming torrent of evidence that not only are many open source projects begun in academic circles, but that the growth of Linux itself was indirectly subsidized by the University of Helsinki [44]. It may be one of the odd ironies of computing history that the open source movement has proven such fertile grounds for libertarian philosophizing.

A more profound consequence of treating open source development from a classical perspective is the unexpected shadow it throws on one of the most cherished pillars of modern political economy. In particular, it opens the door to a devastating attack on the empirical validity of the principle of creative destruction invoked by Joseph Schumpeter in his opus Capitalism, Socialism and Democracy (1950). Here, Schumpeter does not (as is often assumed) simply assert that innovative businesses cannibalize their older counterparts. His argument is a more complex one about the process of innovation itself. Contrasting market and state-led production systems, Schumpeter argues that the process of innovation induced by market-systems is inevitably the more socially beneficial because capitalism can be creative in its ability to wring the last ounce of efficiency from industrial processes. Defending capitalism against the socialist encroachment he believed inevitable, Schumpeter pointed out that:

"Since we are dealing with a process whose every element takes considerable time in revealing its true features and ultimate effects, there is no point in appraising the performance of that process ex visu of a given point in time; we must judge its performance over time, as it unfolds through decades or centuries."

This point was intended in defense of the short-term market inefficiencies Schumpeter recognized plagued American production in the 1940s. His defense rested on the belief that these failures in themselves created the long-term conditions necessary for efficient industrial development, by creating the incentives for further technical innovation. If one accepts the common view that open source software is more beneficial to society than commercial software, being more likely to lead to innovation because of its open and extensible nature, the success of projects like Linux in their twenty-year competition with commercial alternatives creates a daunting empirical challenge to the Schumpeterian assumption that the efficiency of production systems can be measured without reference to the social systems that support them. If Raymond is right to conceptualize the dynamic of the Bazaar model as inherently more equitable than the Cathedral model, this criticism gains a moral edge as well. At least on first glance, the historical record seems to suggest that - over decades - certain kinds of state intervention may in fact be socially desirable.

These questions are not merely academic: our understanding of how open source development actually works has profound implications for what kinds of corporate strategies and public policies firms and governments should pursue over time. If this paper is correct in its underlying conceptualization of developer motivations, projects such as Netscape's Mozilla which seek to leverage competitive advantage from unpaid public development are fool's errands - unlikely to create products which can be successfully leveraged for private profitability. Competition in service-provision should also grow progressively unprofitable as expertise on open source products grows more publicly diffuse and Baumol's disease kicks in. If open source production is stimulated by differentials in real income levels, corporate strategies that seek to ward off open source competitors by hiring or co-opting their developers are equally prone to undermine themselves in the long term, being fraught with moral hazard. By increasing the benefits which accrue to free software developers, strategies aimed at co-opting open source development will ironically serve only to encourage further development. And counterintuitively, this model predicts that periods of greatest prosperity in national software industries will correspond with increased activity among developers abroad - exactly the trend we seem to have observed. Will "hard times" in software industries reduce the attractiveness of coding free software abroad?

Unevenly-enforced social policies on issues such as software piracy and labor mobility become incredibly important areas of corporate concern and public interest in this light. By mitigating global pressures for wage convergence, diverging social policies on these issues can encourage open source development even in developed market economies. Open source development may be popular in Canada, Finland and the Netherlands relative to their neighbors, for instance, exactly because these countries are in many ways satellite economies. Programmers in these nations are likely to be well aware of international income differentials, and frustrated by market immobility as well. Will a strengthened Euro reduce incentives for open source programming in Europe by reducing the variance in real income across Europe? Will labor mobility across Europe lead to a "convergence" in open source development as the common market becomes a truly continental one? In the case of software piracy, the widespread availability of illegal software drives two conflicting trends. Initially, high levels of piracy should reduce the incentive developers have to build open source alternatives to commercial products. As such, tolerating piracy may become a strategy of self-preservation for certain commercial firms, especially those seeking to establish their products as de facto market standards. Tacitly ignoring piracy - for all of its lost revenue - may yet become one of Microsoft's survival tactics, especially in countries like China which have yet to socially institutionalize open-source development networks. Once open source projects are well-established, however, high levels of piracy very clearly undermine commercial software development. Piracy most prominently lowers the opportunity cost of coding free software over commercial applications, and also ushers in the escalating benefits of free software development modeled by Raymond and Ghosh.

Since theory informs us that public goods are always under-provided in market systems (the free-riding problem), the success of packages such as Linux in direct competition with market-driven alternatives offers vindication for policies aimed at improving social welfare through deliberate market distortion. Public policy may prove critical to the continued success of the open source movement -- especially in the more subdued equity market following the recent economic downturn in the United States. NASA's support for the MySQL project and the release of the Army-coded GIS package GRASS under an open source license constitute two examples of "soft" support. As market incentives for open source software production shrink, "hard" support for programs such as higher education may be a critical factor, to the extent that these types of interventions can discourage rent-seeking behavior while encouraging individuals to invest time and effort in the creation of public goods [45]. Any balanced evaluation of the open source model must take these negative social costs into consideration as well.

Given the tentative nature of this research, I feel it vitally necessary to limit the scope of these conclusions. This paper is not intended as a critique of sociological theories about how individuals interact once they are in open source communities, but rather as an alternate and hopefully more convincing explanation for why developers drift into open source projects and cultures in the first place. Critics may be right to question the universal applicability of these conclusions. It is possible that the true significance of the open source movement is unlikely to be found in the success or failure of any particular project, but in the epiphenomenal possibilities created by the interaction of thousands of smaller, stand-alone programs, application libraries and standards (Raymond's UNIX Gospel). And since important software projects are clearly not restricted to the upper left-hand quadrant on which this paper has focused, it is possible that a very important segment of the puzzle is missing from this analysis [46]. It is impossible to use this evidence to infer causation in any quadrant but the combination of "Anti-Proprietary" and "Highly-Complex" projects.

But given the tremendous importance of this debate, it is vital not to dismiss without genuine thought and substantial reflection the points political economists bring to bear on this issue. For it is equally plausible that the benefits of the UNIX Gospel are exaggerated: possible only in certain systems (such as command-line UNIX desktops) and in limited circumstances (such as when inter-program communication can be handled through character strings). And if the truly important projects in the open source community are those most similar to the Linux and Gnome projects, which are unique in creating (not exploiting) a framework on which further development can be built, the insights political economists can shed on these movements allow for a much more nuanced view of development than is offered by advocates of post-scarcity gift cultures. While this view may be less than wholly optimistic in its hopes for man ever transcending industrial capitalism, it is perhaps reassuring in concurring nonetheless with earlier critics that regardless of how software is produced in the real world, its increasing extensibility seems to be in the public interest and should be encouraged where feasible.

 

Open Source Projects Manage Themselves? Dream On


What Really Happened with fetchmail
The Cathedral and the Bazaar revolves around Raymond's experience in creating a program called fetchmail by the open source method. As he describes the software development process, he annotates the story with lessons about open source programming and how well it worked for him. One of Raymond's key points is that the normal functions of management are not needed with open source development.

Raymond lists the responsibilities of traditional software managers as: define goals and keep everybody pointed in the same direction, monitor the project and make sure details don't get skipped, motivate people to do boring but necessary work, organize the deployment of people for best productivity, and marshal resources needed to sustain the project over a long period of time. Raymond then states that none of these tasks are needed for open source projects. Unfortunately, the majority of The Cathedral and the Bazaar describes, in detail, how important these management functions are and how Raymond performed them.

Eric Raymond decided what piece of software he would use as a test for open source programming. He decided what features fetchmail would and would not have. He generalized and simplified its design, defining the software project. Mr. Raymond guided the project over a considerable period of time, remaining a constant as volunteers came and went. In other words, he marshaled resources. He surely was careful about source code control and build procedures (or his releases would have been poor quality), so he monitored the project. And, most significantly, Raymond heaped praise on volunteers who helped him, which motivated those people to help some more. (In his essay, Raymond devotes considerable space to describing how he and Torvalds motivate their helpers.) In short, fetchmail made full use of traditional and effective management operations, except Eric Raymond did all of them.

Another compelling (and often-quoted) section of The Cathedral and the Bazaar is the discussion about debugging. Raymond says: "Given enough eyeballs, all bugs are shallow" and "Debugging is parallelizable." These assertions simply are not true and are distortions of how the development of fetchmail proceeded. It is true that many people, in parallel, looked for bugs and proposed fixes. But only one person (Raymond) actually made fixes, by incorporating the proposed changes into the official code base. Debugging (the process of fixing the program) was performed by one person, from suggestions made by many people. If Raymond had blindly applied all proposed code changes, without reading them and thinking about them, the result would have been chaos. Only a rare bug can be fixed completely in isolation, with no effect on the rest of the program.

Lessons from Linux
In a similar way, on an even larger scale, Linus Torvalds pulled off a great feat of software engineering: he coordinated the work of thousands of people to create a high-quality operating system. Nevertheless, the basic method was the same Raymond used for fetchmail. Torvalds was in charge of Linux. He made all major decisions, assigned subsystems to a few trusted people (to organize the work), resolved conflicts between competing ideas, and inspired his followers.

Raymond provides evidence of Torvalds's control over Linux when he describes the numbering system that Torvalds used for kernel releases. When a significant set of new features was added to the code, the release would be considered "major" and given a new whole number. (For example, release 2.4 would lead to release 3.0.) When a smaller set of bug fixes was added, the release would get just a new minor number. (For example, release 2.4 would become 2.5.) But who made the decisions about when to declare a major release or what fixes were minor? Torvalds. The Linux project was (and still is) his show.
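
Taken exactly as described here, the numbering rule fits in a few lines; this toy helper mirrors the article's simplified description only (the kernel's real scheme, with its parallel stable and development series, was more elaborate), and the judgment call -- does this set of changes count as "major"? -- remains Torvalds's alone:

    # Toy illustration of the release-numbering rule as described above.
    def next_release(version, major_features_added):
        major, minor = (int(part) for part in version.split("."))
        if major_features_added:
            return f"{major + 1}.0"      # e.g., 2.4 -> 3.0
        return f"{major}.{minor + 1}"    # e.g., 2.4 -> 2.5

    assert next_release("2.4", True) == "3.0"
    assert next_release("2.4", False) == "2.5"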

Further proof of Torvalds's key role is the fact that the development of Linux slowed to a crawl when Torvalds was distracted. The birth of his daughter and his work at Transmeta corresponded precisely with a period of slow progress for Linux. Why? The manager of Linux was busy with other things. The project could not proceed efficiently without him.

Finally, there is a quote from Torvalds himself during an interview with Bootnet.com. "Boot: You've got a full slate of global developers who are working on Linux. Why hasn't it developed into a state of chaos? Torvalds: It's a chaos that has some external constraints put on it. … the only entity that can really succeed in developing Linux is the entity that is trusted to do the right thing. And as it stands right now, I'm the only person/entity that has that degree of trust."

Open Source Revisited
So, if the open source model is not a bazaar, what is it? To the certain consternation of Raymond and other open source advocates, their bazaar is really a cathedral. The fetchmail and Linux projects were built by single, strong architects with lots of help -- just like the great cathedrals of Europe. Beautiful cathedrals were guided by one person, over many years, with inexpensive help from legions of workers. Just like open source software is. And, just as with open source software, the builders of the cathedrals were motivated by religious fervor and a divine goal. Back then, it was celebrating the glory of God, now it is toppling Bill Gates. (Some people think these goals are not so different.)

Freedom Can Be Slavery by Ian Kaplan

In the past there was not much of a commercial edge to free software. Although many companies used Emacs and the GNU C/C++ compiler to develop their software, these were simply tools that were used to develop products. No one was selling these tools directly. Even Cygnus (which used to be named Cygnus Support) did not sell free software. They were primarily involved with providing Free Software customization and maintenance services. Free Software was truly a Cathedral creation, since the features and bug fixes that companies paid Cygnus for were fed back into the free software base. Although the companies that used free software for commercial purposes got something for free, they also gave something back. The ideal of "giving back" to the free software community is a strong one. For example, recently I have been using a free software tool named ANTLR. While I have not contributed software directly to ANTLR, I have published ANTLR examples and notes to help other ANTLR users. I have spent many hours writing these Web pages, but I view it as my way of contributing to the ANTLR community.

The Joy of Unix by Eugene Eric Kim (Linux Magazine, Nov. 1999). Pages 1 and 5 provide Joy's views on the relationship between Unix and Linux as development projects (p. 1) and on the current dominance of Sun's chief competitor Microsoft (p. 5).

"As one of the creators of Berkeley Unix, Bill Joy knows a thing or two about developing and marketing a free operating system. Sun Microsystems' chief scientist has survived the Unix wars and has watched both his company and its chief competitor, Microsoft, grow from tiny start-ups to industry giants. . . Joy recently accepted Linux Magazine's invitation to dinner, where he gave Publisher Adam Goodman, Executive Editor Robert McMillan, and Associate Editor Eugene Kim the lowdown on what Sun thinks of Linux and Open Source" 

Free Riding on Gnutella

An extensive analysis of user traffic on Gnutella shows a significant amount of free riding in the system. By sampling messages on the Gnutella network over a 24-hour period, we established that almost 70% of Gnutella users share no files, and nearly 50% of all responses are returned by the top 1% of sharing hosts. Furthermore, we found that free riding is distributed evenly between domains, so that no one group contributes significantly more than others, and that peers that volunteer to share files are not necessarily those who have desirable ones. We argue that free riding leads to degradation of the system performance and adds vulnerability to the system. If this trend continues, copyright issues might become moot compared to the possible collapse of such systems.
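
As a sketch of the arithmetic behind those headline numbers (the data below are synthetic; the actual study tallied messages sampled from the live Gnutella network over 24 hours), both statistics fall out of a simple per-host count:

    # Hypothetical per-host tallies; the real study derived such counts
    # from 24 hours of sampled Gnutella traffic.

    def free_riding_stats(files_shared, responses_returned):
        n = len(files_shared)
        share_nothing = sum(1 for f in files_shared if f == 0) / n
        ranked = sorted(responses_returned, reverse=True)
        top = max(1, n // 100)  # the top 1% of sharing hosts
        top_share = sum(ranked[:top]) / sum(ranked)
        return share_nothing, top_share

    files = [0, 0, 0, 0, 0, 0, 0, 12, 3, 40]   # 7 of 10 hosts share nothing
    responses = [0, 0, 0, 0, 0, 0, 0, 5, 10, 15]
    zero_frac, top_frac = free_riding_stats(files, responses)
    print(f"{zero_frac:.0%} of hosts share no files; "
          f"top hosts return {top_frac:.0%} of responses")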

Distributed Knowledge and the Global Organization of Software Development by Bruce Kogut and Anca Metiu of the Wharton School.

Based on interviews with dozens of software engineers and managers in four countries – the U.S., Ireland, India and Singapore – the study points out that the growth of a global infrastructure has made it possible to "exploit globally the opportunities opened by the digitalization of production and products."

Metiu and Kogut see the open source movement as a tremendous driver of innovation. "The open development model opens up the ability to contribute to innovation," they say. "It recognizes that the distribution of natural intelligence does not correspond to the monopolization of innovation by the richest firms or richest countries. It is this gap between the distribution of ability and the distribution of opportunity that the web will force companies to recognize and to realign their development strategies." In other words, engineers in China, Israel or India who are unable or unwilling to move to Silicon Valley or the Research Triangle need not be locked out of innovative product development: They can play a vital role in the creation of new products and services.

Metiu and Kogut contrast the open development model with another model of software development, commonly used in the industry, which they call the "global project model." Using this approach, software companies set up offices around the world – paying close attention to the locations of their customers – and then develop software wherever it suits them best to do so. This lets the company take advantage of time zone and labor-cost differences to slash software development costs. Example: Trintech, an Irish company that makes electronic payments software, has offices in the U.S., Ireland, the U.K. and Germany. The company organizes software development work in a way that lets it expand the working day to 16 hours. Some Indian software firms, including Tata Consultancy Services, Infosys and Wipro, boast that since programmers in the U.S. can hand off tasks each evening to their colleagues in India, halfway around the world, who are just beginning their working day, they can achieve a "24-hour working day."

The study points out that global sourcing today depends on three main factors: the coordination of modularized tasks, communication, and shared context. All of these have limitations, though. First, coordination in software across borders is particularly difficult when the task is creative. Second, when software is developed internationally, problems can crop up in transmitting certain kinds of knowledge. Third, shared context is hard to sustain across distances, because people interpret the world through the different contexts in which they live.

Metiu and Kogut found that software firms typically outsource only routine tasks to offshore sites and aim primarily at exploiting the low-cost advantage. For example, they looked at a large financial services division of an American firm which did only simple sub-contracting work with Indian software houses. They also studied a telecommunications company which developed software offshore; when this company’s customers demanded features embedded in software rather than in hardware, it was hesitant to commit to outsourcing for strategic products.

However, their research also showed some positive developments which encourage moves by companies towards greater innovation. For example, they found that many offshore software firms like Infosys and Wipro want to be more than just low-cost vendors; they want to take part in the innovation process by developing their own products. The researchers believe that global outsourcing will cause global capabilities and aspirations to converge.

A key question for the researchers was: can companies develop a strategy in which the distinction between onshore and offshore software development merges? Trintech, for example, has tried to make virtual development of software possible by moving closer to its customers in Silicon Valley, yet the company still has been unable to fully exploit the 24-hour global sourcing cycle. While this model represents a radical departure from the early days of software development, it still has not significantly enhanced the scale of innovation.

Kogut and Metiu note that customers also are realizing that even as costs fall, innovations need not occur only in the richest markets. But how is this possible when innovations are designed by the customer, closest to the market? The researchers found that the market itself may move to where the software is being developed. Case in point: Wipro in Bangalore had a customer located in San Diego that requested the development of a software product; the specifications were laid out, the architecture drawn up, and the final product shipped back to the customer. Wipro’s software team never met the customer.

So how can firms profit by this strategy? It is possible to do so by shifting revenues from selling the software to the provision of services, online support and training. Further, if companies are able to combine the open source model with some proprietary software, as Netscape did with its Mozilla licensing policy, they can still maintain a competitive edge.

In all this, an important aspect that Metiu and Kogut studied was why software engineers across the world would volunteer their time to develop free software. Their conclusion: engineers intrinsically have a gift-giving culture, and their motivation is also driven by norms of reciprocity, by reputation enhancement, and by love of their work. They may also benefit financially in the future. Vendors and engineers are increasingly asking to move from a mere fee payment structure to a share of profits in the final product.

Metiu and Kogut conclude that open development is a model that can be transplanted from software to any field where tasks can be broken up into modules and where a wide understanding of the common language and culture exists. They add that "the promise of harnessing the intelligence in the global community through web-supported innovations will be a key element in the strategies of firms, as well as of public and creative institutions."

Radical software environmentalism

In its early days, Earth First! provided both focus and fuel for the pent-up anger of the radical fringe in the American environmental movement. Activists sought to protect old-growth forests by spiking the trees and notifying the US Forest Service that the trees were to remain for the continuing benefit of the public. When logging operations went ahead anyway, protesters blockaded roads and U-locked themselves to heavy equipment, treating angry loggers to classic songs like "Spike a Tree for Jesus" and "You Can't Clear-Cut Your Way to Heaven." The media ate it up, raising public awareness and making other environmental groups seem moderate.

Similarly, the FSF and the GNU Project have provided both focus and fuel for the pent-up frustration of public-spirited hackers worldwide. Free-software activists spike their software with the GPL to protect it from proprietary exploitation and stage whimsical media events like "Windows Refund Day" and the "Silicon Valley Tea Party" to draw attention to their cause. Microsoft CD-ROMs have been launched on rockets, and Richard Stallman treats would-be authors of proprietary software to performances of the "Free Software Song."

Earth First! and the FSF are roughly analogous to each other because they share the same No Compromise attitude. Both organizations promote their causes by fostering a "holy war" mind-set.

But the battles they're waging differ fundamentally, so the comparison between them breaks down a bit when the impact of time is taken into account. Earth's defenders find many of their victories to be short-lived while the march of environmental damage continuously threatens the web of life. But time is the ally of free-software crusaders. That's probably why they're not noted for U-locking their heads to anything.

... ... ... ...

Approximating idealism
BSD developer Kirk McKusick probably expresses the feelings of many hackers, including myself, when he says, "Ideally I would like all software to have the GPL freedoms, but I don't think that's practical." Having been swept up by the holy fire of free-software radicalism myself, I have found that it's part rapture and part self-immolation.

One friend of mine who's a strong proponent of the GPL for all software also happens to be a dyed-in-the-wool socialist. Meanwhile, I live in a world of nonfree food, nonfree housing, nonfree health care, and a very weak social safety net. It's relatively easy to go broke by making the world a better place, and the starving-artist lifestyle doesn't much appeal to me.

No matter what anyone says, access to source code is always an improvement over closed-source software. Once we all make that leap of faith, the licensing model that best matches the needs of users and developers ought to emerge over time.

The PostScript slides for my talk, "Systems Software Research is Irrelevant" by Rob Pike may be found here, or PDF here.

[Jan 12, 2001] WashingtonPost.com: Microsoft A la Hollywood


Nice parody. I especially like the statement "there is one segment of American society that can't be bought and will not be silenced. That is Hollywood." Is open source another Hollywood...

"...Thank goodness there is one segment of American society that can't be bought and will not be silenced. That is Hollywood. The great cause for which "Antitrust" sacrifices the lives of brilliant young software developers is open-source code. Open-source crusaders believe that software should not be copyrighted. They believe that universal freedom to use and tinker with existing programs is the best way to promote future innovations. But more than that: They believe the very concept of intellectual property rights -- legal ownership of information in any form -- is downright immoral."

"As young Milo declares in the last line of the movie, as the music swells (and if you're in any actual doubt about how this plot comes out, stop reading here -- if you can read), "Human knowledge belongs to the world!"

[Jan 2001] kuro5hin.org: It's Time for Quality Tech Reporting

I've been covering technology news full-time for about five years now, and concentrating almost entirely on Linux and Open Source for the last three. Before I got heavily involved with the Internet I did over 10 years worth of (print) free-lance investigative reporting and feature writing. I find that online tech reporting in general is poor compared to most newspaper and magazine journalism. Can it be improved? And, especially when it comes to Linux and Open Source, does anyone really want quality journalism?

 

First, let me get "journalistic bias" out of the way: In my experience, the only journalist a reader considers completely unbiased is one who agrees totally with his or her point of view. So let's assume that, to most readers, most journalists are biased in some way. I have often found myself accused of bias in different directions by different readers of a single article, so please excuse my cynicism on this topic. I come by it honestly.

Now on to the meat: There is a tendency in all areas of journalism to spend too much time covering what I call "manufactured news," a category in which I place all press releases, political debates, press conferences, and statements by official spokespeople of any kind, including the craggy-faced senior firefighter every large fire department keeps around, whose sole job is to look good in front of a burned-out house on the nightly TV news.

Pumping out manufactured news is easy for reporters, most of whom aren't paid nearly as much as programmers or sysadmins despite being under constant deadline pressure, especially if they are writing for Internet media. Corporate spokespeople are always happy to say something glowing about their companies' products and services. Groups like the EFF and EPIC have agendas they love to push, and sympathetic though you or I may be to those agendas (note the bias here!), repeating their spokespeople's public statements is not real reporting.

So what is real reporting? In the case of advocacy groups, I believe the most important coverage should be of the advocacy process itself, preferably told from the point of view of people in the group doing the actual work. From this perspective, the person selling EFF T-shirts at a trade show is a valid interview subject. He or she is exposed one-on-one to trade show attendees' views of the EFF. Do different trade shows (e.g., LinuxWorld and Internet World) attract attendees whose views of the EFF differ? Is one crowd more likely to buy EFF shirts than the other? Does one group tend to subject EFF shirt sellers to vicious harangues, while the other says things like "keep up the good work" most of the time?

This could make an informative article that would not necessarily be biased either in favor of or against the EFF, and the preceding paragraph also points out a wonderful (and underutilized) feature of the World Wide Web as a reporting medium: that instead of explaining the EFF or even mentioning that the three letters stand for "Electronic Frontier Foundation," I simply linked to their Web site so that readers who already knew about the EFF weren't forced to scan a boring explanation, but those who hadn't heard of it could instantly find out all about it, straight from the source.

I believe links are underutilized in online reporting, especially by the so-called "straight" press. Stories on the New York Times Web site, for example, tend to be almost or entirely link-free, which has always bothered me. I love the fact that, with a few keystrokes, I can insert a link in a story that will send readers directly to the source of my background information. This allows readers both to check my work and to go beyond my few words and do their own in-depth research if my story interests them in a particular subject, and since getting readers interested in subjects they might otherwise not have thought about is one of my main personal goals as a writer, I tend to use a lot of links in my online stories.

By using links correctly and profusely, even manufactured news can be made interesting and informative, but I am still against it in general. Press releases and other announcements often contain potentially important tech news, but I would just as soon run them in their entirety, clearly labeled as press releases or announcements -- which is how we do it on NewsForge -- rather than regurgitate their content in bylined stories. By doing this, we're assuming our audience is smart enough to tell the difference between promo copy and real news, and I feel this is a safe assumption most of the time. And by running press releases as press releases, we are then free to put our own energy into researching and writing "real" news, which is a rough and time-consuming job at which we are only partially successful so far, but at which we are working hard to improve every day.

The biggest problem we are running into with the idea of trying to bring "real reporting" to Linux and Open Source is that most coverage of this area has been closer to fanzine-style hagiography than journalism until now, and anyone who tries to write anything that might be considered even slightly negative about any Open Source or Free Software icons is instantly slammed, even if their reporting is totally accurate. I stepped into this problem big-time about a year ago on Slashdot, when I did a small writeup on a minor potential hole in the GPL's coverage of ASPs. Flameville! How could a non-programming ignorant schmuck like me dare to say anything even remotely non-positive about the great Richard M. Stallman? Hundreds of comments and emails echoed this refrain, but Stallman himself didn't seem to take my short article as any kind of attack; indeed, some of the work now going into GPL3 is designed to correct the very deficiency I was slammed so hard for having pointed out.

Part of the trouble in converting the former "Gosh! Golly! Ain't all this Linux stuff wunnerful" cheerleading that passed for Open Source news coverage for so long into truly unbiased reporting is that many people in the "Open Source Community" (which is really a whole bunch of different so-called communities, not a single monolithic group) haven't grasped the fact that the revolution is over and we/they have won. It is a similar position to the one in which the Italian Communist Party found itself when it started to win local elections in the 1950s; it was easy for the Communists to criticize the Mayor and City Council from the outside, but when they got into office themselves they suddenly found that they were the ones getting criticized, and they had a rough time learning how to deal with all that negativity, especially when it came from publications they had considered "friendly" while they were out-of-power underdogs.

I'm not saying that all Open Source and Linux news is or should be negative, but that there is going to be some bad mixed in with the good. Stock prices for Linux companies currently suck, for instance, and that's valid news, just as reports of huge first-day gains for Linux IPOs were big news a year or so ago. Burlington Coat Factory and eMusic adopting Linux for big-time enterprise applications is big news, but if a company that turns to Linux later decides it would be better off with Windows 2000 or a proprietary Unix instead, that is news, too, and deserves just as much attention as an Open Source success story. And those of us who write any of those stories, whether positive or negative, are going to get flamed for "bias" every time we say anything either "good" or "bad" about what we cover, and if we try as hard as we can to avoid coming to any kind of conclusion at all, or even quoting others' conclusions, we will probably get crucified for being wishy-washy or some such.

But like it or not, and despite yow-yowing from GPL believers, GPL haters, Microsoft bashers, Microsoft boosters (yes, there are still a few out there), die-hard Apple fanatics, Amiga lovers, *BSD believers, and all the rest of the people who follow tech news closely and have something to say (usually rather loudly) about the way it is written, coverage of Linux and Open Source will gradually mature and become more even-handed, as will all coverage in the computer trade press, because tech news is gradually becoming interesting enough to the world at large that it is starting to get the level of journalistic attention that, in the past, was reserved strictly for "important" subjects like politics, business, sports, and movie star love scandals.

Robin Miller (aka 'Roblimo') has been a professional writer and editor since 1985. His work has appeared in hundreds of print publications and on dozens of high-profile Web sites. He is currently editor-in-chief of the Open Source Development Network (OSDN), owner of freshmeat, Linux.com, NewsForge, SourceForge, and a number of other tech-oriented Web sites.

 

[April 2000] IBM DeveloperWorks/Library/Papers: To open source or not to open source by Adam L. Beberg


Assuming that you are a company considering open source for your own code, the bottom line is revenue. If the software can't make money, you can't pay the bills, and everyone will need a new day job. The choice to open source has many implications for revenue, so those implications need to be considered as primary reasons for or against open source.

If you do open source, revenue purely from the software is all but eliminated. This leaves only publishing (sticking the software in a pretty box), tech support when the user gets stuck, and consulting for people who need help setting up your software as sources of revenue. Unless your product is many megabytes, hard to use, and hard to set up, these may not be enough to cover development and marketing costs.

Open source does give you some advantages that offset the inability to sell the software: bug hunting, reliability, and community. Since the source is out there, if there is a bug, anyone can hunt it down and then send the fix back in to be incorporated into the master copy. With all of this bug-stomping, your software will end up being very stable and reliable; this is offset by all the semi-functional "neat stuff" that users will want to add.

While the open source community can be a large source of programming talent, do not open source your software because you expect that community to solve your problems for you. You will still have to do all of the design, initial development, maintenance, and product documentation. The desire of programmers to fix any bug that doesn't directly affect them, or to add a feature that they don't personally want, is very low. Any bug that is sufficiently hard to track down will result in a bug report, not a patch. User-submitted "features" will tend to be neat things that they and a few other people may use, but all core functionality will still need to be developed by your programming team.

Benefits of open source
There are many areas where open source is a simple and wise business decision from any angle.

Legacy applications that are no longer a part of current products being sold are an excellent area for open source, assuming that no third-party software is incorporated into the product. Old applications that are no longer cost effective to maintain or support can be made open source as a way to abandon the product, while not abandoning the users (as users who feel abandoned typically become ex-customers in a very short time). This way, the users who cannot migrate to newer products get the source and can use it to fix or modify things to suit their needs, while your entire development staff can move on to current projects.

The specifications for hardware and device-driver code are another obvious area where publishing more information will benefit the product. In the past many vendors withheld the specifications on how to work with their hardware, fearing it would give competitors an advantage. The only real effect this had was preventing many users from having the option of buying their hardware. Writing device-driver code is a development cost that is only made up by selling more of the hardware. Publish everything you can about it, and even help to get drivers written for the various open source operating systems. This will help you sell more hardware to anyone who wants it.

Any product that is aimed at establishing a standard protocol or API needs to be open source. If the source is available, then the adoption of any protocol or API will be much faster, and the programmers who use it will be far more comfortable with it. Many modern protocols and attempts at standards that were not open source failed before anyone even noticed they existed. In these cases the software itself does not represent the value; the things it enables do. Often, the amount of revenue from consulting, training, and licensing from the additional developers will completely dwarf any revenue that would have been gained by keeping the code proprietary. Many large Fortune 500 companies make 100 times more revenue on their consulting and support contracts than they do on their hardware and software sales. The key to this case is that the protocol, API, or hardware is worthless until a developer takes it and uses it to create value for the user.

Open source can be used as a powerful attack and defense mechanism while competing with rivals. If a rival's application has no need for support or consulting to use it, and an open source version of that application can be created and released, then the revenue from the closed application will be removed. This is very similar to the old method of discounting a product until a competitor is driven out of business, or salting the earth in a war. On the other hand, if you can develop your application as open source, then a rival will have no way to attack you in a similar way.

The realms of closed source
There are still many areas of the software world where open source has not made much progress. In these cases, open source projects are rare, and those that do exist are not a threat to commercial developers.

Game and entertainment software is the first area where cost is not an object, and open source alternatives rarely exist. People want the latest and greatest entertainment software, and are happy to support the developers if it will get them more frames per second, more players in the game, and more realistic blood splattering and gore. The developers of this software are also so in demand that anyone showing signs of free time and talent would be hired instantly. If the user is not willing to pay, then they will either pirate the entertainment content or simply buy one of thousands of other entertainment alternatives. The driving factors in this category are that the applications are more entertaining to use than they are to write, and it requires constant work to keep up with the state of the art.

The realms of scientific software, large complex applications, and applications that require a Ph.D. or years of experience even to contemplate working on are also of little concern to the open source world. Open source projects succeed when large numbers of coders each contribute something to a project. Since the supply of extremely skilled people is very limited, and they are often already working 60+ hour weeks, they do not generally have any time left to contribute to open source projects unless the project has gone commercial. As a general rule, if the average to advanced coder can develop an application, there are probably several open source versions of it already. If the average coder cannot begin to work on the software, then it is likely there are only commercial versions of the software.

The other area that is not very open source friendly is boring software that is not sexy to develop, or applications that need end-user support: things like word processors, spreadsheets, and other end-user tools. It's no secret that most open source software is not documented in a way that an end user can use. You don't tend to see applications where the user is likely to need to call in for help among software that is being given away for free. Until open source projects started forming companies and paying people to develop all the boring applications non-programmers actually needed, there was no sign of them.

One thing to remember, no matter how you license your source, is that it takes about 1/20th the time to copy and reverse engineer software than it takes to develop it the first time. Your competitors, be they another company or an open source project, will be on your heels from the start unless you have something they can't do. Usually, things that they won't be able to do include hardware devices, patented technology, or a reliable service and support infrastructure. This has been true since the dawn of time, but in the past you could at least turn around and compete on price. Those days are gone. Now, once they catch up, your revenue is in danger. But they won't be making any money either, so if the stock market bubble funds your competitors but not you, then you may have to worry - unless you use open source or other tactics to head them off.

Innovation issues
One thing that open source is not generally known for is innovation. In general, the rate of innovation in the closed source world is far faster because the desire to stay ahead and stay in business has always been stronger than the desire to re-implement something that's already been done. Even the operating system gems of the open source world are just starting to gain the security and critical enterprise reliability features that were common in high-end commercial systems 10 years ago.

All coders by nature like to do new and interesting things, but not necessarily things that are really new--just things that are new to them personally. A simple search for any common application will turn up dozens of nearly identical open source projects to build it. As in an army, there are a few high-skill generals and masses of low-skill soldiers working their way up the ladder by working on projects like the ones the generals worked on when they started out. Projects have high overlap, with low innovation.

If you have an application you need to build and there is an existing open source project to build it or something like it, then join that project or hire some of the main people in the project to work on it full time. The application can then move to the level where service, support, and consulting become sources of revenue. There is a long history of projects that started out as research or open source and turned into commercial ventures.

Licensing issues
If you do decide to open source your software, you will face the decision of choosing a license or writing your own. If one of the existing blessed open source licenses (see Resources) is acceptable, it will save you having to go through the approval process and months in a very uncomfortable spotlight. Put on your flame-proof suit, because you will encounter fanatics no matter what license you choose to use, but it is your software and you get to decide how to license it.

Unlike commercial licenses, open source licenses have never really been tested in court. So far peer pressure and threats of bad press have been enough to enforce them. No one is sure what will happen when the licenses are finally tested in the courts, not to mention the courts of countries with laws different from those of the United States. Odds are no license (open source or otherwise) means anything due to the patchwork of international laws applied to a global Internet.

Open source meets the real world
There is a rapidly growing middle ground in the licensing of software that protects the interests of companies and authors while preserving the spirit of open source.

Hybrid licenses like the Apple Public Source License, or the Sun Community Source License (see Resources) are not considered "pure" open source by the older factions of the open source community. The licenses still protect the ownership, patent, and other interests of the inventors but publish the source and make it usable to many people. The open source community initially objected to these new licenses, but eventually licenses evolved that were reasonably acceptable to both commercial inventors and the open source community.

Dual licensing is also a common option. One way is to release everything under both a traditional commercial license and an open source license. Users who wish to use the code commercially provide revenue, while noncommercial and academic users get the benefits of open source. The other way dual licensing is done is to lead with the current version under a commercial license, and later release old, obsolete versions under an open source license. Again, many paying users will need the most current version; others can wait for the version to be made open source. Dual licensing seems to be more acceptable to the open source community, since it has been done for a longer time.

[Oct 24, 2000] Linux Magazine, November 1999, Features: The Joy of Unix

As one of the creators of Berkeley Unix, Bill Joy knows a thing or two about developing and marketing a free operating system. Sun Microsystems' chief scientist has survived the Unix wars and has watched both his company and its chief competitor, Microsoft, grow from tiny start-ups to industry giants. Though he has had a major hand in the development of such important Unix technologies as NFS (Sun's Network File System), the Berkeley Unix TCP/IP stack, and the vi text editor, Joy's current obsession is trying to build a thriving development community around Sun's Jini distributed computing technology and its not-quite-Open Source software licensing model. Joy recently accepted Linux Magazine's invitation to dinner, where he gave Publisher Adam Goodman, Executive Editor Robert McMillan, and Associate Editor Eugene Kim the lowdown on what Sun thinks of Linux and Open Source.

Linux Magazine: One of the reasons we wanted to talk to you was that you have a long history with and a broad perspective on Unix and free software. What do you think of Linux? A lot of people talk about it as more than just an operating system.

Bill Joy: It's actually less. It's just a kernel if you want to be technical about it. It's politically incorrect to conflate Linux with the applications. At least one person will get upset. So to be quite precise, it's just the kernel of the OS. When we did Berkeley Unix, we were doing the operating system and all of the applications.

In a lot of ways, the Berkeley Systems Distribution (BSD) was on the road to being free with source available and many of the things that Linux is. But it got hung up in this legal fight between the University of California and Unix Systems Labs.

Those are the accidents of history. Now with Linux, we have this new version of Unix written with similar kinds of values that BSD had. One of the great strengths of Unix is that it's been rewritten and reimplemented several times. Applications with similar names and similar functions are widely understood, which allows this healthy kind of invention and reinvention to occur.

LM: So if it weren't for the lawyers, we'd be called FreeBSD Magazine?

BJ: If BSD had been free, there would have been no reason to rewrite it. The new thing that happened with Linux was cultural. The Internet is now coupling people together in ways that probably couldn't have happened before. How else would the developers have found each other?

I did my work in the era of the magnetic tape. We sent Unix in source form to thousands of people; they sent us a few hundred dollars, because I had to pay for the postage and for the printing of the manuals, and that was our network. It was a postal-age speed thing. It was not very convenient.

LM: Were licensing issues as important back then as they seem to be now?

BJ: No. I knew I needed a license for BSD because at some point Berkeley was going to discover it. So I just took a license from the University of Toronto and modified it a little bit and started using that. I figured if I sent people a tape, and there was nothing for them to sign, they wouldn't take it seriously.

When you give things away for free, often people think that's what it's worth: Nothing. So charging them a small amount and giving them a license to sign actually created a perception of value. I'm not saying the tape didn't have value, but an awful lot of stuff comes across your desk that you just throw away.

LM: So, what did your license actually say?

BJ: I don't remember. It was a one-page thing. I didn't have any lawyers look at it and I'm not a lawyer. I just made it up as I went along.

What happened was that at some point we were getting to be big enough that we were sending out hundreds of these [Unix tapes] a year and charging hundreds of dollars for them. A quarter of a million dollars in revenue is a great deal of money for a graduate student. Scott McNealy likes to say: "To ask permission is to seek denial." And we were operating with that philosophy.

But there were huge amounts of money involved and we were becoming pretty visible. So eventually we decided to send AT&T a letter asking them: "Is this okay what we're doing?" And 18 months later they sent a letter back: "We take no position." We won't answer your question. So that's what it was like to deal with a regulated monopoly of lawyers. That same sort of legal structure is what caused [AT&T] to license the transistor for nothing.

So we couldn't actually get an answer from them and it was only years later that this whole fracas erupted around who owns the code. It turned out their code was as tainted with Berkeley stuff as ours was with theirs, so they eventually came to a truce. That's what I've heard second hand or just drinking wine with people. So there's a very tortured and funny history to all this code.

LM: Have you ever contemplated what it would have been like if you'd released your code under the GNU Public License (GPL) or something similar?

BJ: I don't see what the advantage to it is. The important thing is that people have the source code. I actually think it's fine that people can take BSD, make improvements to it, and reap a profit.

I don't think, given where we were and what we were trying to do, that the license made that much of a difference. At Berkeley, we had the model that software is the result of your research. The university tradition is that when you do research, you publish. So not giving people the source code for software meant that you weren't publishing your research. A fundamental model of BSD was: Software is a result of our research. We'll publish it and other people will use it if they choose. If someone commercializes it, I don't particularly care, because if you publish research in a university, people can commercialize it. That's just the way it is.

The important thing in my mind is that people share stuff. We've done something at Sun -- Community Source Licensing -- which is another spin on this. But the fundamental principle in my mind is that people get to see the results of other people's work in a way that they can stand on shoulders rather than on toes. The details can vary; there can be many approaches and they work in different contexts.

I think the GPL is fine. I just don't necessarily agree that it will achieve everything that Richard Stallman thinks it will. I'm not as religious about this as other people are.

[Sep 30, 2000] Charles Connell Replies to Eric Raymond

Background
Over the past few years, Eric Raymond has written and revised a famous essay about open-source software titled "The Cathedral and the Bazaar." This essay is popularly known as CatB. On September 12, Lotus Developer Network published my essay titled "Open-Source Projects Manage Themselves? Dream On," in which I challenged some of the basic tenets of CatB. Raymond responded to the article with a reply, "Eric Raymond Replies to Charles Connell" (hereafter R1). Below is my continuation of our discussion.

Sample Size
In R1, Raymond writes, "[Connell] confuses my observations about the structure of the open-source community in the large with claims about the organization of individual projects in the small. This confusion leads him up several garden paths and down a number of blind alleys."

It is true my comments in "Open-Source Projects Manage Themselves? Dream On" are based on only two open-source projects, Fetchmail and Linux. Moreover, it is true I made some conjectures about the general nature of open-source projects from these two cases. But this is true of CatB as well. Raymond makes many statements about the nature of open-source development, based only on the same two projects. More significantly, the whole world read CatB and drew broad conclusions from it about open-source programming and how to run these projects. Netscape and other companies changed their business models to match the ideas in CatB.

The moral here is we should expand the sample size of projects we study before making firm conclusions about the base principles of open-source software. (By "we," I mean everyone in the open-source community and those interested in its outcome.)


Intention of CatB

In R1, Raymond writes, "The confusion begins immediately with Mr. Connell's title, 'Open-Source Projects Manage Themselves? Dream On.' Of course they don't, and nobody who has ever participated in one would dream of claiming that they do."

It appears Mr. Raymond has not read the dust jacket of his own book. It says: "…the development of the Linux operating system by a loose confederation of thousands of programmers--without central project management or control--turns on its head everything we thought we knew about software project management." This assertion is precisely why CatB is such a famous essay. People look to the open-source method as a new way to create software and, in the process, break the tyranny of Microsoft.


Now, I am sure someone in the marketing department at O'Reilly & Associates (rather than Raymond) wrote the blurb on his book cover. Nevertheless, an author has a responsibility to ensure the cover of his book squares with the message inside. Is Raymond claiming the above quote on the jacket is 180° opposite from the theme inside?


Organization vs. Control

In R1, Raymond states, "Mr. Connell writes as if self-organization and strong central control … are incompatible, but he is crucially wrong in this. Individual projects self-organize because the participants choose to be there--they select themselves, and they choose to follow the project's benevolent dictator (or else they leave)."

Raymond has a partially valid point here. Being self-organizing and having central control are not completely incompatible--sort of. Imagine a group of random castaways marooned on a desert island. Initially there is no hierarchy to this group. But the group may choose to elect a leader or committee to guide them. They would choose as leaders the people who seem most able to help them survive. The castaways would be self-organizing, yet have established some central control. This is analogous to an open-source project allowing one member to serve as project leader, because it is in everyone's best interest. (Gee, maybe this would make a good TV show…)


However, the argument only goes so far. Over time, leaders get stuck on certain ways of doing things and become accustomed to their power. They are resistant to change. The group's ability to organize itself diminishes, even though it began in an egalitarian way. Similarly, open-source projects are not very self-organizing two years after they start. From a newcomer's point of view, things are not as democratic as they once were.


Of course, as Raymond points out, unhappy newcomers are free to fork the project if they don't like what the leaders are doing. However, this only proves the project is no longer self-organizing--it is now two projects.


Macro Organization

In R1, Raymond writes, "Above the individual project level, the community is also self-organizing … there is no manager or management hierarchy that plans for the open-source community as a whole. Instead, coordination between projects happens through a horizontal trust network of relationships between individuals--a network which genuinely does look like Mr. Connell's " bazaar" diagram (his figure #3)."

What Raymond says here--the lack of macro-level control over the open-source community--is certainly true. This is a moot point, however, since the same could be said of corporate America as a whole. No one plans the overall direction of the US economy or the computer industry. (The Federal Reserve Board does try to steer things along, but we do not live in a socialist system.)


Leaders vs. Managers

In R1, Raymond says, "It is not leadership that the open-source community has rejected--it is conventional management hierarchies and the pointy-haired boss."
No argument from me here. I have worked for many bad managers and hope to not repeat the experience. Enlightened software companies also try to find managers who are technically knowledgeable and compassionate. The open-source community is not the first to rebel against bad management.

Debugging

In R1, Raymond says, "Mr. Connell makes another basic error in attacking the open-source debugging process. He writes as though the most important and limiting factor in debugging is in choosing non-conflicting fixes once the problems are understood. This is wrong, as all open-source developers understand and my paper points out via a quote from Linus. In fact, the hard part is understanding the nature and etiology of each bug--and this is precisely the part that parallelizes well. Once that hard part is done, reconciling fixes is pretty easy."

There is no question that having hundreds of volunteers looking for bugs and suggesting fixes is a great help on any project. The most difficult aspect of debugging, however, varies with the software (and bug) at hand. Sometimes finding the exact nature of a problem is difficult; sometimes making the fix is harder. With his many years of programming experience, Raymond must know this.


Reconciling multiple fixes also is sometimes easy and sometimes hard. It partially depends on the quality of the code. Well-designed code is simpler to fix without breaking something else. It also depends on luck. Sometimes fixes harm each other, sometimes not. Again, Raymond certainly knows this.


The Great Brooks

In R1, Raymond says, "[Connell's] observation that individual project organization resembles a Brooksian surgical team is independent of the serious mistakes elsewhere in the article and worth follow-up; by coincidence, I wrote the same thing in the (not yet published) revision of CatB I'm preparing for the book's second edition."

I find it fascinating that so many writers in the software engineering field keep returning to the wisdom in The Mythical Man-Month by Fred Brooks. Sometimes I think there was only one book ever written about software engineering and the rest of us just keep rewriting it. *grin*

Advice

Raymond writes, "My advice to Mr. Connell is this: spend some time in the trenches. Join a project. Observe the process in action. This paper was badly wrongheaded--but a year from now, you might have something important to contribute to the subject."

My advice to Mr. Raymond: Keep a broad perspective. Don't fall too much in love with your current methods. Open-source software is an important development in the computer world. So were high-level languages in the 60s, structured design in the 70s, personal computers in the 80s, and object-oriented design in the 90s. All of these movements promised to cure the ills of the software industry. (And all have helped to some extent.) Open-source software is the revolution du jour of the 00s. It has pluses and minuses, just like everything else.

[Sep 15, 2000] DaveNet What is Open Source

I did a survey of Scripting News readers and found that most of them want to work together, crossing the boundary between open source and commercial software. As I was trying to process the hype of open source, I incorrectly concluded that the philosophy of open source was against commercial software. This created a disconnect: people who do open source would say "We think your stuff is cool," but I couldn't reconcile that with the hype, which seemed to say otherwise, quite strongly. I have finally reconciled it. The disconnect is between the leadership of open source, or the most vocal leadership, and the people who actually do the work, many of whom are moonlighting commercial developers who like their jobs and also like to do pro bono work. Duh. I wish I had figured that out earlier.

... ... ...

So, if open source projects behave like all other software projects, is there actually such a thing as open source? You may be surprised to hear that I believe there is. But it's not in agreement with what the open source leaders say it is. I'm going to give you Dave's definition of open source, and you can see if you agree.

What is open source? 

A program is said to be open source if the full source code for the program is available publicly, with no constraints on how it can be used.

That's it. We've looked at so many other possibilities. I've even discussed it publicly with Stallman, and he agrees that his philosophy is not open source, because there are constraints on what you can do with his code.

So, if this is what open source is, how long has it been going on? The answer -- a long time. Programmers are not the money-grubbers that some make us out to be. Most programmers want their creations to exist beyond their lives, both in space and time. We want to make things that people use, and we want respect, and mostly we want respect from our peers. Sometimes we release the source, and other times we don't. When we do, and when it's done unconditionally, that's open source.

Sometimes people distribute source to balance a similar kindness that they benefited from. In the mid-1970s I learned how to program reading the Unix source code from Bell Labs. To this day that experience influences the code that I write, and I've passed the lessons on to my team at UserLand and to people who learn to program using Frontier.

I learned to write software for the Apple II by reading Woz's code. Peter Norton taught me how to write IBM PC software, again by giving me code that works. Same on the Macintosh, and every other platform I've coded for, including the Web. The View Source command in the Web browser makes it virtually impossible to hide the secrets of the Web. You can always find out how something works. That's part of the magic of the Web.

Programming is a human endeavor, an artistic one, like writing or music. The art is in the process. What is this program? How do we decide which features to add? Who does it serve? Where should it run? Which bugs should we fix? Whether or not it makes money is often secondary to the purpose. The primary purpose is to make the world a little smarter, and to get the good feeling that comes from playing a part in that.

Free source code release, or open source, is an essential part of being a programmer, and it dates back to the beginning of programming. It's not a new thing, and not complex. Basically the rule is: here's the source, have fun with it, if you want.

After the hype, the world is much as it was. Some have new fortunes, and new software is still being made, some of it commercial and some open source. That's as it is, and imho, as it should be.

[Sep 7, 2000] Scripting News

Linux Today: Guido van Rossum responds to Python Licensing Issues. "In any case we don't want to use *just* the GPL for Python, because there are many proprietary software developers in the Python community who don't want to work with the GPL. Python has always had a BSD'ish license which was perfect for them. I see no reason to suddenly confront everyone in the Python world with the GPL."

Thank you. Don't give in to Stallman. Open source should not have restrictions. Stallman's philosophy is not open source, it's not the spirit of sharing, it's not generous. It has other purposes, it's designed to create a wall between commercial development and free development. The world is not that simple. There are plenty of commercial developers who participate in open source. Python belongs in commercial products. How does that hurt Python? Why should Python adopt Stallman's goals? What has he done to build Python? (Maybe I'm missing something.) I have a different philosophy which is incompatible with GPL, I will support any open source developer who truly lets the source go. Also, I much prefer the term "commercial" to "proprietary", which is pejorative, imho. (It's possible to make commercial software which is quite open and has major non-proprietary elements.)

BTW, I spoke with Nicholas Petreley this afternoon. We're in agreement on a lot of things. The world is much bigger than it may seem. Just because people send hate mail that doesn't mean we have to hate each other. Ignore the barriers, in the end guys like Stallman can have their way, inside their walls, inside a very small world. If you can see a way to avoid the GPL, you can count on our support. Petreley told me the story of KDE and it made me sick. They were so generous. Keep it simple. Generosity is good. Period.
See also Linux Today - Richard Stallman Response to Dave Winer on Python Licensing

Open-Source Projects Manage Themselves? Dream On

Raymond lists the responsibilities of traditional software managers as: define goals and keep everybody pointed in the same direction, monitor the project and make sure details don't get skipped, motivate people to do boring but necessary work, organize the deployment of people for best productivity, and marshal resources needed to sustain the project over a long period of time. Raymond then states that none of these tasks are needed for open source projects. Unfortunately, the majority of The Cathedral and the Bazaar describes, in detail, how important these management functions are and how Raymond performed them.

Eric Raymond decided what piece of software he would use as a test for open source programming. He decided which features fetchmail would have and which it would not. He generalized and simplified its design, defining the software project. Mr. Raymond guided the project over a considerable period of time, remaining a constant as volunteers came and went. In other words, he marshaled resources. He surely was careful about source code control and build procedures (or his releases would have been of poor quality), so he monitored the project. And, most significantly, Raymond heaped praise on volunteers who helped him, which motivated those people to help some more. (In his essay, Raymond devotes considerable space to describing how he and Torvalds motivate their helpers.) In short, fetchmail made full use of traditional and effective management operations, except that Eric Raymond did all of them.

Another compelling (and often-quoted) section of The Cathedral and the Bazaar is the discussion about debugging. Raymond says: "Given enough eyeballs, all bugs are shallow" and "Debugging is parallelizable." These assertions simply are not true and are distortions of how the development of fetchmail proceeded. It is true that many people, in parallel, looked for bugs and proposed fixes. But only one person (Raymond) actually made fixes, by incorporating the proposed changes into the official code base. Debugging (the process of fixing the program) was performed by one person, from suggestions made by many people. If Raymond had blindly applied all proposed code changes, without reading them and thinking about them, the result would have been chaos. Rare is the bug that can be fixed completely in isolation, with no effect on the rest of the program.

.... .... ....

The first method (traditional) shows a Vice President of Development at the top, with several Directors of Engineering reporting to the VP. Below the Directors are Engineering Managers, and finally the engineers who write the code. Many organizations use this model, and everyone agrees it is sometimes grossly inefficient. The second method (cathedral or open source) uses a single designer/architect at the top, with many engineers reporting directly to the architect. The third method (bazaar) is a peer-to-peer network of many engineers, all reporting to and coordinating with each other, without central control. In The Cathedral and the Bazaar, Raymond claims open source projects are run in the third style. In fact, they are run in the style shown by the second diagram.

For more information about the parallel between software construction and cathedral building, see the classic work of software engineering The Mythical Man-Month by Fred Brooks. In this book, now 25 years old, Brooks describes the chief-programmer (or surgeon) method of software development. It is remarkably similar, in its basic philosophy, to the open source method that Raymond and Torvalds used. Interestingly, the book even contains a picture of a cathedral and relates it to this style of development.

... ... ...

To be fair, Raymond does address the issue of project leader control in his essay. He quickly dismisses the great importance of this control, however, by claiming he and Torvalds did not have a crucial role in the design or creation of their software projects. He states he and Torvalds did not design anything new, but merely recognized good ideas from others. Raymond is not seeing his and Torvalds's contributions clearly. It is very hard to sift through thousands of suggestions from swarms of users, find the good ones, synthesize them together, and incorporate them into an existing code base. This constitutes strong management and central control.

The need for good management suggests the scalability of the open source method may be limited. How many people have the technical sophistication to make good software design decisions, the people skills to motivate hundreds of contributors, and the time to dedicate to a complex project? We should be wary about assuming that the open source method can solve the world's software problems. It is possible only a small number of humans, such as Raymond and Torvalds, have the requisite skill set to run an effective open source project. If this is the case, as I suspect it is, the number of true success stories in open source development will be small. The open source method will run into the same wall as traditional software development. Good technical managers are few and far between.

[Aug 02, 2000] developerWorks Linux: An excerpt from Don Rosenberg's book Open Source: The Unauthorized White Papers

Community Power
Although not well organized, the Open Source Community can bring its weight to bear upon problems it believes to be sufficiently threatening. In 1994 an opportunist filed for a trademark on the word "Linux," and after obtaining it a year later began to send letters to various Linux companies demanding 10 percent of their revenues for the use of the trademark. The trademark apparently held good, for after bringing suit, a coalition of Linux plaintiffs who had called for its cancellation acquired it in a private settlement on undisclosed terms; they subsequently put it in the hands of Linus Torvalds. The danger was averted, but the Community was upset by the close call.

They consequently descended with all the wrath of public opinion on a set of individuals who presumed in 1998 to set up a "Linux Standards Association" without any discussion of it within the Linux Community. The surprise was followed by rumors that the association was a cat's paw of large computer industry companies plotting to hijack Linux. The association proposed (like many standards bodies) that only fee-paying members could vote on the standards, and that the two companies who sponsored the association would be the final arbiters of those standards. The promoters withdrew from public view under withering scorn, and no small part of the outrage was the idea that the perpetrators might give a watching world a picture of a disorganized Community at a time when it was trying very hard for public acceptance. The Community-based Linux Standard Base (LSB) found itself strengthened by the episode as at least one independent-minded Linux company decided to give it more support as a means of filling what was obviously a dangerous vacuum.

Businesses venturing into Open Source waters can draw two lessons from these incidents: first, that it is important to deal carefully with the Community when doing something other than straightforward business under Open Source licenses (some companies appoint ambassadors to the Community, a variation on the Apple use of evangelists to drum up interest and support from developers); and second, that the Community can find resources to use to serve its ends. In the case of the Linux trademark, the Community summoned up the necessary effort, funding, and the pro bono services of attorneys. Sun has begun experimenting with more open licensing in the face of scornful Community opinion, perhaps aided by pressure from IBM, which wants Java, to which IBM has contributed so much code, finally to be made an Open Source product. The osmotic and networked nature of the Community means that efforts do not necessarily have to be planned or coordinated to be effective.

The Coming IP Struggle
The crusader aspects of Open Source, first seen by the public as David versus Microsoft, have not gone away; this time the Children of Light will be fighting the dark forces of IP lockdown. If modern digital technology has given users unlimited powers to filch copyrighted material (particularly popular music), the holders of copyrights are learning to use digital technology along with legislatures to enlarge their IP rights at the expense of users. Not directly connected, but sharing the same goals, the litigious proprietors of software patents are also spreading their domain.

Each side of the contest has its extreme positions. Although the GNU GPL and the Open Source movement rest on copyright as a means of ensuring the freedoms they seek, there are elements on this side of the struggle that believe that once information is digitized it is free for everyone. A Free Music movement, for instance, regards itself as justified on moral, if not legal, grounds to take and freely disseminate any copyrighted form of digitized music. The other side of the struggle has no such fundamental split; it sees digitized IP as uniquely able to be portioned out for payment, and intends to capitalize on this windfall. Extended IP rights will make fair use (loaning or giving your copy of a book to a friend, or selling it to a second-hand shop) disappear, along with making a tape of your copy of a CD to take on a picnic. Instead, books and music will be tied to particular devices, and transfers among them will be difficult if licensed, or impossible if not provided for.

The polarity between the two sides reflects the natural division between business and the Internet culture. Businesses continually seek to erect barriers to entry to protect themselves and their markets from competition; the best barrier is a government-granted and -enforced monopoly, which is what copyright and patent law confer. Open Source, on the other hand, is partly a product of university research laboratories that are publicly funded and that freely publish the results of research to open those results to question or improvement by others; consequently, Open Source supporters believe that only excellence in execution constitutes a real barrier to entry. They believe this not only as a principle, but as a practical truth: a project that fails to serve its constituency will either fail or be forked.

Free Riding on Gnutella (see also Slashdot: The Tragedy of the Digital Commons)

An extensive analysis of user traffic on Gnutella shows a significant amount of free riding in the system. By sampling messages on the Gnutella network over a 24-hour period, we established that 70% of Gnutella users share no files, and 90% of the users answer no queries. Furthermore, we found that free riding is distributed evenly between domains, so that no one group contributes significantly more than others, and that peers that volunteer to share files are not necessarily those who have desirable ones. We argue that free riding leads to degradation of the system performance and adds vulnerability to the system. If this trend continues, copyright issues might become moot compared to the possible collapse of such systems.

Some Slashdot comments

And this is some surprise? (Score:4, Informative)
by Dman33 (dtisher@nospam.umich.edu) on Monday August 21, @02:20PM EDT (#15)
(User #110217 Info)
C'mon people! Do you really think that this is any shock? I could have told you that people would rather receive than give. Especially when it is anonymous.

I think a ratio type of thing would be a great idea, but how in the world can this be done? Obviously this is not too practical in an anonymous situation; so is this the great paradox of anonymity vs. quality?

In other words, if it is purely anonymous as many would insist it to be, then it will lack quality and usability by nature. So in order to have good quality, you have to sacrifice security?

Can anyone think of a better way?
 
> I think a ratio type of thing would be a great
> idea, but how in the world can this be done?

It has already been done. The Mojo Nation system was designed as a way for people to exchange services using a micropayment system. This system is different from other micropayment systems because the "coins" are backed by digital resources. It is like the old upload and download ratios of BBS days. You contribute services to the system by "selling" to others, and when you need services you "buy" them from other agents. Toss in distributed, decentralized data-sharing services and you have a pretty cool little item. The coins are like tokens at an arcade, except those who contribute more than they consume end up with a surplus they might be able to sell later; greed is a powerful motivation to get people to ...

Cheating is controlled (or at least minimized) by using market-based mechanisms like reputations. By basing the service on something like a market it is possible for distrustful parties to conduct transactions and exchange services. Look around at any stable social structure and you will see a lot of the same techniques employed to fairly allocate resources and control parasites and cheaters.
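
To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of resource-backed credit ledger described in the comment above. The class, numbers, and rates are illustrative assumptions, not Mojo Nation's actual protocol.

# Toy credit ledger: peers earn credit by serving bytes and spend it by
# downloading. Illustrative only -- not Mojo Nation's real accounting.

class Ledger:
    def __init__(self, signup_grant=100.0):
        self.balances = {}              # peer name -> credit balance
        self.signup_grant = signup_grant

    def register(self, peer):
        # Newcomers get a small grant so they can start downloading at all.
        self.balances.setdefault(peer, self.signup_grant)

    def record_transfer(self, server, client, n_bytes, rate_per_mb=1.0):
        # The client pays the server in proportion to the bytes served.
        cost = (n_bytes / 1_000_000) * rate_per_mb
        if self.balances.get(client, 0.0) < cost:
            raise ValueError("insufficient credit: " + client)
        self.balances[client] -= cost
        self.balances[server] = self.balances.get(server, 0.0) + cost

ledger = Ledger()
ledger.register("alice")
ledger.register("bob")
ledger.record_transfer(server="alice", client="bob", n_bytes=50_000_000)
print(ledger.balances)                  # {'alice': 150.0, 'bob': 50.0}

Peers who contribute more than they consume accumulate a surplus they might later sell -- exactly the arcade-token dynamic the comment describes.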
Ratios never worked very well (Score:2, Informative)
by nd (nd@kracked.com) on Monday August 21, @02:25PM EDT (#36)
(User #20186 Info) http://demonic.net
What CmdrTaco said about file ratios is pretty accurate. I ran a BBS for several years and found this to be true.

It was also true for Post/Call ratios when implemented on some boards (ie, not letting the user play LORD until he has reached a Post/call ratio of 1, or whatever). This basically created a message base full of crappy 1-line posts (also true on some web-based forums where users with X posts are rewarded with a special 'ranking').
NOT a tragedy, just another Keg party. (Score:5, Insightful)
by tylerh (garbage1@home.com_nospam) on Monday August 21, @02:30PM EDT (#51)
(User #137246 Info)
Programmers know this as the "90/10" rule: 90% of the useful work will be done by 10% of the code

This applies to people as well. I helped organize keggers in college. A small number of us did the organizing/financing/clean up -- everyone else just showed up and partied. That was kind of the whole point. So this "ecology" result is NOT a tragedy of the commons, it's just another keg party - and you know how hard those are to stop 8)
C'mon, file ratios weren't that bad (Score:2, Interesting)
by MWoody (slashdot@mwoody.com) on Monday August 21, @02:30PM EDT (#53)
(User #222806 Info) http://mwoody.com
Don't dismiss file ratios as a legitimate means of submission encouragement. Yes, the system can be abused, but after years of BBS hopping I've seen excellent methods to prevent it.

My favorite method involved an account system, whereby uploads that other members voted as 'spam' or just worthless space were counted against that user. Upload enough crap and you're booted; add a two-day account activation delay and even the most 'disciplined' media hog is curtailed.

Of course, a ratio of both file size and number of files is necessary, else you'll get people uploading one worthless 1GB file every few days, and then going to town before they're booted.

Now, these systems worked fine with one server, but systems like Napster and Gnutella present a new paradigm. How to enforce ratios when there is really no such thing as an 'upload'? A few ideas, based more on the general concept of user-controlled servers than any specific service's setup:

* For every file downloaded off a user's server, give that user a point. Servers can set a score limit on their machine for downloads (i.e. 'only people with a score of 100 can download from me'). Drop by a point or two for each download, but give 100 points or so when you sign up.
* Same system as above, but base on file size instead of number of downloads. Or perhaps a combination of the two?
* Keep track of everything on some centralized computer, and enforce the ratio much like it would be in days of lore.
* Enforce a per-server ratio, the setup of which is determined by the owner. Not a great system on this scale, but worth a look.
* Rather than 'earning' downloads, penalize users who download too much too fast with a negative score. Doesn't take into account whether or not they serve, though...

And, of course, what will probably happen in the end anyways:
* Keep the system as it is now. As the bandwidth disappears, all the hosers who don't know how to share will leave, and all the savvy slashdot readers and linux lovers will get their connection speeds back.

Any other ideas for control systems, or arguments against them?
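
For what it's worth, the first scheme in the list above fits in a few lines of Python; the point values and thresholds are invented for illustration, not taken from any real client.

# Toy point-based ratio scheme: a download costs the requester points,
# a completed upload earns the serving peer points, and servers may set
# a minimum score. All constants are illustrative assumptions.

SIGNUP_POINTS = 100
COST_PER_FETCH = 2
EARN_PER_SERVE = 1

scores = {}

def join(user):
    scores.setdefault(user, SIGNUP_POINTS)

def fetch(client, server, min_score=0):
    # One download from server to client, if the server's policy allows it.
    if scores[client] < max(min_score, COST_PER_FETCH):
        return False                    # server refuses low-score peers
    scores[client] -= COST_PER_FETCH    # downloading drains the score...
    scores[server] += EARN_PER_SERVE    # ...serving replenishes it
    return True

join("leech"); join("sharer")
fetch(client="leech", server="sharer", min_score=50)

A combined size-and-count variant (the second bullet) would change only what the cost function counts.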
The dark side of anonymity (Score:5, Insightful)
by Kiwi (kiwi-nody4la@koala.samiam.org) on Monday August 21, @02:38PM EDT (#75)
(User #5214 Info) http://linux.samiam.org/linux_links.html
Anonymity definitely has a dark side.

A few years ago, Time magazine did an excellent piece on the problems of today's society. One of the things they pointed out is that the privacy of a modern household has greatly increased the incidence of child abuse. In the society that we evolved in, one large factor that stopped people from abusing their children was the fact that there was no privacy--if you abused your child, the whole village knew about it.

The anonymity of the internet causes similar problems.

Any system administrator knows that if they put any pornographic images on their web server, their machine or their machine's connection will quickly get overloaded. For example, one of my users put up pictures of attractive women. The women were not even naked, yet the server's connection was still overloaded.

I have heard it said that the most common term asked for in the leading search engines is "pornography". People who would normally be too embarrassed to go into a liquor store or a peep show have no problem getting porno on the net. The internet makes people do what they would not normally do.

While pornography is somewhat harmless, other activity on the internet isn't. The actions of the anonymous person who brought down Kuro5hin come to mind. As do the random bannings on many IRC channels (where the operators as often as not broke into accounts or engaged in credit card fraud to get a system they could run a bot on to control the channel), the efforts people go to to cheat in online games, the countless break-in attempts any experienced system administrator sees in their logs, the nonstop tide of spam, and so on. All of these are things that people do when they do not get a chance to look in the eyes of the person who they are harming with their selfish actions.

It does not surprise me that the internet is full of people who take but do not give back. Human nature has always had the takers, who complain when the stuff they are not taking is not good enough for their selfish purposes, and the givers, who get little in return for their giving except complaints from the takers. The anonymity of the internet makes this problem worse.

File Ratios (Score:2, Funny)
by Zulfiya on Monday August 21, @02:45PM EDT (#90)
(User #44302 Info) http://www.eclipse.net/~srudy/

Reminds me of the BBS days of file ratios - 'course then we'd just take an image, resize and upload it, so that idea didn't exactly work as intended.

Ahh, file ratios.

An anecdote:
I was too honest to rename files, but didn't have a worthy collection to upload to keep my ratios up. Finally, in what I thought was surely a nasty trick, I uploaded a collection of short stories I had written, one by one (all dreck - because everything written when you're fifteen is dreck).
Imagine my surprise at returning to the board weeks later to find a story I had deleted from my archive for being exceptionally lame marked as a sysop-preferred download.

The moral:
One man's trash is another's treasure
-or-
Fifteen year old boys are as bad judges of writing as fifteen year old girls are writers
-or-
At least file ratios get things out there that might otherwise have stayed buried (even if maybe they should have).


-- I'm not evil, I'm ... differently motivated!

Money and trust (Score:5, Interesting)
by raph (raph+slash@acm.org) on Monday August 21, @02:47PM EDT (#94)
(User #3148 Info) http://www.levien.com/
There are two solutions, I think, to the tragedy of the commons. One is to pay people for their disk space and bandwidth. As several other comments have pointed out, this is exactly what Mojo Nation does, using "mojo" micropayment tokens as the currency. I've been playing with it, and though it's been a bumpy ride, it looks very promising. Check it out.

There is another solution, I think, which is using trust to define a community. The set of "Gnutella users" is too large and diffuse to actually define a community. Why should I donate my bandwidth for other people who I don't know and don't really care about?

If, on the other hand, I were sharing files with a much smaller group of people, many of whom I know personally, then it starts feeling more like a community. Of course I want my friends to be able to listen to the music I like.

I propose that the trust system as deployed on Advogato might be a good way to define these communities. Of course, I might be totally wrong about this as well. Only one way to find out :)

Incidentally, the way Mojo Nation is set up right now, it still has Tragedy of the Commons problems. Currently, you don't get mojo for uploading tasty content. In fact, you actually have to pay for the privilege. However, when you share a file, it's not a continual drain on your bandwidth (or diskspace, fwiw). The actual distribution is handled by "block servers", who do charge for their services.

Of course, the Mojo economy is still in its formative stages. I hope, and expect, that actual markets will develop for providing and identifying tasty content.

In any case, file sharing sure promises to be an interesting ride.
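
The trust idea can be made concrete with a toy example. The sketch below is only an illustration of trust-scoped sharing over invented data; the actual Advogato trust metric is considerably more sophisticated (it is based on network flow).

# Toy trust-scoped community: a peer shares only with people reachable
# within max_hops steps in its trust graph. Data and cutoff are invented.

from collections import deque

trust = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["dave"],
    "carol": ["mallory"],
}

def community(seed, max_hops=2):
    # Breadth-first walk of the trust graph, cut off at max_hops.
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for friend in trust.get(node, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return seen - {seed}

print(community("me"))   # alice, bob, carol, dave -- mallory is too far away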
Marxist Leeching (Score:4, Insightful)
by angst_ridden_hipster (angst_ridden_hipster@yahoo.com) on Monday August 21, @03:01PM EDT (#111)
(User #23104 Info) http://www.andthehorseyourodeinon.com/
We used to paraphrase Marx's

"From each according to his ability,
  to each according to his need."

into

"From each according to their assets,
  to each according to their greed."

Of course, this was in our godless commie Warez swappin' Hotline-usin' phase...

In the long run, it's OK. There are 90% leeches. But the 10% who make up the providers are not always the same 10% of the people. Today's leech is tomorrow's provider, and vice versa. Sometimes.
It all tends to work out eventually.
so it dies out or gets stale (Score:2)
by kootch on Monday August 21, @03:01PM EDT (#115)
(User #81702 Info) http://www.jambase.org/~nugget/freebies.html
That just shows you that the entire thing is a fad, and when it dies out, the whole Napster lawsuit will just have been a small bump in everyone's collective memory. These things come and go... we've had BBSs, FTP sites, IRC, Hotline servers, and now Napster, Gnutella, Scour, et al.

They come and go, as the fad takes em. And the whole reason why they go stale is that users take more than they give. DUH. So eventually the people with the fat pipelines that were giving so much get sick of giving and want stuff in return. And as soon as they go to the "membership" model or the "ratio" model, it quickly gets abused or the majority of people leave, giving you a select membership only service (which is what most hotline servers are now). Then again, you still have great places like #macfilez... but those places are becoming fewer and further between.

Napster will die, same with Scour and Gnutella... let's just see how long they stay around. And again, everyone that's really into pirating will stay with their favorite IRC site, FTP site, or BBS. All of the others that were recently introduced to the fine art of pirating will get sick of Napster and attempt to learn IRC, but will find it too difficult to use, so they'll give up and start going back to buying CDs, and nothing will change.
Visit Dave's Cheapies!
History repeats yet again.. (Score:2, Interesting)
by Lumpy (spamsucks.timgray@lambdanet.com) on Monday August 21, @03:07PM EDT (#121)
(User #12016 Info) http://www.lambdanet.com
I remember back in my college days we set up a "kiosk" of 3 PCs in the processing department that, along with access to UUnet, offered a huge (500 megabyte) file-sharing database. Users could walk in and use floppies to get what they needed and upload whatever would be usable by all. When we went online we loaded the thing up with gobs and gobs of shareware, tons of free documents, and, along with the UUnet connection, lots of cool things. We also kept close track of the users' activities (no username attached; it was all anonymous access) for the psych grad students who were researching basic human thought patterns when exposed to the digital information age. Well, there was a ratio of 900 to 1: for every 900 items downloaded there was 1 item uploaded (and usually by one of us maintainers via modem).

People are greedy; they take and take and take, and WILL NOT give. Need proof of that statement? Look at the fact that there are starving people in the USA. There is absolutely NO reason for anyone in the USA to go hungry or without shelter, yet it happens because of GREED.

Every pattern we found on that kiosk matched the psych patterns they had for people giving charitably.

Face it, Homo sapiens are self-centered, greedy weenies. And that is one of the first things that will be seen by any sentient visitor.
It may not even be intentional. (Score:2, Insightful)
by shren on Monday August 21, @03:20PM EDT (#134)
(User #134692 Info) http://www.io.com/~shren
Files that you download are not shared by default.

Let me say that again : Files that you download, with Gnutella, are not shared by default.

Which means that, regardless of what people might say, Gnutella is not, by default, from a practical standpoint, a peer-to-peer network. It's just like any other client-server network - the clients take and the servers give. It takes work to be a server, even if it's 20 seconds of work in copying files and clicking rescan, and most people are short-sighted nitwits who don't realize they need to do that and wouldn't care if they did.

At the very least, files you download on the Gnutella network should be shared by default. I think, if it's supposed to be a peer-to-peer network, then sharing should be the default behavior, and not sharing an option. If you download a file and the file you have is the same length as the source (i.e., it's finished), then the file should be shared.

I'm feeling like enough of a jerk right now to say that it should be shared regardless of where it is put on the hard drive. Gnutella should keep track of the stuff it's downloaded (Stolen Album-Stolen Song 1.mp3, Stolen Album-Stolen Song 2.mp3) and search your hard drive for those files, and offer them to the net if they are found. What part of "peer-to-peer" do you not understand?

I don't use Gnutella anymore. It's not what it was said to be. I think of a hundred thousand users who can't even be bothered to put their incomings in their outgoing directory, and decide not to bother with it.
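
The share-by-default policy proposed in the comment above is simple to express. The sketch below assumes hypothetical bookkeeping (a map from downloaded paths to their advertised sizes); it is not actual Gnutella client code.

# Toy share-by-default: a download whose local size matches the advertised
# source size is considered complete and is offered back to the network.

import os

def update_shared(downloads, shared):
    # downloads: {path: advertised_size}; shared: set of paths offered.
    for path, advertised_size in downloads.items():
        if os.path.exists(path) and os.path.getsize(path) == advertised_size:
            shared.add(path)            # finished files become shared
    return shared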

MacOPINION: Jeff Lewis, The Cathedral and the Bizarre -- an interesting critique that discusses, from the point of view of a Mac user and consultant, some of the points that my papers discussed from the point of view of an academic researcher.

... Interestingly, the number one problems with Linux, from a consumer perspective, are that it doesn't have a standardised UI; its tools are simply too difficult to use and configure, and it requires far too much upfront learning to get up to speed. The last is the most telling: the Linux model moves the cost of learning from the developer to the user.

As for upfront developer time, well, that's true - it is harder to learn MacOS than it is to learn, say, the ANSI C libraries... but in truth, Raymond overstates the issue. In fact, MacOS can be divided in several large chunks. I regularly write 'console' apps in MacOS to do little tasks which don't warrant a lot of UI. I also have several shells for drag and drop functionality which handles all that sort of stuff. In fact, I tend to stick to the ANSI C libraries. There are some deficits in MacOS, relative to the 'standard' libraries - MacOS has overwhelmingly complex networking support compared to sockets and streams, but it also offers more complex functionality.

Ironically, it's the focus on UI, perhaps taken to obsession, which gives MacOS its edge over... well, it has to be said this way - over all other OSes. Windows comes close, but misses simply because Windows developers really do not have that obsession to make the UI perfect. Linux and the various X-Window interfaces rarely even come close. The obsession with sticking to a standard behavior means that MacOS users experience a consistency of behavior that no other OS can offer (although again, Windows is getting closer - Linux is not even close).

...Linux, on the other hand, pops up around 1995 after the GUI market had twelve years of development - and promptly reinvents the wheel, and badly at that. (In defense of Linus Torvalds, he wasn't trying to build a consumer OS; he was trying to create a freely available Unix-like kernel in the spirit of Andrew Tanenbaum's Minix, which had been the hot mini-OS on Atari and Amiga computers...)

The core 'advantage' Raymond puts forward, many programmers improving the code, many eyes looking for bugs, is also its chief weakness. Programmers - at least the kind who are likely to get involved in an OpenSource project - are a notoriously independent lot. They love to do their own thing and always think they have a better way to do things.

Normally, this is an advantage because, with filtering and discipline, this is the fountain of creativity which drives this industry. Unchecked and unfiltered, though, you have unbridled chaos. As a result, you have no less than six different desktop systems and two different configuration systems, and tools whose command line options change not only from system flavour - but from revision to revision....

Raymond touts the stability of Linux as proof of the OpenSource concept, but that's a bit misleading. The core of Linux was written by one person - Linus Torvalds. Moreover, there is a small group who shepherds the contributions to the kernel to keep it stable and clean. In other words, there's a priesthood at the top of the bazaar. If you check into each successful OpenSource project, you see the same thing: a small group of referees who filter the input and weed out the bad ideas. The bazaar has cops. The chaos is contained.

When you get a company the size of Apple or Microsoft, you have thousands of developers who do peer review of code. You have the referees who determine what goes into a product and what doesn't... but they have one thing that the OpenSource method doesn't: they have markets to answer to.

...See, when commercial developers create a product, they start by trying to solve a problem that customers need solved. The focus is always on the customer. What do they need? What do they want (which isn't always the same thing :)? Can they use this software? How much handholding will they need to make this work? Can we make this experience so great that the competition can't sway them back?

... ... ...

...once a group buys into a specific solution, the cost of changing grows with time. That's true even if the software is 'free' because the maintenance costs and time to convert to another solution are not. 

You want fries with that operating system?

The other core point Raymond tried to make is that software is a service industry. To be honest, this is a disturbing concept in several ways.

First, let me ask you a question: if you make your living by selling service on software, what's the motivation to make the software as easy to operate and maintain as possible? The answer? Well - not much. And so we have Linux. Very powerful. Very flexible. Very hard for average computer users to configure and maintain.

Now, that's not to say that commercial developers always have a motivation to make their software easier... I would argue that any software which needs a certification program is bad software. Any software which needs an entire section at Chapters (errr, Computer Literacy... Barnes and Noble) devoted to rack after rack of MCSE books, cheat cards and training CDs (and it's not just Microsoft - Novell started this idea, but everyone is getting into it - Cisco just joined the gang) is badly written from a user's perspective.

Actually, let me go back to Cisco for a moment. Cisco used to have a great freeware configuration tool. You could download it from their website. That tool has quietly vanished and has been replaced with CiscoWorks and of course, Cisco certified training. A very ominous trend.

... ... ...

OpenSource fans tend to believe that they are quality fanatics, but what definition of quality are we using? To them, the most important kind of quality is stability - software shouldn't crash. The problem is that OpenSource software does crash. Does it crash less often than, say, a Mac app? Well, probably, but most OpenSource projects tend to be much simpler and smaller than Mac apps and in general, tend to be very minimalistic. So they have the 'quality' of simplicity - for the programmer - but a lack of quality for the end user.

The argument is that since everyone who uses the software has the source, if you find a bug you can fix it then submit the bug fix to someone and it becomes part of the product - if you can figure out who to contact. But there again, we see a confusion as to who the customer is: most people who use computers do not know how to program in BASIC, let alone C or C++, and more importantly - they do not want to. They buy a computer to solve a problem - to get work done - not to debug someone's code.

... ... ...

Raymond points out that Windows 2000, which reportedly shipped with 63,000 bugs, shows that OpenSource works because under Brooks' "Law", the number of bugs is roughly proportional to the square of the number of programmers. Since Linux has 40,000 developers working on it - there should be 2 billion bugs in Linux. The flaw is that he perceives Linux as a collection of small projects, but sees Win2K as a single monolithic application - much as he seems to see MacOS. In reality, Win2K and MacOS aren't monolithic. They are composed of many smaller parts which are handled by smaller teams. Much like Linux.
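
The arithmetic behind the "2 billion" figure is easy to check. A quick sketch, taking at face value the naive "bugs grow as the square of the number of programmers" reading that Lewis is summarizing here:

    # Naive "bugs ~ programmers squared" reading, taken at face value.
    developers = 40_000
    bugs = developers ** 2
    print(f"{bugs:,}")  # 1,600,000,000 -- i.e., roughly 2 billion
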

As for comparing bug counts - at least Microsoft has a bug count. If Raymond had bothered to check the number, he'd have found that a rather large proportion of the 63,000 bugs are cosmetic - and none were considered 'showstoppers'. We don't even have a way to determine the real bug count for Linux since there's no central repository for this sort of information. 

... ... ...

Eric Raymond made one other comment: that the elitism of Mac programmers is a danger to its long term survival... and I ask him - where has he been? This same 'elitism' is what pulled us into a GUI based world of computing. It's what made computing accessible to the average person. This sounds like a weird echo of the 'Apple is doomed' argument we used to hear so often - but in case he's missed it - Apple is still here and doing better than ever.

... ...

        See also: Slashdot's The Cathedral And The Bizarre discussion of the macopinion.com article

LinuxPlanet - Reports - .comment: Lawyers, Guns and Money -- KDE and Corel Working in the Real World, by Dennis E. Powell, contains a very nice quote from an old Peanuts cartoon and several very interesting observations about KDE development.

I'm reminded of a Peanuts cartoon from 35 years ago or so.

"I'm very idealistic," says Lucy. "I want to make the world a better place for me to live in." She was speaking, in delicious irony, to her little brother, if my recollection is accurate. Her little brother was named Linus.

A number of companies have become involved in development of particular open-source applications and packages. The involvement has included monetary support, payment of salaries to programmers already working on free Linux applications, development and contribution of new features or extensions (I'm talking about features and extensions that are then returned, open source, to the projects, not the always-helpful Microsoft seizing upon the Kerberos code, making it incompatible, and slamming a proprietary lid on it), or actually hiring programmers to work under the aegis of the company itself on particular packages, the code going to the open source organization that originated the packages. The dominant view is that this is generally a good thing.

And it can be. But it can also become a bad thing, and unless some lines are drawn, it's a sure bet that it will.

... ...

Even uglier, one could imagine--and this example is entirely hypothetical--a situation where the top people in a development effort accepted what lawyers call "valuable considerations" in exchange for looking the other way as a project got steered in a way that benefits a particular company. Nothing illegal about it.

Less sinister but troublesome nonetheless is the simple culture clash when open source meets commercial software. Corporations play it close to the vest, open source does anything but. The place where they meet can be awkward and frictional--it's hard to imagine them being otherwise. The potential for abuse is real as well. Indeed, Jon "Maddog" Hall of Linux International has done and is doing much to educate corporations about a crucial reality: they don't own the system and they cannot mandate changes in it. He has helped many companies understand the differences between closed, proprietary software and open source.

... ...

The leading voice of open source is Eric S. Raymond, again one of the few in the Linux community recognized just by his initials. ESR is watching more and more corporations become involved; his philosophy embraces the idea that closed development is old and creaky, while open source development is young and robust. A company will do it the new way or die:

"I'm not worried. What I see is corporations realizing that if they want our results, they have to buy into our process--and if they don't, they'll be eaten by a competitor who does."

... ... ...

I don't know of any lawyers who would be eager to try to defend the GPL if offense came to judge and jury. Commercial software companies have over the last decade tended to give the best parking spaces not to programmers but to lawyers. It would take some heavy-duty mobilization to take on a big software company and not die of attrition before a dispute ever came to trial. I think there will be offenses, as surely as there will be another VBX macro virus.

It's frightening, the idea that Linux and its applications could fall victim to its own success. But wherever there's money, big money as there is now in Linux, there is someone very clever who will try to take it away.

Corporations and open-source development are for the most part diametrically opposed as to organization and goals. Neither is bad; indeed, neither is better and both can be abused. But when the two come together, it's oil and vinegar--it needs a little shaking up before it can go onto the salad. And these are indeed the salad days of Linux.

It's not difficult at all, though, to imagine the companies that make commercial software for other platforms looking upon Linux as some kind of odd little brother. And, with a smile, saying, "I'm very idealistic. I want to make the world a better place for me to live in."

[June 04, 2000] Linux for the Masses and Other Popular Myths by Todd Burgess (tburgess@krait.com)
http://www.krait.com

As Linux's popularity spreads, so will the number of computers and users running it. While Linux offers many advantages over traditional operating systems, it is not suitable for all applications. Unfortunately, many Linux advocates overlook this fact in their zeal to promote the operating system.

The purpose of this article is to look at some of the fundamental arguments being put forth by many of the advocates. Specifically this article will seek to prove:

  1. The Linux operating system on every computer system is a bad thing. More specifically, why the numbers game is a bad thing.
  2. The attempt to make Linux more user friendly will hurt Linux, not help it. Why hacking, the command line and vi are good things.
  3. Linux for the sake of Linux is a bad thing. Just because you are running Linux does not mean you are smarter than the rest of us.

The Numbers Game

The numbers game is something that everybody plays. Web sites inflate their hits, software companies inflate the number of users using their software, and operating system advocates inflate the number of people who use their operating systems.

The reasons vary. Web sites inflate hits to make themselves more attractive to advertisers, software companies inflate user counts to make themselves attractive to shareholders, and operating system advocates inflate usage numbers to attract development efforts by outside companies, which in turn sells more operating systems.

While some may point out that Linux is essentially free, there are many commercial companies with commercial interests in Linux. Inflating the number of installed users will help the companies grow financially. No company will develop for an operating system that nobody uses so it is in the Linux community's best interest to raise the numbers.

Listen to the mainstream press promote Linux and you will always hear a wide range of numbers, mostly from as low as five million to as high as ten million. For the purpose of this discussion, assume there are ten million Linux users.

What does the number ten million mean? It means that ten million users have either:

  1. installed and use Linux
  2. installed Linux and never used it again
  3. installed Linux and then uninstalled it
  4. bought the Linux CD but never installed it

[May 28, 2000] Here is a rather authoritative opinion from John Viega, a Senior Research Associate in the Software Security Group at Reliable Software Technologies, an Adjunct Professor of Computer Science at the Virginia Polytechnic Institute, the author of Mailman, the open source GNU Mailing List Manager, and of ITS4, a tool for finding security vulnerabilities in C and C++ code. He has authored over 30 technical publications in the areas of software security and testing, and is responsible for finding several well-publicized security vulnerabilities in major network and e-commerce products, including a recent break in Netscape's security. In his recent paper The Myth of Open Source Security he wrote:

...Even if you get the right kind of people doing the right kinds of things, you may have problems that you never hear about. Security problems are often incredibly subtle, and may span large parts of a source tree. It is not uncommon to have two or three features spread throughout a program, none of which constitutes a security problem alone, but which can be used together to perform a security breach. For example, two buffer overflows recently found in Kerberos version 5 could only be exploited when used in conjunction with each other.

As a result, doing security reviews of source code tends to be complex and boring, since you generally have to look at a lot of code, and understand it pretty well. Even many experts don't like to do these kinds of reviews.

And even the experts can miss things. Consider the case of the popular open source FTP server wu-ftpd. In the past two years, several very subtle buffer overflow problems have been found in the code. Almost all of these problems had been in the code for years, despite the fact that the program had been examined many times by both hackers and security auditors. If any of them had discovered the problems, they didn't announce it publicly. In fact, the wu-ftpd has been used as a case study for vulnerability detection techniques that never identified these problems as definite flaws. One tool was able to identify one of the problems as potentially exploitable, but researchers examined the code thoroughly for a couple of days, and came to the conclusion that there was no way that the problem identified by their tool could actually be exploited. Over a year later, they learned that they were wrong, when an expert audit finally did turn up the problem.

In code with any reasonable complexity, it can be very difficult to find bugs. The wu-ftpd is less than 8000 lines of code long, but it was easy for several bugs to remain hidden in that small space over long periods of time.
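
To see how such flaws slip past careful readers, consider a deliberately simplified, hypothetical example in Python (this is not wu-ftpd's actual bug, and Python raises an exception where C would silently corrupt memory):

    # The bounds check below looks plausible and passes casual review,
    # but it admits one index past the end of the buffer (off by one).
    def store(buffer, index, value):
        if index <= len(buffer):  # should be: index < len(buffer)
            buffer[index] = value

    buf = [0] * 8
    try:
        store(buf, 8, 0x41)  # "valid" per the check, yet out of bounds
    except IndexError:
        print("the check allowed an out-of-bounds write")

A reviewer scanning thousands of lines of such checks has to notice a single character to catch the flaw, which is one reason audits can miss these bugs for years.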

To compound the problem, even when people know about security holes, they may not get fixed, at least not right away. Even when identified, the security problems in Mailman took many months to fix, because security was not the core development team's most immediate concern. In fact, the team believes one problem still persists in the code, but only in a configuration that we suspect doesn't get used.

An army in my belly

The single most pernicious problem in computer security today is the buffer overflow. While the availability of source code has clearly reduced the number of buffer overflow problems in open source programs, according to several sources, including CERT, buffer overflows still account for at least a quarter of all security advisories, year after year.

Open source proponents sometimes claim that the "many eyeballs" phenomenon prevents Trojan horses from being introduced in open source software. The speed with which the TCP wrappers Trojan was discovered in early 1999 is sometimes cited as supporting evidence. This too can lull the open source movement into a false sense of security, however, since the TCP wrappers Trojan is not a good example of a truly stealthy Trojan horse: the code was glaringly out of place and obviously put there for malicious purposes only. It was as if the original Trojan horse had been wheeled into Troy with a sign attached that said, "I've got an army in my belly!"

...Currently, however, the benefits open source provides in terms of security are vastly overrated, because there isn't as much high-quality auditing as people believe, and because many security problems are much more difficult to find than people realize. Open source programs which appeal to a limited audience are particularly at risk, because of the smaller number of eyeballs looking at the code. But all open source software is vulnerable, and the open source movement can only benefit by paying more attention to security.

[Jan 18, 2000] Open-Source Software Development by Davor Cubranic (Department of Computer Science, University of British Columbia, cubranic@cs.ubc.ca) was added to the Webliography

[Dec. 12, 1999] Accidentally found on the Web and added to the Webliography: Comments by Chuck Cranor about CatB

[Dec. 9, 1999] A second look at The Cathedral and The Bazaar was published by First Friday

[Dec. 8, 1999] Today I accidentally found, using the Google search engine, comments by Dave Winer and added them to the Webliography. They are very insightful early comments on the CatB paper.


Brooks' Law and Open Source Development

developerWorks: Open source projects: Brooks' Law and open source -- The more the merrier, by Paul Jones, Director of MetaLab, University of North Carolina, May 2000

Does the open source development method defy the adage about cooks in the kitchen?

An aphorism from some twenty years ago, Brooks' Law, holds that adding more programmers to a late project only delays it further. But if this is so, what accounts for Linux? Paul Jones gathers perspectives on the open source development method and whether it defies conventional wisdom.

McConnell, S. (1999). Brooks' Law Repealed. IEEE Software.  Retrieved May 1, 2001, from the World Wide Web: http://www.construx.com/stevemcc/ieeesoftware/eic08.htm.

Hsia, P., Hsu, C., & Kung, D. (1999, October). Brooks' Law Revisited: A System Dynamics Approach. Paper presented at the Twenty-Third Annual International Computer Software and Applications Conference, Phoenix, Arizona.

Does Free Software Production in a Bazaar Obey the Law of Diminishing Returns?

This essay will now draw conclusions based upon the data.

If GNOME had obeyed the Law of Diminishing Returns, we would have seen the distinctive hill-and-valley curve in the marginal output.

We did not see it; therefore, GNOME may be breaking the Law of Diminishing Returns.

If GNOME had obeyed the Law of Diminishing Returns, we would expect to see the bell-shaped curve in the average total output graphs. We saw what might have been the beginning of such a curve in both cases.

Therefore, GNOME may be obeying the Law of Diminishing Returns.

As the astute reader has noticed, these are contradictory. We are now forced to fall back on the balance of likelihood to make our conclusion.

Seeing as the Law of Diminishing Returns has not yet been broken, the author is inclined to say that:

GNOME probably obeys the Law of Diminishing Returns, but has not yet reached its production turning point.

Implications and Considerations

There are several implications and things worth considering.

The first is that the GNOME data-set was too small to be conclusive. Whilst there are hundreds of actual volunteers on the project, and a 'virtual Labour force' several times as large, the data simply has not shown us anything conclusive. A much larger set of data will be needed, in particular to:

  1. See if the 'bell' shape manifests on the ATO curves
  2. See if the 'hill-and-valley' shape manifests on MO curves

The second consideration is the oscillatory nature of the MO curve. A standard MO curve moves in a definite way. This MO curve bears no resemblance to the standard one, giving credence to the possibility that Bazaars may, in fact, break the Law of Diminishing Returns.
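
For readers who do not have the textbook shapes in mind, here is a minimal Python sketch, assuming a standard short-run cubic production function (the coefficients are arbitrary illustrations, not fitted to the GNOME data). Marginal output (MO) rises and then falls, and average total output (ATO) traces the bell shape the essay looks for:

    # Q(L) = A*L^2 - B*L^3: the textbook short-run production function.
    A, B = 6.0, 0.1

    def total_output(labour):
        return A * labour**2 - B * labour**3

    for workers in range(5, 45, 5):
        mo = total_output(workers) - total_output(workers - 1)  # discrete MO
        ato = total_output(workers) / workers                   # ATO
        print(f"L={workers:2d}  MO={mo:7.1f}  ATO={ato:6.1f}")

Under diminishing returns, MO in this table peaks around L=20 and then declines; a GNOME dataset obeying the law would eventually show the same turnover.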

Further Directions of Research and Work

This essay, alas, raises more questions than it answers. While it has failed to draw definitive conclusions, it does manage to form an initial foray of economic theory into the Free Software world. But there is far, far more room for research. In particular:

In terms of assisting this research, a better range of tools needs to be made available and used for gathering project data. Whilst the CVS system keeps logs, these are by themselves not enough: they are stored in a format which makes meaningful extraction of data difficult.

In Summation

In summation:

  1. The essay cannot conclusively state Free Software Production does or does not Obey the Law of Diminishing Returns.
  2. The essay concludes, on the balance of likelihood, that Bazaar production does not break the Law of Diminishing Returns.
  3. The research would be more profitable if repeated on a larger project
  4. Future research needs to be assisted with better data tools.

 

Open Source Projects Manage Themselves? Dream On

Much has been written about the open source method of software development. One of the most tantalizing statements about open source development is that these projects manage themselves. Gone are the layers of do-nothing managers, bloated bureaucracies and interminable development schedules. In their place is a new paradigm of self-organizing software developers with no overhead and high efficiency.

The dust jacket for Eric Raymond's open source manifesto The Cathedral and the Bazaar makes this statement clearly. It says: "…the development of the Linux operating system by a loose confederation of thousands of programmers -- without central project management or control -- turns on its head everything we thought we knew about software project management. … It [open source] suggested a whole new way of doing business, and the possibility of unprecedented shifts in the power structures of the computer industry." This is not just marketing hype on a book cover, Raymond expands the point inside: "… the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches … out of which a coherent and stable system could seemingly emerge only by a succession of miracles." Other open source adherents make similar statements in trumpeting the virtues of open source programming.

There is one problem with the statement that open source projects manage themselves: it is not true. This article shows open source projects are about as far as you can get from self-organizing. In fact, these projects use strong central control, which is crucial to their success. As evidence, I examine Raymond's fetchmail project (which is the basis of The Cathedral and the Bazaar) and Linus Torvalds's work with Linux. This article describes a clearer way to understand what happens on successful open source projects and suggests limits on the growth of the open source method.

(Note: This article addresses issues raised in the essay titled The Cathedral and the Bazaar. The essay is also included in a book with the same title, which contains other essays as well.)


 

General

See also

Generic critique of foundations of Raymondism

What should be done? I will conclude with a suggested agenda for everyone, whether a commercial software developer or merely a computer user.

  1. Recognize the major contributions of the free software community, from Linux and GCC to TeX, LaTeX and Ghostscript.
  2. Accept that both commercial and free software have a role to play, and that neither will ever go away.
  3. Be respectful of the authors of good free software.
  4. Try to convince them to apply the reciprocal goodwill (in some of the cases cited this may be hard, but one should try).
  5. Refuse and refute the moral defamation of commercial software developers. If you are a software developer, be proud of your profession.
  6. Call the extremists' bluff by questioning their moral premises. Re-establish ethical priorities.
  7. Refuse the distortion of moral values and the use of free software as a pulpit from which to spread ideologies of violence.
  8. Demand (in the spirit of faithful advertising) that the economic origin of "free" software be clearly stated, and that the products be classified as one of "donated", "taxpayer-funded" and the other categories described in this article.
  9. For Microsoft, whose unique position in the community creates unique responsibilities: promote a more open attitude towards the rest of the world; open up; be less mean. You can afford to be.
  10. For everyone: focus on quality. Work with every competent person--free or commercial software developers--to address the increasingly critical, and increasingly ethical, issue of software quality.
  11. Strive to combine the best of what the two communities have to offer.

Salon Technology: Of greed, technolibertarianism and geek omnipotence

In her new book "Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of High Tech," Borsook sounds a round challenge to the techie conceit of total autonomy. She asserts -- among other highly flammable propositions -- that decades of government funding for basic research, aerospace electronics, housing and higher education are conspicuously absent from the standard story of privatized technology heroics. She sees the prevailing libertarian ethos of Silicon Valley and the technology sector as merely a strain of geeky, adolescent narcissism masquerading -- and dignifying itself -- as politics.

It seems like you're contending that technolibertarianism is a rhetorical projection of control-oriented, non-communitarian, arrested-adolescent urges of the preponderantly male geek technocracy. You document a collective, industry-wide failure to grow up and participate in society, as well as a culture that celebrates a massive underdevelopment of its humanity. Did I get that right?

Well, yes, I suppose. Though I should say this book was written over several years, and the culture has changed a bit over that time. One of the very recent changes has been that the übergeek libertarian culture I wrote about has been mated with MBA culture, which brings its own prejudices and religious beliefs to the party.

That's an interesting melding: the masters-of-the-universe MBA culture colliding with awkward geek, "I don't have the world's best social skills" culture. But they love each other's rhetoric and ideology and there's a strange sort of symbiosis going on. Geeks and MBAs intrigue each other for complementary reasons: MBAs like being associated with the geek shibboleths of inventiveness and revolution; Geeks are attracted to the MBAs' promise of making things real through the glamour of money. And both of them like money because it's something that can be counted.

So now, when we talk about high-tech culture, a lot of what we're talking about is really business-speculation culture, and a transplanted Midtown Manhattan advertising culture, or Wall Street financial culture. So, though we may use the words "high tech" these days to refer to this group, they're not all the same kind of person -- but they are finding lots of common ground.

Absolutely. I noticed that at some point in the mid-'90s, we got major culture-creep, when programmers and systems administrators all became covert stock traders on the Web.

Yes. It's horrifying. [laughs] Because -- and I'm not anti-technology or anti-geek -- what is really best about these people is what I call their "curious child" quality -- scientists have it -- that kind of noodling around with code, and zoning out for 36 hours at a time working on something. That's where the really good creative work can happen. But if you have one corner of your monitor that's constantly watching the stock market, or you're thinking about what sort of play you can come up with to impress the institutional investors, well, that's not how really serious technology work gets done. That, to me, is kind of sad and scary, and -- not to sound patronizing -- but kind of a loss of their innocence in a way that I don't think is good.

...It's really important to stress the number of times I've heard computer people say, "Well, I'm not a libertarian like that wacko over there, but I think the government is in people's lives too much," or "I believe in the free market," or advance this fantasy that there's no interdependency or no mesh and that somehow the government doesn't make this a safe, reasonable place to live and make lots of money.

You write about the willful blindness by techies to the prior contributions of government and universities in creating the foundations of the Internet. In your own perfect hypothetical world, what kind of evil torture might you contrive to open their eyes?

I want to say something like an "It's a Wonderful Life" kind of scenario: "Look, Jimmy Stewart -- what would it be like if you hadn't lived!" I mean, you'd have to think of something like that, because, the problem with the Internet example, and all the other examples in the book -- there's nothing new in there, I'm just pulling it all together -- is that the really important thing to remember is the Internet/ARPANET was sheltered from having to make any money for, like, 15 years. So how can you give people an image of what it would be like to live in a world without economic shelter of any kind?

What would it be like to live in a world in which there was never a time-out from the philosophy and pressure of the marketplace? It's kind of like trying to explain to people the notion of prior restraint, or how their world is impoverished by the absence of something they've never seen. And that's a very hard thing to do, and since I'm not fundamentally sadistic, it's hard for me to come up with a torture.

There seems to be this hidden assumption in geekdom that cleverness at coding must naturally generalize to all other areas of knowledge -- as if being a good hacker qualifies geeks to be philosopher-kings as well. Would you agree? Where do you think this feeling of intellectual entitlement comes from?

I would totally agree, but then again I must be a little bit fair -- there are lots of doctors and lawyers and other people who think, well, if I ran the world, I'd know what I was doing. I think perhaps that in the geek world -- because you're basically always creating a system that will work and respond, you're creating little, bounded universes all the time -- I think perhaps it's more tempting to think that this kind of general omnipotence can extrapolate out from that. Because [as a programmer] you're often being brought in to solve a problem -- whatever it is -- and even if you solve that problem poorly, or with unnecessary complexity, the fact is that you're confident in your skill set as a problem-solver. So why wouldn't that extend to everything else?

If the shakeout of the dot-coms continues as we've seen recently, if more companies die off or are eaten by larger companies, and the prevailing technolibertarian meritocracy pose becomes less and less tenable, do you think the rhetoric within the industry will change? As the losers file for protection or seek government intervention, do you anticipate a, shall we say, softening of the libertarian hard line?

I'm very, very curious to see what happens -- whether [technolibertarians] are going to hew to the social Darwinist line or maybe be more intellectually honest, and say it's more a matter of luck -- but I don't see that happening in high tech.

It's a lottery -- it's a less rigged lottery than other things, but it is still a lottery -- so, I'm going to be very curious to see what happens when people want to say, "It's not fair, it's not fair, my company never IPO'ed, but my friend's company went public six months ago, and he got his money out and I didn't, and it's not fair."

So, it's going to be hard to tell because -- you know the notion of cognitive dissonance -- it's like some chiliastic cult that thinks the end of the world is going to happen, and then it doesn't happen, and their fearless leader comes up with an explanation of why the end of the world didn't happen. This ideology is so pervasive and largely invisible that I'm not going to say yes, they're going to have a more sober and mediated view of human life because of witnessing a NASDAQ wobble. I could hope that might happen, but I'd be reluctant to predict it.

ZDNet PC Week: Open source is (so far) an open road to nowhere

[ESR Open Source Movement] It's best at tearing apart the establishment because it consists of underappreciated programmers who suddenly have a voice.

Salon: Open-source bloatware


"Vigor lives. Or, more accurately, it interrupts. Vigor's sole purpose is to annoy users with snide comments that must be acknowledged with a click on the OK button before work can continue."

Linux Today: WebReference.com: BeOS 5 Bounds onto the Scene -- a real competitor to Linux in the desktop space has emerged.

Amid the on-going furor of the OS wars, one contender has often slipped under the radar of most users - the BeOS. This is a shame, really, because the BeOS is one of the most powerful and well-designed operating systems available today. And now, to sweeten the deal, Be Inc. has taken a cue from its Linux competitors and started offering up the Personal Edition of the new BeOS 5 for free download.

Some have called this offer a last-ditch attempt to generate popularity for the OS. To be honest, they may be right. Be Inc. was founded by ex-Apple exec Jean-Louis Gassee in 1990, and though its OS has garnered a good deal of critical acclaim since then, it's never developed much of a user base. That might be changing, however. Since BeOS 5 was posted on the Web on March 28, there have been over 870,000 reported downloads of the 40 meg base installation from partner sites - maybe not enough to scare anyone in Redmond, but by no means a shabby number! The freely downloadable Personal Edition of BeOS 5 is fully operational; the BeOS 5 Pro edition is available for purchase for $69.95, and comes bundled with additional development tools, software and add-ons.

Let's take a look at BeOS and see what the buzz is about.

So What is it?

The BeOS is just that - a whole new operating system. Built from the ground up in the 90s, Be sought to leverage the fact that they did not have to support a large number of legacy applications like other existing OSes. Though each new version of the MacOS and Windows adds additional features (and slowly removes some of the support for outdated system calls, features and hardware), they still have a lot of their history embedded in them, either in the form of actual code, or in philosophical approach. Since Be was starting from a blank slate, they were able to design their OS to use modern hardware to its fullest potential and to incorporate the latest developments in OS design. It's this modern design that gives the BeOS some of its most appealing features.

The Pros

So what are those appealing features?

Stability.
First off, the BeOS is remarkably stable. Though individual applications may crash, it's exceedingly rare for the whole OS to go down. (Chalk up a point for truly modern threading and memory management!) One popular BeOS screensaver points to the chutzpah that Be users have about system crashes - the BSOD screensaver mockingly simulates the Windows "Blue Screen of Death", as well as system crashes on SCO Unix, Sparc-Linux, AmigaDOS, Atari ST and Mac OS.

It also uses a fully journaled file system, which means that you can pull the plug out of the wall at any time without corrupting the disk. Nor will you have to spend several minutes running checkdisk or rebuilding your desktop the next time you boot up.

Speed & Media Optimization.
As a newcomer to BeOS, I immediately noticed the speed at which the OS boots up (and later, how fast it shuts down). The lack of the standard Windows "almost there, almost there" boot up/shutdown song and dance, and the endless parade of extensions on my Mac's startup screen were not things I missed.

BeOS has an internal 64-bit pipeline, and has been built from the ground up to deliver extremely fast response for file access and for a large variety of multimedia types. The typical reviewer remark about this is "imagine playing 6 Quicktime movies at the same time!" I decided to take a different approach, and played 11 MP3 files simultaneously. It was a heck of a sonic mess, but the OS mixed all the output in realtime without flinching.

Attributes.
Every file system saves some sort of information about the attributes of individual files. For most operating systems, this information is limited to the name, size, date of creation and perhaps ownership/access permissions for the file. Under the BeOS, attributes are enormously customizable, and new types can be created by the user. These new attributes can contain data of almost any format, and powerful database-like search and sort functions can be performed system-wide (MP3 files could be sorted by information embedded in their ID3 tags, to offer one very simple example). The OS and your applications can also use attribute information to automate a large number of tasks.
POSIX Compliance.
The Unix-like kernel is accessible from a powerful command line running bash, a command shell which will instantly be familiar to Linux users. Mac and Windows users need not be afraid - the GUI is great, and you'll never have to deal with a command shell if you don't want to. Since the kernel is almost fully POSIX compliant, recompiling existing Unix applications is much easier.
Cross-Platform Support.
You can run BeOS on both Intel and PowerPC platforms, and there's specific support for Pentium III enhancements built into the OS. This is great for programmers, since you can immediately start developing for both sets of hardware. BeOS can also mount, read, and write to any FAT16, FAT32, or Mac HFS formatted disk.

You can't, however, run BeOS on G3 or G4 processors (in other words, any recent Macintosh). The word on the street is that Apple is making it difficult for Be to be ported to these newer machines, but I don't have any firm information about that.
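
The attribute system described above is the BeOS feature with no close equivalent elsewhere. As a rough approximation of the idea, here is a minimal Python sketch using Linux extended attributes (it assumes Python 3 on Linux and a filesystem with xattr support; the file and directory names are hypothetical):

    import os

    def tag(path, artist, album):
        # Attach BeOS-like metadata in the user.* xattr namespace.
        os.setxattr(path, "user.artist", artist.encode())
        os.setxattr(path, "user.album", album.encode())

    def files_by_artist(directory, artist):
        # A crude stand-in for BeOS's system-wide attribute queries.
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            try:
                if os.getxattr(path, "user.artist") == artist.encode():
                    yield path
            except OSError:
                continue  # no such attribute on this file

Unlike this sketch, BeOS indexes attributes in the file system itself, so queries do not have to walk every file.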

The Downsides, & Getting Started

The Cons

Ok, if it's so great, why isn't everyone using it? Well, admittedly, it's not all roses and multi-threaded moonbeams in Be-land. There are some definite cons to the BeOS, and for many people, these may be enough to keep them from even trying it.

Software and Hardware Support.
This is the big one. BeOS, being the "little kid on the block," doesn't have anywhere near the software support of either the Microsoft or Macintosh OSes. It's the catch-22 of any new operating system - if the OS doesn't have any applications, it won't attract any users. And if it doesn't have any users, no one will want to write applications for the OS. And if it doesn't have any applications...you get the idea.

This is not to imply that there are no applications available for BeOS, just that the number is far lower than on other platforms. A quick glance at the BeWare Catalog (http://www.be.com/software/beware/), or at sites like BeBits (http://www.bebits.com), will give you a good idea of the range and scope of available applications. Being both media-optimized and programmer-friendly, Be has software available from several big names in the audio/video/image software community, including MetaCreations, Steinberg, Arboretum, eMagic, VideoWave and more. Most importantly, much of the software available seems to be high in quality, which might make the total number of choices less of a concern.

Hardware support is good, but not complete. While the company and existing user base have done an admirable job of writing drivers and keeping Be compatible with most major hardware, there are definite gaps. As with the software catch-22, most manufacturers won't spend the time and money to develop drivers for a platform with a small user base. On the plus side, BeOS does a wonderful job of automatically recognizing the hardware it does support, so installation of new (supported) devices is extremely easy. For speed demons, the OS also has multi-processor support built into the kernel. More information about currently supported hardware can be found at: http://www.be.com/products/beosreadylist.html

Ease of Use/Learning Curve.
This isn't necessarily a con, as much as it's just a fact of life. BeOS 5 is a complete and independent operating system, so learning to use it will take a little time. Elements of the interface can be changed to more closely resemble other OSes and there are aspects of the interface that will make users migrating from Windows, MacOS or Unix feel more comfortable. The core is distinctly Be though, and to get the most out of it, a new user is going to have to realize that there will be a learning curve. This is especially true for new features like the powerful attributes system that don't have a clear equivalent in other existing OSes.

Getting Going with BeOS 5

I found BeOS 5 to be a breeze to install. After downloading the installation archive (which, at 40 megs, is somewhat large for modem users, but still smaller than most game and application demos these days), I walked through a fairly brainless and painless install process. The new version simply installs itself and its file structure as a single 500 meg file in your existing system, so there was no need to worry about repartitioning, dual-boot voodoo, or file system changes. Booting into BeOS is just as easy - just click on an icon on your desktop. Your OS shuts itself down, and Be pops up. Since I'm running Windows NT, I did have to make an additional boot floppy, but that was equally painless.

Final Words

I have no vested interest in the BeOS - I don't own stock in the company, and had never used it before this review. Though I get as grumpy as the next guy about the foibles of Windows, MacOS and the various flavors of Unix that I work on in any given day, I have no major grudge or agenda against any of those platforms or their parent companies. So bottom line - will I continue to use BeOS 5 in the future? My honest answer is a qualified "maybe". The OS itself seems great: it's zippy, looks nice, and "feels good", if that makes any sense. Though I'm certainly at the start of the learning curve, and need to get much more familiar with the details of the user interface, I like what I see. Every time I use BeOS 5, different elements of the design jump out at me and small but elegant features come to light.

For me, the deciding factor will ultimately come down to software support. Will the available software allow me to accomplish my work? Perhaps more on point, will it make life easier for me, and make up for any time I need to spend learning to use a new OS? I'm not entirely sure yet, but I enjoyed BeOS 5 enough to invest a little more time putting it through its paces and surveying the available software. I'd recommend anyone with an interest in alternative, well-designed operating systems to take a look.

For more information on BeOS 5 and download locations, check: http://free.be.com/

Linux and BeOS -- a real competitor to Linux in the desktop space has emerged.

Linux and BeOS are both getting close to 10 years old, but they have very different histories and are quite different products. The one thing they have in common is they are both alternative operating systems. Rather than looking at them as competitors, let's explore how they can work together to satisfy the needs of a greater audience.

Linux has proven itself to be the fastest-growing server operating system. Its reliability and performance per dollar have helped it get there. Any objective viewing shows Linux as the best choice in the web server world, and its support of the SMB, IPX and Appletalk protocols makes it an excellent choice as a file and print server in a mixed environment.

Linux has also made great inroads into the desktop environment. With the KDE or Gnome GUI and office applications such as Applix, WordPerfect and StarOffice, most users familiar with any GUI-based system can comfortably use Linux on their desktop.

In previous columns, we discussed how Linux is a multi-user operating system, where you generally work in a peer-to-peer environment. That is, any Linux system can be a server or a client. While this has lots of advantages, it does have one disadvantage: administration of a client is almost as complicated as a server.

Enter BeOS, an operating system designed to run on a workstation. BeOS is not multi-user, but it is multi-tasking. To make matters even more interesting, the GUI is part of the operating system, rather than being an add-on like X is with UNIX/Linux or Microsoft Windows is on MS-DOS. What this means is:

  • the user interface for applications is consistent, because it is part of the OS;
  • system administration is simplified, because there is no multi-user configuration and there need not be any server configuration;
  • GUI performance is excellent, because the layers of overhead associated with translation of GUI functions from the application to the OS are eliminated; and
  • writing GUI-based applications is simplified, thanks to a very clean Applications Program Interface (API).

Linux and BeOS Working Together

What all this means is that BeOS offers easy-to-configure clients with excellent performance. BeOS includes a POSIX-compliant command layer, meaning anyone familiar with Linux or UNIX can bring up a terminal window and use their favorite program--whether it be ftp or vi--and those who want a GUI interface won't be disappointed, either.

With NFS available for BeOS (for free), a Be workstation can be connected to a Linux-based network and just look to Linux like any other workstation. For the home user, BeOS connects to the Internet with virtually no effort, making it an ideal home system and offering home users the same system they have on their desktop at work.

What's Missing from BeOS

This may sound like a rerun of Linux statements, but the answer is applications. Like any "new to mass market" operating system, it is going to take some time to catch up. The only office suite available is GoBeProductive. Netscape isn't there yet, but with Be's recent announcement about working with Sun Microsystems to add Java support to BeOS, I expect Netscape is not far behind.

In the last six months, BeOS has gone from a virtual unknown to a serious alternative operating system. Moving into the OS market on the coattails of Linux and in an environment that finally supports alternatives to the Microsoft answer, expect BeOS to get its applications out a lot faster than Linux did.

Recent Announcements About BeOS

Part of what supports my assertions regarding Be's growth into the mass market is related to recent announcements. Here are a few:

  • Be Announces Software Licensing Agreement with Compaq that allows Compaq to pre-install and distribute Stinger, the code name for Be's software platform for Internet appliances, on future Internet Appliance computing devices ...
  • Be and Opera Software Team to Enable Rich Web Content On Internet Appliances
  • Be, Inc. and SSC announce Be Magazine ... (yes, that's us; the first issue will be out in March, 2000.)

[Apr 15, 2000] Slashdot: Several Stampede Developers Depart. Stampede is in trouble and might unfortunately become history. It looks like they overextended themselves in trying to produce an alternative user-friendly distribution. Some problems with leadership also surfaced. But resource-wise they cannot compete with Corel and have no resources for sustainable development.

We have been developers, end-users and general supporters of the Stampede Linux (Stampede GNU/Linux) project for over two years. We have enjoyed working together as a team and pursuing many of our own interests in conjunction with expansion of the distribution. Many of us have grown up on Stampede.

It is now time for us to part company. Due to a number of reasons based on the current administrative nature of the Stampede Linux distribution, we are unable to continue supporting the efforts of the distribution. As a group, we feel that the needs of the group have not been supported by the current model of operations. Though we have tried, we have not been able to effect change in this regard. Despite our strong long-term commitment to the project, it is time to move on.

As a group, we shall endeavor to find a more suitable, democratic forum to express our interests. To those who have supported our efforts over the last two years, we thank you.

Signed...

Open Source And The Joy Of Bureaucracy (Score:2)
by Bowie J. Poag (poag@u.arizona.edu) on Saturday April 15, @09:37AM EST (#38)
(User Info) http://copyleft.net/cgi-bin/copyleft/t037.pl


The fact that Stampede is currently having a few top-level disputes isn't really the main problem here... The fact that another crack has developed in the Linux foundation, however, is.

I think we're in the middle of an interesting time, really. It used to be, a few years ago, that we were all doing this for fun... We didn't care about getting rich. Now, we all wake up and realize that money is starting to get in the way of ideas. The free exchange of ideas has been limited by the fear of getting screwed. Consequently, people are getting pissed off. After all, why should they work for free, when someone else is profiting from what they do?

While I don't think this will eventually spell out an epitaph for the whole Linux movement, it still remains true that a minority of people with less than pure intentions are driving the majority of people apart rather than uniting them under a common umbrella.

Instead of developers doing what they do for fun, now there are nameless people at the top raking in money off their work, and making a lot of people very, very sour in the process... It breeds distrust and resentment in a community that bases its whole existence on mutual trust and cooperation.

Stampede's issues are an indicator of a much bigger problem that supersedes what distribution you run, or what platform you use. It comes down to an issue of people, and how they work together. We all have to agree to play one game, not two. We either continue as we have traditionally, or we become suspicious of each other and suffer the consequences of doing so.

Open Source and the Joy of Exploitation (Score:0)
by Anonymous Coward on Saturday April 15, @10:52AM EST (#82)

After all, why should they work for free, when someone else is profiting from what they do?

If you have a problem with someone taking your work, putting it on a CD, selling the CD, and keeping all the money, then don't license your work under the GPL or the BSD licenses!!

Use one of those other licenses that say "no you can't use this software to make money in any way shape or form". Of course, then you won't get any distribution, and end users still won't pay you. And if your program is at all useful someone else will write a GPL clone and your users will switch, because the GPL program will be on the distro CD's and your program won't be.

You could try the shareware model ... cut out all the distributors and ask end users to send you money. Unfortunately 90% of all end users are dishonest and will never send you a dime.

The more I think about this, the more I empathize with musicians and what they have to go through in order to get distribution and get paid.

Personally I use the GPL and I like it. So far, the most successful distros are the ones that support Free Software (Red Hat, Debian). The ones that try to bundle in proprietary stuff and don't give back (Caldera) aren't doing as well. That's fine with me. All I ever expected to get from writing Free Software is a chance to use lots of other people's Free Software, and I'm getting that.

Re:Reason Why I Left && Other Information (Score:2, Informative)
by khemicals (khemicals@NOSPAM@marblehorse.org) on Saturday April 15, @06:58PM EST (#150)
(User Info) http://www.khemicals.org

Completely apart from the discussion of what happened and what the consequences are, you should be aware that membership in a not-for-profit organization does not provide you with any legal protection whatsoever.

John,

Although I am no lawyer either, there are plenty of people who seem to agree with my point. Check out the following URLs, which say that so long as I am not being paid, am not working outside of my job description, and do not act with malice or intent to harm, I cannot be held liable. This comes from the Federal Volunteer Protection Act signed by President Clinton in 1997. Here are some URLs to info:
http://www.njnonprofits.org/vol_protect_act.html

http://www.ptialaska.net/~jdewitt/vlh/Law/VLHTortLiability.html

http://www.nonprofitlaw.com/quicktipsvol.htm

 

[Feb.22, 2000] Linux: A Bazaar at the Edge of Chaos -- an attempt by a sociology student to explore the phenomenon that turned into a very nice example of Raymondism -- a naive, bordering on blind chauvinism, view of OSS. IMHO there is very little actual sociological research in the paper. But it does represent a rather good attempt to market the postulates of Raymondism. Please note that the web site for the paper contains several interesting letters; one of them I really recommend reading:

DOS is a classic example of this same sort of process. A product gains an early distinctive lead against a competitor based on the bias of a particular group of early adopters. In this case that bias was product branding and reputation associated with IBM. Past the point at which those early adopters pushed market share over a sufficient threshold, the subsequent growth and success of the product is due primarily to a feedback loop.

People don't bother to consider the Macintosh an option because it doesn't have the wide adoption that DOS does or doesn't have the sort of reputation in business associated with IBM.

This EXACT problem is why some of us are drawn to free software. Linux and other products that are free from the constraint of 'surviving' in a free market are better able to compete with products that are similarly 'unconstrained'. Since DOS achieved its 'watershed marketshare', Microsoft has been free of market constraints or the need to quickly adapt in function or quality. This allowed Microsoft to 'sandbag' technologically and not be punished by the market. This led to a situation where the predominant product had significant technical flaws from the point of view of consumers with a different bias.

OSS/Free Software allow for the existence of alternatives that can compete against a natural monopoly.

An OS competing against DOS or Windows not only has to provide the functionality that the core OS itself is meant to provide but also support the usage of a wide range of hardware and have available for it a wide array of useful software. An entire collection of products has to be functionally replicated. This is a heavy burden for any company and historically has been unachievable.

The paper itself is not that impressive from either a technical or a sociological point of view. Here is an example of the author's thinking (after this you can expect a hug from ESR, and that's really the case; see ESR's letter to the author ;-):

However improbable it might seem that the chaotic, frenzied activities of the bazaar can ever outdo the top-down coordination of the cathedral, it is exactly what happened in the case of Linux. Raymond attributes the success of the Linux project to several important characteristics of the bazaar, summarized in the following imperatives:

"As useful as it is to have good ideas, recognizing good ideas from others can be even more important and productive. Build only what you need to. Be modular. Reuse software whenever possible.

Prepare to change your mind, and be flexible to changing your approach. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong. "Plan to throw one away; you will, anyhow."

Feedback is key to rapid, effective code improvement and debugging. Release early and often. Treat your customers as co-developers, and listen to their feedback. Treat your beta testers as your most valuable resource, and they will become your most valuable resource.

Peer review is essential. Given enough co-developers, problems will be characterized quickly and the fix obvious to someone: "Given enough eyeballs, all bugs are shallow."" [14]

Most fundamentally, what these attributes suggest is the importance of frequent version updates. Conventionally, updates are released no more than once or twice a year, only after a majority of the bugs have been fixed, so as not to wear out the patience of the users. In comparison, updates of the Linux kernel were released as often as every day or every week. But rather than wearing out the users, the frequent updates not only kept the developers stimulated, but also exposed bugs directly to the eyes of the users for quick detection. Raymond elaborates:

"In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect. In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena - or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release." [15]

Much of the same process is also at work in the development of the code. Given a user and developer base as large as the Linux community, Torvalds can safely rely on others to write and test the code for him. More likely than not, he can also expect multiple versions of the code from people who are more knowledgeable or interested in the specific component than he is. His task, then, is to simply select the best code to incorporate into the next release of the kernel.

[Feb.22, 2000] Speculative Microeconomics for Tomorrow's Economy -- contains probably the first discussion of comparative merits of dual-track software versus open source software.

A fourth, increasingly important, solution to the transparency problem is open-source software [17]. Open source solves the software opacity problem with total transparency; the source code itself is available to all for examination and re-use. Open source programs can be sold, but whenever the right to use a program is conferred, it brings with it additional rights, to inspect and alter the software, and to convey the altered program to others.

In the case of the "Copyleft license," for example, anyone may modify and re-use the code, provided that they comply with the conditions in the original license and impose similar conditions of openness and re-usability on all subsequent users of the code [18]. License terms do vary. This variation itself is a source of potential opacity [19]. However, it is easy to envision a world in which open-source software plays an increasingly large role.

Open source's vulnerability (and perhaps also its strength) is that it is, by design, only minimally excludable. Without excludability, it is harder to get paid for your work. Traditional economic thinking suggests that all other things being equal, people will tend to concentrate their efforts on things that get them paid. And when, as in software markets, the chance to produce a product that dominates a single category holds out the prospect of enormous payment (think Windows), one might expect that effort to be greater still.

Essentially volunteer software development would seem particularly vulnerable to the tragedy of the commons. Open source has, however, evolved a number of strategies that at least ameliorate, and may even overcome, this problem. Open-source authors gain status by writing code. Not only do they receive kudos, but the work can be used as a way to gain marketable reputation. Writing open-source code that becomes widely used and accepted serves as a virtual business card, and helps overcome the lack of transparency in the market for high-level software engineers.

The absence of the prospect of an enormous payout may retard the development of new features in large, collaborative open-source projects. It may also reduce the richness of the feature set. However, since most people apparently use a fairly small subset of the features provided by major packages, this may not be a major problem. Furthermore, open source may make proprietary designer add-ons both technically feasible and economically rational.

In open source, the revenues cannot come from traditional intellectual property rights alone. One must provide some sort of other value - be it a help desk, easy installation, or extra features - in order to be paid. Moreover, there remains the danger that one may not be paid at all.

Nevertheless, open source has already proven itself to be viable in important markets. The World Wide Web as we know it exists because Tim Berners-Lee open-sourced HTML and HTTP.

The sociologist Anselm Strauss explored the way new forms of document allowed new forms of community (or as he calls them, "social worlds") to come into existence. His work predates the proliferation of computers and so provides an interesting view of the way other developing technologies (copiers, faxes, and so forth) supported social relations in new ways. In particular, new media allowed small communities (enthusiasts of exotic breeds of birds or antique motorcycles) to form though their members were often few, and those few spread over large distances.

These groups can look surprisingly like modern equivalents of the scholarly communities that formed throughout the world in the Renaissance. These too were held together by common interests and shared communications. The letters circulating among the Fellows of the Royal Society in England formed the prototype for scientific journals, which still bind intellectual communities together. And indeed Strauss's initial observations were based on the new ways scholars were building communities of interest with the support of new communications technologies -- and, as we have argued elsewhere, may help understand the future of scholarly communities in the digital age.

Agora


Classic Dilemmas

Garrett Hardin, 1968. "The Tragedy of the Commons," Science, Volume 162, pp. 1243-1248, and at http://dieoff.org/page95.htm


Status Competition

See also Softpanorama Lysenkoism and Pseudoscience Page

[LSH99] Status Competition and the Performance of Work Groups by Christoph H. Loch, Suzanne Stout and Bernardo A. Huberman

Status involves group members evaluating themselves relative to fellow group members according to some shared standard of value. Status has been described in economics, sociology and evolutionary anthropology. Based on this work, the authors treat status and the associated recognition by others as an intrinsic preference of all group members. There is disagreement about whether status competition enhances group performance by pushing group members to work harder, or whether it retards performance by causing unproductive behavior. The authors build a dynamic simulation model of a work group, where members are paid a bonus based on group performance. In addition to compensation, group members value a high status relative to their peers. Status is influenced both by contribution to group output and by non-productive, social activities of status enhancement. Group members allocate their total time between working and non-productive status enhancement, trying to maximize the combined utility from compensation and status rank. They show that status competition can serve to push group members to work hard and perform, provided that it is mainly based on merit. However, if status is also based on political manoeuvring, status competition can lead to low group performance, especially in larger groups. Moreover, group performance may fluctuate and be unstable over time if the result of effort is noisy or if the group does not allow the sharing of ranks. The susceptibility to fluctuation depends on how status is awarded and updated over time. Thus, although a firm may not be able to avoid status competition, it may succeed in influencing its effects indirectly. They demonstrate these results analytically and via simulation.
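The abstract describes the model only verbally. Below is a minimal toy sketch of the mechanism it hints at (not the authors' actual model -- the group size, the linear utility, and all parameters are assumptions), in which members repeatedly best-respond by splitting their time between productive work and status enhancement:

    import random

    N = 8                  # group size (assumed)
    STATUS_WEIGHT = 1.0    # weight of status vs. pay in utility (assumed)

    def utility(work, politics, others_work, merit_share):
        # Compensation: an equal share of total group output.
        pay = (work + sum(others_work)) / N
        # Status score: a mix of merit (real contribution) and politicking.
        status = merit_share * work + (1 - merit_share) * politics
        return pay + STATUS_WEIGHT * status

    def simulate(merit_share, rounds=200):
        work = [random.random() for _ in range(N)]   # time share spent working
        for _ in range(rounds):
            i = random.randrange(N)
            others = [w for j, w in enumerate(work) if j != i]
            # Best response over a coarse grid; time not spent working
            # goes into non-productive status enhancement.
            _, work[i] = max((utility(w, 1 - w, others, merit_share), w)
                             for w in (k / 10 for k in range(11)))
        return sum(work) / N   # average share of time spent productively

    random.seed(0)
    print("status mostly merit-based:    avg effort %.2f" % simulate(0.9))
    print("status mostly politics-based: avg effort %.2f" % simulate(0.1))

When status is awarded mostly on merit, effort converges to its maximum; when it is awarded mostly for political manoeuvring, productive effort collapses -- consistent with the paper's analytical result for larger groups.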

[Roberts99] Scholarly Publishing, Peer Review and the Internet by Peter Roberts. First Monday, Vol. 4, No. 4, April 5, 1999, contains an interesting critique of the scholarly review process that sheds some light on the process of selection of OSS products and the power of sites such as Slashdot and Freshmeat (bold is mine -- NNB)

... The process of peer review has attracted its share of criticism from academics (and others) over the years. A number of commentators (e.g., Agger, 1990; Readings, 1994) argue that scholarly refereeing is inherently conservative. Those selected to be referees, at least for 'established' international periodicals, are generally 'recognised' scholars in their field who have already passed through the various publication hoops themselves. Original work which challenges orthodox views, while ostensibly encouraged, is - in practice - frequently impeded by academics who have a stake in keeping innovative critical scholarship out of respected journals. For if a contributor to a major journal rubs against the grain of conventional scholarly wisdom in a given discipline, it is likely his or her submitted manuscript will have to pass through the hands of one or more academics who are prime representatives of prevailing opinion. In Readings' words,

"Normally, those who review essays for inclusion in scholarly journals know what they are supposed to do. Their function is to take exciting, innovative, and challenging work by younger scholars and find reasons to reject it. The same goes for book manuscripts: one receives a hundred dollars for rejecting a manuscript, but if you suggest that it should be published, the check never seems to arrive" [1].

In the case of journals, much depends on the goodwill of editors. Anecdotal tales of being 'set up' by editors abound in academic corridors. Such experiences - where referees known to be especially 'vicious' in their criticisms, or to have strong prejudices against particular perspectives, are selected - can be devastating for beginning scholars setting out on the path to an academic career. Equally, of course, there is considerable satisfaction for authors when they encounter conscientious referees who submit their reports promptly, with balanced comments, fair criticisms, and constructive suggestions for improvement. Indeed, on many occasions, referees perform an invaluable service in identifying faults the author may not have noticed - faults which, if left unattended, could prove professionally embarrassing. Undertaking refereeing duties can be a thankless task (although many authors make a point of acknowledging the helpful comments and constructive criticisms of referees in the final versions of their papers). It takes considerable time and effort to read scholarly papers and to respond to them thoughtfully. Refereeing can, however, also be a fickle, unpredictable and frustrating process for authors.

Agger (1990) maintains that, given the shortage of journal space and the abundance of manuscripts in most fields of study, the balance of power at present rests very much in the hands of those who edit, review for, and produce the journals. There is, his analysis suggests, simply not enough room for everybody - at least not in 'respected', international journals. Agger claims that much of the writing produced by academics is either never published or ends up in local, unrefereed publications. As a result, it remains - as far as the international scholarly community is concerned - largely 'invisible'. With so many academics competing to publish in prestigious refereed journals, traditional canons of collegial respect and scholarly support can sometimes disappear. Agger observes:

"Academic reviewing becomes even nastier in an extremely competitive marketplace.... [I]t is no longer enough in many disciplines to have two strongly positive reviews and one lukewarm one; all three must be sterling given the rate at which writers submit papers for publication. In this climate, reviewers learn (and teach themselves, circularly) not to read generously but to target the smallest issues in their overall evaluation" [2].

Agger argues that the 'value-free' process of scholarly exchange envisaged by thinkers such as John Stuart Mill has failed to eventuate. Instead, he claims, 'reviewing is frequently done out of petty jealousy, ideological interest or simply incompetence' [3].

THE MERCEDES MENACE

overspent.htm

Postmodern Markets

The Constitution of Status -- Part I


Motivation

Famous Last Words: Ed Muth on Linux. Copyright 1999 Michael Robinson

Integration is a big deal for Microsoft for two main reasons:

  • Free value.
    Integration means never having to build it twice. Once you create the integration infrastructure, interoperable widgets, and graphical doodads for your first product, all that stuff gets transferred over to your second product for free. Furthermore, because of the interoperability, the first and second product together have a value greater than the sum of the parts. Your costs go down, and the market value goes up. The difference goes to the bank.
  • Consumer lock-in.
    If you can control the integration framework, you can keep consumers who buy into it from straying off the ranch. Given the choice between a mediocre but integrated datadiddler, and a better, but completely incompatible datadiddler, they'll go for the integrated datadiddler nine times out of ten.

Linux isn't a product. Linux is an organic part of a software ecosystem. The Microsoft "long-term development road map" had everyone using MSN in the year 2000. Then the ecosystem produced CERN httpd and NCSA mosaic, and we all know what happened next.

Linux doesn't develop according to a proprietary product plan. Linux just automatically grows towards providing the greatest consumer value, because it is consumer built. If anyone anywhere in the ecosystem has a good and valuable idea, the consumers will absorb it into Linux without prejudice.

"We're all in the business of wanting the customer to have the information needed to make informed choices. We haven't seen a flavor of Linux coverage that addresses that. Some criticalness is needed. Some people say positive things about Linux when their message is anti-Microsoft, but I wonder, in 36 months is this the next [Network Computer] or is it a viable OS? We don't see people question the Linux numbers."

This, actually, is true. Press coverage rarely reflects reality because reality is boring. Press coverage needs horse races, conflict, building people up, tearing people down, because spectacle attracts readers and readers attract advertising money.

So there is a hysteresis in press coverage. If it isn't a spectacle, the press will ignore it completely. If it is a spectacle, the press will pump it up into an even bigger spectacle.

While the latter is currently the case with Linux, however, this is still completely beside the point.

... ... ...

Out in the real world, however, the best computer scientists in the world give away the results of their work for a number of very sensible reasons:

  • They get paid to.
    This covers university professors, graduate students, and researchers at public and private research organizations (unquestionably some of the best computer scientists in the world). This also covers employees of many successful companies for which free software is a part of the business model. There is the "give away the razors" model of companies such as Cygnus. There is the "grow the market" model of companies such as O'Reilly. There is the "commoditize the rest of the value chain" strategy of companies such as Oracle and IBM. And there is the good, old-fashioned, "hurt the competition" strategy of Corel.
  • If you want something done right, you have to do it yourself.
    Perhaps the majority of design-by-community software originates because the existing tools for the job didn't meet someone's price/performance standards. Tools are not products. Tools are what you use to produce your products. When someone develops their own tool (as opposed to their own product), they can either sit on it (which helps no one), or they can contribute it to the community software ecosystem (in which case they will directly benefit from any improvements made to the tool by the community). To put it in concrete terms: networking software companies don't develop Samba; network administrators develop Samba.
  • It's a cheap substitute for an expensive education.
    Many very good and experienced programmers work on community-based software projects in their spare time to keep their skill-set sharp and up to date. This translates directly into economic benefits at promotion and job-hunting time.
  • It feeds the ego.
    If Ed wants to argue that it is foolhardy to develop mission-critical business software based on ego gratification, then I merely offer Dave Cutler as Exhibit "A". Ego-based software development may not be "sensible", but it is firmly based in human nature, so it is at least something you can depend on.

OSS organization and leadership

Linux Today - GNU-Linux -- Leader or Follower by Benjamin Sher

I would agree with your point of view -- Linux is a follower, and the task of re-implementing popular products ASAP, although not absolute, is very high on the movement's agenda. The difference from FreeBSD is that, by and large, Linux is an anti-Microsoft movement, so re-implementation of Microsoft products is the top political issue for the movement.

If re-implementation issues are at the core of the movement, any attempt to move it elsewhere (like your recommendations) is doomed -- which actually makes the solution proposed by Benjamin Sher irrelevant.

Another interesting thing is that in attempting to re-implement Microsoft products, some or most of the advantages of open source are lost. The only things that remain are the price difference and the creation of an alternative to monopolistic Microsoft. Again, the main part here is the creation of an alternative to Microsoft -- the movement is fed by anti-Microsoft sentiment, and without it investments in Red Hat and other Linux players would be much smaller.

Also I would like to note that the open source movement is going down the "One Microsoft Way" -- the road of complexity and featurism. And without simplicity the value of open source is much diminished. My main conviction is that the open source movement first of all was, and is, about simplicity -- about an attempt to avoid the overload connected with what Harvard Business School professor Clayton M. Christensen called the technology mudslide: a perpetual cycle of extending existing architecture in pursuit of the latest techno fashion, which Microsoft represents so well.

Please compare with Linux Today - Wired MS Innovator or Integrator

IBM DeveloperWorks Open source Linux Features - Library - Papers: Brooks' Law and open source: The more the merrier? Does the open source development method defy the adage about cooks in the kitchen? By Paul Jones, Director of MetaLab, University of North Carolina, May 2000

While Raymond tells us that "the more the merrier is the rule in an open source project" and Stallman bemoans the scarcity of programmers, Jamie Zawinski, formerly of Netscape and of the Mozilla project, disagrees. "The dynamic of development inside a company is definitely different from development in an open source way. However, I don't think that saying 'adding programmers makes it later' isn't true for open source projects because of their (alleged) lack of deadlines: I think that it's true (in any sense that it is true) only because the definition of 'programmer' is slippery in that case. If you have a project that has five people who write 80% of the code, and a hundred people who have contributed bug fixes or a few hundred lines of code here and there, is that a '105-programmer project?'"

What Zawinski and others have seen is that in such undertakings as the Apache project, Linux Kernel, and other large software projects, the bulk of the work is done by a few dedicated members or a core team -- what Brooks calls a "surgical team."

"In most open source projects," says Zawinski, "there is a small group who do the majority of the work, and the other contributors are definitely at a secondary level, meaning that they don't behave as bottlenecks."

Zawinski goes on: "Most of the larger open source projects are also fairly modular, meaning that they are really dozens of different, smaller projects. So when you claim that there are ten zillion people working on the Gnome project, you're lumping together a lot of people who never need to talk to each other, and thus, aren't getting in each others' way."

Brian Behlendorf of Apache and of Collab.net agrees. "We don't consciously think about it, but I think that the philosophy of keeping things simple and pushing out almost anything extraneous or nonessential to external modules has been followed fairly carefully in Apache. We've also been fairly successful (I think) in 'federalizing' the Apache process to sister projects."

Bezroukov agrees that modularity is essential, but there are limits: "For example, if a project logically consists of three parts and we have two developers, then adding a third developer can be highly beneficial -- the structure of the project will now correspond to the structure of the team. If the project is decomposable into three major parts and we add a fourth developer, the effect will be much less."
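The arithmetic behind this thread is worth making explicit: a team of n developers has n(n-1)/2 potential pairwise communication channels (Brooks' original point), while modularity confines coordination within modules. A minimal sketch (not from the article; the module size of five is an arbitrary assumption):

    # Potential pairwise communication channels in a team of n (Brooks).
    def channels(n):
        return n * (n - 1) // 2

    # If developers only need to coordinate inside their own module,
    # channels are counted per module and summed.
    def modular_channels(team_sizes):
        return sum(channels(n) for n in team_sizes)

    for n in (5, 10, 105):
        split = [5] * (n // 5) + ([n % 5] if n % 5 else [])
        print(f"{n:>3} developers: one team = {channels(n):>4} channels; "
              f"modules of 5 = {modular_channels(split):>3}")

Zawinski's hypothetical "105-programmer project" looks very different through this lens: 5460 potential channels for one undivided team versus a couple of hundred when the work is split into independent modules.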

Leadership of OSS Projects: The Nature of Good Leadership by Douglas Ort (ort@northnet.org). Thanks to Roman Suzi for pointing out this reference.

Structuring and Funding Free Software Organizations by Pedro F. Giffuni, pfgiffun@bachue.usc.unal.edu.co -- interesting comments about democratic vs. authoritarian structures in a simple experiment, as well as insightful comments on the Red Hat IPO (see Midas Touch and Linux IPO Casino for more information about the IPO stuff and the financial future of Linux companies):

In 1950 Alex Bavelas at MIT (1) was interested in the evolution of strategies and in the perceptions of people with different skills who participated in group tasks. An experiment was designed in which groups of participants were selectively isolated and had to solve a participative card game:

As the figures suggest, two structures were tested; the circles represent the participants and the links represent the communication channels. The first structure, where all the participants are organized in a circular fashion, was named the "democratic" structure to emphasize the similar status of all participants. The second structure, in the form of a star, was called the "authoritarian" structure, as one of its members had communication with, and therefore influence over, all the others.

In the first part of the experiment, where all the symbols on the cards were identifiable and had names, members of the authoritarian structure finished first; however, they did not feel like winners -- they felt they had failed and should have done much better. 94% of the team members recognized the vertex as the leader of the group. The democratic structure finished afterwards, yet they were cheerful, they felt like winners, and they also stated they could have done better; the leadership seemed to be equally distributed among them.

In the second part of the experiment a "noise" level was added by making the symbols on the cards more complex; some symbols didn't even have names to identify them. The "democrats" didn't vary their performance much; however, this time the "authoritarians" didn't finish at all. More interesting than the result is the fact that the "democratic" group extended their communication language, inventing new names for some symbols, while the "authoritarian" group increased the noise level by insulting each other.
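The two structures are trivial to render concretely. A toy illustration (not from Giffuni's paper; the group size of five is an assumption):

    # "Democratic" circle: every participant talks to two neighbours.
    def circle(n):
        return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

    # "Authoritarian" star: every message passes through the hub, node 0.
    def star(n):
        links = {0: set(range(1, n))}
        links.update({i: {0} for i in range(1, n)})
        return links

    for name, graph in (("circle", circle(5)), ("star", star(5))):
        degrees = sorted(len(peers) for peers in graph.values())
        print(f"{name}: links per participant = {degrees}")

The output -- [2, 2, 2, 2, 2] for the circle versus [1, 1, 1, 1, 4] for the star -- shows at a glance why leadership was perceived as evenly distributed in one case and concentrated in the vertex in the other.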


The real owners of a free software company should be its developers; this means that, in order to be consistent, companies that make money out of free software should pay at least part of their salaries in shares of the company. In recent weeks some companies, including Red Hat Software, have decided to sell shares to the public as a way to obtain additional resources. In general, it seems like a bad idea; everyone with the least financial background knows this is the most expensive way of getting resources for a company. Investors always expect to receive better dividends from their shares than from any financial institution; otherwise they would simply leave their money in a bank. Red Hat has never produced dividends, and I suspect they never will; this probably means that Intel and the other companies that have invested their money there either wanted to lose it, or have some ulterior motives.

The Red Hat case is particularly interesting: in order to produce at least some earnings for their owners they will have to become the number one Linux distribution, something that would be good for the commercial hardware producers that own it now, since there would only be one Linux distribution to deal with, but that also means annihilating all other Linux distributors. To make things even more interesting, Red Hat won't be able to differentiate their product significantly due to the GNU license, so it will be difficult to kill anyone without making REAL enemies in their own community; this is what I, and probably many people, would call really high financial risk.

Does Free Software Production in a Bazaar Obey the Law of Diminishing Returns -- weak -- architectural problems are completely ignored.


Gift Economy and Anarcho-Communism

Beyond Portals and Gifts Towards a Bottom-Up Net-Economy by Felix Stalder. First Monday

Richard Barbrook and Andy Cameron, 1996. "The Californian Ideology," Science as Culture, number 26, volume 6 part 1, pp. 44-72, and at http://ma.hrc.wmin.ac.uk/ma.theory.4.2.db

The Hi-Tech Gift Economy by Richard Barbrook (see also reprint at The High Tech Gift Economy). Richard Barbrook, PhD, is the coordinator and a founding member of the Hypermedia Research Centre at the University of Westminster. He is co-author with Andy Cameron of "The Californian Ideology", an important critique of West Coast Neo-Liberalism, and has written a number of books including "Media Freedom" (Pluto, 1995). In this piece, excerpted from his forthcoming book "The Holy Fools" (Verso, 1999), Barbrook looks at DIY culture on the internet and other topics.

During the Sixties, the New Left created a new form of radical politics: anarcho-communism. Above all, the Situationists and similar groups believed that the tribal gift economy proved that individuals could successfully live together without needing either the state or the market. From May 1968 to the late Nineties, this utopian vision of anarcho-communism has inspired community media and DIY culture activists. Within the universities, the gift economy already was the primary method of socialising labour. From its earliest days, the technical structure and social mores of the Net have ignored intellectual property. Although the system has expanded far beyond the university, the self-interest of Net users perpetuates this hi-tech gift economy. As an everyday activity, users circulate free information as e-mail, on listservs, in newsgroups, within on-line conferences and through Web sites. As shown by the Apache and Linux programs, the hi-tech gift economy is even at the forefront of software development. Contrary to the purist vision of the New Left, anarcho-communism on the Net can only exist in a compromised form. Money-commodity and gift relations are not just in conflict with each other, but also co-exist in symbiosis. The 'New Economy' of cyberspace is an advanced form of social democracy.

"Cooking Pot Markets: an economic model for the trade in free goods and services on the Internet," by Rishab Aiyer Ghosh,   First Monday, 1997, Volume 3, number 3 (March).

"The Attention Economy: The Natural Economy of the Net", by Michael Goldhaber, First Monday,  1997, Volume 2, issue 4


Marxism, Anarchism and Libertarianism

 Manifesto of the Communist Party by Karl Marx and Frederick Engels.

A spectre is haunting Europe -- the spectre of communism. All the powers of old Europe have entered into a holy alliance to exorcise this spectre: Pope and Tsar, Metternich and Guizot, French Radicals and German police-spies.

Where is the party in opposition that has not been decried as communistic by its opponents in power? Where is the opposition that has not hurled back the branding reproach of communism, against the more advanced opposition parties, as well as against its reactionary adversaries?

Two things result from this fact:

I. Communism is already acknowledged by all European powers to be itself a power.

II. It is high time that Communists should openly, in the face of the whole world, publish their views, their aims, their tendencies, and meet this nursery tale of the spectre of communism with a manifesto of the party itself.

Slashdot Articles ESR Responds to Nikolai Bezroukov -- this discussion contains a lot of information about these three political movements and probably could be a good start ;-). For example, here is one response:

Re:ESR should go out sometimes (Score:2, Insightful)
by AndyElf (andrei.popov(a)usa.net) on Saturday October 09, @04:28AM EDT (#192)
(User Info) http://free.prohosting.com/~andyelf/

Bezrukov's article reminds me very much of good old Soviet times, high school and/or college in Russia, and myself reading obligatory articles written by Marx, Lenin, Plekhanov and other theorists of socialism/communism. Yet it is largely the style, and just a little bit the content, that does that: the title of the article sounds very Lenin-like -- Vladimir Ulianov liked very long names for his works.

Yet as I said, what is in the article (putting aside stylistics) is not really Marxist or Leninist. It is a fairly good critique, the whole essence of which, IMHO, can be phrased like so:

OSS is not exactly such a novel thing -- it has been known in scientific community for a long time. Current hype and success of it has to be largely attributed to Linux, but it is too naive to claim that OSS as a new software development paradigm means the total obliteration of any other way of developing software.

Before you flame, I know that I have left much of the article out. But I think that the pages and pages left behind just illustrate and support the above-stated points.

Bezrukov does attack ESR, going as far as putting his name in the article's title. Why? Because ESR represents exactly this naive, bordering on blindfolded chauvinism, view of OSS. ESR, propaganda is one thing, reality is a whole lot different.

Yes, Linux popularity grows and it is the only OS rapidly gaining ground. This growth is not solely cannibalistic (at the expense of other *nixes), as Microsoft would want it to look. But there are problems as well: there still are problems with fitting Linux into a business environment (office productivity suites like StarOffice and Applixware are not exactly a good match for Microsoft Office, while they may actually be as good if not better than Lotus SmartSuite and Corel PerfectOffice). I am not sure that making Linux easier and easier to install will matter as much to its success: good publicity, applications, and credibility will do a much better job than no-pain-five-minutes-see-mom-no-hands Linux distros. After all, if the business community is the target, their users will not be installing the system -- IT people will. The home user is a whole other matter -- and a topic for a separate discussion.

ESR, being a very good and a bit less extreme public person for OSS and Linux, has done and still does a very good job publicising the movement. Yet he is not perfect. He also seems to have gotten carried away lately; maybe someone else truly needs to take over part of his job? He has taken Bezrukov's article way too personally -- so much so that he did not even consult a dictionary before using certain terms, like socialism, Marxism, communism. I do not really want to flog this horse one more time, but:

  • socialism is "any of various economic and political theories advocating collective or governmental ownership and administration of the means of production and distribution of goods" (Merriam-Webster). The root here is Latin socium -- same as in society. It does not have to contradict freedom of choice, be it personal freedom or market freedom. It is rather that years of Soviet and Chinese socialism (read: vulgar communism) have created bad publicity and a certain mind-set, especially in Western people. I am not very fond of the term (and frankly the M-W definition is quite lousy too), just as well as of some ideas behind it, but OSS/FSF does have certain similarities to it (this collectivistic, socium-oriented view), especially when publicized by ESR. It sometimes even borders on utopian communism.
  • communism is "a: a theory advocating elimination of private property b: a system in which goods are owned in common and are available to all as needed" (same source). If you read Sir Arthur C. Clarke's Final Odyssey you'd remember him mentioning that communism is an ideal and perfect (hence utopian) society, but it is possible only on an insect or small-animal level. Humans are too complex. It will probably take aeons to reach such an outstanding level of consciousness that would permit communism to come into being.
  • Marxism is "the political, economic, and social principles and policies advocated by Marx; especially: a theory and practice of socialism including the labor theory of value, dialectical materialism, the class struggle, and dictatorship of the proletariat until the establishment of a classless society" (M-W again). If you leave out the political crap that starts in, AFAIR, the 3rd volume of Capital (political and class struggle, etc.), it is more of a political economy textbook, similar to Adam Smith's The Wealth of Nations.

You can never really put an equal sign between these terms. Moreover, a free-market economy adept should really take time and put some effort into reading Marx's Capital: it is a great description of the free market and how it functions. Do not let popular propagandistic views of Marx, socialism and communism prevail -- it is like thinking that all Scots wear kilts all the time, Dutch people ride bicycles on icy canals in wooden shoes with baskets of tulips in both hands, or Russians drink vodka from a samovar every morning while chit-chatting with bears.

Sunday Observer - Features: Marx rediscovered as the prophet of globalisation? Francis Wheen's Karl Marx: The Man Rather than the Revolutionary, reviewed by Bernard H. Moss

Francis Wheen's Karl Marx has been the talk of the town in London. Who would have imagined that a biography of the obscure German philosopher-activist vilified and discredited by the British establishment since his death, and again entombed with the collapse of Communism, should arouse such public interest and sympathy?

The book benefits from a Marxist revival of sorts. Marx has been rediscovered as the prophet of globalisation and the technological revolution. The collapse of Communism has left exposed the running sores of inequality and misery caused by capitalism and has freed its critics from Cold War inhibitions. It is almost as though the collapse of the one tangible alternative, the Soviet Union, once an inspiration but latterly a counter-model for most activists, has left capitalism without real legitimacy.

Wheen's biography adds another dimension, the human one. He shows that Marx was neither an ogre nor a saint, but a multifaceted, complex human being, loving, domineering, jealous, bilious, and convivial, suffering from carbuncles and from the death of his children. We learn that he went on a drunken spree on Tottenham Court Road just like us, romped with his family on Hampstead Heath and sent his daughter to a South Hampstead finishing school as socialists are not supposed to do.

However prone he may have been to gratuitous polemics, vendettas and rumour-mongering, Marx is shown to be far more honourable than such erstwhile movement rivals as Lassalle and Bakunin.

The book debunks many scholarly myths about Marx, including his alleged anti-semitism, anti-worker elitism or social Darwinism. It is written in a rather fussy and self-conscious literary style - 'the politics of flibbertigibbets'? - that is more evocative of Victorian than present-day London.

But the human sympathy for Marx comes at the expense of an analysis of his contribution as a revolutionary, the fighter for the working class who was the subject of the classic biographies by Mehring and Nicolaievsky.

Not that Wheen is uninformative about Marx's work and politics. The book contains insight into the Communist Manifesto as a guide to action in 1848 and into Capital as a work of Hegelian art and paradox that has proved less useful as a practical guide. But Marx is largely dissociated from the working class movements to which he contributed and especially from his legacy in Social Democratic and Communist parties. This means that he is relieved of responsibility for the Russian Revolution and everything that followed.

One has only to look at the British Labour Party, which was almost devoid of Marxist influence, or Marx's own German Social Democrats, where the Lassallean legacy was strong, to see that socialist parties would have emerged without Marx. Indeed, the young Marx derived his politics of class struggle and socialism from histories of the French Revolution and such republicans as Louis Blanc and Auguste Blanqui (background influences ignored by Wheen), and the first socialist workers' revolution, the Paris Commune, took place under the aegis of the First International without Marx's intervention.

However, it is very hard to imagine a successful Russian or Chinese Revolution without Marx and Lenin. Whether that is to Marx's credit or dishonour, it is this practical connection to the real world, the unity of theory and practice, that made Marx so special as a writer and thinker.

Marx alone lent his name to the great political movements and revolutions of this (20th) century. This is doubtless why he was recently chosen in a BBC Online poll as the thinker of the millennium.

The human dimension explored by Wheen raises the problem of Marx's fallibility. Personal traits and circumstances -- his health, the need to earn a living, his impatience with rivals -- influenced what he did.

He wasted much time in feuds with obscure rivals like Herr Vogt. Rather than abide competition from Willich in the communist International in 1850 or from Bakunin in the workers' International in 1872, in each case he quit the organisation. To earn money in the 1850s he wrote many journalistic articles on Palmerston's diplomacy that lacked a class perspective. His liver ailment interrupted his defense of the Paris Commune.

If Marx never made an impact on the country in which he spent most of his adult life, Britain, it was due in part to the fact that he was a heavy-accented foreigner in a country with no fondness for Continentals.

But even the unity of theory and practice was only approximate and problematical. Marx never wrote, as Wheen points out, theoretical treatises or doctrinal catechisms. Most of his theoretical works - with the exception of Capital - were really written for the purpose of self-clarification. They were also partial approximations of the truth that reflected particular influences, circumstances and periods of his life: the Feuerbachian essentialism of the Economic and Philosophical Manuscripts, the almost reductionist materialism of The German Ideology, the Hegelianism of the Grundrisse.

Even if the broad distinction between the young and mature Marx is accepted, one must still account for the sharp breaks in Marx's political strategy that occurred as the working class movement developed: the creation of a communist International with Blanquists in 1850, followed a few months later by fourteen years of political inactivity, the formation of the workers' First International in 1864 and its abandonment in 1872, and finally Marx's sponsorship before his death of directive parties led by middle class intellectuals, Jules Guesde and his son-in-law Paul Lafargue in France and Henry Hyndman in Britain, that ran rough-shod over working class traditions.

Wheen touches on these breaks in a fuzzy artistic way. Marx was good at strategy but poor at tactics, in politics as in chess, he says. "Marx's adult life has a tidal rhythm of advance and retreat, in which foaming surges forward are followed by a long withdrawing roar". Marx spent the 1850s passively awaiting a crisis that would revive the working class movement. One could say that Marx was only following the objective rhythms of history, the advances and retreats of the workers movement, but there is the question of tactics and timing, the uneven development of the movement in different countries, the preference for a theoretical advance that anticipates the future - the communist turn of 1850 prefigured the Soviet Revolution - over gradual progress, that remain matters of judgment rather than of science.

If Marx, like Lenin, was a revolutionary of circumstance and not dogma, then he too was, as Wheen recognises, susceptible to error. But of course, Marx was not merely a genius of circumstance, as Wheen is inclined to believe. Despite shifts in Marx's philosophic discourse and political strategy there is still a constancy in Marxism and its method that makes it the most radical and thus the most scientific approximation of human history.

(From What Next? No. 16, 2000 published in Britain)

See also Professor James Bolner Links  and  Introduction to Political Science for additional links on the subject

Anarchism Triumphant: Free Software and the Death of Copyright

Non-propertarian production is also directly responsible for the famous stability and reliability of free software, which arises from what Eric Raymond calls "Linus' law": With enough eyeballs, all bugs are shallow. In practical terms, access to source code means that if I have a problem I can fix it. Because I can fix it, I almost never have to, because someone else has almost always seen it and fixed it first.
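The quantitative intuition behind "Linus' law" is easy to state. A back-of-the-envelope sketch (an idealization, not Raymond's or Moglen's math -- it assumes reviewers are independent and share a fixed per-reviewer detection probability):

    # If each of n reviewers independently spots a given bug with
    # probability p, the bug survives all of them with probability (1-p)**n.
    p = 0.05   # assumed per-reviewer detection probability
    for n in (1, 10, 100, 1000):
        print(f"{n:>4} reviewers: bug survives with probability {(1 - p) ** n:.2e}")

Whether real co-developers are anywhere near independent -- or whether "a thousand eager co-developers" actually read the code at all -- is precisely what critics of the "many eyeballs" argument dispute.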

For the free software community, commitment to anarchist production may be a moral imperative; as Richard Stallman wrote, it's about freedom, not about price. Or it may be a matter of utility, seeking to produce better software than propertarian modes of work will allow. From the droid point of view, the copyleft represents the perversion of theory, but better than any other proposal over the past decades it resolves the problems of applying copyright to the inextricably merged functional and expressive features of computer programs. That it produces better software than the alternative does not imply that traditional copyright principles should now be prohibited to those who want to own and market inferior software products, or (more charitably) whose products are too narrow in appeal for communal production. But our story should serve as a warning to droids: The world of the future will bear little relation to the world of the past. The rules are now being bent in two directions. The corporate owners of "cultural icons" and other assets who seek ever-longer terms for corporate authors, converting the "limited Time" of Article I, §8 into a freehold have naturally been whistling music to the android ear [26]. After all, who bought the droids their concert tickets? But as the propertarian position seeks to embed itself ever more strongly, in a conception of copyright liberated from the minor annoyances of limited terms and fair use, at the very center of our "cultural software" system, the anarchist counter-strike has begun. Worse is yet to befall the droids, as we shall see. But first, we must pay our final devoirs to the dwarves.

...Oscar Wilde says somewhere that the problem with socialism is that it takes up too many evenings. The problems with anarchism as a social system are also about transaction costs. But the digital revolution alters two aspects of political economy that have been otherwise invariant throughout human history. All software has zero marginal cost in the world of the Net, while the costs of social coordination have been so far reduced as to permit the rapid formation and dissolution of large-scale and highly diverse social groupings entirely without geographic limitation [32]. Such fundamental change in the material circumstances of life necessarily produces equally fundamental changes in culture. Think not? Tell it to the Iroquois. And of course such profound shifts in culture are threats to existing power relations. Think not? Ask the Chinese Communist Party. Or wait 25 years and see if you can find them for purposes of making the inquiry.

...28. Eric Raymond is a partisan of the "ego boost" theory, to which he adds another faux-ethnographic comparison, of free software composition to the Kwakiutl potlatch. See Eric S. Raymond, 1998. Homesteading the Noosphere. But the potlatch, certainly a form of status competition, is unlike free software for two fundamental reasons: it is essentially hierarchical, which free software is not, and, as we have known since Thorstein Veblen first called attention to its significance, it is a form of conspicuous waste. See Thorstein Veblen, 1967. The Theory of the Leisure Class. New York: Viking, p. 75. These are precisely the grounds which distinguish the anti-hierarchical and utilitarian free software culture from its propertarian counterparts.

An Anarchist FAQ Webpage

Chomsky Discussion Salon

"Anarcho-capitalism, in my opinion, is a doctrinal system which, if ever implemented, would lead to forms of tyranny and oppression that have few counterparts in human history. There isn't the slightest possibility that its (in my view, horrendous) ideas would be implemented, because they would quickly destroy any society that made this colossal error. The idea of 'free contract' between the potentate and his starving subject is a sick joke, perhaps worth some moments in an academic seminar exploring the consequences of (in my view, absurd) ideas, but nowhere else."

[Noam Chomsky on Anarchism, interview with Tom Lane, December 23, 1996]

Anarchism vs. anti-Bolshevik Marxism

How to organize societies.

Marxism and religion

McCain Library Title List -- books connecting Marxism and Christianity. This topic is studied in several university courses. See for example 383I. Christianity and Marxism, Liberation Theology (Department of Social and Economic Studies), or RELIGION AND THE CULTURE OF THE TWENTIETH CENTURY I

See also Movements and Issues in World Religions A Sourcebook and Analysis of Developments Since 1945 -- Religion, Ideology, and Politics that contains several essays in Part 4:

Finding a Social Voice: The Church and Marxism in Africa

DSS Links - Sociology (General)

Jerry Eades and Caroline Schwaller (eds.), Transitional Agendas -- Working Papers from the Summer School for Soviet Sociologists

Etc

Open source @ Links by Moskalyuk


Last modified: March 09, 2002