
[This piece was originally a talk for the Research Triangle Computer Law Roundtable in May 1997, and was intended to introduce Copyleft and Linux to an audience of IP attorneys, only two of whom had heard of Linux. The Linux material at the end of the talk was added for the presentation at the Linux Expo in Durham, N.C. in May 1998. Six months later, events are beginning to overtake predictions.]

 

 

COPYLEFT AND THE RELIGIOUS WARS OF THE 21ST CENTURY

 

Donald K. Rosenberg

Stromian Technologies®

This is a Single Page Divided into Six Sections:

I. Introduction
II. In the Beginning
The 1960's
The 1970's
III. The GNU General Public License Defined
The 1980's
IV. UNIX and the Great Betrayal
1979
V. The Great Awakening
The 1990's
VI. Armageddon
Into the Next Millennium
Linux: Salvation or False Paradise?
Fight the Good Fight

Introduction

Articles about Linus Torvalds and his UNIX-clone operating system, Linux, have been appearing in the popular press for some time now, but there has been no discussion of his half-serious, half-joking intentions of World Domination other than an occasional reference to Microsoft (which always talks seriously about World Domination in terms of "embracing and extending" new standards such as Java). Most people, even if they are computer users, have trouble making sense of the larger movements in the computer world, dominated as it is by hype artists who unleash underperforming products with multi-million-dollar promotion budgets. Today I would like to go back over thirty years to the gentler origins of what is actually a great contest, and look ahead to the Armageddon which will soon be upon us. The religion metaphor comes easily to hand because that is how software people discuss operating systems; in the days when the Macintosh was the cleverest microcomputer system around, its devoted followers gave rise to the term "Mac bigot," which others used to describe their impatient and intolerant attitude towards other systems, which in those days were character-based. Unfortunately, the Mac religion turned into a cult, and its votaries appear to be heading down the trail into obscurity already taken by the WordStar fanatics.

But to return to religion and operating systems, what is DOS/Windows but mainstream religion (or orthodoxy, if you will); Pick could be called the Quakers, doing good in quiet ways that no one ever notices; OS/2 might be called Christian Science (its followers say it really works, but nobody believes them), and the UNIX tribes follow strange gods, unknown to the rest of us. Why are all these obscure things suddenly important? Why does Mac cry betrayal by Windows, and why does a geekish OS vendor find himself on the wrong end of a Senate investigation committee? Money, of course.

 


In the Beginning

The 1960’s

The Apostle Paul told us that the love of money is the root of all evil. St. Francis believed it, and many worthy divines of our own age tell us they do as well, but some thirty years ago bunches of young Americans suddenly made the twin discovery that our society was based on the love of money, and that that love was corrupting everything around it. The Hippies believed that Love was the most important thing in the world, and that love for one’s fellow creatures and the whole earth would make the world a better place. An expression of this belief was the Diggers, a volunteer group who took their name from the seventeenth-century English Diggers, a harmless and long-vanished sect of agrarian radicals, and who attended the giant rock concerts of the age to offer first aid and comfort to those that hungered, thirsted, and were suffering from bad acid. They believed that their freely giving to help others was part of the whole effort to make the world better.

Their sober elders thought this world was a product of fantasy, and it was, but many aspects of our modern world are products of the fantasies of that time. Tolkien’s work, The Lord of the Rings, was very popular in those days, and names from that fantastic history dot the computer industry to this day: Gandalf and Palantir, to name only two. The book was an epic of combat between good and evil, and appealed strongly to Hippies. Combat? Yes, Hippies had an aggressive side. This was their hostility to authority of all kinds. I still remember a picture of Goofy (these people were no respecters of copyright) pointing one of his three white-gloved fingers at the beholder and admonishing him, "Don’t fergit t’ Smash the State!"

In the computer world it was very easy to identify authority. It was embodied in the datapriests, who tended large machines in glass temples, and who controlled access to those machines under rather humiliating terms: punch your cards somewhere else, bring them in, neatly organized in exact order, and we’ll run them and you can see the results--and the errors--tomorrow, maybe. Everybody hated the datapriests, and when PC’s first became available twenty years later, droves of middle managers surreptitiously acquired these new machines on their expense accounts so they could get out from under the thumb of the datapriests.

This anti-authoritarian impulse is a Hippie trait. In Berkeley, Lee Felsenstein organized computers on the street; they had to be terminals, of course, given the technology of the day, but the principle was that every person who walked down the street could have access to computer power, could read and post on bulletin boards, and become a citizen of the great cyberworld. If you remember what in those days walked up and down Telegraph Avenue and to this day lies in its gutters, you will see what a bold, Digger-like move this was. We could call Lee the Father of the personal computer. Power to the People.

This is the period that saw the rise of Ted Nelson and his Xanadu project, which was to put on-line not just the best that has been thought and said, but all things that have been and are being thought and said, accessible to all. Ted was foreshadowing the World Wide Web, where almost anyone can post anything, and you are free to read it if you care to. No librarians, no datapriests, no mediators.

In those days of thirty years ago, software was not bought and sold; it was given away with the expensive hardware, which was useless without it. Software was not copyrighted--even the DEC OS was that way, and IBM published source code into the 1970’s. There were no rights and business practices for software, because one company’s software did not run on another company’s hardware, and the only way to sell an expensive machine was to throw in the software to make it useful. Nothing was cross-platform in those days.

The 1970’s

If you loved computers, the best place to be was in a computer laboratory. Richard Stallman describes the MIT Artificial Intelligence laboratory as a paradise in the 1970’s, and he certainly had an epiphany there. The power of that experience of paradise (it was more than a vision) has marked Stallman to this day with the double-faced virtue/vice combination of being utterly implacable in a worthy cause--free software. Stallman is the Father of Copyleft.

At MIT, when terminals were introduced, students had to share the few allotted to them, but professors each had one in their offices. Stallman felt he was being perfectly reasonable in breaking into the office of any professor who kept it locked so that students couldn’t use his badly-needed terminal when he wasn’t there (say, at 3 a.m.). Stallman opposed passwords, and promoted the use of a single carriage-return as a password for everyone. His reasoning goes back to why he regarded the lab as a paradise: the system was open, all participants had full control, and there was no central control or passwords. To put it very clearly, all software was free, and it was freely modifiable...by anyone.

As Stallman tells it (and you can visit his voluble Free Software Foundation Web site at www.gnu.org), his paradise was ruined by the root of all evil, the love of money. First, the capable programmers were hired away from the lab by the emerging commercial software vendors, who offered them so much money they couldn’t refuse. These vendors sold commercial software for money, and because they had raided (among other places) the MIT AI lab, too few competent programmers were left to maintain the systems. This shortage led to the introduction of closed, proprietary software, which users could not really understand or modify. Stallman believes in freely distributable software, freely modifiable, and with source code available, and he acted on these beliefs by going off to start the Free Software Foundation. He believes that the wrong analogy was chosen for software distribution, that of the copyrightable book, with all the proprietary and limiting powers of the copyright. Stallman thinks that the recipe would be a far better analogy: it is a solution to a problem, but it can’t be copyrighted. Anyone can modify it and anyone can pass it on. But my favorite analogy is the legal argument: you can invent it, use it to win your case, and then it is freely available for other people to use. Some of Stallman’s arguments on the FSF site amount to the Miracle of Loaves and Fishes, implying that the originator of the program is not impaired no matter how many people freely use his program.

Let’s be clear about what Stallman means by free: you can charge money to cover your distribution costs, but the software itself carries no charge, and the recipient is free to pass it on. Because the software comes with its source code, the user can make any modifications he pleases. If and when he passes on the software and its modifications, he must also pass along the source code. Free does not mean that no money changes hands at all, but it does mean free from the external control of the software police, or of the government. Hippie sentiments.

 


The GNU General Public License (GPL) Defined

The 1980’s

Being a man of strong principles, Stallman started the Free Software Foundation, and began building software tools (where else was the new free software going to come from?) and giving them away with source code. Income was to come from consulting services related to the software. Along the way, Stallman reviewed existing methods of free distribution and authored the most important document for Free Software: The GNU General Public License, commonly called the GPL. This license gives the user three rights: 1) to use the software, to copy it, and to give it away; 2) to change the software; and 3) to have access to the source code. The key requirement is that the user pass on these rights, unimpaired, to other users. This automatically means that any changes passed on by the user must be distributed in source code form as well. If passing along the source code is an inconvenience, you don’t have to do it with every copy, but you must say with every copy that source code is available, and tell how to obtain it. You can charge for the distribution costs of the program and source code--free refers to liberty, not to cost.
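In practice these terms are asserted right at the top of each source file. The license itself suggests a short notice for this purpose; a sketch of what one looks like appears below, rendered here as a C comment, with the program name, year, and author as placeholders:

    /*
     * frobnicate -- one line describing what the program does (placeholder)
     * Copyright (C) 1998  Jane Hacker (placeholder author)
     *
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details.
     */

The notice does two jobs at once: it asserts the copyright on which the whole scheme rests, and it tells every recipient exactly which freedoms travel with the copy.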

The whole process is commonly called "copylefting;" anyone can use the General Public License, and a copy of it must go with each copy of the distributed program. The smart software programmer will first put a copyright notice on his program; this simple procedure costs nothing and is an essential step to asserting control over the distribution rights for the software. Once copyrighted, the program can be distributed in any way the copyright holder chooses: he can OEM license it to IBM, sell it through Egghead, and still distribute it under Copyleft. Any software that happens for some reason to be in the public domain is at risk, for anyone is free to take it, modify it, and copyright and sell the result, thus returning the software to proprietary ways. In theoretical terms, Copyleft eliminates the middleman who takes away the user’s freedom. The GPL itself is copyrighted 1989, 1991 by the Free Software Foundation. As a document, the license states that everyone is permitted to distribute copies of the license, but no one is allowed to modify it.

This documentary form of Copyleft is found in books--check some of the Linux books in any large bookstore. The notice typically reads "Verbatim copying and distribution is permitted in any medium, provided this notice is preserved." The freedoms conveyed are enormous. How many times has a reader wanted a book that the publisher did not see fit to reprint? The reader might legally make a copy for personal use, but he couldn’t give it to friends or pass out 30 copies to a class. With Copyleft a book need not go out of print. And while it might be difficult to photocopy a 400-page book, if we leave the world of atoms and go to the world of bits (as a certain man, also from MIT, likes to Negropontificate), nothing is easier than copying and sending a computer file. In the case of documents, no modifications are allowed; the author need not fear having his views truncated or misrepresented.

To return to Copyleft for software, note that the GPL covers only copying, distributing, and modifying--everything else is outside the limits of the license. Running the program is not restricted, and the output is covered only if the output is a work based on the program. Looking at the license in more detail, we see that 1) verbatim copies of the program may be distributed, but must be accompanied by the GPL, along with any notices referring to the GPL, and including the copyright notice and a disclaimer of warranty for the software; fees are allowable to cover a) the costs of the copy and its distribution and b) warranty service; and 2) the freedoms to copy, modify, and distribute stipulate that distribution of changes can be made only with notice of the changes and who made them. Proprietary products that simply link to GPL software are allowed to remain proprietary; only derivative products need be placed under the GPL. The GPL notice must be displayed on start-up. And finally, the injunction that displays the legal mind at work: no copying, distribution, or modification may take place except under the terms of the GPL.
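The start-up requirement is easy to picture. A minimal sketch in C, with the program name, year, and author once again standing in as placeholders, might print something like this when an interactive program begins running:

    #include <stdio.h>

    /* Print the announcement the GPL asks interactive programs to show on
     * start-up: a copyright line, a disclaimer of warranty, and a pointer
     * to the conditions for redistribution. */
    static void show_license_notice(void)
    {
        printf("frobnicate version 1.0, Copyright (C) 1998 Jane Hacker\n");
        printf("frobnicate comes with ABSOLUTELY NO WARRANTY.\n");
        printf("This is free software, and you are welcome to redistribute it\n");
        printf("under certain conditions; see the GNU General Public License\n");
        printf("for details.\n");
    }

    int main(void)
    {
        show_license_notice();
        /* ...the rest of the program... */
        return 0;
    }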

The logical, happy, but somehow unexpected result of many people following this license is that much of the free software floating over the Internet, and all of the software in the Linux kernel, has the same licensing terms, greatly simplifying compound distributions. There are other free licenses besides the GPL; these include the MIT X License and the BSD-style licenses (none insist on source code distribution). The new licensing emerging for the Netscape browser differs chiefly in its concern to combine GPL-style freedoms with proprietary software.

The software that Richard Stallman was cooking and giving away was part of a system he called GNU (as in the GNU General Public License); in those pleasant recursions beloved of programmers, the name stands for GNU’s Not UNIX. Actually, GNU is a lot like UNIX; it just isn’t UNIX, that’s all. GNU is a clone of the UNIX system. As we get into what UNIX was, is, and shall be (or might be), we will look at the other driving force of Richard Stallman’s crusade for Free Software.

 


UNIX and the Great Betrayal

1979

Although now as fractured and quarrelsome as Protestantism, UNIX was originally a great seamless religion, though one limited to a datapriesthood of formidable intellectuality and consequent contempt for the uninitiated. It was developed in the rarefied atmosphere of Bell Labs, which had a collegial environment akin to the MIT AI lab. UNIX was originally conceived of as a higher abstraction from the machine-specific languages of the time: it was designed to be cross-platform. UNIX was freely passed by Bell Labs to other software researchers; because source code was included, a great many developers around the world contributed improvements, which were incorporated into UNIX by Bell Labs on an ongoing basis. UNIX was a favorite in universities.

The tightening of UNIX licensing that began with Version 7 in 1979, and the breakup of Bell that followed in 1984, caused auditing eyes to value every possible asset, including UNIX. If the software had value, obviously it could not be so freely distributed as formerly. Users now had to pay handsomely for the source code; as a consequence, it was no longer studied in computer classes. Once UNIX was launched as a commercial product, adaptable by design to run on numerous platforms, its natural customers were the hardware companies. These in turn sold their machines and operating systems to large corporations. These iron-vendors had to find software somewhere, or their machines were useless. They purposely chose UNIX for its power and scalability, vital for running large systems in well-heeled corporations, but they did not share the UNIX ideal of the same OS running on many different boxes. Each vendor promptly made improvements in the source code he received. These improvements consisted of changing the OS code so that it would run faster on the vendor’s particular hardware. That it would now not run on someone else’s box did not dismay the vendors, and they did like the result that only their own applications would run on their own version of UNIX on their own hardware. That is how you lock customers in. DEC did it, HP did it, IBM did it, Tandem did it, everybody did it. And so began the UNIX Wars of the 1980’s, the computer world’s equivalent of the Thirty Years’ War. Of that waste and destruction we shall not speak.

 


The Great Awakening

The 1990’s

The commercialization of software and the commercialization of UNIX in particular were the forces that drove Richard Stallman to try to restore the software paradise lost. UNIX is a powerful system, divinely simple in its principles and devilishly complicated in its details. In UNIX, for example, every computer is a server, and in the box on your desk there can be several servers that can talk with each other or with other machines. The networking is built into the system, not added later. Inspired by all this power which he served, Stallman began work on the tools to create his UNIX clone, GNU, writing compilers, etc., and laboring for years. The very heart of this UNIX-like operating system, the kernel, he put aside, planning to devise one as the crown of his labors, a kernel that would put UNIX itself to shame. He labored with a faith like that of the characters in the Tolkien epic who await the Return of the King.

And the King did come, the missing kernel came, over the water, out of the north, and its name was Linux. The story of that young Finn, Linus Torvalds, and his kernel is so well known there is no need to go into its details; my purpose is to set it in context. That he began the work, threw it up on the Internet for others to improve, and finally distributed it under the GPL is known to everyone. And he has been very open about his indebtedness to Stallman’s GNU tools for their part in creating Linux. Stallman finally did complete his kernel, called HURD, and released it in August 1996. By then Linux, or what Stallman prefers to call "a Linux-based GNU system" (some have suggested the term LIGNUX), was well established. Stallman claims that his HURD kernel is more powerful than UNIX, and therefore more powerful than Linux, the UNIX clone, but no one seems to be paying attention. Everyone is watching Linux as it comes upon the field, waxing in strength and stature, and armed with a non-commercial innocence that may transform the proprietary software world, if not destroy it. Its most enthusiastic champions cheer for that--one more authority gone, and another victory for cooperation and love.

 


Armageddon

Into the Next Millennium

The field onto which Linux has walked is actually a battlefield on which two large armies are doing battle. On the one hand, the fragmented survivors of the UNIX Wars are still playing Last-Man-Standing while trying to resist an enormous and unified invader who is determined to take their turf and grind them into oblivion. This is the great contest between UNIX and Windows NT. Readers of Tolkien’s The Lord of the Rings will have no trouble recognizing in Bill Gates the archenemy Sauron, who rules a land blighted by his evil magic, called Mordor. He has gathered the scum of the universe (UNIX people believe Bill has third-rate software) to enslave all the decent folk, fairies, elves, and good wizards (as UNIX people like to see themselves). The hype and self-aggrandizement of the software world is enough to turn this into an epic battle, but the contest is actually of far-reaching proportions when viewed soberly.

There are several dimensions along which to divide the combatants, notably large-scale vs. small-scale, and open systems vs. proprietary. A look at the blood-lines of the two contestants shows them marked by their origins at opposite ends of these scales.

The microcomputer, or Personal Computer (as IBM was the first to call it), was God’s gift to the little man, enabling him to handle spreadsheets and documentation projects on his own desktop. Bill Gates rode this horse up from the hobbyist level to a dominant position on the desktops of today. By controlling the source code of the operating system, Microsoft obtains an advantage for its developers against outsiders in developing applications for Windows and NT. Moving upward from the desktop, and using the NT operating system, Microsoft is making inroads into the middle tier, or server market--the departmental, as opposed to desktop, level of corporate computing services.

UNIX, on the other hand, had its origins in large organizations, running on the largest machines. It comes from a tradition of open, freely-modifiable software, and for that reason forms the basis of Internet software technology, and is the OS most frequently found on Internet servers. Because of its adaptability, it was taken onto the middle tier, or mini-computers (a tier today occupied by servers). Besides servers, there is another kind of machine in this middle tier: the workstation. These are large machines found on the desktops of graphic designers and engineers. Sun, now the largest UNIX vendor with its Solaris OS, had its origins in building a cheaper workstation from off-the-shelf parts (just as the IBM PC began with off-the-shelf parts), fitting it with a proprietary version of UNIX, and tearing off a large hunk of the expensive workstation market. This strategy is now being employed by Microsoft, as cheaper NT servers make inroads into the expensive UNIX server market. UNIX, in turn, has been unable to fight its way downward onto the ordinary, non-workstation desktop because of the size and expense of the software. These impediments are closer to solution with cheaper, larger desktop machines (the same machines that with some souping-up are doubling as the new servers), but the obstacle of a complicated, user-unfriendly UNIX operating system remains.

For the moment, UNIX is secure at the top end of the market, on the large systems. It never was on the desktops. Right now the fight is in the middle, over the servers. The winner will then move up--if it is NT, or down--if it is UNIX. This contest will take a number of years, and at the moment, NT is winning in the middle as more UNIX shops begin to add NT, and as NT is ported to formerly UNIX-only servers in order for the server vendors to hold onto the middle market. Microsoft is also planning a downward move of NT onto the desktop, with the plan of making Windows, with which it currently controls the desktop, either disappear or become merely a home system.

 

Besides the dimension of large/small, there is the dimension of open/closed. UNIX in origin is an open system, freely-modifiable. Microsoft products have always been closed and proprietary. As Sun was increasingly hard-pressed by Microsoft, it reached back into its UNIX origins and pulled out the weapon of cross-platform interoperability: Java was born. Tolkien fans will recall the Dwarves finding the lost magic jewel of their ancestors, the Arkenstone, in the vaults of their long-abandoned caves. Although Microsoft seems about to wrest even this jewel from Sun’s hands and transform it with their own black magic, the world’s interest in Java should give us some clues as to where the market may be heading.

The promise the world is applauding is not necessarily the Sun product itself, Java (everyone is quick to admit its present faults), but its promised benefits of write-once, run anywhere, and greater user control over software. Sun and Oracle would have us believe that the corporate world is waiting for thin client machines (which will sell lots of servers to run them), but corporations are really looking for the promised benefits of cheaper desktops that can be centrally administered and more easily tied into the central computing system. If you listen carefully you will hear that the world is not crying for Java, it is crying for Linux.

There is a growing disaffection with Microsoft in the corporate world. The move from Windows 3.x to Windows 95 took much longer than Microsoft had planned. People resent the requirement for larger machines, the uselessness of the older machines, and the necessary retraining, all for an incommensurate return on the investment. Microsoft’s delays in getting out the follow-on operating systems, particularly NT 5.0, do not inspire confidence. The covers of recent issues of popular computing magazines feature articles on how to crash-proof Windows. The power-grab for the desktop represented by integrating the IE browser into the desktop reminds users that application choices are dwindling, just like application vendors.

 

Linux: Salvation or False Paradise?

Surely this is the window--pardon the expression--that Linux has been waiting for. This is an OS that costs nothing to install on a desktop machine, will work smoothly with the large UNIX systems above it, behaves like a native on the Internet, and is even faster than it is cheap. It knows what networking is, because every box is a server. It does not run out of interrupts for attaching peripheral equipment, and it can run gangs of chips, making multi-processing super-computers possible. Windows NT will be slow to arrive on the new Merced chip, but Linux already runs 64-bit, should the user eventually want to move up to more powerful machines. Linux will also run efficiently and comfortably on all the old iron (486’s and even 386’s) around the office. It can be administered remotely. It runs on the most popular chips. Here is the promise of Java and the promise of the thin client, both fulfilled and real today.

All it needs for acceptance is to overcome corporate suspicion of its origins and price (we are told). There are now paid support programs, if enough publicity can be given to them to inform corporate buyers (we hear). And the Copyleft model of product distribution and support is about to be given a very public test by Netscape, which is making the source code of its browser freely available and modifiable, inspired by the success of Linux and by the Eric S. Raymond paper, The Cathedral and the Bazaar. Copyleft proponents tell us that if we follow their logic, we will one day be able to look on the Internet and find any sort of software that we want, for free and continually updated by the coding community around the world. There is a lot of free (if not freely-modifiable) software out there right now, and some of it is very good indeed.

 

Fight the Good Fight

No one should mistake the location of the battlefield, however. Just because Microsoft owns the desktop does not mean that this is where the struggle should take place. Everyone knows that the battlefield is the server. It will not be won with applications alone (do Apache and sendmail, as capable and useful as they are, really hold off the spread of NT?). To become the choice among operating systems, Linux must leverage its natural strengths of distributed development among crafty programmers by matching Solaris in features, so that its superior speed and reliability--even on SPARC systems--have a functional benefit. Only at that point will the advantage of lower price be felt. Put another way: Linux needs to win the server by gaining the strength, features, and scalability to move up the ladder beyond the mid-level server.

Because Linux developers live the Hacker Ethic of the old MIT lab, looking up the ladder and tackling increasingly-difficult technical challenges is their nature. The question is how to harness this aggressive engineering efficiency to advance Linux World Domination. Besides looking up the ladder, Linux must look downward, beyond the desktop. Linux already has success in tiny embedded systems; how much success cannot be reliably determined because manufacturers at this time feel no obligation to state that their devices are run by chips programmed in free software, and there is no licensing authority to count them as they go out the factory door. But as Windows and other large systems struggle to fit themselves into the tight places where compact Linux thrives, Linux needs to expand its capabilities beyond even the largest of those systems. A major impetus behind IPv6, the emerging sixth version of the Internet Protocol, is the need for IP addresses for the many devices expected to communicate in the future over the Internet: cars, cell phones, toasters, athletic shoes, and many others. Linux developers need to increase the likelihood that Linux will be selected for these projects by adding IPv6 to Linux so that it will operate there along with the older IPv4. Solaris has not yet reached this point, and it is important for Linux to get there first, both to increase its likelihood of adoption and to show the world (as in security bug fixes) that things happen faster in Linux.
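What dual-protocol support means at the programming level is easy to show. The sketch below is ordinary C sockets code, with the host name and port as placeholders; it uses getaddrinfo(), the protocol-independent name lookup defined for the IPv6 transition, with AF_UNSPEC so that the same program connects over IPv6 where it is offered and falls back to IPv4 everywhere else. An operating system that carries both stacks lets code like this run unchanged:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>

    /* Connect to host:port over whichever protocol the name lookup offers,
     * trying each returned address (IPv6 or IPv4) until one succeeds. */
    int connect_any(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* either IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;
    }

    int main(void)
    {
        int fd = connect_any("www.example.org", "80");  /* placeholder host */
        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }

The point is not this particular routine but the habit it represents: applications written this way do not care which protocol the kernel speaks, so an early, solid IPv6 stack in Linux costs application writers nothing and gives the system a head start.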

Linux developers seeking another technical challenge can take another page from the Solaris development model by working seriously with binary compatibility issues. The Solaris Web site has a section devoted to binary compatibility, where Sun provides free technical information and tools for compatibility testing. Linux likewise needs to be sure that other UNIX applications run easily on Linux. Binary compatibility and emulation are the only means to provide bridges over which users can switch to Linux. Ease-of-use from the desktop user perspective may be desirable, but at this stage Linux needs to promote ease-of-use for the IS professional. As a further inducement to switching, the installation and administration of system, network, and peripherals need to be made so easy that hard-core professionals in IS departments will prefer to work with Linux.

After matching Solaris feature for feature, after beating Solaris with IPv6 support and assuring attention in the embedded market, after making plain and public the instructions and tests for achieving Linux binary compatibility, comes the question of the desktop. For starters, take a warning from Sun’s desktop version, Solaris for Intel. This uses WABI, in case anyone wants to run any Windows 3.x applications on Solaris, and the Common Desktop Environment, an interface which raises no great enthusiasm in anyone. The Solaris for Intel version is just as hard to administer as its big brothers. Linux must learn from Sun’s difficulties: the system, its administration, and its applications must be made easy to deal with. One way to achieve this is to drop the attitude that UNIX--or Linux--is only for the technologically elite. Embrace and extend the Windows interface. The suggestions below are not intended to disparage the effort that has gone into projects like TWIN or WINE or GNOME, or other efforts to create user-friendly desktops and solve the Windows-to-Linux problem. But these efforts need more push, more visibility, and probably a deeper concentration on Linux in order to bring something besides alphas and betas to light.

The first step is to put a recognizable Windows desktop on Linux, one that looks like Win 3.x, and another that looks like Windows 95. Frankly, most Linux developers are disinclined to worry much about the desktop interface: many Linux users prefer the command line, and many of the rest feel that current UNIX GUI’s are far superior to Windows. Most importantly, writing a GUI does not represent the cutting-edge technical challenge that impresses other developers. So let these developers undertake a more difficult challenge: develop a file manager and all the system-management screens and tools with it, so that the new desktop behaves like Windows and makes cross-over users feel at home.

The next step is an even more difficult challenge: build a binary emulator for Windows 3.x and Windows 95 that will let Windows users run their applications directly on Linux. Not only is this difficult, it is the surest way to desktop success. Joe Point-n-Click will not come over to Linux if he has to learn everything all over again. WINE and TWIN both appear to be working on binary emulation, and on Windows API’s, a final step in making the Windows interface public. Both projects, however, need more results. If there are Windows API’s available to write to on Linux, then new Linux applications can court crossover customers. Along the way, there is a need to develop file-exchange filters which are open to the community of Linux developers so that they can make it easy for users of other products to switch over to Linux applications. The time is then ripe to start showing and spreading a finished product.

How to show and spread that software is a question of distribution. There are two ways to address this question, and Linux has an advantage in both methods. The first is a low-level, direct-to-customer approach that works naturally for VARs. Microsoft has begun a concerted effort to own the small business desktop, and has discovered that the path lies not through the television set--that is, mass advertising--but through the VAR, or integrator, or whatever you want to call the small computer business that services small businesses. The small business usually has a tight relationship with a single supplier and follows that supplier’s advice religiously. If you are that supplier, you stand an excellent chance of moving your customers to Linux. The advantages of stability, low cost and remote administration should make both of you very happy. The customer wants a solution, not a particular operating system. Unlike the large corporation, he doesn’t have an IS department to argue with you. If the supplier can move the small business customer over to applications that handle like his present Windows applications, there should be no problem; and if he can move his Windows applications over directly, so much the easier.

The other distribution approach is high-level: to add another sort of value to the free Linux software system. We see this now with vendors of distributions of the Linux OS, and with Linux support vendors. Although it is perfectly possible to set up as a software vendor who sells a proprietary product that runs on Linux, it is also possible to take a more interesting route--and one that also has adequate profits. This is done by offering a brand of a free product, and making that brand stand for something that customers want and trust. Now that the Netscape browser is being opened to the market, Netscape is supposed to function as the central source to pull together the changes made by the wide world of developers and integrate them into better versions faster. This does not mean that some other firm can’t undertake the job, and possibly outperform Netscape at it. Netscape knows this. And if Netscape is successful in maintaining itself as the best source of the Netscape browser for Everyman, there will also be niches that need a specialized version of the browser, and which will be served by the clever opportunity-seeker.

Some of you may have noticed that Eric Allman, the father of sendmail, has formed a company to commercialize this free product. He has a great perceived advantage in the marketplace, since he developed sendmail, but he will have to prove--every day--that his company is the best source of it. There are other opportunities: products crying out for someone to own them, to take care of them, to promote them. The WINE project is one: an entrepreneur could stimulate and grow the WINE development community, provide centralized testing and integration of the product, and end up profiting from its promotion and distribution.

I don’t know whether it will come to the day when all the applications we want (and their fixes) are up there on the Internet someplace. But before that day comes, we will have to pass through an intermediate stage, one in which integrators switch over small-business customers to Linux, Linux software vendors distribute a mix of free and proprietary Linux products, and corporate IS departments see the benefit of Linux in integrating a single enterprise computing system from top to bottom.

Any victory for Linux will have to be a victory of technology and marketing, that is, giving the market what it wants. Operating systems often inspire religious devotion from their users (and especially from their developers). But don’t be a bigot. Don’t sneer like the Pharisees at others. Forgive your enemies. Be kind to the weak who can’t compile their own kernels. If you want a religion of service to others, finding out and filling their needs, acting with integrity in their best interests and your own, then Linux is that religion.

Copyright © 1998 by Donald K. Rosenberg. All Rights Reserved.

 

For further reading:

Steven Levy, Hackers, 1985

Peter H. Salus, A Quarter Century of UNIX, 1994

Free Software Foundation http://www.gnu.org

League for Programming Freedom http://lpf.ai.mit.edu/



15 Nov 98
