Complexity in Indexing Systems -- Abandonment and Failure: Implications for Organizing the Internet

Bella Hass Weinberg
Division of Library and Information Science,
St. John's University
Jamaica, New York


The past hundred years have seen the development of numerous systems for the structured representation of knowledge and information, including hierarchical classification systems with notation as well as alphabetical indexing systems with sophisticated features for the representation of term relationships. The reasons for the lack of widespread adoption of these systems, particularly in the United States, are discussed. The suggested structure for indexing the Internet or other large electronic collections of documents is based on that of book indexes: specific headings with coined modifications.

Detailed Classification Systems

At the end of the nineteenth century, the Universal Decimal Classification (UDC) was developed for the detailed specification of information contained in parts of books and serials. It built upon the Dewey Decimal Classification (DDC), whose first edition had been published in 1876. DDC was designed for the arrangement of books on library shelves and thus was not specific enough for the representation of the topics of chapters and articles. The designers of UDC refined the Dewey classification and developed symbols for the representation of relationships between subjects. An example of UDC notation is 622.33:622.86(410)"1945/61":31, representing the topic "Statistical summary of accidents in British coal mines since the war" (Mills, 1963, p. 37).

UDC was used in only a few classified catalogs in the U.S., including the Engineering Societies Library, YIVO Institute for Jewish Research, and the United Nations. Of these, only the last mentioned still uses the scheme (personal communications). It is more widely used in Europe and the Middle East because the numeric notation overcomes the language barrier. A list of libraries throughout the world that were using the scheme three decades ago--and which may still be using it--is in the Guide to the Universal Decimal Classification (British Standards Institution, 1963, pp. 115-128). There have been recent attempts to improve the management of the classification and to speed up the revision process (Gilchrist, 1992). Adoption of UDC by American libraries that have never used it is unlikely, however.

Ranganathan developed the Colon Classification in India in the 1930s. He believed in the importance of ultimate specificity in the representation of the subjects of documents (Ranganathan, 1967, pp. 289-290). His scheme is characterized by its many synthetic features: the classification schedule provides the building blocks, and the classifier puts the elements together. The complex mixed notation of the Colon Classification may be illustrated by the call number assigned to the sixth edition (Ranganathan, 1960): 2:51N3 qN60. Ranganathan is also credited with the invention of chain indexing, an economical system of providing access to the terms in classification schedules without replicating the hierarchical structure of the classification in the alphabetical index. Chain indexing has been quite widely implemented (and has been experimented with as a classification retrieval interface by Liu and Svenonius [1991]), but not the Colon Classification.
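The economy of chain indexing lies in indexing each link of a classification chain only once, qualified by its broader terms, rather than replicating the whole hierarchy under every entry. A minimal sketch of that procedure follows; the terms and Dewey-style notations are purely illustrative.

```python
def chain_index(chain):
    """Generate chain-index entries from one classification chain.

    chain: list of (term, notation) pairs, broadest first. Each term is
    indexed once, qualified by the terms above it in the chain, and points
    to the notation at that link -- the hierarchy is not replicated.
    """
    entries = []
    for i, (term, notation) in enumerate(reversed(chain)):
        broader = [t for t, _ in chain[: len(chain) - 1 - i]]
        qualifier = ": ".join(reversed(broader))
        heading = f"{term}: {qualifier}" if qualifier else term
        entries.append((heading, notation))
    return entries

# e.g. the chain Science > Zoology > Birds yields
# "Birds: Zoology: Science 598", "Zoology: Science 590", "Science 500"
chain = [("Science", "500"), ("Zoology", "590"), ("Birds", "598")]
```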

Bliss developed a classification system featuring many options, i.e., characterized by a flexible hierarchical structure. Thomas (1995, p. 114) provides the example of the alternative notation QFK B for the class mark QFQ Y3K B, to represent the subject "Aid to families with dependent children in the United States of America." The system has been revised in England, and Thomas (1992) has advocated its study and adoption in the U.S.; the latter is highly unlikely.

Library & Information Science Abstracts (LISA) used to employ the synthetic classification scheme developed by the Classification Research Group. Recently, the editor (Moore, 1993) announced a "new, improved layout," noting that "Over the years, there have been many comments from LISA users regarding difficulties experienced in the use of the CRG . . . Scheme, which was employed to arrange abstracts in the printed issues." LISA now uses a "simplified subject arrangement . . . based on a system of Broad Subject Headings." Metcalfe had stated several decades earlier (1957, pp. 68-69) that the use of synthetic classification schemes in printed abstracting tools was unnecessary.

More than a decade ago, Anderson (1983) designed the Contextual Indexing and Faceted Taxonomic (CIFT) classification scheme for the Modern Language Association (MLA). The selection of terms and the coding of roles by a human indexer yield both classification and index entries, which are generated by computer. Examples of role indicators are "<apo = application of" and "<cot = compared to" (p. 12). At the 1993 Annual Meeting of the American Society of Indexers, Daniel Uchitelle of MLA (in an unpublished paper) spoke of the difficulty that student end-users of the online database had with the faceted classification scheme; he also noted the loss of volunteer indexers by MLA owing to the complexity of the CIFT scheme. Uchitelle predicted that thesauri, not classification schemes, would be the growth industry of the '90s. LISA incorporated thesaurus features into its new system, mentioned above.

Sophisticated Alphabetical Indexing Systems

Alphabetical indexing antedates the invention of printing, but the design of indexing systems with sophisticated syntactic relationships is a phenomenon of the latter half of the twentieth century. This type of indexing was probably influenced by the recent focus on syntax in the field of linguistics.

Telegraphic abstracts were invented at Western Reserve University (Perry & Kent, 1958). They represented the information in a natural language abstract using semantic codes for terms along with role indicators, a structure designed to enhance information retrieval. Examples of the codes are "K9r = weld tests" and "1-73 = common variable for manufacturing and test methods, vacuum" (p. 208). The code dictionary is an interesting example of an alphabetical list of terms linked to a hierarchical classification, but this method of content analysis never caught on. Today, we use natural-language abstracts in both the print environment and machine-readable databases. In the field of medicine, some authors have recently been labeling the parts of their abstracts in a manner that reflects the section headings of scientific articles, e.g., Background, Methods, Results, Conclusions, but no artificial codes are used. (For examples of structured abstracts, see recent issues of the New England Journal of Medicine, one of the journals that requires them. Structured abstracts are mentioned in the uniform requirements for biomedical journals [International Committee, 1991, p. 425].)

Roles and links were invented in the early 1960s to overcome the problem of false drops in postcoordinate indexing. Artandi and Hines (1963) criticized these devices in a paper entitled "Roles and links--or forward to Cutter," arguing that they served the same purpose as subdivisions in traditional subject heading systems. Studies showed that roles and links were not applied consistently, and their use was abandoned. There have been recent proposals for refining the coding of related-term references in thesauri, to specify the nature of the relationship. Similar problems of consistency are expected to result, as noted by Milstead (1995, p. 94).

Farradane (1970) developed a set of nine symbols to represent relationships between concepts. An example of the application of his system is the string "Sugar Cane/:Sugar, Raw/; Filterability/-Measuring/; Filter" (p. 613). Farradane's relational indexing has been the subject of scholarly research, but was not implemented. PRECIS, the Preserved Context Index System developed by Derek Austin (1974) under the influence of Noam Chomsky's work on transformational grammar, attracted attention in the United States (Wellisch, 1977) and was implemented by several organizations in Canada and England, but has been largely abandoned or simplified by these organizations. A modified version of PRECIS has been described by Jacobs and Arsenault (1994); their article discusses the drawbacks of the "pure" version of the system.

Although some had proposed PRECIS as an alternative to Library of Congress Subject Headings (LCSH), this was never a real possibility. Too many libraries used LCSH, and too many machine-readable cataloging records contained those headings. In fact, the synthesis of subdivisions with headings in LCSH has been found to be too complicated (Conway, 1992), and the Library of Congress (LC) has embarked on a program of Subdivision Simplification: more combinations of terms will be established editorially at LC, and less original synthesis will have to be done by catalogers (the changes in subdivision policy are reported in Cataloging Service Bulletins).

Reasons for the Abandonment of Complex Indexing Systems

The bibliographic classification systems discussed in the first part of this paper have a better theoretical basis than the widely adopted Library of Congress Classification (LCC) scheme. The draft proposal for LCC Subclass ZA, Information Resources, provides evidence for this: under "Information superhighway," class numbers are enumerated for the forms Periodicals, Congresses, Dictionaries, Directories, etc. (Library of Congress, 1995, p. 8). These same forms are enumerated with different notations under other topics within the draft schedule. The synthesis of form divisions was a feature of the Dewey classification early on; just about all subsequent bibliographic classification schemes have increased the amount of synthesis relative to enumeration. Synthetic features result in a more compact classification schedule, give a mnemonic character to a scheme, and most importantly, allow the classifier to represent specifically the holdings of a local collection.

Alphabetical indexing systems with syntactic enhancements allow for more sophisticated retrieval than (a) the largely precoordinate Library of Congress Subject Headings or (b) simple postcoordination of descriptors derived from thesauri. The role indicators are designed to eliminate the false drops which result from simple combinations of terms. Why have the complex systems been abandoned or ignored? Several reasons are suggested below.

Economics

Sophisticated classification and indexing systems require a substantial training period as well as time-consuming human analysis. Management is reluctant to pay for this. In the U.S. in particular, numerous articles have been published by library and network administrators stating that the cataloging of a document should be done only once by a central organization, and no further analysis of the work or enhancement of the "standard" record should be done on the local level (Rush, 1993, p. 21). Libraries currently employ numerous paraprofessionals to derive catalog records from bibliographic utilities.

Lowry reported more than a decade ago (1985, p. 289) on the downshifting of the functions of indexers to clerks. The increasing availability of full text in machine-readable form has led to the widespread belief that no human content analysis is necessary, since every word is searchable. The searching thesaurus, i.e., a post-controlled vocabulary, has been touted as the solution to the semantic problems of retrieval in this environment (Anderson & Rowley, 1992).

Education

Concomitant with the view that the cataloging of a document need be done only once by a centralized agency has been the diminution of cataloging courses (Miksa, 1989). Some schools of information science no longer require their students to take cataloging courses. Advanced cataloging and indexing are generally optional courses in library schools; these courses are selected by the minority of students who do not believe that reference is cool and technical services is for nerds (Robbins, 1991, p. 57). Thus, few librarians are trained to evaluate and modify centralized cataloging, let alone implement a new complex system. Nevertheless, there are frequent claims in the literature, e.g., in journals such as American Libraries, that all librarians have expertise in the organization of information and hence are the logical people to manage the Internet. It has been suggested that reference librarians be diverted to the creation of Internet "finding tools and self-teaching aids that could be used online" (Scepanski, 1996, p. 43), but those who have skipped advanced technical services courses are not prepared to do this work.

Theoretical flaws

Rigid hierarchical classification schemes cannot keep up with scientific advances. Sections of the widely used schemes -- notably Dewey -- are restructured periodically, but there are always protests from the library community when the revisions necessitate reclassification of large parts of a collection (e.g., Berman, 1981, p. 178).

Jacob (1994) has argued for flexible categorization, providing evidence from the field of psychology that people classify things in different ways. And yet, the systems replete with options -- such as Bliss -- have not been widely adopted, perhaps for the economic reason cited above: a professional librarian has to select an option and document it locally. It is easier to have a paraprofessional copy a centrally supplied classification number without thinking whether it serves local needs.

The claims of Farradane, Austin, and other information scientists to have identified all possible relationships are inconsistent with the linguistic literature, in which new syntactic structures are constantly being identified. An indexing system with only seven or nine relationship indicators is sure to break down when new constructs are encountered.

Human factors

Inconsistency in the assignment of roles and links has already been noted as a reason for their abandonment. Saracevic (1989) has surveyed human inconsistency in the selection of index terms, search terms, and other areas; it is an inescapable human factor. Research has shown, however, that people tend to agree on the major topic of a document or the key term in a search query; the inconsistency is greater for subthemes.

Information literacy has received much attention of late. Liddy and Jorgensen (1993) have shown that many structural features of alphabetical indexes to books are misunderstood, although these indexes have no artificial codes. We live in a society in which people want information quickly; they do not want to study the structure of an indexing system or the notation of a classification scheme. Complex information systems require human intermediaries, and management is reluctant to hire them. There is also considerable evidence that people are reluctant to consult intermediaries; they want to be able to search by themselves.

Indexing the Internet

Many of the systems currently used for indexing the Internet employ simple automatic indexing algorithms (Bordonaro & Lynch, 1995; Scales & Felt, 1995). They extract words from titles or full text, sometimes adding weights to words in significant positions. Users of electronic databases are increasingly encountering the problem of excessive frequency of search terms. Researchers have shown that the automatic ranking algorithms used to deal with this problem perform inconsistently (Marchionini et al., 1994; Courtois et al., 1995).
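The word-extraction-with-positional-weighting approach described above can be sketched in a few lines; the field weights and stopword list here are assumptions for illustration, not the parameters of any particular search engine.

```python
import re
from collections import Counter

# Hypothetical weights: words in significant positions (titles, headings)
# count more than words in the body text.
WEIGHTS = {"title": 3.0, "heading": 2.0, "body": 1.0}
STOPWORDS = {"the", "a", "an", "of", "and", "in", "for", "to", "on"}

def score_terms(fields):
    """fields: mapping of field name -> text. Returns weighted term scores."""
    scores = Counter()
    for field, text in fields.items():
        weight = WEIGHTS.get(field, 1.0)
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                scores[word] += weight
    return scores

doc = {"title": "Indexing the Internet",
       "body": "Automatic indexing extracts words from titles and full text."}
# "indexing" ranks first: 3.0 from the title plus 1.0 from the body
top_terms = score_terms(doc).most_common(3)
```

Note that nothing in such an algorithm distinguishes a significant occurrence of a word from an incidental one, which is the source of the inconsistent ranking behavior reported in the studies cited above.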

Stoll (1995, pp. 211-212) described amateurish classifications that were developed for the Internet by non-librarians. After calling library classification schemes "antiquated and inadequate," Steinberg (1996, p. 109) proceeds to describe categorization schemes devised by non-librarians that have even greater flaws.

Some librarians (e.g., Micco, 1995) have advocated the application of the Dewey Decimal Classification to organize the massive number of electronic documents on the Internet. In my view, the Dewey Decimal Classification is not an appropriate scheme for organizing electronic documents because its primary facet is discipline, not concrete topic. The Brown classification was designed years ago to make concrete topic (e.g., horses, rather than agriculture) the primary facet, but was hardly implemented.

While there are many discipline-based listservs and collections of documents on the Internet, I believe that its primary use is currently, and will continue to be, searching for specific pieces of information. Steinberg (1996, p. 182) cites Internet gurus who claim that "it's about people finding people, not people finding information," i.e., that the purpose of the Net is to connect individuals with an interest in a specialized topic. Indexing such topics and categories of human expertise is not unlike indexing text, however.

Because DDC was designed to organize whole books, it would have to be refined to accommodate documents and listservs of narrower scope if selected to index the Internet. Such refinement has already been done--in the form of UDC--but the problems with the notation, currency, and management of that scheme have been noted above. People consult the Internet for the latest information; they cannot wait for an international committee to decide in which class a new topic should be placed. Steinberg (1996, pp. 111-112) describes the protests stimulated by the classification of a Messianic group in Yahoo. As that category system grows, all the problems experienced by librarians in the area of classification will be encountered: accusations of bias, lack of currency, and lack of specificity.

Assessment and Recommendations

The ultimate representation of the thought content of a document, which was espoused by Ranganathan, represents optimization in classification and indexing. Human beings may not require this. Pointing the user to a manageable chunk of text or number of documents that can be scanned in a reasonable amount of time for the desired fact or information would constitute satisficing in the field of content analysis. R. N. Taylor (1992, p. 970) notes that in satisficing, "some assumptions of the more rigorous rational-analytic models are relaxed to permit decision makers to deal with the more realistic information processing and decision-making demands" (emphasis mine). There is considerable evidence that users don't want intermediaries to retrieve only the single most specific document on a topic. Users want to select from a group of documents and make their own relevance judgments.

The type of index structure that reflects the philosophy of satisficing is the back-of-the-book index. It has a simple structure: direct access to natural-language terms is provided by the alphabetical arrangement, obviating both the extra lookup and the interpretation of symbols that classification requires. In book indexes, aspects of topics are represented via natural language subheadings. These are not standardized terms, but coined modifications, formulated by a human indexer to reflect the text being analyzed. Natural language prepositions and conjunctions are used to indicate relationships between topics. Franz et al. (1994) have shown that the artificial syntax of Library of Congress subject headings is incomprehensible to many users. Weinberg (1988) has demonstrated that coined modifications are more specific than standard subdivisions. Unlike most classification notations, book index structure is infinitely hospitable, i.e., a new topic can always be interpolated.

What is being proposed here is book index structure, not the exhaustivity of a typical back-of-the-book index. On the Internet, the structure should be applied at the document or series level, not to the individual "information nugget." Arlene Taylor (1994, p. 104) observed that subject headings summarize the content of documents, while book indexes provide access to the detailed information. Lipetz (1989, pp. 117-119) has proposed electronic interfiling of book indexes, to create databases similar to those which cover journals. An integrated exhaustive index to the entire Internet is clearly impracticable, but an index on the summary level is feasible. Merged exhaustive indexes can be created for particular fields.

After this paper was refereed, a selective index to the Internet that can be evaluated in terms of the preceding principles was described by Mitchell and Mooney (1996). INFOMINE is a virtual library of scholarly resources; the main menu contains broad categories that are not mutually exclusive, a common problem with Internet classification schemes. Each document in the "collection" is indexed by Library of Congress subject headings. The display for the heading Zoology has 33 titles. That list could have been broken up by form subdivisions and/or cross-references to terms of narrower scope. For example, Goose Bibliography is not a general zoological work.

Book indexers constantly monitor frequency. An excessive number of locators is considered a major flaw in a book index, although a string of page numbers is considered acceptable by some to indicate minor mention (Bell, 1992, p. 34). Computers are great at counting words, but are not so successful at distinguishing the significant occurrences of words from the insignificant ones. Moreover, computers are terrible at recognizing concepts. Brenner (1989, p. 66) has reported that a computer working with machine-readable abstracts and a thesaurus fails to assign 30% of the index terms assigned by humans. In my dissertation research (Weinberg, 1981, p. 80), I found that approximately 10% of humanly assigned index terms do not occur in full text. This has been confirmed in subsequent studies which demonstrate that controlled vocabulary indexing enhances full text retrieval by 10% (Hersh & Hickam, 1995). Alta Vista (1996), an Internet search engine, encourages authors of Web pages "to specify additional keywords to index" and to provide abstracts.

Book indexers generally operate within a closed system and with limitations of space. Once the analysis of a text is complete, subheadings are generally omitted for terms with five locators or fewer, as this is a reasonable number for the user to examine. In contrast, the Internet is an open system, with electronic documents constantly being added and deleted. Control mechanisms from the fields of library science and indexing can be applied to deleted and changed URLs (Uniform Resource Locators). The cataloging construct of tracing identifies all the places that a reference to a document is provided. If a book is reclassified in a library, the call number is changed on all the access points.
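The five-locator convention can be expressed as a simple rule; the threshold is the one cited above, and the entry names are illustrative.

```python
def flatten_entry(term, subheadings, max_locators=5):
    """Collapse an index entry's subheadings when the term has few locators.

    subheadings: mapping of subheading -> list of locators (page numbers
    or URLs). If the term has max_locators or fewer in total, the
    subheadings are dropped and the locators listed directly under the
    main heading; otherwise the subdivided entry is kept.
    """
    all_locs = sorted({loc for locs in subheadings.values() for loc in locs})
    if len(all_locs) <= max_locators:
        return {term: all_locs}
    return {term: subheadings}

# Three locators in all, so the two subheadings are dropped.
entry = flatten_entry("classification", {"history of": [12, 14], "notation in": [33]})
```

In an open system such as the Internet, this rule would have to be reapplied as locators accumulate, since an entry that is flat today may need subheadings tomorrow.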

Hypertext links in HTML (HyperText Markup Language) are currently not typed, i.e., they do not indicate the nature of the relationship. Many ideas could be borrowed from the fields of subject cataloging and thesaurus design to enhance the cross-reference structure of the Internet, and even more importantly, to control it. The simple concept of the reciprocal serves the latter function: every Internet document should have a record of other documents that refer to it, i.e., that have hypertext links to it.
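Given a record of outgoing links, the reciprocal record can be derived mechanically. A minimal sketch, with invented document names:

```python
def reverse_links(outlinks):
    """Build the reciprocal record of hypertext links.

    outlinks: mapping of document -> set of documents it links to.
    Returns the inverse: document -> set of documents that link to it.
    """
    backlinks = {}
    for src, targets in outlinks.items():
        for tgt in targets:
            backlinks.setdefault(tgt, set()).add(src)
    return backlinks

web = {"guide.html": {"udc.html", "ddc.html"}, "udc.html": {"ddc.html"}}
back = reverse_links(web)  # ddc.html is linked to by both other documents
```

The computation is trivial; the difficulty, as noted below, is that no central agency holds the complete outlink record for a decentralized network.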

The INFOMINE system mentioned above recognizes the need to verify that URLs to which one is referred actually exist. Mitchell and Mooney (1996, p. 23) describe a feature of the system, a URL Checker, which is reminiscent of the cross-reference validation capability of standalone book indexing software. This is not as powerful, however, as the standard capability of thesaurus software: automatic generation of the reciprocals of cross-references. Given the decentralized nature of the Internet, it is not clear whether this concept is technically feasible. It may be necessary to check the validity of links manually, just as catalogers working on bibliographic utilities have to consult separate, unlinked authority files.
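The thesaurus-software capability mentioned above -- automatic generation of reciprocals -- rests on the standard pairing of relationship types (broader/narrower term, related term, USE/UF). A sketch of a validator that reports missing reciprocals, with illustrative terms:

```python
# Reciprocal relationship types in standard thesaurus construction.
RECIPROCAL = {"BT": "NT", "NT": "BT", "RT": "RT", "USE": "UF", "UF": "USE"}

def missing_reciprocals(refs):
    """refs: set of (term, relationship, term) cross-references.
    Returns the reciprocal references that should exist but do not."""
    missing = set()
    for a, rel, b in refs:
        recip = (b, RECIPROCAL[rel], a)
        if recip not in refs:
            missing.add(recip)
    return missing

# A one-way "Dogs BT Mammals" reference lacks its "Mammals NT Dogs" reciprocal.
gaps = missing_reciprocals({("Dogs", "BT", "Mammals")})
```

A URL checker of the kind INFOMINE provides verifies only that a link's target exists; reciprocal generation additionally enforces the two-way structure, which is why it is the more powerful control.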

The specific topics of Internet documents need to be represented and ideally controlled by a thesaurus. The aspects of those topics must also be represented in natural language. Like a book indexer, a content analyst must examine the existing index to see whether the address of a new text (i.e., the locator) can be posted to an existing term-subheading combination. If not, a new representation has to be formulated. An index to the Internet could be like the index to a continuously revised encyclopedia: book-index-like entries in an open system. The advantages of this type of precoordinate indexing over postcoordination of terms, which has been applied in open systems such as serial indexes (databases), have been explained by Weinberg (1995).
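The content analyst's decision procedure just described -- post the locator to an existing term-subheading combination if one exists, otherwise formulate a new representation -- can be sketched as follows; the index structure, heading, and URL are illustrative (the subheading echoes the UDC example earlier in this paper).

```python
def post_locator(index, term, subheading, url):
    """Post a locator (here, a URL) to a term-subheading combination.

    index: term -> subheading -> list of locators. If the combination
    already exists, the URL is added to it; otherwise the new
    representation is created. Duplicate postings are ignored.
    """
    locators = index.setdefault(term, {}).setdefault(subheading, [])
    if url not in locators:
        locators.append(url)
    return index

idx = {}
post_locator(idx, "coal mines", "accidents in, statistics", "http://example.org/mines")
```

The coined modification ("accidents in, statistics") is natural language formulated by the indexer, while the main term would be controlled by the thesaurus.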

The cost of storage space in computers has declined, while the cost of paper has increased. There are thus fewer constraints on the Internet indexer than on the book indexer in this regard. There are, however, constraints of time on the part of the end-user as well as limitations of screen size, which make it desirable to limit the number of subheadings. Researchers of online public access catalogs have observed the problems of multiple screen display for numerous subdivisions of Library of Congress subject headings (Massicotte, 1988); discussions of the optimal number of modifications have filled the indexing literature recently as well (Wellisch, 1993).

Book index structure involves human content analysis, but not the development of artificial notation. The intelligence required is the recognition of concepts, along with the ability to represent information in terms likely to be sought by end-users. Book indexes are generally compiled by a single person, but an index to Internet resources will have to be a team effort. Indexes to works by multiple authors need a controlled vocabulary, and this requires centralized management. A thesaurus for the Internet should, of course, be maintained electronically, with on-screen approval of candidate terms (NISO, 1994, p. 29). Prof. Charles Livermore has suggested that the approval procedure for new descriptors could be similar to that for new newsgroups on the Internet, as described by Hahn & Stout (1994, pp. 178-179). The analogy is particularly apt with regard to the splitting of newsgroups (p. 168): Descriptors often have to be split in two (decoordinated), or narrower terms have to be added to a thesaurus to accommodate the scope of new specialized documents.

There are serious management and economic issues that affect the development of a centralized index to the Internet. As it is a decentralized database, who should pay for organizing it and maintaining the controlled vocabulary? A simple structure for organizing and searching the Internet has been identified, but the resolution of the economic issues will determine whether it is implemented. Just as online vendors are changing their charging mechanisms from one based primarily on connect time to one based on search terms, i.e., consultation of indexes (Nelson, 1995, p. 81), a well-maintained index to the Internet will either incur fees, or have to be supported by advertising, as current search engines are. Crawford and Gorman (1995, pp. 166-167) have described the sequential "free" searches of numerous online catalogs done by some Internet users to avoid paying the fees of RLIN and OCLC, the bibliographic utilities which provide large union catalogs. I have observed that there is a Gresham's Law of the Internet: "Bad, free information drives out the good, fee-based information." (There are, admittedly, examples of the opposite phenomenon, e.g., free authoritative government information that drives out inferior commercial services.)

R. S. Taylor (1986) explained that indexing is a value-added process. A good index saves the time of the user, and hence is worth paying for; those who create indexes deserve remuneration. Some librarians are authorized to devote part of their full-time jobs to the creation of guides to Internet resources. These are accessible by numerous users worldwide, who benefit from the salaries paid to these librarians by their institutions. Freelance indexers are not in a position to create Internet guides voluntarily, but may be hired to create book-like indexes for specialized areas.

The ray of light in this chaotic environment is that schools of library and information science are increasingly advertising for faculty members who specialize in the organization of information. It is recognized that the decentralized nature of the Internet requires original solutions to the problems of access: no single organization is providing all the requisite cataloging and indexing data for the wide variety of media available electronically. It is hoped that the systems for organizing information which were developed in the last century will not be ignored, that their design flaws will not be replicated, and that our increasing knowledge of human factors will be incorporated into systems for indexing the Internet.

Acknowledgments

This paper was written during a Research Leave from St. John's University. I express my thanks to Prof. Charles Livermore, Head of the Reference Dept. of St. John's University Libraries, for commenting on the Internet component of the paper, and to Dr. James Benson, Director, Division of Library and Information Science, for reviewing the complete manuscript. Maxine Feinberg, Graduate Assistant; Kevin Dunn, Database Administrator; and Mary Egan, Secretary -- all of the Division of Library and Information Science -- assisted in the production of the paper.

References

Alta Vista. (1996). Controlling how your Web page is indexed by Alta Vista. Help for Advanced Query [On-line]. Available: Internet bin/query?pg=ah&what=web (Printed 4/2/96, p. 7 of 9).

Anderson, J. D. (1983). Essential decisions in indexing systems design. In H. Feinberg (Ed.), Indexing specialized formats and subjects (pp. 1-21). Metuchen, NJ: Scarecrow Press.

Anderson, J. D., & Rowley, F. A. (1992). Building end-user thesauri from full text. In B. H. Kwasnik, & R. Fidel (Eds.), Advances in Classification Research, vol. 2: Proceedings of the 2nd ASIS SIG/CR Classification Research Workshop (pp. 1-13). Medford, NJ: Learned Information.

Artandi, S., & Hines, T. C. (1963). Roles and links--or forward to Cutter. American Documentation, 14, 74-77.

Austin, D. (1974). PRECIS: A manual of concept analysis and subject indexing. London: The Council of the British National Bibliography.

Bell, H. K. (1992). Indexing biographies, and other stories of human lives. London: Society of Indexers. (Occasional Papers on Indexing, no. 1.)

Berman, S. (1981). DDC 19: An indictment. In The joy of cataloging: Essays, letters, reviews, and other explosions (pp. 177-185). Phoenix, AZ: Oryx Press. Reprinted from Library Journal (March 1, 1980), pp. 585-589.

Bordonaro, K., & Lynch, P. (1995). Search engines and subject searching of the World Wide Web. The Internet Homesteader, 2 (8), 14-16.

Brenner, E. H. (1989). Vocabulary control. In B. H. Weinberg (Ed.), Indexing: The state of our knowledge and the state of our ignorance: Proceedings of the 20th Annual Meeting of the American Society of Indexers, 1988 (pp. 62-67). Medford, NJ: Learned Information.

British Standards Institution. (1963). Guide to the Universal Decimal Classification (UDC). London. (B.S. 1000C:1963; F.I.D. no. 345).

Conway, M. O. (Ed.). (1992). The future of subdivisions in the Library of Congress subject headings system: Report from the Subject Subdivisions Conference, 1991. Washington, DC: Library of Congress, Cataloging Distribution Service.

Courtois, M. P., Baer, W. M., & Stark, M. (1995). Cool tools for searching the Web: A performance evaluation. Online: The Magazine of Online Information Systems, 19 (6), 14-32.

Crawford, W., & Gorman, M. (1995). Future libraries: Dreams, madness & reality. Chicago: American Library Association.

Dewey, M. (1876). A classification and subject index for cataloguing and arranging the books and pamphlets of a library. Amherst, MA. Facsimile reprinted by Forest Press Division, Lake Placid Education Foundation, 1976.

Farradane, J. E. L. (1970). Analysis and organization of knowledge for retrieval. Aslib Proceedings, 22, 607-616.

Franz, L., et al. (1994). End-user understanding of subdivided subject headings. Library Resources & Technical Services, 38, 213-226.

Gilchrist, A. (1992). UDC: The 1990s and beyond. In N. J. Williamson & M. Hudon (Eds.), Classification research for knowledge representation and organization: Proceedings of the 5th International Study Conference on Classification Research, 1991. Amsterdam: Elsevier.

Hahn, H., & Stout, R. (1994). The Internet complete reference. Berkeley: Osborne McGraw-Hill.

Hersh, W. R., & Hickam, D. (1995). Information retrieval in medicine: The SAPHIRE experience. Journal of the American Society for Information Science, 46, 743-747. Letter in response by Humphrey, S. M. (1996). Journal of the American Society for Information Science, 47, 407-408.

International Committee of Medical Journal Editors. (1991). Uniform requirements for manuscripts submitted to biomedical journals. New England Journal of Medicine, 324, 424-428.

Jacob, E. K. (1994). Cognition and classification: A crossdisciplinary approach to a philosophy of classification. (Abstract.) In B. Maxian (Ed.), ASIS '94: Proceedings of the 57th ASIS Annual Meeting (p. 82). Medford, NJ: Learned Information.

Jacobs, C., & Arsenault, C. (1994). Words can't describe it: Streamlining PRECIS just for laughs! The Indexer, 19, 88-92.

Library of Congress. (1995, Fall). LC Classification Subclass ZA, Information Resources. Cataloging Service Bulletin, no. 70, 7-10.

Liddy, E. D., & Jorgensen, C. L. (1993). Reality check! Book index characteristics that facilitate information access. In N. C. Mulvany (Ed.), Indexing, providing access to information: Looking back, looking ahead: The Proceedings of the 25th Annual Meeting of the American Society of Indexers (pp. 125-138). Port Aransas, TX: American Society of Indexers.

Lipetz, B. A. (1989). The usefulness of indexes. In B. H. Weinberg (Ed.), Indexing: The state of our knowledge and the state of our ignorance: Proceedings of the 20th Annual Meeting of the American Society of Indexers, 1988 (pp. 112-120). Medford, NJ: Learned Information.

Liu, S., & Svenonius, E. (1991). DORS: DDC Online Retrieval System. Library Resources & Technical Services, 35, 359-375.

Lowry, G. R. (1985). Change, growth, and staffing trends among U.S. online database producers: 1982-1984. In M. E. Williams & T. H. Hogan (Eds.), National Online Meeting Proceedings--1985 (pp. 283-290). Medford, NJ: Learned Information.

Marchionini, G., Barlow, D., & Hill, L. (1994). Extending retrieval strategies to networked environments: Old ways, new ways, and a critical look at WAIS. Journal of the American Society for Information Science, 45, 561-564.

Massicotte, M. (1988). Improved browsable displays for online subject access. Information Technology and Libraries, 7, 373-380.

Metcalfe, J. (1957). Information indexing and subject cataloging: Alphabetical, classified, coordinate, mechanical. New York: Scarecrow Press.

Micco, M. (1995). Subject authority control in the world of the Internet (Part 2). From Catalog to Gateway: Briefings from the CCFC [ALCTS Catalog Form and Function Committee], No. 6. Supplement to ALCTS Newsletter, 6, A-D.

Miksa, F. (1989). Cataloging education in the library and information science curriculum. In S. S. Intner & J. S. Hill (Eds.), Recruiting, educating, and training cataloging librarians: Solving the problems (pp. [273]-297). New York: Greenwood Press.

Mills, J. (1963). Guide to the use of the UDC. In Guide to the Universal Decimal Classification (UDC) (pp. 4-48). London: British Standards Institution.

Milstead, J. L. (1995). Invisible thesauri: The year 2000. Online & CD ROM Review, 19, 93-94.

Mitchell, S., & Mooney, M. (1996). INFOMINE: A model Web-based academic virtual library. Information Technology and Libraries, 15, 20-25.

Moore, N. L. (1993, Jan.). Editor's note: LISA: New improved layout, indexing and currency. Library & Information Science Abstracts, [1].

Nelson, M. L. (1995). Database pricing: A survey of practices, predictions, and opinions. Online: The Magazine of Online Information Systems, 19 (6), 76-86.

NISO. (1994). Guidelines for the construction, format, and management of monolingual thesauri: An American National Standard. Bethesda, MD: NISO Press. (ANSI/NISO Z39.19-1993).

Perry, J. W., & Kent, A. (1958). Tools for machine literature searching: Semantic code dictionary, equipment, procedures. New York: Interscience.

Ranganathan, S. R. (1960). Colon classification (6th ed.). Bombay: Asia Publishing House. Reprinted 1963 with amendments.

Ranganathan, S. R. (1967). Prolegomena to library classification (3rd ed.). London: Asia Publishing House.

Robbins, J. B. (1991). Fiction and reality in educating catalogers. In S. S. Intner & J. S. Hill (Eds.), Cataloging: The professional development cycle (pp. [55]-62). New York: Greenwood Press.

Rush, J. E. (1993). Technology-driven resource sharing. Bulletin of the American Society for Information Science, 19 (5), 19-23.

Saracevic, T. (1989). Indexing, searching, and relevance. In B. H. Weinberg (Ed.), Indexing: The state of our knowledge and the state of our ignorance: Proceedings of the 20th Annual Meeting of the American Society of Indexers, 1988 (pp. 102-109). Medford, NJ: Learned Information.

Scales, B. J., & Felt, E. C. (1995). Diversity on the World Wide Web: Using robots to search the Web. Library Software Review, 14, 132-136.

Scepanski, J. M. (1996). Public services in a telecommuting world. Information Technology and Libraries, 15, 41-43.

Steinberg, S. G. (1996). Seek and ye shall find, maybe. Wired, 4, 108-114, 172, 174, 176, 178, 180-182.

Stoll, C. (1995). Silicon snake oil: Second thoughts on the information highway. New York: Doubleday.

Taylor, A. G. (1994). Books and other bibliographic materials. In T. Petersen & P. J. Barnett (Eds.), Guide to indexing and cataloging with the Art & Architecture Thesaurus (pp. 101-119). New York: Oxford University Press.

Taylor, R. N. (1992). Strategic decision making. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., pp. 961-1007). Palo Alto, CA: Consulting Psychologists Press.

Taylor, R. S. (1986). Value-added processes in abstracting and indexing services. In R. S. Taylor, Value-added processes in information systems (pp. 96-125). Norwood, NJ: Ablex.

Thomas, A. R. (1992). Options in the arrangement of library materials and the new edition of the Bliss Bibliographic Classification. In B. H. Weinberg (Ed.), Cataloging heresy: Challenging the standard bibliographic product: Proceedings of the Congress for Librarians, 1991 (pp. 197-211). Medford, NJ: Learned Information.

Thomas, A. R. (1995). Bliss Classification update. Cataloging & Classification Quarterly, 19 (3/4), 105-117.

Weinberg, B. H. (1981). Word frequency and automatic indexing (dissertation). New York: Columbia University, School of Library Service. Ann Arbor, MI: University Microfilms, 1983.

Weinberg, B. H. (1988). Why indexing fails the researcher. The Indexer, 16, 3-6.

Weinberg, B. H. (1995). Why postcoordination fails the searcher. The Indexer, 19, 155-159.

Wellisch, H. H. (Ed.). (1977). The PRECIS index system: Principles, applications and prospects: Proceedings of the International PRECIS Workshop, 1976. New York: Wilson.

Wellisch, H. H. (1993). Subheadings: How many are too many? Key Words: The Newsletter of the American Society of Indexers, 1 (6), 12-13. Reprinted, along with other articles on this subject, in Subheadings: A matter of opinion. Port Aransas, TX: American Society of Indexers, 1995.

© 1996, American Society for Information Science. Permission to copy and distribute this document is hereby granted provided that this copyright notice is retained on all copies and that copies are not altered.