An ever-increasing volume of information is becoming accessible through the Internet and through corporate Intranets. This explosion compounds the difficulties users face whilst trying to discover the information relevant to their decision-making activities. To work successfully, the user needs to know where to find information and which links to follow once at a relevant location. Although users tend to be able to navigate information spaces when in the rough neighbourhood of their desired information objects, knowledge of pertinent landmarks in the information landscape from which to start is difficult to obtain, given the amount and diversity of information present.
Users interact with information spaces in various ways (such as locating specific information objects, topic-oriented information collation, and berrypicking). Index generators and information classification systems (such as AltaVista and Yahoo) facilitate these tasks and offer a single point of contact (hence reducing the number of landmarks which need to be known). However, the inherent difficulty of expressing information goals means that users perceive such tools as having limited utility. Moreover, because these tools do not exploit the inherently distributed nature of many information spaces (being in many cases monolithic), the coverage and temporal accuracy of their indexes will diminish as the infostructure continues to change and increase in size. They also fail to tailor their services to the needs of individual users and lack common usage and output formats (meaning users have to learn how to use each tool).
Some of these problems are being tackled by autonomous agents (or softbots). Such agents can either replicate domain tasks on the user's behalf (e.g. browsing for information, uploading and printing documents, etc.), and/or analyse information in order to assist the user (e.g. highlighting anchor text, synthesizing summaries, and so on). More complex agents can take a vague specification of an information goal, determine for themselves how it should best be met, and then actually perform the necessary actions. In all cases, however, the agents operate from the perspective of their user, knowing his preferences and objectives (either by being told explicitly or by inferring them implicitly). Such agents are now being used to undertake a range of tasks including comparison shopping, finding people's home pages, and finding experts on particular topics.
Current-generation softbots clearly represent a useful addition to the information management armoury. However, they have several inherent limitations which will require a radical reappraisal of their method of operation in the not-too-distant future. Firstly, many of these tools rely upon index generators for their operation, and we believe that these will become less effective as time passes unless fundamentally changed. The chief concern, however, follows from the fact that softbots satisfy their user's requirements from a purely individualistic perspective. Thus if all Web users had their own softbot (which presumably is the aim of this research community), there would be millions of agents scouring the Web. These agents would have more or less the same functionality and would invariably be trying to satisfy goals which are not unique. Yet they make no attempt to coordinate their activity, nor to share information which could benefit one another (whether it be information used by the agents or information gained for their users). When viewed from a systems-level perspective (i.e. from the view of the whole Internet or an entire Intranet), this is an untenable position: there are massive amounts of replicated functionality, data and redundant computation, and agents' experiences and discoveries do not benefit others. Sharing information and cooperating are therefore necessary to increase utility, efficiency and scaleability. For these reasons, softbots as they are currently conceived will not scale up to meet the short-term needs of users, let alone those of the future.
Given this seemingly bleak situation, what can be done to facilitate the development of robust, large-scale information management systems? Firstly, it is necessary to jettison the user-agent-centred perspective of the softbot community. This allows us to view both users and information providers as agents (i.e. consumer and producer agents). It also promotes the notion of an intermediary agent to facilitate the coupling of producers and consumers. (Primitive versions of these agents can be found in today's softbot world: softbots, acting on behalf of users, interact with low-level intermediaries (directories and indices) to find information resources (web pages).) Coupling allows intermediaries to provide high-level problem-solving services. For example, a brokerage intermediary may provide a coupling service in which consumers who indicate a need for a particular service are put in touch with producers who can provide it, or task intermediaries may have specialised know-how about solving a particular domain problem which the user agent can buy in on an as-needed basis. Also, the producers of information are viewed as first-class agents in their own right. Rather than being passive repositories which meekly lay out their wares for all to see and exploit, producers can actively seek customers for their information (i.e. they can push their services as well as relying on user pull) and they can offer personalised and customisable services by interacting with consumers to see exactly what their needs are. The shift away from the user-agent view also moves the unit of analysis up from the component level (individual softbots, specific directories, etc.) to the level of the system.
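The coupling role of a brokerage intermediary described above can be sketched minimally as follows. This is an illustrative model only, not an implementation from any particular agent framework: producers push adverts for the service topics they offer, and consumers pull the set of producers matching their need. All names (`Broker`, `advertise`, `find_producers`, the topic strings) are assumptions introduced for the example.

```python
class Broker:
    """A brokerage intermediary: couples consumer agents to producer
    agents by the service topic the producers have advertised."""

    def __init__(self):
        self._adverts = {}  # topic -> set of producer identifiers

    def advertise(self, producer_id, topic):
        """A producer pushes a service it offers to the broker."""
        self._adverts.setdefault(topic, set()).add(producer_id)

    def retract(self, producer_id, topic):
        """A producer withdraws a service it no longer offers."""
        self._adverts.get(topic, set()).discard(producer_id)

    def find_producers(self, topic):
        """A consumer pulls the producers able to serve its need."""
        return sorted(self._adverts.get(topic, set()))


# Producers actively push their services; a consumer then pulls a match.
broker = Broker()
broker.advertise("uni-library", "digital-documents")
broker.advertise("news-feed", "current-affairs")
broker.advertise("preprint-server", "digital-documents")

print(broker.find_producers("digital-documents"))
# → ['preprint-server', 'uni-library']
```

The point of the sketch is the shape of the interaction, not the data structure: the broker is a single point of contact that replaces each consumer's need to know every producer landmark individually.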
Intermediaries can be envisaged for many high-level problem-solving services, and it is tempting simply to let them inherit the familiar functions that we ourselves have evolved in human socio-economic organisations (i.e. particular instances of services-for-sale, agent yellow pages, data corpora, and so on). We believe that this atomic-agents view (design with agents as indivisible entities) is of little utility in dealing with the design, creation, operation and scaleability of a large system. Firstly, this approach is likely to lead to a (re)design-to-need method which produces fixed agent/intermediary organisations (as can be seen in current research). This is undesirable because fixed organisational structures do not scale: for example, providing a single yellow-pages intermediary in a multi-agent system will result in a bottleneck as the number of agents increases. Secondly, just as our socio-economic facilities and their organisation are an outgrowth of our environment, our goals and our abilities, those of the agent (as a computer program) differ, and hence they should not be merely a projection of what is familiar (since different organisational functions/structures may be more appropriate). Thirdly, the use of agents as atomic building blocks tends to focus attention on cooperative problem solving and heterogeneity, not on the information sharing which is key to information management, utility and scaleability.
To rectify these problems, we believe a different stance is required. An intermediary is nothing more than a centralisation of some task (with or without associated data) which provides greater efficiency and/or higher utility for the agent collective, and it should be created solely for this reason. That is, intermediaries should not be created simply because they are familiar anthropic socio-economic analogues. Consequently, we believe that their creation is best left to the agents themselves. Additionally, despite its obvious advantages, the atomic-agents view is a distortion of reality: although many users are consumers and many organisations producers, it is by no means the case that an agent representing the interests of either only consumes or only produces. When we look into the internal goals and algorithms of agents, we observe that producers and consumers have many goals which are mutually satisfying. For example, one academic writes a paper to disseminate results, whereas another needs the results for his own work (both of which are valid goals). Thus, regardless of the classification we give an agent (consumer or producer), it is the efficient achievement of these goals that gives purpose to intermediaries. Further, because intermediaries are themselves agents, they too should be able to detect and correct organisational bottlenecks, inefficiencies and drops in utility.
Allowing agents to build and maintain their own hierarchic organisational structure requires that they be able to decide for themselves which tasks should be `centralised'. This means they must be aware of, and able to meta-reason about, their own internal efficiency/efficacy and goals, those of their acquaintances, and the goals of the system as a whole. It also entails that agents are able to create and annihilate other agents, delegate and surrender tasks and information (for their internal and user goals), and modify their own operation and influence that of others (because organisational changes are taken by the collective, not the individual). As designers of such systems, we must be aware of all these issues and, for the sake of creating multi-agent systems which are extremely large, complex and scaleable, dismiss the `agent as atom' stance.
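One way to picture this self-organising behaviour is an agent that meta-reasons about the demand it observes and, when a task recurs often enough, centralises it by creating a dedicated intermediary and surrendering the task to it. The sketch below is a deliberately minimal illustration under assumed policy; the names (`Agent`, `Intermediary`, `SPAWN_THRESHOLD`) and the threshold-based trigger are inventions for the example, not a mechanism proposed in the text.

```python
from collections import Counter

SPAWN_THRESHOLD = 3  # assumed policy: centralise a task after 3 requests


class Intermediary:
    """A dedicated agent created to centralise one frequently needed task."""

    def __init__(self, task):
        self.name = f"broker-for-{task}"

    def handle(self, task):
        return f"{self.name} handled {task}"


class Agent:
    """An agent that meta-reasons about observed demand and spawns
    intermediaries when centralisation would benefit the collective."""

    def __init__(self, name):
        self.name = name
        self.demand = Counter()      # per-task request counts
        self.intermediaries = {}     # task -> Intermediary

    def handle(self, task):
        # Surrender the task if it has already been centralised.
        if task in self.intermediaries:
            return self.intermediaries[task].handle(task)
        self.demand[task] += 1
        # Meta-reasoning step: repeated demand justifies creating
        # (and thereafter delegating to) a new intermediary agent.
        if self.demand[task] >= SPAWN_THRESHOLD:
            self.intermediaries[task] = Intermediary(task)
        return f"{self.name} handled {task}"


agent = Agent("softbot")
results = [agent.handle("index-lookup") for _ in range(4)]
print(results[-1])
# → broker-for-index-lookup handled index-lookup
```

In a real system the trigger would be a genuine cost/utility judgement taken by the collective rather than a fixed counter, and the intermediary could equally be annihilated again when demand falls; the sketch only shows where such decisions sit inside an agent.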
In summary, we believe that large-scale multi-agent systems should be self-building, because this compensates for the difficulty of their design, and self-organising, because the agents themselves are best able to evolve an appropriate structure and because this allows for a higher degree of scale tolerance.
M. J. Bates (1989) "The design of browsing and berrypicking techniques for the on-line search interface" Online Review 13 (5) 407-424.
J. D. Foss (1997) "Brokering the Info-Underworld" in N. R. Jennings and M. Wooldridge (eds.) Agent Technology: Foundations, Applications and Markets, Springer-Verlag.
K. Sycara and D. Zeng (1996) "Coordination of Multiple Intelligent Software Agents" Int. Journal of Cooperative Information Systems 5 (2&3) 181-212.
E. H. Durfee, D. L. Kiskis and W. P. Birmingham (1997) "The agent architecture of the University of Michigan Digital Library" IEE Proc. on Software Engineering 144 (1) 61-71.