
Ontologies for Agents: Theory and Experiences

  • Publication date: 30.03.2006
  • Publisher: Birkhäuser Basel
eBook (PDF)
€51.16
incl. statutory VAT
Available immediately via download

Available online


The volume provides a comprehensive review of the diverse efforts to bridge the gap between the two main perspectives on ontologies for multi-agent systems: how ontologies should be modelled and represented so that agent systems can use them effectively, and what capabilities an agent must exhibit in order to make use of ontological knowledge and reason with it efficiently. The volume collects the most significant papers of the AAMAS 2002 and AAMAS 2003 workshops on ontologies for agent systems, and the EKAW 2002 workshop on ontologies for multi-agent systems.




Reconciling Implicit and Evolving Ontologies for Semantic Interoperability (p. 121-122)

Kendall Lister, Maia Hristozova and Leon Sterling

Abstract. This paper addresses current approaches to the goal of semantic interoperability on the web and presents new research directions. We critically discuss the existing approaches, including RDF, SHOE, PROMPT and Chimaera, and identify the most effective elements of each. In our opinion, the ability of these primarily closed solutions to succeed on a global web scale is limited. In general, a unilateral solution to the problem on a global level seems unlikely in the foreseeable future. We review and contrast our own research experiments AReXS and CASA and suggest that as yet unaddressed issues should be considered, such as reconciling implicit ontologies and evolving ontologies and task-oriented analysis. We also consider the role of semantic interoperation in multi-agent systems and describe strategies for achieving this via the ROADMAP methodology, with emphasis on building and assuring knowledge models.

Keywords. Ontology translation/mapping, Ontology maintenance/evolution, Data standardisation.

1. Introduction

The much talked about goal of building a new Internet that is comprehensible to machines as well as humans is generally considered to involve enhancing content and information sources with semantic markings and explicit ontologies. A number of approaches to this goal have been proposed, and these generally involve a new representation for semantically enriched data. Something that seems to be often overlooked, however, is that a single solution is unlikely to be usefully applicable to the entire world wide web. It is obvious that business needs are generally quite different to the needs of individuals, and that even within the business community different areas will require solutions of varying sophistication, accuracy and scale. The widespread success of the world wide web and its underlying technologies, HTML and HTTP, has been due in no small part to their simplicity and ease of adoption. By providing a simple architecture that anyone could learn and use with minimal overhead, content flourished on the web. Other information technologies that arguably provided more effective methods for locating and retrieving data failed to take off in the same exponential way that the web did.
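The "semantic markings" the paragraph above refers to boil down to machine-readable statements about resources, which RDF expresses as (subject, predicate, object) triples. The following is a minimal sketch of that idea in plain Python, without any particular RDF library; all URIs, property names and the `query` helper are illustrative, not drawn from the paper.

```python
# A tiny triple store: each fact about a web resource is a
# (subject, predicate, object) triple, as in RDF.
# All URIs and terms below are made up for illustration.
triples = [
    ("http://example.org/page1", "dc:title", "Agent Ontologies"),
    ("http://example.org/page1", "dc:creator", "A. Author"),
    ("http://example.org/page1", "rdf:type", "ex:Article"),
]

def query(store, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in store
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# A machine can now answer "what is this page's title?" without
# scraping presentation-oriented HTML.
print(query(triples, predicate="dc:title"))
```

The point of the sketch is only to show why explicit semantics help agents: the query pattern operates on declared meaning rather than on markup intended for human display.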

Where the web infrastructure itself doesn't even contain the most rudimentary searching and resource location features, Gopher, WAIS and a large number of proprietary online databases that predated the world wide web all provided automated indexing, searching, hypertextuality and other information management capabilities. But despite their apparent advantages, all of these technologies were overtaken by the web. In fact, in many cases proprietary databases and indexes have had their interfaces replaced with web-based solutions, to the point that the actual technology is largely hidden. It is more than a coincidence that where the world wide web succeeded and grew to become a de facto standard, the more complex alternatives faltered and missed out on popular adoption.

Similarly, we consider that the next generation of semantically-capable global information infrastructure will necessarily be relatively simple in order to achieve the same scale of acceptance. That is not to say that sophisticated technologies have no place - on the contrary, they will be vital for the areas of industry that require them, and their advances will no doubt drive other research efforts even further. Also, the intelligent agents that roam this infrastructure will themselves be very sophisticated. However, there remains a fundamental role for simple, flexible and adaptive technologies that do not demand strict adher
