Document Type
- Part of Periodical (38)
- Doctoral Thesis (31)
- Diploma Thesis (24)
- Study Thesis (19)
- Bachelor Thesis (14)
- Master's Thesis (14)
- Report (1)
Keywords
- Routing (5)
- Bluetooth (4)
- Knowledge Compilation (4)
- Network (4)
- Semantic Web (4)
- Software Engineering (4)
- VNUML (4)
- E-KRHyper (3)
- Network simulation (3)
- RIP-MTI (3)
Institute
- Institut für Informatik (141)
This paper shows how multiagent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification on different levels of abstraction on the one hand, and the expression of real-time system behavior with continuous variables on the other. It is shown not only how multi-robot systems can be modeled by a combination of hybrid automata and hierarchical state machines, but also how model checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronization to take time and avoids state explosion to a certain extent.
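For orientation, a hybrid automaton is commonly formalized as a tuple along the following lines (a textbook sketch; the paper's own definition, in particular its enhanced synchronization concept, may differ in detail):

    H = (Q, X, Init, Inv, Flow, Jump, \Sigma)

where Q is a finite set of discrete modes, X a finite set of continuous variables, Init the set of initial states, Inv assigns an invariant over X to each mode, Flow assigns flow conditions such as \dot{x} = f_q(x) to each mode, Jump is the set of discrete transitions with guards and resets, and \Sigma is a set of synchronization labels shared between automata.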
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or multiple other graphs. By this extension it becomes possible to define a graph including a view onto other graphs and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying by the set of constraints imposed on the underlying SPARQL queries, is investigated. We demonstrate the capabilities of networked graphs by a simple use case.
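To illustrate the view idea in isolation (a minimal sketch assuming rdflib and invented example IRIs, not the paper's implementation), a graph can be populated from a SPARQL CONSTRUCT query over another graph:

    # Minimal sketch of a graph defined by a query on another graph,
    # using rdflib (assumed setup; not the paper's implementation).
    from rdflib import Graph

    source = Graph()
    source.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> foaf:knows <http://example.org/bob> .
    """, format="turtle")

    # The networked graph's "view" part: triples defined by a SPARQL query.
    view = Graph()
    construct = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    CONSTRUCT { ?y foaf:knows ?x }   # invert the relation
    WHERE     { ?x foaf:knows ?y }
    """
    for triple in source.query(construct):
        view.add(triple)

    print(view.serialize(format="turtle"))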
Hyper tableaux with equality
(2007)
In most theorem proving applications, a proper treatment of equational theories or equality is mandatory. In this paper we show how to integrate a modern treatment of equality in the hyper tableau calculus. It is based on splitting of positive clauses and an adapted version of the superposition inference rule, where equations used for paramodulation are drawn (only) from a set of positive unit clauses, the candidate model. The calculus also features a generic, semantically justified simplification rule which covers many redundancy elimination techniques known from superposition theorem proving. Our main results are soundness and completeness, but we briefly describe the implementation, too.
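The flavor of the equality handling can be conveyed by the classical paramodulation rule with a unit equation, stated here as general background rather than as the paper's exact, superposition-restricted inference:

    \frac{l \approx r \qquad C[l']}{(C[r])\sigma}
    \quad \text{where } \sigma = \mathrm{mgu}(l, l') \text{ and } l' \text{ is not a variable.}

In the calculus described above, the unit equation l \approx r is drawn from the candidate model, and the ordering restrictions of superposition prune the applicable inferences.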
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specifying classes in an expressive logical language with highly flexible, dynamic and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models, including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models that combine the benefits of UML models and OWL ontologies.
In this paper we describe a network for distributing personalized information in a metropolitan area. We discuss the system architecture of our Bluetooth-based information system as well as the reasoning process that matches users' needs with potential messages. We furthermore present our findings on parallelizing Bluetooth connection setup and on performance.
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including: compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. We then present the developed ontology and discuss it with respect to our requirements.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency directed backjumping and branch local operation. As an instance of that framework an adaption of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
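As background for the DPLL instance mentioned above, the following is a minimal textbook DPLL satisfiability test in Python (without the dependency-directed backjumping, branch-local operation or projection machinery of the framework):

    # Plain propositional DPLL on clause sets (frozensets of int literals);
    # a textbook sketch only -- the framework described above adds
    # backjumping, branch-local operation and projection elimination.
    def dpll(clauses):
        # Unit propagation.
        while True:
            if frozenset() in clauses:
                return False            # empty clause: conflict
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                break
            lit = units[0]
            clauses = [c - {-lit} for c in clauses if lit not in c]
        if not clauses:
            return True                 # all clauses satisfied
        lit = next(iter(clauses[0]))    # split on some literal
        return (dpll(clauses + [frozenset({lit})]) or
                dpll(clauses + [frozenset({-lit})]))

    # Example: (p or q) and (not p) is satisfiable.
    print(dpll([frozenset({1, 2}), frozenset({-1})]))  # True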
This paper offers an informal overview of and discussion on first-order predicate logic reasoning systems, together with a description of applications carried out in the Artificial Intelligence Research Group of the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
The E-KRHyper system is a model generator and theorem prover for first-order logic with equality. It implements the new E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus. E-KRHyper extends our previous KRHyper system, which has been used in a number of applications in the field of knowledge representation. In contrast to most first-order theorem provers, it supports features important for such applications, for example queries with predicate extensions as answers, handling of large sets of uniformly structured input facts, arithmetic evaluation and stratified negation as failure. It is our goal to extend the range of application possibilities of KRHyper by adding equality reasoning.
This thesis provides a tool that can simplify, speed up and automate structured tests of the RIP-MTI algorithm. The formerly two dimensions, topology and update constellation, against which the MTI extension had to be tested could be reduced to the variable part of the topology. The temporal order in which the updates occur can be controlled centrally. Previously, tests had to be driven manually, at considerable effort, via scripts on the console. The test environment "XTPeer" developed here makes it possible to carry out the same tests, and many more, with little effort.
SNMP in VNUML Simulations
(2007)
The Simple Network Management Protocol (SNMP) is considered the universal language of network management. As early as the beginning of the 1990s, the first version of SNMP was declared the standard Internet management protocol by the Internet Engineering Task Force (IETF) and became part of the TCP/IP protocol suite. SNMP implementations are available for most operating system platforms, and many network-capable devices and network management programs support SNMP, which has proven its worth above all for managing cross-platform, vendor-independent networks. Virtual Network User Mode Linux (VNUML) is a powerful network simulation program for Linux with which virtual Linux networks can be built in order to simulate program runs within them. VNUML network simulations are suited first and foremost to developing, analyzing and testing Linux network software, such as network protocols. The simulation program was created within the Euro6IX project, for the introduction of the IPv6 standard in Europe, at the Telematics Engineering Department of the Technical University of Madrid. The hosts of the virtual networks that VNUML builds are based on User Mode Linux (UML), a widely used virtualization of the Linux kernel. This study thesis examines the possibilities and the workings of network management with SNMP. To this end, the SNMP software Net-SNMP is used in VNUML simulations in order to investigate the options for configuring and working with SNMP in a practice-oriented environment. Using Net-SNMP in VNUML simulations can serve to prepare the integration of SNMP-based network management for real networks and to sound out configuration and usage options, or simply to equip VNUML simulations in general with this proven network management system as a supporting tool.
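As a concrete taste of the kind of interaction examined here (a sketch: the agent address 10.0.0.1 and the community string "public" are assumptions about a particular simulated network), a Net-SNMP query against a host inside a VNUML simulation might look as follows:

    # Querying an SNMP agent in a simulated VNUML host via the Net-SNMP
    # command line tools (assumed to be installed on the controlling host).
    import subprocess

    result = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", "10.0.0.1", "sysDescr.0"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())   # e.g. the system description of the UML host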
Problems in the analysis of requirements often lead to failures when developing software systems. Requirements engineering addresses this problem: the early involvement of all kinds of stakeholders in the development of such a system and a structured process to elicit and analyse requirements have made it a crucial first step in software development. The increasing complexity of modern software systems, though, leads to a rising amount of information which has to be dealt with during analysis. Without the support of appropriate tools this would be almost impossible. Especially in bigger projects, which tend to be spatially distributed, effective requirements engineering could not be implemented without this kind of support. Today there is a wide range of tools dealing with this matter. They have been in use for some time now and, in their most recent versions, realize the most important aspects of requirements engineering. Within the scope of this thesis some of these tools are analysed, focussing both on the major functionalities concerning the management of requirements and on the repository of these tools. The results of this analysis are integrated into a reference model.
Interactive video retrieval
(2006)
The goal of this thesis is to develop a video retrieval system that supports relevance feedback. One research question of the thesis is whether a combination of implicit and explicit relevance feedback returns better retrieval results than a system using explicit feedback only. Another is to identify a model for weighting the existing feature categories. For this purpose, a state-of-the-art analysis is presented and two systems are implemented, which run under the conditions of the international TRECVID workshop. The system will serve as a basis for further research in the field of interactive video retrieval and, among other things, will participate in the 2006 search task of that workshop.
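A classical instance of explicit relevance feedback, given here as general background rather than as the weighting model sought in the thesis, is Rocchio's query refinement formula:

    \vec{q}\,' = \alpha\,\vec{q} + \frac{\beta}{|D_r|} \sum_{\vec{d} \in D_r} \vec{d} - \frac{\gamma}{|D_n|} \sum_{\vec{d} \in D_n} \vec{d}

where D_r and D_n are the documents judged relevant and non-relevant, and \alpha, \beta, \gamma weight the original query against the positive and negative feedback.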
The high cost of routing infrastructure makes checking theories about larger networks a very difficult and expensive task. One possible approach to this problem is the use of virtual instead of physical infrastructure. OPNET IT Guru is a software suite designed to simulate large networks and present the relevant information. It allows extensive changes to be validated before actually implementing them on a production network, and theories to be tested without the overhead of a physical infrastructure.
Many problems in propositional logic can only be solved at great computational expense. If, for example, we are given a knowledge base against which we want to pose queries, answering them can be quite laborious. To answer queries efficiently nonetheless, the approach of knowledge compilation has been developed. Here the solution of the task is split into an offline and an online phase. In the offline phase the knowledge base is precompiled, that is, transformed into a particular form on which the expected queries can be answered efficiently. This transformation of the knowledge base is usually very expensive, but it only has to be performed once. In the subsequent online phase, queries can then be answered efficiently. This diploma thesis investigates a particular normal form that suggests itself as the target language of this precompilation. In addition, the precompilation is divided into individual steps in such a way that queries can possibly already be answered after only some of these steps.
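The offline/online split can be illustrated with a deliberately naive compilation target (enumerating all models; the thesis studies a genuine normal form, and this sketch only shows the phase structure):

    # Naive illustration of the offline/online split in knowledge compilation:
    # offline, enumerate all models of the knowledge base once (expensive);
    # online, answer clause entailment queries by a scan over the models.
    from itertools import product

    def models(clauses, atoms):
        """Offline phase: all assignments of `atoms` satisfying every clause."""
        result = []
        for values in product([False, True], repeat=len(atoms)):
            assignment = dict(zip(atoms, values))
            if all(any(assignment[a] == pos for a, pos in clause) for clause in clauses):
                result.append(assignment)
        return result

    def entails(compiled, clause):
        """Online phase: the KB entails `clause` iff it holds in every model."""
        return all(any(m[a] == pos for a, pos in clause) for m in compiled)

    # KB: (p or q) and (not p or q)  ==> entails q
    kb = [[("p", True), ("q", True)], [("p", False), ("q", True)]]
    compiled = models(kb, ["p", "q"])          # expensive, done once
    print(entails(compiled, [("q", True)]))    # True, answered quickly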
The model evolution calculus
(2004)
The DPLL procedure is the basis of some of the most successful propositional satisfiability solvers to date. Although originally devised as a proof procedure for first-order logic, it has been used almost exclusively for propositional logic so far because of its highly inefficient treatment of quantifiers, based on instantiation into ground formulas. The recent FDPLL calculus by Baumgartner was the first successful attempt to lift the procedure to the first-order level without resorting to ground instantiations. FDPLL lifts to the first-order case the core of the DPLL procedure, the splitting rule, but ignores other aspects of the procedure that, although not necessary for completeness, are crucial for its effectiveness in practice. In this paper, we present a new calculus loosely based on FDPLL that lifts these aspects as well. In addition to being a more faithful lifting of the DPLL procedure, the new calculus contains a more systematic treatment of universal literals, one of FDPLL's optimizations, and so has the potential of leading to much faster implementations.
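The splitting rule that the lifting revolves around can be stated, in its propositional form, as:

    \frac{\Phi}{\Phi \cup \{L\} \quad\big|\quad \Phi \cup \{\lnot L\}}
    \qquad L \text{ a literal occurring in } \Phi

FDPLL and the calculus presented here lift this rule to split on possibly non-ground literals, so that a single split covers whole sets of ground instances instead of one instantiation at a time.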
The Living Book is a system for the management of personalized and scenario-specific teaching material. The main goal of the system is to support active, explorative and self-determined learning in lectures, tutorials and self study. The Living Book includes a course on 'logic for computer scientists' with uniform access to various tools like theorem provers and an interactive tableau editor. It is routinely used in undergraduate teaching at our university. This paper describes the Living Book and the use of theorem proving technology as a core component in the knowledge management system (KMS) of the Living Book. The KMS provides a scenario management component where teachers may describe those parts of given documents that are relevant in order to achieve a certain learning goal. The task of the KMS is to assemble new documents from a database of elementary units called 'slices' (definitions, theorems, and so on) in a scenario-based way (like 'I want to prepare for an exam and need to learn about resolution'). The computation of such assemblies is carried out by a model-generating theorem prover for first-order logic with a default negation principle. Its input consists of metadata that describe the dependencies between different slices, and logic-programming style rules that describe the scenario-specific composition of slices. Additionally, a user model is taken into account that contains information about topics and slices that are known or unknown to a student. A model computed by the system for such input then directly specifies the document to be assembled. This paper introduces the e-learning context we are faced with, motivates our choice of logic and presents the newly developed calculus used in the KMS.
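To make the assembly task concrete (an illustrative sketch with invented slice names; the actual KMS computes assemblies as models of a first-order theory with default negation):

    # Sketch of scenario-based assembly: collect every slice the goal
    # depends on that the student does not already know.  The dependency
    # data and slice names below are invented for the example.
    DEPENDS = {                     # slice -> prerequisite slices
        "resolution": {"clause-form", "unification"},
        "clause-form": {"propositional-logic"},
        "unification": {"terms"},
    }

    def assemble(goal, known):
        """All slices needed for `goal` that are not yet known."""
        needed, stack = [], [goal]
        while stack:
            slice_ = stack.pop()
            if slice_ in known or slice_ in needed:
                continue
            needed.append(slice_)
            stack.extend(DEPENDS.get(slice_, ()))
        return list(reversed(needed))   # prerequisites before the goal

    # "I want to learn about resolution" for a student who knows basic logic:
    print(assemble("resolution", known={"propositional-logic", "terms"}))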
We aim to demonstrate that automated deduction techniques, in particular those following the model computation paradigm, are very well suited for database schema/query reasoning. Specifically, we present an approach to compute completed paths for database or XPath queries. The database schema and a query are transformed to disjunctive logic programs with default negation, using a description logic as an intermediate language. Our underlying deduction system, KRHyper, then detects whether a query is satisfiable or not. In case of a satisfiable query, all completed paths -- those that fulfill all given constraints -- are returned as part of the computed models. The purpose of our approach is to dramatically reduce the workload on the query processor. Without the path completion, a usual XML query processor would search the database for solutions to the query. In the paper we describe the transformation in detail and explain how to extract the solution to the original task from the computed models. We understand this paper as a first step that covers a basic schema/query reasoning task by model-based deduction. Due to the underlying expressive logic formalism we expect our approach to adapt easily to more sophisticated problem settings, like type hierarchies as they evolve within the XML world.
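The notion of a completed path can be illustrated without any deduction machinery (a plain search over an invented toy schema; the paper achieves the same effect via disjunctive logic programs evaluated by KRHyper):

    # Sketch of "path completion": given a schema as an element hierarchy,
    # expand an XPath-like descendant query (//name) into all concrete
    # root-to-target paths the schema allows.
    SCHEMA = {                      # element -> allowed child elements
        "library": ["book"],
        "book": ["title", "author"],
        "author": ["name"],
    }

    def complete_paths(element, target, prefix=()):
        """All schema paths from `element` whose final element is `target`."""
        path = prefix + (element,)
        found = [path] if element == target else []
        for child in SCHEMA.get(element, []):
            found.extend(complete_paths(child, target, path))
        return found

    # Completions for the query //name starting at the document root:
    for p in complete_paths("library", "name"):
        print("/" + "/".join(p))    # /library/book/author/name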