Refine
Document Type
- Part of Periodical (38)
- Doctoral Thesis (35)
- Diploma Thesis (24)
- Study Thesis (19)
- Master's Thesis (16)
- Bachelor Thesis (14)
- Report (1)
Keywords
- Routing (5)
- Bluetooth (4)
- Knowledge Compilation (4)
- Netzwerk (4)
- Semantic Web (4)
- Software Engineering (4)
- VNUML (4)
- E-KRHyper (3)
- Netzwerksimulation (3)
- RIP-MTI (3)
- XTPeer (3)
- Campus Information System (2)
- Drahtloses Sensorsystem (2)
- Equality (2)
- Information Retrieval (2)
- Konvergenz (2)
- Modellgetriebene Entwicklung (2)
- Network (2)
- Ontology (2)
- Petri-Netze (2)
- Prädikatenlogik (2)
- RIP (2)
- Rechnernetz (2)
- Theorem Proving (2)
- UML (2)
- University (2)
- constraint logic programming (2)
- hybrid automata (2)
- probability propagation nets (2)
- API (1)
- API analysis (1)
- API-Analyse (1)
- AUTOSAR (1)
- Abduktion <Logik> (1)
- Ad-hoc-Netz (1)
- Adaptation (1)
- Administration (1)
- Algorithm Engineering (1)
- Algorithmische Geometrie (1)
- Anpassung (1)
- Architektur <Informatik> (1)
- Auditing (1)
- Augmented Reality (1)
- Aussagenlogik (1)
- Authentisierung (1)
- Automated Theorem Proving (1)
- Automated Theorem Proving Systems (1)
- Automatisches Beweisverfahren (1)
- Automotive Systems (1)
- Backtrack (1)
- Bayes Procedures (1)
- Beaconless (1)
- Befahrbarkeit (1)
- Betriebssystem (1)
- Border Gateway Protocol (1)
- Border Gateway Protocol 4 (1)
- C-Socketbibliothek (1)
- CMS (1)
- CSCA (1)
- CTI (1)
- CTI-Problem (1)
- Calculus (1)
- Cheops (1)
- Cicero (1)
- Circumscription (1)
- Cisco Catalyst 3500XL (1)
- Communication Networks (1)
- Computational logic (1)
- Computer Security (1)
- Computer Supported Cooperative Work (1)
- Computersicherheit (1)
- Computersimulation (1)
- Conference (1)
- Connected Vehicles (1)
- Context-aware processes (1)
- Controlling (1)
- Counting-to-Infinity (1)
- DHCPv6 (1)
- DNNF (1)
- DPLL (1)
- DPLL procedure (1)
- Data Mining (1)
- Datenschutz (1)
- Deduktion (1)
- Description Logics (1)
- Destiny (1)
- Developer profiling (1)
- DiaLex (1)
- Dialekt (1)
- Dijkstras Algorithmus (1)
- Dimension 3 (1)
- Diskrete Simulation (1)
- Distanz-Vektor (1)
- Distributed Algorithm (1)
- Distributed Environments (1)
- Distributed process execution (1)
- Dokumentation (1)
- Drahtloses vermachtes Netz (1)
- Driver Assistance Systems (1)
- E-Participation (1)
- Ebener Graph (1)
- Echtzeit-Interaktion (1)
- Echtzeitsysteme (1)
- Empirical Studies (1)
- Emulation (1)
- Enhanced Reality (1)
- Enterprise Architecture Framework (1)
- Entscheidungsprozess (1)
- Entwickler Profil (1)
- Fahrerassistenzsysteme (1)
- Fahrzeug (1)
- Fließgewässer (1)
- Flowing Inland Waters (1)
- Formal Methods (1)
- Formale Methoden (1)
- Formale Ontologie (1)
- Fragebeantwortung (1)
- GPS (1)
- GReTL (1)
- Gelände (1)
- Geographic routing (1)
- Geometric spanner (1)
- Graph (1)
- Grounded Theory (1)
- HNF-Algorithmus (1)
- Hermite-Normalform (1)
- Heuristiken (1)
- Hindernis (1)
- Horn Clauses (1)
- Human-Computer Interaction (1)
- Hyper Tableau Calculus (1)
- IASON (1)
- IPv6 (1)
- IT Guru (1)
- IT-Security (1)
- Information (1)
- Information Centric Networking (1)
- Information Visualization (1)
- Informationsverlinkung (1)
- Informationsvisualisierung (1)
- InstaHub (1)
- Intelligent Information Network (1)
- Interactive Video Retrieval (1)
- Interactive Visualizations (1)
- Interaktive Visualisierungen (1)
- Interior Gateway Protokoll (1)
- JGraLab (1)
- JML (1)
- Java (1)
- Java Modeling Language (1)
- Java <Programmiersprache> (1)
- KRHyper (1)
- Klassifikation (1)
- Klausellogik (1)
- Knowledge (1)
- Knowledge Engineering (1)
- Komplexität / Algorithmus (1)
- Laser (1)
- Linking of Information (1)
- Linux (1)
- Local algorithm (1)
- Logischer Schluss (1)
- Lookup-Algorithmen (1)
- MIA (1)
- MPEG-7 (1)
- MSR (1)
- Mail-Filter (1)
- Mehrprozessorsystem (1)
- Mensch-Maschine-Interaktion (1)
- Metamodel (1)
- Meteor (1)
- Metriken (1)
- Mikrocontroller (1)
- Mining (1)
- Mining Software Repositories (1)
- Mobile Information Systems (1)
- Mobiltelefon (1)
- Modelltransformation (1)
- Monitoring (1)
- Multi-Agenten-Simulation (1)
- Multi-robot System (1)
- Multiagent System (1)
- Multiagentensysteme (1)
- Nachbarschaftsgraph (1)
- Nagios (1)
- Named Function Networking (1)
- Network Monitoring (1)
- Network Simulation (1)
- Netzwerk Protokollen (1)
- Netzwerkanalyse (1)
- Netzwerkchip (1)
- Netzwerkmanagement (1)
- Netzwerktopologie (1)
- Netzwerküberwachung (1)
- Neuronales Netz (1)
- Nutzerzufriedenheit (1)
- OPNET (1)
- OWL <Informatik> (1)
- Ontologie <Wissensverarbeitung> (1)
- OpenWrt (1)
- Oracle Generation (1)
- Oraklegenerierung (1)
- Orientierungsstufe (1)
- PDA (1)
- Personal Key Infrastruktur (1)
- Personalised Information Systems (1)
- Petri Nets (1)
- Petri net (1)
- Petrinetz (1)
- Placement Strategies (1)
- Planar graphs (1)
- Policy Language (1)
- Proactive Caching (1)
- Probability (1)
- Probability propagation nets (1)
- Process (1)
- Process tracing (1)
- Projekt iCity (1)
- Propagation (1)
- Propositional Logic (1)
- Protocol (1)
- Prototyp (1)
- Prototype (1)
- Provenance (1)
- Quasi unit disk graph (1)
- Query Expansion (1)
- RDF Graphs (1)
- RDF modeling (1)
- RIP-MTI Algorithmus (1)
- RMTI (1)
- Reactive algorithm (1)
- Rechnernetze (1)
- Reference Model (1)
- Relationale Datenbank (1)
- Relevance Feedback (1)
- Requirement-Engineering (1)
- Resource Description Framework (RDF) (1)
- Robocup 2008 (1)
- Roboter (1)
- Router (1)
- Routing Information Protocol (1)
- Routing Information Protocol (RIP) (1)
- Routing Loops (1)
- Routing information protocol (1)
- Routing with Metric based Topology Investigation (RMTI) (1)
- Routing- und Forwarding Tabellen (1)
- Routing-Loop (1)
- SAC (1)
- SNMP (1)
- SOA (1)
- SPARQL (1)
- STP (1)
- Schleifenerkennung (1)
- Schlussfolgern (1)
- Security Requirements (1)
- Semantic Web Data (1)
- Semantics (1)
- Semantisches Wiki (1)
- Sentiment Analysis, ChatGPT, Students sentiments (1)
- Serdar Ayalp (1)
- Serviceorientierte Architektur (1)
- Software Language (1)
- Software Repositories (1)
- Software Technology (1)
- Softwarearchitektur (1)
- Softwaretest (1)
- Softwaretesting (1)
- Soziale Netzwerke (1)
- Spam-Mail (1)
- Spannender Baum (1)
- Spanning Tree (1)
- Specification (1)
- Spezifikation (1)
- Statistische Analyse (1)
- Stochastic Logic (1)
- Straßenzustand (1)
- Suat Algin (1)
- TAP (1)
- TGraph (1)
- TRECVID (1)
- Tableau <Logik> (1)
- Tableau Calculus (1)
- Technologischer Raum (1)
- Technology Acceptance Model (1)
- Temperaturmessung (1)
- Test Generation (1)
- Testgenerierung (1)
- Theorem prover (1)
- Theorembeweiser (1)
- Tokens (1)
- Transaction concept (1)
- Transaktion (1)
- Transaktionskonzept (1)
- Ultraschall (1)
- Unified Modeling Language (UML ) (1)
- Unit disk graph (1)
- VDE (1)
- VDE-Switch (1)
- VLAN (1)
- VTP (1)
- Vegetation distribution (1)
- Verifikation (1)
- Verortung (1)
- Verteilter Algorithmus (1)
- Virtual Square (1)
- Virtual network user mode Linux (1)
- Virtualisation (1)
- Virtualisierung (1)
- Visualisierung (1)
- Wahrscheinlichkeitsrechnung (1)
- Wasseroberfläche (1)
- Water Surface Temperature (1)
- Web Ontology Language (OWL) (1)
- Werbung (1)
- Wiki (1)
- Wireless Sensor Networks (1)
- Wireless sensor network (1)
- Wireshark (1)
- Wissenskompilation (1)
- Wissensmanagement (1)
- World Wide Web (1)
- World Wide Web 2.0 (1)
- XML (1)
- XML-Dokument (1)
- Zertifikat (1)
- Zirkumskription (1)
- Zusammenhängender Graph (1)
- andLinux (1)
- andLinux. (1)
- automated theorem prover (1)
- backpropagation (1)
- bias (1)
- classification (1)
- concurrency (1)
- convergence analysis (1)
- data mining (1)
- deductive (1)
- description logic (1)
- e-learning (1)
- e-service quality (1)
- ediv (1)
- event model (1)
- event-based systems (1)
- first-order logic (1)
- folksonomies (1)
- gaze information (1)
- grassland (1)
- hybrid systems (1)
- hybride Automaten (1)
- iCity project (1)
- image semantics (1)
- information retrieval (1)
- knowledge management system (1)
- landmarks (1)
- living book (1)
- model generation (1)
- multi-agent systems (1)
- multiagent systems (1)
- multiprocessing (1)
- networks (1)
- neural (1)
- ontology (1)
- parallel computing (1)
- personal information management (1)
- persönliches Informationsmanagement (1)
- privacy protection (1)
- public key infrastructure (1)
- question answering (1)
- real-time computing (1)
- reasoning (1)
- recurrent (1)
- rip (1)
- rmti (1)
- semantic desktop (1)
- semantics (1)
- semantischer Desktop (1)
- sequent calculi (1)
- simulation (1)
- summative evaluation (1)
- tag recommendation (1)
- tagging (1)
- vegetation modeling (1)
- verification (1)
- virtual network user mode linux (1)
- vnuml (1)
- web 2.0 (1)
- xtpeer (1)
Institute
- Institut für Informatik (147)
In recent years, Software Engineering research has shown a rise of interest in empirical studies. Such studies are often based on empirical evidence derived from corpora - collections of software artifacts. While there are established forms of carrying out empirical research (experiments, case studies, surveys, etc.), the common task of preparing the underlying collection of software artifacts is typically addressed in an ad hoc manner.
In this thesis, by means of a literature survey, we show how frequently software engineering research employs software corpora and, using a developed classification scheme, we discuss their characteristics. Addressing the lack of methodology, we suggest a method of corpus (re-)engineering and apply it to an existing collection of Java projects.
We report two extensive empirical studies, where we perform a broad and diverse range of analyses on the language for privacy preferences (P3P) and on object-oriented application programming interfaces (APIs). In both cases, we are driven by the data at hand, by the corpus itself, discovering the actual usage of the languages.
A systematic literature survey on beaconless algorithms for wireless ad hoc and sensor networks
(2014)
Wireless sensor networks (WSN) are computer networks consisting of sensor nodes. In position-based protocols, nodes periodically broadcast messages containing their position information. These messages are called beacons. The use of beacons, however, has downsides, which led to the emergence of beaconless algorithms. Despite their importance in the research area of computer networks and to the best of our knowledge, beaconless algorithms have not been systematically surveyed.
The aim of this work is a comprehensive, systematic review of beaconless algorithms since the turn of the millennium. Relevant articles are categorized based on their similarities, and their differences are compared. Thus, this work serves as a reference for the current state of research on beaconless algorithms and as a basis for finding research gaps.
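To illustrate the beaconing concept the survey starts from, the following minimal Java sketch shows a node that periodically broadcasts its own position; the Radio interface, message format, and interval are hypothetical and only serve to contrast position-based beaconing with beaconless, reactive approaches.

```java
import java.util.Timer;
import java.util.TimerTask;

// Illustrative sketch of position-based beaconing; not taken from any surveyed protocol.
public class BeaconingNode {
    interface Radio { void broadcast(String message); }    // hypothetical radio abstraction

    private final int nodeId;
    private final Radio radio;
    private volatile double x, y;                           // current position, e.g. from GPS

    public BeaconingNode(int nodeId, Radio radio) {
        this.nodeId = nodeId;
        this.radio = radio;
    }

    public void updatePosition(double newX, double newY) { x = newX; y = newY; }

    public void start(long intervalMillis) {
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                // A beacon carries only the sender's identity and position.
                radio.broadcast("BEACON " + nodeId + " " + x + " " + y);
            }
        }, 0, intervalMillis);
    }
}
```

Beaconless algorithms avoid exactly this periodic traffic by letting forwarding candidates decide reactively, per packet, whether to participate.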
Over the past years, various projects for controlling the peripheral devices of a computer were realized under the supervision of Dr. Merten Joost, based on microcontrollers of the ATMega series. These include, among others, reading keyboard input, generating audio and video signals, and a programming environment with its own programming language. The goal of this thesis is to combine these projects into a self-contained "home computer" that can be controlled via keyboard and provides audio and video output. An SD card serves as the storage medium, which can be managed via keyboard input through a kind of shell.
This report investigates the use of wireless sensor networks for temperature measurement in flowing waters. It is shown to what extent such networks can serve as a link between remote sensing and stationary sensors. The requirements that water monitoring imposes on sensor networks are determined, and a prototypical realization of sensor nodes for such a network is presented. As a result of this work, the accuracy of temperature measurements with these sensor nodes is presented in comparison to a temperature logger used as a reference system. The measurements show that a comparatively good measurement accuracy can be achieved at low cost. By further developing the prototype presented here, a promising and inexpensive new measuring instrument becomes available for temperature monitoring in bodies of water. It can, on the one hand, measure water temperatures in deeper regions than is possible with remote sensing and, on the other hand, achieve a higher spatial resolution than stationary measuring stations. In addition, the literature review and the formulated criteria serve to narrow down the field of application for follow-up work.
The Apple ][ was one of the first three complete computer systems on the market. From April 1977 onwards, it was sold several million times over a period of roughly 16 years. This 8-bit home computer was developed by Steve Wozniak and Steve Jobs. They thereby paved the way for the Macintosh and the company known today as Apple.
This thesis describes the implementation of a software emulator for the complete Apple ][ computer system on a single Atmel AVR microcontroller. The biggest challenge is that the microcontroller's clock rate is only marginally higher than that of the hardware to be emulated. This requires an efficient emulation of the CPU and memory management, which is presented below together with the runtime environment for the emulation. Furthermore, the hardware implementation of the emulator with display and keyboard is explained in more detail.
This thesis thus presents the successful development of a portable Apple ][ emulator, from the software and the hardware through to a prototype.
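The core of such an emulator is a fetch-decode-execute loop over the guest CPU's instruction set (a MOS 6502 in the Apple ]['s case). The following Java fragment is only a conceptual sketch of that loop with two example opcodes; the actual thesis implementation runs on an AVR and is far more complete and optimized.

```java
// Illustrative fetch-decode-execute loop of a simple 8-bit CPU emulator.
// Registers and the 64 KiB memory model are simplified; this is not the AVR
// implementation described in the thesis.
public class TinyCpu {
    private final byte[] memory = new byte[65536];
    private int pc;          // program counter
    private int accumulator; // A register

    private int fetch() {
        int value = memory[pc] & 0xFF;
        pc = (pc + 1) & 0xFFFF;                    // wrap around the 16-bit address space
        return value;
    }

    public void step() {
        int opcode = fetch();                      // fetch
        switch (opcode) {                          // decode
            case 0xA9 -> accumulator = fetch();    // LDA #imm (6502-style): execute
            case 0xEA -> { }                       // NOP
            default -> throw new IllegalStateException("unimplemented opcode " + opcode);
        }
    }
}
```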
The increased networking of devices has established the usage of distributed applications as a common method in software development. Despite the demand for anonymity when using distributed applications, software frameworks still lack appropriate support for developing them. Building anonymously communicating applications therefore often results in an expensive individual approach. This work integrates an approach for anonymous communication using remote procedure calls into a software framework for building distributed applications. This also includes the design, development, and prototypical implementation of a binding for the MAppLab Remote Procedure Call Framework on the basis of the mPartAPI Framework. Furthermore, the resulting prototype is tested in an exemplary scenario.
This bachelor's thesis deals with the development and implementation of face recognition software that is able to detect and log streams of people. Starting from the special requirements of image processing, the resulting software architecture and its implementation are presented. In addition, a web interface is developed to simplify the management of the data. Finally, further face recognition methods are presented and compared against the method used, and the implemented software is evaluated.
Community platforms on the Internet use code-based governance to manage their high volume of user contributions. This includes all kinds of functionality with which the community can directly or indirectly assess user contributions. This thesis first explains the meaning of code-based governance and the various functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined with respect to code-based governance. The result shows the interrelations between the structure of a platform, the nature of its user contributions, and the code-based governance that can be exercised on them.
Web 2.0 provides technologies for online collaboration of users as well as the creation, publication and sharing of user-generated contents in an interactive way. Twitter, CNET, CiteSeerX, etc. are examples of Web 2.0 platforms which facilitate users in these activities and are viewed as rich sources of information. In the platforms mentioned as examples, users can participate in discussions, comment on others, provide feedback on various issues, publish articles and write blogs, thereby producing a high volume of unstructured data which at the same time leads to an information overload. Satisfying the various types of human information needs arising from the purpose and nature of the platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help in satisfying user information needs in three scenarios.
To this end, first we look at two of the main challenges of sparsity and content quality in Twitter and how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high quality information. Based on this analysis we introduce the concept of "interestingness" as a static quality measure. We empirically show that our proposed measure helps in retrieving and filtering high quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track which results from social collaboration of the people. For this purpose, we develop the FREuD approach which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of relations between authors, latent topics and temporal information. We present results of applying the model to the scientific publication datasets from CiteSeerX, showing improved semantically cohesive topic detection and capturing shifts in authors' interests in relation to topic evolution.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It makes a crucial contribution to in vivo examinations of neuronal connections: the local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention-related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool with consideration of neuronal pathways was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
The amount of information on the Web is constantly increasing, and there is a wide variety of information available, such as news, encyclopedia articles, statistics, survey data, stock information, events, bibliographies, etc. The information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity and quality, and by its distributed nature. The two primary techniques by which users on the Web look for information are (1) using Web search engines and (2) browsing the links between information. The dominant mode of information presentation is mainly static in the form of text, images and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large and complex types of information, and (2) large amounts of data can be explored interactively using their attributes, which supports and expands the cognition process of the user. So far, interactive visualizations are still not an integral part of the search process on the Web. The technical standards and interaction paradigms needed to make interactive visualization usable by the masses are introduced only slowly through standardization organizations. This work examines how interactive visualizations can be used for the linking and search process of heterogeneous information on the Web. Based on principles in the areas of information retrieval (IR), information visualization and information processing, a model is created which extends the existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application. In four different application scenarios, aspects of the model are instantiated and evaluated in user tests or examined by means of examples.
This study thesis lists and compares various routing lookup algorithms with which a routing table can be built and updated. Only dynamic methods are considered here. The general operation of a routing table is explained, and three methods or algorithms are analyzed and evaluated. The algorithms are illustrated by examples and contrasted in a concluding chapter, listing the advantages and disadvantages of each method.
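As background for what such a lookup has to accomplish, the following Java sketch shows a naive longest-prefix match over a list of IPv4 routes; it is a generic illustration and not one of the three algorithms compared in the thesis.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative longest-prefix-match lookup over a small routing table.
public class RoutingTable {
    record Route(int network, int prefixLength, String nextHop) {}

    private final List<Route> routes = new ArrayList<>();

    public void add(int network, int prefixLength, String nextHop) {
        routes.add(new Route(network, prefixLength, nextHop));
    }

    // Returns the next hop of the most specific (longest) matching prefix, or null.
    public String lookup(int destination) {
        Route best = null;
        for (Route r : routes) {
            int mask = r.prefixLength() == 0 ? 0 : (-1 << (32 - r.prefixLength()));
            boolean matches = (destination & mask) == (r.network() & mask);
            if (matches && (best == null || r.prefixLength() > best.prefixLength())) {
                best = r;
            }
        }
        return best == null ? null : best.nextHop();
    }
}
```

Real lookup structures replace this linear scan with tries or hash-based schemes so that the lookup cost does not grow with the table size.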
This dissertation investigates the usage of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers for questions phrased in a natural language. Commonly, they utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This involves challenges in that the contrary approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and the quality of the knowledge bases in QA. The dissertation describes modifications to ATP systems in order to overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval and knowledge representation, together forming the QA system LogAnswer.
Also presented are additional extensions to the prover implementation and the underlying calculi which go beyond enhancing the reasoning strength of QA systems by giving access to external knowledge sources like web services. These allow the prover to fill gaps in the knowledge during the derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them can be useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the amount of these facts grows quadratically in the number of these distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing a native handling of inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented into the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
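As a rough illustration of the quadratic blow-up (the numbers are only an example, not benchmark figures from the thesis): stating pairwise inequality of n distinct constants explicitly requires one fact per unordered pair,

```latex
\#\{\, c_i \not\approx c_j \mid 1 \le i < j \le n \,\} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2},
\qquad \text{e.g. } n = 100 \;\Rightarrow\; 4950 \text{ facts},
```

whereas the unique name assumption makes these inequalities implicit for the distinct object identifiers.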
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition strictly cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit for simulation is the possibility to model (and observe) causes and effects of non-events, e.g. if something has broken down.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints on how to reduce high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information which follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter; common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
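The probability/likelihood duality mentioned here mirrors the factorization that Bayesian networks (and thus their PPN counterparts) propagate; as a reminder of the standard relation (not a formula taken from the paper):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\;\propto\; \underbrace{P(E \mid H)}_{\text{evidence-based likelihood}} \;\cdot\; \underbrace{P(H)}_{\text{prior probability}}
```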
Cheops for VNUML - creating and observing a VNUML simulation by mouse click
This thesis investigates how Virtual Network User Mode Linux (VNUML), a software package for simulating computer networks built from virtualized Linux instances, can be made easier to handle for the user. With the Linux package VNUML, which allows the file-driven configuration of virtual operating system instances, the user is able to simulate complex network topologies. Several network monitoring programs are examined with respect to their ability to capture a running VNUML simulation and to represent it in a meaningful way. The user should thereby be able to get a quick overview of the functioning of the simulated network environment and, where possible, of its topology as well. The network monitoring program Cheops is extended so that it can not only represent and observe a running simulation, but can also be used in every step of working with VNUML.
The extended program supports both the creation of the VNUML topology file and the starting and controlling of the simulation. The long command-line invocation, as well as the editing of the configuration file, are thus replaced by easy-to-use functions in a graphical user interface (GUI). No further input or configuration is necessary to quickly verify that the started simulation is fully functional. Likewise, different network services can be observed in a differentiated way while the simulation is running. The tools required for this are also included in the Cheops for VNUML package and are preconfigured specifically for use with VNUML simulations.
Education and training of the workforce have become an important competitive factor for companies because of the rapid technological changes in the economy and the corresponding ever shorter innovation cycles. Traditional training methods, however, are limited in terms of meeting the resulting demand for education and training in a company, which continues to grow and become faster all the time. Therefore, the use of technology-based training programs (that is, courseware) is increasing because courseware enables self-organized and self-paced learning and, through integration into daily work routines, allows optimal transfer of knowledge and skills, resulting in high learning outcome. To achieve these prospects, high-quality courseware is required, with quality being defined as supporting learners optimally in achieving their learning goals. Developing high-quality courseware, however, usually requires more effort and takes longer than developing other programs, which limits the availability of this courseware in time and with the required quality.
This dissertation therefore deals with the research question of how courseware has to be developed in order to produce high-quality courseware with less development effort and shorter project duration. In addition to its high quality, this courseware should be optimally aligned to the characteristics and learning goals of the learners as well as to the planned usage scenarios for the knowledge and skills being trained. The IntView Method for the systematic and efficient development of high-quality courseware was defined to answer the research question of this dissertation. It aims at increasing the probability of producing courseware in time without exceeding project schedules and budgets while developing a high-quality product optimally focused on the target groups and usage scenarios.
The IntView Method integrates those execution variants of all activities and activity steps required to develop high-quality courseware, which were identified in a detailed analysis of existing courseware development approaches as well as production approaches from related fields, such as multimedia, web, or software engineering, into a systematic process which, through their interaction, constitutes the most efficient way to develop courseware. The main part of the proposed method is therefore a systematic process for engineering courseware that encompasses all courseware lifecycle phases and integrates the activities and methods of all disciplines involved in courseware engineering, including lifecycle-encompassing quality assurance, into a consolidated process. This process is defined as a lifecycle model as well as a derived process model in the form of a dependency model in order to optimally support courseware project teams in coordinating and synchronizing their project work. In addition to the models, comprehensive, ready-to-apply enactment support materials are provided, consisting of work sheets and document templates that include detailed activity descriptions and examples.
The evaluation of the IntView Method proved that the method together with the enactment support materials enables efficient as well as effective courseware development. The projects and case studies conducted in the context of this evaluation demonstrate that, on the one hand, the method is easily adaptable to the production of different kinds of courseware or to different project contexts, and, on the other hand, that it can be used efficiently and effectively.
Modern Internet and Intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data providing improved flexibility. The gain in flexibility also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperations between organizations, accounting for the processing of data and restricting actions performed on the data may be legally and contractually required. In the Internet and Intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting the data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution supporting a data-centric view on the distributed data processing. We present a solution for provenance-aware policies and a formal specification of obligations offering a procedure to decide whether obligatory processing steps can be met in the future.
Model-Driven Engineering (MDE) aims to raise the level of abstraction in software system specifications and increase automation in software development. Modelware technological spaces contain the languages and tools for MDE that software developers take into consideration to model systems and domains. Ontoware technological spaces contain ontology languages and technologies to design, query, and reason on knowledge. With the advent of the Semantic Web, ontologies are now being used within the field of software development, as well. In this thesis, bridging technologies are developed to combine two technological spaces in general. Transformation bridges translate models between spaces, mapping bridges relate different models between two spaces, and integration bridges merge spaces into new, all-embracing technological spaces. API bridges establish interoperability between the tools used in the space. In particular, this thesis focuses on the combination of modelware and ontoware technological spaces. Subsequent to a sound comparison of languages and tools in both spaces, the integration bridge is used to build a common technological space, which allows for the hybrid use of languages and the interoperable use of tools. The new space allows for language and domain engineering. Ontology-based software languages may be designed in the new space, where syntax and formal semantics are defined with the support of ontology languages, and the correctness of language models is ensured by the use of ontology reasoning technologies. These languages represent a core means for exploiting expressive ontology reasoning in the software modeling domain, while remaining flexible enough to accommodate varying needs of software modelers. Application domains are conceptually described by languages that allow for defining domain instances and types within one domain model. Integrated ontology languages may provide formal semantics for domain-specific languages, and ontology technologies allow for reasoning over types and instances in domain models. A scenario in which configurations for network device families are modeled illustrates the approaches discussed in this thesis. Furthermore, the implementation of all bridging technologies for the combination of technological spaces, as well as of all tools for ontology-based language engineering and use, is illustrated.
This dissertation investigates the emergence of dialects in a model of a multi-agent simulation based on neural networks that is developed within this thesis. First, the linguistic foundations of language are illustrated. Besides discussing some important definitions of language, this is achieved by summarizing the evolutionary steps of language evolution, followed by an overview of the elements of modern human languages, including the ways in which language changes. Subsequently, some examples of the communicative behaviour of socially learning animals and the dialects it forms are shown.
In the following, the computational and mathematical basics are explained. Besides the terms model and simulation, these comprise the setup of multi-agent simulations and the functionality of artificial neural networks. Based on these foundations, the model of this dissertation is derived and described in detail. Results drawn from several hundred simulation runs are explicated thereafter. Each destructive factor defined in the model is examined separately, and its domain is divided into intervals with different effects on the outcome of the simulation.
Furthermore, existing interdependencies between the individual factors and the process of language merging after a prior dialect divide are shown. Results and outlook are followed by the specification, design, architecture, a detailed illustration of the implementation, and a user guide of the tool named DiaLex. DiaLex is a Java-based tool that allows users to simulate and analyse the influence of different destructive factors on dialect formation within a language commonly used by one or several communities of agents.
Towards Improving the Understanding of Image Semantics by Gaze-based Tag-to-Region Assignments
(2011)
Eye-trackers have been used in the past to identify visual foci in images, find task-related image regions, or localize affective regions in images. However, they have not been used for identifying specific objects in images. In this paper, we investigate whether it is possible to assign tags describing specific objects to the image regions showing these objects by analyzing the users' gaze paths. To this end, we have conducted an experiment with 20 subjects viewing 50 image-tag pairs each. We have compared the tag-to-region assignments for nine existing and four new fixation measures. In addition, we have investigated the impact of extending region boundaries, weighting small image regions, and the number of subjects viewing the images. The paper shows that a tag-to-region assignment with an accuracy of 67% can be achieved by using gaze information. In addition, we show that multiple regions on the same image can be differentiated with an accuracy of 38%.
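One of the simplest conceivable fixation measures assigns the tag to the region that accumulated the most fixation time. The Java sketch below illustrates only this baseline idea, with hypothetical Fixation and Region types; it is not one of the thirteen measures evaluated in the paper.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: assign a tag to the region with the largest total fixation duration.
public class TagToRegion {
    record Fixation(double x, double y, long durationMillis) {}
    record Region(String id, double minX, double minY, double maxX, double maxY) {
        boolean contains(Fixation f) {
            return f.x() >= minX && f.x() <= maxX && f.y() >= minY && f.y() <= maxY;
        }
    }

    public static String assign(List<Fixation> gazePath, List<Region> regions) {
        Map<String, Long> durations = new HashMap<>();
        for (Fixation f : gazePath) {
            for (Region r : regions) {
                if (r.contains(f)) {
                    durations.merge(r.id(), f.durationMillis(), Long::sum);
                }
            }
        }
        return durations.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```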
At the University of Koblenz, RMTI (RIP with Metric based Topology Investigation) was developed as an extension to RIP that is intended to solve the counting-to-infinity problem. A working implementation of the RMTI algorithm, based on the RIP daemon of the Quagga Software Routing Suite, is available. OpenWrt is a Linux-based, open-source operating system for routers such as the Linksys WRT54G. The configuration options of routers running OpenWrt go far beyond those of the original firmware; for example, the routers can be extended with software from the Linux world. The goal of this study thesis is to install the RMTI daemon under OpenWrt on the Linksys WRT54G routers of the Computer Networks Group. Subsequently, typical network situations are to be set up and the behaviour of the routers examined.
Nagios under VNUML
(2011)
This paper documents the development of an abstract physics layer (APL) for Simspark. After short introductions to physics engines and Simspark, the reasons why an APL was developed are explained. The largest part of this paper describes the new design and why certain design choices were made based on requirements that arose during development. It concludes by explaining how the new design was eventually implemented and what future possibilities the new design holds.
Folksonomies are Web 2.0 platforms where users share resources with each other. Furthermore, they can assign keywords (called tags) to the resources for categorizing and organizing the resources. Numerous types of resources like websites (Delicious), images (Flickr), and videos (YouTube) are supported by different folksonomies. The folksonomies are easy to use and thus attract the attention of millions of users. Together with the ease they offer, there are also some problems. This thesis addresses different problems of folksonomies and proposes solutions for these problems. The first problem occurs when users search for relevant resources in folksonomies. Often, the users are not able to find all relevant resources because they don't know which tags are relevant. The second problem is assigning tags to resources. Although many folksonomies (like Delicious) recommend tags for the resources, other folksonomies (like Flickr) do not recommend any tags. Tag recommendation helps the users to easily tag their resources. The third problem is that tags and resources are lacking semantics. This leads for example to ambiguous tags. The tags are lacking semantics because they are freely chosen keywords. The automatic identification of the semantics of tags and resources helps in reducing problems that arise from this freedom of the users in choosing the tags. This thesis proposes methods which exploit semantics to address the problems of search, tag recommendation, and the identification of tag semantics. The semantics are discovered from a variety of sources. In this thesis, we exploit web search engines, online social communities and the co-occurrences of tags as sources of semantics. Using different sources for discovering semantics reduces the efforts to build systems which solve the problems mentioned earlier. This thesis evaluates the proposed methods on a large scale data set. The evaluation results suggest that it is possible to exploit the semantics for improving search, recommendation of tags, and automatic identification of the semantics of tags and resources.
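A common baseline for the tag recommendation problem described above is to suggest tags that frequently co-occur with the tags already assigned. The following Java sketch illustrates this baseline only; it is not the semantics-based method developed in the thesis.

```java
import java.util.*;

// Illustrative co-occurrence-based tag recommender.
public class CooccurrenceRecommender {
    // cooccurrence.get(a).get(b) = how often tags a and b were assigned together
    private final Map<String, Map<String, Integer>> cooccurrence = new HashMap<>();

    public void observePost(Set<String> tagsOfOnePost) {
        for (String a : tagsOfOnePost) {
            for (String b : tagsOfOnePost) {
                if (!a.equals(b)) {
                    cooccurrence.computeIfAbsent(a, k -> new HashMap<>())
                                .merge(b, 1, Integer::sum);
                }
            }
        }
    }

    public List<String> recommend(Set<String> alreadyAssigned, int limit) {
        Map<String, Integer> scores = new HashMap<>();
        for (String t : alreadyAssigned) {
            cooccurrence.getOrDefault(t, Map.of())
                        .forEach((candidate, count) -> scores.merge(candidate, count, Integer::sum));
        }
        scores.keySet().removeAll(alreadyAssigned); // do not re-recommend known tags
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(limit)
                .map(Map.Entry::getKey)
                .toList();
    }
}
```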
The semantic web and model-driven engineering are changing the enterprise computing paradigm. By introducing technologies like ontologies, metadata and logic, the semantic web drastically improves how companies manage knowledge. As a counterpart, model-driven engineering relies on the principle of using models to provide abstraction, enabling developers to concentrate on the system functionality rather than on technical platforms. The next enterprise computing era will rely on the synergy between both technologies. On the one side, ontology technologies organize system knowledge in conceptual domains according to its meaning. This addresses enterprise computing needs by identifying, abstracting and rationalizing commonalities, and checking for inconsistencies across system specifications. On the other side, model-driven engineering is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics. In this scenario, the research question that arises is: What are the scientific and technical results around ontology technologies that can be used in model-driven engineering and vice versa? The objective is to analyze approaches available in the literature that involve both ontologies and model-driven engineering. Therefore, we conduct a literature review that results in a feature model for classifying state-of-the-art approaches. The results show that the usage of ontologies and model-driven engineering together has multiple purposes: validation, visual notation, expressiveness and interoperability. While approaches involving both paradigms exist, an integrated approach for UML class-based modeling and ontology modeling is lacking so far. Therefore, we investigate the techniques and languages for designing integrated models. The objective is to provide an approach to support the design of integrated solutions. Thus, we develop a conceptual framework involving the structure and the notations of a solution to represent and query software artifacts using a combination of ontologies and class-based modeling. As proof of concept, we have implemented our approach as a set of open source plug-ins -- the TwoUse Toolkit. The hypothesis is that a combination of both paradigms yields improvements in both fields, ontology engineering and model-driven engineering. For MDE, we investigate the impact of using features of the Web Ontology Language in software modeling. The results are patterns and guidelines for designing ontology-based information systems and for supporting software engineers in modeling software. The results include alternative ways of describing classes and objects and querying software models and metamodels. Applications show improvements in changeability and extensibility. In the ontology engineering domain, we investigate the application of techniques used in model-driven engineering to fill the abstraction gap between ontology specification languages and programming languages. The objective is to provide a model-driven platform for supporting activities in the ontology engineering life cycle. Therefore, we study the development of core ontologies in our department, namely the core ontology for multimedia (COMM) and the multimedia metadata ontology. The results are domain-specific languages that allow ontology engineers to abstract from implementation issues and concentrate on the ontology engineering task. This increases productivity by filling the gap between domain models and source code.
Location Provider
(2011)
In this thesis, in accordance with the requirements defined in Section 1.3, a system was developed to locate mobile devices with limited computing power, such as mobile phones and PDAs, both outdoors and inside buildings. The limited performance of the devices is taken into account by a client-server solution in which the client acts as a sensor carrier that registers data from its surroundings and transmits it over a communication network to a server for further processing. The data transfer is realized as a web service based on open standards such as XML and SOAP and can easily be extended with further functions.
Improvements to the RMTI network routing daemon implementation and preparation of a public release
(2011)
Routing with Metric based Topology Investigation (RMTI) is an algorithm meant to extend distance-vector routing protocols. It has been under research and development at the University of Koblenz-Landau since 1999 and is currently implemented on top of the well-known Routing Information Protocol (RIP). Around midyear 2009, the latest implementation of RMTI included a lot of deprecated functionality. Because of this, the first goal of this thesis was the reduction of the codebase to a minimum. Besides a lot of reorganization and a general cleanup, this mainly involved the removal of some no longer needed modes as well as the separation of the formerly mandatory XTPeer test environment. During the second part, many test series were carried out in order to ensure the correctness of the latest RMTI implementation. A replacement for XTPeer was needed, and several new ways of testing were explored. In conjunction with this thesis, the RMTI source code was finally released to the public under a free software license.
This thesis deals with OpenWrt, a Linux distribution developed for routers and small devices. The first part explains the creation of software packages for OpenWrt and develops a display and configuration component for the files of the Quagga RIP daemon for the LuCI user interface. The second part documents the setup of a test network with OpenWrt routers as well as its connection to VNUML-virtualized networks.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we will show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
This thesis introduces fnnlib, a C++ library for recurrent neural network simulations that I developed between October 2009 and March 2010 at Osaka University's Graduate School of Engineering. After covering the theory behind recurrent neural networks, backpropagation through time, recurrent neural networks with parametric bias, continuous-time recurrent neural networks, and echo state networks, the design of the library is explained. All of the classes as well as their interrelationships are presented along with reasons as to why certain design decisions were made. Towards the end of the thesis, a small practical example is shown. Also, fnnlib is compared to other neural network libraries.
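As background for the network types fnnlib covers, the following Java fragment sketches the textbook state update of a simple recurrent network, h(t) = tanh(W_in x(t) + W_rec h(t-1)). It illustrates the recurrence only and does not reflect fnnlib's actual C++ API.

```java
// Illustrative single update step of a simple recurrent network.
public class RecurrentStep {
    public static double[] step(double[][] wIn, double[][] wRec,
                                double[] input, double[] prevState) {
        int n = wRec.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < input.length; j++) {
                sum += wIn[i][j] * input[j];        // feed-forward contribution
            }
            for (int j = 0; j < n; j++) {
                sum += wRec[i][j] * prevState[j];   // recurrent contribution
            }
            next[i] = Math.tanh(sum);               // nonlinearity
        }
        return next;
    }
}
```

Echo state networks keep wRec fixed and random and train only an output layer, while backpropagation through time unrolls exactly this step over a sequence to compute gradients.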
This thesis discusses several approaches for determining a heuristic that is to be used to find an optimal configuration of the theorem prover E-KRHyper. It explains how the prover can be optimized by an adapted default configuration and presents the approaches developed for determining this configuration. The approaches are then evaluated on the basis of the obtained results, and for one of the presented methods an idea for an implementation is outlined.
This thesis presents an analysis of API usage in a large corpus of Java software retrieved from the open source repositories hosted at SourceForge. Most larger software projects use software libraries, which offer a public "application programming interface" or API as an interface for the programmer. In order to facilitate the transition between different APIs, there are emerging research projects in the field of automated API migration. However, there is a lack of basic statistical background information about in-the-wild usage of APIs as such measurements have, until now, only been done on rather small corpora. We thus present an analysis method suitable for measurements with large corpora. First, we create a corpus of open source projects hosted on SourceForge, as well as a corpus of software libraries. Then, all projects in the corpus are compiled with an instrumented compiler. We use a compiler plugin for javac that gives detailed information about every method created by the compiler. This information is stored in a database and analyzed.
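The thesis does not spell out its instrumentation code here. Purely as an illustration of how compiler-side instrumentation can be hooked into javac on a current JDK, a minimal plugin using the com.sun.source.util.Plugin API might look as follows; the class name and logging are hypothetical, and a real analysis would walk the attributed syntax tree and persist every resolved API reference to a database.

```java
import com.sun.source.util.JavacTask;
import com.sun.source.util.Plugin;
import com.sun.source.util.TaskEvent;
import com.sun.source.util.TaskListener;

// Hypothetical sketch: reports every compilation unit after the ANALYZE phase.
public class ApiUsageLogger implements Plugin {
    @Override
    public String getName() { return "ApiUsageLogger"; }

    @Override
    public void init(JavacTask task, String... args) {
        task.addTaskListener(new TaskListener() {
            @Override
            public void finished(TaskEvent e) {
                if (e.getKind() == TaskEvent.Kind.ANALYZE) {
                    // Here the attributed AST would be traversed and each
                    // resolved method or type reference recorded.
                    System.out.println("analyzed: " + e.getSourceFile().getName());
                }
            }
        });
    }
}
```

Such a plugin is registered via a META-INF/services/com.sun.source.util.Plugin entry and activated with the -Xplugin:ApiUsageLogger compiler option.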
Specifying behaviors of multi-agent systems (MASs) is a demanding task, especially when applied in safety-critical systems. In the latter systems, the specification of behaviors has to be carried out carefully in order to avoid side effects that might cause unwanted or even disastrous behaviors. Thus, formal methods based on mathematical models of the system under design are helpful. They not only allow us to formally specify the system at different levels of abstraction, but also to verify the consistency of the specified systems before implementing them. The formal specification aims at a precise and unambiguous description of the behavior of MASs, whereas the verification aims at proving the satisfaction of specified requirements. A behavior of an agent can be described as discrete changes of its states with respect to external or internal actions. Whenever an action occurs, the agent moves from one state to another one. Therefore, an efficient way to model this type of discrete behavior is to use state transition diagrams such as finite automata. One remarkable advantage of such transition diagrams is that they lend themselves to formal analysis techniques such as model checking. The latter is an automatic verification technique which determines whether given properties are satisfied within a model underlying a particular system. In realistic physical environments, however, it is necessary to consider continuous behaviors in addition to discrete behaviors of MASs. Examples of this type of behavior include the movement of a soccer agent to kick off or to go to the ball, the process of putting out a fire by a fire brigade agent in a rescue scenario, or any other behaviors that depend on a timed physical law. The traditional state transition diagrams are not sufficient to combine these types of behaviors. Hybrid automata offer an elegant method to capture such types of behaviors. Hybrid automata extend regular state transition diagrams with methods that deal with those continuous actions such that the state transition diagrams are used to model the discrete changes of behaviors, while differential equations are used to model the continuous changes. The semantics of hybrid automata make them accessible to formal verification by means of model checking. The main goal of this thesis is to approach hybrid automata for specifying and verifying behaviors of MASs. However, specifying and verifying behaviors of MASs by means of hybrid automata raises several issues that should be considered. These issues include the complexity, modularity, and expressiveness of MAS models. This thesis addresses these issues and provides possible solutions to tackle them.
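For reference, one common (and here slightly simplified) formulation of a hybrid automaton, not quoted from the thesis, is the tuple

```latex
H = (Q,\; X,\; \mathit{Init},\; \mathit{Inv},\; \mathit{Flow},\; E),
```

where Q is a finite set of locations (the discrete states), X a finite set of real-valued variables, Init the initial conditions, Inv(q) an invariant that must hold while staying in location q, Flow(q) a set of differential equations $\dot{x} = f_q(x)$ governing the continuous evolution in q, and E the discrete transitions between locations, each equipped with a guard and a reset of the variables.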
So far, VNUML (Virtual Network User Mode Linux) has been used by the Computer Networks Group at the University of Koblenz to test its own protocol enhancement for RIP (Routing Information Protocol) for strengths and weaknesses. The modified version of RIP is called RMTI (RIP with minimal topology information). In particular, special test scenarios have been used to investigate whether a count-to-infinity (CTI) problem can be completely avoided and how quickly the network converges after the failure or breakdown of a router. This thesis investigates whether the MTI enhancement also provides better performance in larger networks. Furthermore, it will be investigated whether it is worth using the script tool EDIV (Spanish: Escenarios DIstribuidos con VNUML, English: Distributed Scenarios using VNUML) due to its enhanced scalability, and whether the distribution of an XML scenario over several computers has a significant impact on the convergence time. Apart from simulations, test scenarios will be developed and tested in order to generate results about the efficiency and scalability of the distance-vector routing protocol.
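For context, the following Java fragment sketches the generic RIP-style distance-vector update that RMTI builds on: metrics are hop counts, 16 counts as infinity, and without extra safeguards mutually dependent routes can creep up towards that limit after a failure (the count-to-infinity problem). The sketch is deliberately simplified (for example, it omits replacing a worsened route learned from the current next hop) and is not the RMTI algorithm itself.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative RIP-style distance-vector update (Bellman-Ford step).
public class DistanceVector {
    static final int INFINITY = 16;   // RIP treats metric 16 as "unreachable"

    // destination -> metric (hop count)
    private final Map<String, Integer> table = new HashMap<>();

    // Process one entry of a neighbour's periodic update.
    public void onNeighbourAdvertisement(String destination, int neighbourMetric) {
        int offered = Math.min(neighbourMetric + 1, INFINITY); // one hop further away
        int current = table.getOrDefault(destination, INFINITY);
        if (offered < current) {
            table.put(destination, offered); // better route learned
        }
    }

    public int metricTo(String destination) {
        return table.getOrDefault(destination, INFINITY);
    }
}
```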
In a world increasingly shaped by the use of technology, systems that can autonomously take over tasks in a given context are becoming more and more important. Such systems are called autonomous systems; they exhibit independent behaviour and interact with their environment and other systems. Autonomous mobile robots belong to this class: they are, so to speak, intelligent vehicles that can carry out the tasks assigned to them without human assistance. In order to perceive their environment, interact with it and navigate in it, these systems require various kinds of sensors, which makes robotics research strongly interdisciplinary. This thesis deals exclusively with navigation, in particular with position determination, which forms the basis for good navigation. A system for position determination that is widely used today and available worldwide is GPS. The goal of this thesis is to connect a GPS module to a microcontroller and to attempt to refine the position accuracy locally, so as to enable an autonomous system to accomplish tasks that require higher accuracy.
This minor thesis shows a way to optimize a generated oracle to achieve shorter runtimes. Shorter runtimes of test cases allow the execution of more test cases in the same time, and the execution of more test cases leads to higher confidence in the software quality. Oracles can be derived from specifications. However, specifications are used for different purposes and are therefore not necessarily executable; even if they are executable, they may have a high runtime. Both facts mostly stem from the use of quantifiers in the logic. If the quantifier range is not bounded, or if the bounds lie outside the limits of the target language's data types, the specification is too expressive to be exported into a program. Even if the bounds are inside the data type limits, the quantification is represented as a loop, which leads to a runtime blowup, especially if quantifiers are nested. This work explains four different possibilities to reduce the execution time of the oracle by manipulating the quantified formula; this approach is only applicable if the quantified variables are of type Integer.
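As a hypothetical illustration (not one of the thesis' four techniques), the sketch below shows why an exported quantifier becomes a loop and how manipulating the quantified formula can remove one loop level; all names and the example property are assumptions.

```java
// Illustrative sketch: a bounded universal quantifier such as
//   (\forall int i, j; 0 <= i && i < j && j < a.length; a[i] <= a[j])
// can only be exported to an executable oracle as (nested) loops, which is where
// the runtime blowup comes from.
public final class OracleSketch {

    // naive export of the nested quantifier -> two loops, O(n^2)
    static boolean sortedNaive(int[] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a.length; j++)
                if (i < j && a[i] > a[j]) return false;
        return true;
    }

    // after manipulating the formula: comparing neighbours suffices,
    // so one quantifier (and therefore one loop) disappears -> O(n)
    static boolean sortedOptimized(int[] a) {
        for (int i = 0; i + 1 < a.length; i++)
            if (a[i] > a[i + 1]) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 2, 5, 9};
        System.out.println(sortedNaive(data) + " " + sortedOptimized(data));
    }
}
```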
In algorithm engineering, algorithms are developed in a cyclic way: the designed algorithm is analyzed theoretically and then implemented, and after practical evaluation the design is developed further on the basis of the insights gained. Formal verification of the implementation, alongside the practical evaluation, can improve this development process. With the Java Modeling Language (JML) and the KeY tool, a simple specification language and a user-friendly, automated verification tool are available. This work examines to what extent the KeY tool is suited for the verification of more complex algorithms and which feedback algorithm designers can gain from the verification. The investigation is based on Dijkstra's algorithm for computing shortest paths in a graph. A concrete implementation of the standard algorithm and subsequently implementations of further variants are to be verified. This mimics the development process of the algorithm in order to look for possible feedback in every iteration. During the verification of the concrete implementation, we notice that it is necessary to first verify a more abstract implementation with simpler data structures. With the insights gained there, we can then continue the verification of the concrete implementation. Thanks to the preceding verifications, the variants of the algorithm can be verified as well. The complexity of Dijkstra's algorithm causes the KeY tool some difficulties regarding performance, which is why we have to reduce the degree of automation somewhat during the verification. On the other hand, it turns out that several useful pieces of feedback can be derived from the verification.
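To give a flavour of the setting, the sketch below shows a JML contract of the kind the KeY tool verifies, attached to the edge relaxation step used inside Dijkstra's algorithm; it is an assumed example, not the thesis' code.

```java
// Illustrative sketch (assumed example, not from the thesis): a JML contract
// on a Java method. KeY verifies that the implementation satisfies the contract.
public class RelaxSketch {

    //@ requires dist != null && 0 <= u && u < dist.length && 0 <= v && v < dist.length;
    //@ requires weight >= 0 && dist[u] <= Integer.MAX_VALUE - weight;
    //@ ensures dist[v] == ((\old(dist[u]) + weight < \old(dist[v]))
    //@                      ? \old(dist[u]) + weight : \old(dist[v]));
    //@ assignable dist[v];
    public static void relax(int[] dist, int u, int v, int weight) {
        if (dist[u] + weight < dist[v]) {
            dist[v] = dist[u] + weight;   // a shorter path to v via u was found
        }
    }
}
```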
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language allowing for specifying conditions based on the history of data processing. We base the model and language on XACML.
In this thesis a Java program is developed that can be used to visualize networks previously described in configuration files. These networks must consist of layer 2 switches and hosts only. After loading such a configuration file, the program will visualize the network, and the spanning tree algorithm IEEE 802.1D may be started. The program allows the user to modify specific attributes of switches and hosts. The hosts will be able to exchange messages. The switches are realized as threads so that they can run independently and parallel to each other. The absence of central coordination and control prevents the switches from sharing global knowledge. This characteristic renders the implementation closer to the way an actual network functions.
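The following is a minimal, illustrative sketch (not the thesis' program) of the comparison rule at the heart of IEEE 802.1D that such a simulated switch applies to received BPDUs; the concrete class and field names are assumptions.

```java
// Illustrative sketch: a switch adopts an advertised spanning tree root if the
// received BPDU is "better": lower root ID, then lower path cost, then lower sender ID.
import java.util.List;

public class StpSketch {
    record Bpdu(long rootId, int cost, long senderId) {}

    static Bpdu better(Bpdu a, Bpdu b) {
        if (a.rootId() != b.rootId()) return a.rootId() < b.rootId() ? a : b;
        if (a.cost() != b.cost())     return a.cost() < b.cost() ? a : b;
        return a.senderId() <= b.senderId() ? a : b;
    }

    public static void main(String[] args) {
        long myId = 42;
        Bpdu best = new Bpdu(myId, 0, myId); // initially every switch claims to be root
        List<Bpdu> received = List.of(new Bpdu(7, 4, 13), new Bpdu(7, 2, 99));
        for (Bpdu b : received) {
            // add the cost of the incoming link (assumed 1 here) before comparing
            best = better(best, new Bpdu(b.rootId(), b.cost() + 1, b.senderId()));
        }
        System.out.println("elected root: " + best.rootId() + ", path cost: " + best.cost());
    }
}
```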
A network like the Internet is a set of subnets that are connected to each other by routers. A router is a computer containing multiple network devices so that it can be connected to multiple subnets; it is thus able to forward packets from one subnet to another. A network can be represented as a graph with its routers as vertices and its subnets as edges; this graph is called the topology of the network. A packet sent to a host outside the sender's own subnet will usually first be sent to the so-called default router. This router (like any router) contains a table (the so-called forwarding table) with an entry for every subnet; additionally, for each subnet, the table contains the router through which that subnet can be reached best. The packet is thus forwarded from router to router until it reaches the destination subnet, and along this way every router looks up the best next router in its forwarding table. A routing protocol takes care of the automatic exchange of information between the routers to build the forwarding tables and keep them up to date. If the forwarding tables of all routers are up to date, the network is called convergent; the time needed to build or update the routing tables is called the convergence time. The RIP routing protocol is a well-known and well-explored distance-vector protocol, but there are only few examinations of its convergence properties (e.g. the time needed to converge or the traffic volume produced by the routing messages). This work tries to establish a relationship between the topology properties of a network and the convergence properties of the RIP routing protocol. To this end, over 5000 single measurements were performed and statistically analyzed. Mathematical formulas have been derived from the results that can approximate the convergence properties of a network from its topology properties.
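For illustration, the sketch below shows the distance-vector update rule that a RIP router applies to its forwarding table when a neighbour advertises its distances; it is a generic textbook rule, not the thesis' measurement code, and all identifiers are assumptions.

```java
// Illustrative sketch: on receiving a neighbour's distance vector, adopt a route
// if reaching the destination via that neighbour is shorter than the current route.
import java.util.HashMap;
import java.util.Map;

public class DistanceVectorSketch {
    static final int INFINITY = 16; // RIP's "infinity" metric

    // dist: own metric per destination subnet, nextHop: chosen neighbour per subnet
    static boolean processUpdate(Map<String, Integer> dist, Map<String, String> nextHop,
                                 String neighbour, Map<String, Integer> advertised) {
        boolean changed = false;
        for (Map.Entry<String, Integer> e : advertised.entrySet()) {
            int candidate = Math.min(e.getValue() + 1, INFINITY); // one extra hop to the neighbour
            if (candidate < dist.getOrDefault(e.getKey(), INFINITY)) {
                dist.put(e.getKey(), candidate);
                nextHop.put(e.getKey(), neighbour);
                changed = true; // any change means the network has not converged yet
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        Map<String, Integer> dist = new HashMap<>(Map.of("net-A", 0));
        Map<String, String> nextHop = new HashMap<>();
        processUpdate(dist, nextHop, "router-2", Map.of("net-B", 1, "net-C", 2));
        System.out.println(dist + " via " + nextHop);
    }
}
```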
In this work, several network protocols are observed with the Wireshark protocol analyzer, and working with them is described. Wireshark is an offshoot of Ethereal, one of the most popular protocol analyzers. Wireshark records network traffic, analyzes it, and presents it clearly. VNUML is used to simulate the network. Since VNUML can only be used under Linux, Linux runs inside a virtual machine so that the work can also be carried out under Windows.
RMTI (RIP with Metric based Topology Investigation) was developed in the Computer Networks Group at the University of Koblenz-Landau. RMTI is an extension of RIP (Routing Information Protocol) that is intended to improve convergence behavior after network changes, in particular in the presence of routing loops. It does so by detecting routing loops and reducing the count-to-infinity problem. To demonstrate this desired behavior, the RMTI algorithm requires extensive evaluation. For this purpose, the client/server application XTPeer was developed in the same group. In combination with other software such as VNUML and the Quagga Routing Suite, the algorithm can be evaluated via XTPeer. The XTPeer application generates data during the simulations; this data can be exported as XML-conforming SDF files and re-imported into XTPeer without further processing. So far, the automated evaluation only covers the currently running simulation; evaluations across several simulations have to be computed manually by the user. To simplify this evaluation work for the user, this diploma thesis therefore aims to extend the XTPeer application with an evaluation module. The evaluation is to cover all stored simulation data and no longer, as before, only the currently running simulation. This allows for statistically more meaningful statements. In addition, the evaluation results can be underlined graphically.
This document proposes a concept for a personal key infrastructure in iCity. Certificates issued via a trust center (TC) guarantee a secure key exchange with verifiable authentication of the communication partner, protection against eavesdropping, and integrity and non-repudiation of the messages. The jointly trusted TC does not have to be reachable during communication; it only receives public information. The concept introduces several security levels that weight secure identification and anonymity differently.
Conventional security infrastructures in the Internet cannot be directly adopted to ambient systems, especially if based on short-range communication channels: personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a trust center, certified data has to be verified locally. Security techniques have to be adjusted to the special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiability, and user or device authentication. It supports three certificate levels with a different balance between authenticity and anonymity. This PKI is currently under implementation as part of the iCity project.
Hybrid automata are used as a standard means for the specification and analysis of dynamical systems. Several research efforts have employed them to formally specify reactive multi-agent systems situated in a physical environment, where the agents react continuously to their environment. The specified systems, in turn, are formally checked with the help of existing hybrid automata verification tools. However, when dealing with multi-agent systems, two problems arise. The first is a state space problem caused by the composition process, where the agents have to be composed in parallel into a single agent capturing all possible behaviors of the multi-agent system prior to the verification phase. The second concerns the expressiveness of verification tools when modeling and verifying certain behaviors. This paper therefore tackles these problems by showing how multi-agent systems, specified as hybrid automata, can be modeled and verified using constraint logic programming (CLP). In particular, a CLP framework is presented to show how the composition of multi-agent behaviors can be captured dynamically during the verification phase. This can relieve the state space complexity that may occur as a result of the composition process. Additionally, the expressiveness of the CLP model flexibly allows not only modeling multi-agent systems, but also checking various properties by means of reachability analysis. Experiments demonstrate the feasibility of our approach.
Software is vital for modern society. The efficient development of correct and reliable software is of ever-growing importance. An important technique to achieve this goal is deductive program verification: the construction of logical proofs that programs are correct. In this thesis, we address three important challenges for deductive verification on its way to wider deployment in industry: (1) verification of thread-based concurrent programs, (2) correctness management of verification systems, and (3) change management in the verification process. These are consistently brought up by practitioners when applying otherwise mature verification systems. The three challenges correspond to the three parts of this thesis (not counting the introductory first part, which provides technical background on the KeY verification approach). In the first part, we define a novel program logic for specifying correctness properties of object-oriented programs with unbounded thread-based concurrency. We also present a calculus for this logic, which allows verifying actual Java programs. The calculus is based on symbolic execution, which makes it easy for the user to understand. We describe the implementation of the calculus in the KeY verification system and present a case study. In the second part, we provide a first systematic survey and appraisal of the factors involved in the reliability of formal reasoning. We elucidate the potential and limitations of self-application of formal methods in this area and give recommendations based on our experience in the design and operation of verification systems. In the third part, we show how the technique of similarity-based proof reuse can be applied to the problems of the industrial verification life cycle. We address issues (e.g., coping with changes in the proof system) that are important in verification practice but have so far been neglected by research.
The motivation for this work was to give students in the computer pools of the University of Koblenz the opportunity to work with the simulation software VNUML (Virtual Network User Mode Linux). This software is used in the lectures and exercises of Computer Networks I and II, which makes working with it unavoidable for the students. In the past, however, there were repeated problems with installation and setup on private computers, even though simplified installation routines had been developed several times in earlier student research projects. A further obstacle to using VNUML is the fact that the software only runs in a Linux environment. Since not all students use Linux as their operating system and many shy away from installing it solely to use VNUML, making the software available on the university computers had been planned for some time. This thesis describes the process by which VNUML was installed in the computer pools, which problems occurred along the way, and which alternatives to the chosen approach would have been possible. The result also offers a very simple installation for private users without requiring a dedicated Linux installation. During development, further improvements were made that increased the usability of the final solution. The possibilities and ideas are so diverse that the research group will continue to work on this topic, and further optimizations can be made.
Cicero is an asynchronous discussion platform developed within the Information Systems and Semantic Web (ISWeb) group at the University of Koblenz-Landau. The web-based application follows the idea of a semantic wiki and is intended to be used in particular within the workflow of design processes. Cicero uses a restrictive argumentation model that, on the one hand, fosters structured discussions of difficult processes and, on the other hand, supports the decision-making process. The focus of this thesis is the evaluation of Cicero: the preceding theoretical part presents the background and functionality, and the subsequent practical part evaluates the application by means of a case study. The study was conducted within the exercise course on database fundamentals at the University of Koblenz in the winter semester 2008/2009, where the students had the task of discussing a design process with the help of Cicero. An acceptance test is carried out on the data collected in the case study, checking whether the users accept Cicero positively and apply the methodology correctly. Because of the prescribed argumentation model, the users have to change their communication behavior and adapt their usual discussion style to the application. The goal of the evaluation is to identify critical success factors in working with Cicero. Based on the identified weaknesses, targeted measures are finally proposed that could increase user acceptance of Cicero.
An empirical study to evaluate the location of advertisement panels by using a mobile marketing tool
(2009)
The efficiency of marketing campaigns is a precondition for business success. This paper discusses a technique to transfer advertisement content via Bluetooth technology while collecting market research information at the same time. Conventional advertisement media were enhanced by devices that automatically measure the number, distance, frequency, and exposure time of passersby, making information available to evaluate both the wireless media and the location in general. This paper presents a study analyzing these data. A cryptographic one-way function protects privacy during data acquisition.
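As an illustration of the privacy mechanism (the paper does not specify this exact construction, so the salt and hash choice below are assumptions), a one-way function can map a detected device address to a pseudonym so that passersby can be counted without storing the raw address.

```java
// Illustrative sketch: hash a Bluetooth device address with a per-campaign salt;
// only the digest is stored, so the original address cannot be recovered.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PseudonymSketch {
    static String pseudonym(String deviceAddress, String campaignSalt) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(campaignSalt.getBytes(StandardCharsets.UTF_8)); // salt per campaign/panel
        byte[] digest = sha.digest(deviceAddress.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest); // stored instead of the address
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pseudonym("00:11:22:33:44:55", "panel-7-week-12"));
    }
}
```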
The RIP-MTI algorithm was developed to improve loop detection in computer networks. The count-to-infinity problem (CTI) causes a network to reach a convergent state only very slowly. The hop-count value of 16, introduced as an artificial bound on the CTI, unfortunately also limits the maximum topology depth of a network. This thesis shows how gradually increasing the hop-count value and, beyond that, changing the update times of each router affect the convergence times of different network scenarios.
A switch is a network component for connecting several computers or network segments in a local area network (LAN). Since switches analyze network traffic and make logical decisions, they are also referred to as intelligent hubs. The way a switch works is very similar to that of a bridge, which is why the term multi-port bridge was also used initially. The goal of this diploma thesis is to produce documentation that gives interested students of information technology the opportunity, on the one hand, to carry out administration tasks on physical switches and, on the other hand, to build larger network topologies with virtual switches. This virtualization is achieved with the tool VDE developed by Virtual Square. The physical switches consist of four Catalyst 3500 XL devices. In the course of this thesis, both the operation of the individual systems and the interconnection of the virtual switches with the physical switches are explained. In this context, protocols and methods such as the Spanning Tree Protocol or the virtualization of a network by means of VLANs are also covered. Finally, the reader can apply what has been learned in several practical exercises.
Hybrid systems are the result of merging the two most commonly used models for dynamical systems, namely continuous dynamical systems defined by differential equations and discrete-event systems defined by automata. One can view hybrid systems as constrained systems, where the constraints describe the possible process flows, invariants within states, and transitions on the one hand, and characterize certain parts of the state space (e.g. the set of initial states, or the set of unsafe states) on the other hand. Therefore, it is advantageous to use constraint logic programming (CLP) as an approach to model hybrid systems. In this paper, we provide CLP implementations that model hybrid systems comprising several concurrent hybrid automata, whose size is only linear in the size of the given system description. Furthermore, we allow different levels of abstraction by making use of hierarchies as in UML statecharts. In consequence, the CLP model can be used for analyzing and testing the absence or existence of (un)wanted behaviors in hybrid systems. Thus, in summary, we obtain a procedure for the formal verification of hybrid systems by model checking, employing logic programming with constraints.
This work is about three subjects: virtualisation, real-time computing and parallel computing. Taken by itself, each of these subjects has already been well researched; however, when considering all three together, as is necessary when looking at embedded systems, numerous questions as well as new possibilities arise. In this work we develop models describing the behaviour and requirements of real-time applications which execute in a hierarchy of processes, as they do when running in a virtual machine. Also, the real-time capabilities of existing virtual machines are evaluated, and new interfaces for the virtualisation of multiprocessor machines which take into account the characteristics of embedded systems, specifically real-time computing, are defined, implemented and tested. This enables the safe, secure and efficient coexistence of programs with largely differing time constraints within separate virtual machines on a single, common multiprocessor computer.
In this diploma thesis, a transaction concept is implemented for the current implementation version of the TGraph library JGraLab Carnotaurus. After a basic introduction to the concept of TGraphs, the relevant implementation details of the TGraph library are explained. This is followed by a conceptual design describing the formal foundations of the transaction concept, with the ACID paradigm for transactions known from the database world serving as the scientific basis. In a next step, the object-oriented detailed design of the integration of the transaction concept into the existing overall system is carried out, on the basis of which the implementation is performed. An analysis and assessment of the implemented transaction concept (especially with regard to memory consumption and runtime behavior) conclude the thesis.
Performance of RIP-MTI
(2009)
This diploma thesis deals with the performance of RIP-MTI, in particular with the performance of its loop detection. The goal of the thesis is to examine how long RIP-MTI needs to detect loops and to uncover and solve problems that can occur during loop detection.
The goal of this work was to implement the operational approach to model transformation presented in [Rhe06] in Java, using the libraries "JGraLab" and "GReQL" available at the Institute for Software Technology of the University of Koblenz-Landau. The implementation was to prove that the proposed transformation approach is feasible in practice, which was demonstrated by several examples. The planned use in further projects of the IST will show in the future whether additional transformations can be realized or where the limits of the approach lie. Furthermore, it is conceivable to no longer perform the transformations in two steps (schema transformation before graph transformation), but to run both steps at once. This step, however, requires that JGraLab supports it as well.
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, modeling of event causality and correlation, and allows for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can be easily extended by domain-specific ontologies.
Avoidance of routing loops
(2009)
We introduce a new routing algorithm which can detect routing loops by evaluating routing updates more thoroughly. Our new algorithm is called Routing with Metric based Topology Investigation (RMTI); it is based on the simple Routing Information Protocol (RIP) and is compatible with all RIP versions. In case of a link failure, a network can reorganize itself if redundant links are available. Redundant links are only available in a network system like the Internet if the topology contains loops. Therefore, it is necessary to recognize and to prevent routing loops. A routing loop can be seen as a circular trace of routing update information which returns to the same router, either directly from a neighbor router or via a loop topology. Routing loops can consume a large amount of network bandwidth and impact the end-to-end performance of the network. Our RMTI approach is capable of improving the efficiency of distance-vector routing.
Computer networks is a subject that covers a broad spectrum of application areas, as the many different protocols illustrate. Each protocol has, so to speak, its own application area, and yet their tasks are essentially the same: to communicate and to exchange information, ideally in a secure and fast way. As examples, consider cars, mobile phones, local networks in a small company, an intranet in a larger organization, or networks that span countries. A modern car alone already integrates a variety of bus systems; CAN, LIN, FlexRay, and MOST are a few keywords in this regard. Moving from the automotive industry to autonomous systems, the Routing Information Protocol is used within autonomous systems and is one of the most widely used protocols there. The Routing Information Protocol is one of the central components of the research area of the Steigner group, and this work contributes, among other things, to supporting that research. The goal of this student research project is to integrate the test environment XTPeer with VNUML version 1.8.
Semantic desktop environments aim at improving the effectiveness and efficiency of users carrying out daily tasks within their personal information management (PIM) infrastructure. They support the user by transferring and exploiting the explicit semantics of data items across different PIM applications. Whether such an approach does indeed reach its aim of facilitating users' lives and, if so, to which extent, however, remains an open question that we address in this paper with the first summative evaluation of a semantic desktop approach. We approach the research question using our own semantic desktop infrastructure, X-COSIM. As data corpus, we have used over 100 emails and 50 documents extracted from the organizers of a conference-like event at our university. The evaluation has been carried out with 18 subjects. We have developed a test environment to evaluate COSIMail and COSIFile, two semantic PIM applications based on X-COSIM. As a result, we have found a significant improvement for typical PIM tasks compared to a standard desktop environment.
Querying for meta knowledge
(2008)
The Semantic Web is based on accessing and reusing RDF data from many different sources, to which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach re-uses existing RDF modeling possibilities in order to represent meta knowledge. It then extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request meta knowledge without modifying the query proper. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern.
The Routing Information Protocol (RIP) is an Internet standard routing protocol that was once among the most widely used routing protocols in IP networks. It is based on the distance-vector algorithm and is very simple in its function and structure. However, it has always suffered from the so-called counting-to-infinity (CTI) problem, in which the reachability of an actually failed route to a destination is seemingly maintained. Due to the continued exchange of no longer valid route information between RIP routers arranged in a ring, the distance to this destination is counted up, in theory to infinity. This creates routing loops that can severely disturb network operation, since the packets sent keep passing the same routers because of the loop and can neither reach their actual destination nor be discarded. The risk of the CTI problem severely limits the applicability of RIP. Networks in which RIP is used cannot grow arbitrarily, since the maximum size of the network is restricted to a relatively small distance between the routers in order to keep the duration and the consequences of the CTI problem small should it occur. The more strongly the topology of a network is meshed with additional, alternative links in order to counter failures, the more the risk of the CTI problem occurring after a failure increases. Until now, RIP only offered mechanisms that reduce the risk and the effects of the CTI problem but cannot eliminate the problem itself. With "RIP with minimal topology information" (RIP-MTI), the Computer Networks Group at the University of Koblenz-Landau has created a backwards-compatible extension of RIP that promises to eliminate the CTI problem. The RIP-MTI algorithm collects additional information about the topology of the network and uses it to distinguish correct from incorrect information about the reachability of destinations after a link failure. This diploma thesis deals with the implementation of the RIP-MTI algorithm. With the help of the specially developed RIP network test environment XTPeer, in which the CTI problem can be provoked in a controlled way, the effectiveness of the implementation of a Quagga RIP-MTI router is examined and developed further accordingly. For this purpose, the RIP-MTI algorithm is adapted to the implementation of the Quagga RIP routing software as well as to that of the network test environment XTPeer. The author himself regards this diploma thesis as an advanced intermediate stage before the production and release of a RIP-MTI routing software implementation that could also be used in production networks.
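To make the counting-to-infinity effect concrete, the following is a small, illustrative sketch (not from the thesis) of two routers bouncing a stale route back and forth after a link failure until RIP's bound of 16 is reached; topology and metrics are assumptions.

```java
// Illustrative sketch: routers A and B both know a route to net-X. After the
// direct link from A to net-X fails, each keeps believing the other's stale
// advertisement, incrementing the metric round by round up to RIP's "infinity".
public class CtiSketch {
    static final int INFINITY = 16;

    public static void main(String[] args) {
        int distA = 1;          // A reached net-X directly (metric 1) before the failure
        int distB = 2;          // B reaches net-X via A
        distA = INFINITY;       // the link between A and net-X fails

        int round = 0;
        while (distA < INFINITY || distB < INFINITY) {
            distA = Math.min(INFINITY, distB + 1); // A trusts B's stale advertisement
            distB = Math.min(INFINITY, distA + 1); // B in turn updates its route via A
            round++;
            System.out.printf("round %d: distA=%d distB=%d%n", round, distA, distB);
        }
        System.out.println("both routers finally agree that net-X is unreachable");
    }
}
```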
In this work, some well-known spam classification methods that are already in use for many applications are briefly explained in order to analyze them in a new scenario. In this scenario, the messages are not assumed to be of unbounded length, as is usual for e-mails, but of bounded length. The presentation form and the probable content of such messages are also considered from a specific context: the messages are assumed to appear on a ticker on a screen installed at a university. The message contents will therefore mostly relate to the university environment, and it is assumed that the contents are only posted by a closed group of people. After generating some spam and ham messages that meet the criteria of the scenario, classification methods are evaluated. The analysis ends with a discussion of the usability of those methods in this scenario. The study is completed by a program that implements the corresponding classification methods.
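As an example of such a well-known method, the sketch below shows a tiny naive Bayes token classifier; it is a hypothetical illustration, not the thesis' program, and the training messages are invented.

```java
// Illustrative sketch: naive Bayes classification of short messages with Laplace
// smoothing; a message is labelled spam if its spam log-score exceeds its ham log-score.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class NaiveBayesSketch {
    final Map<String, Integer> spamCounts = new HashMap<>();
    final Map<String, Integer> hamCounts = new HashMap<>();
    final Set<String> vocabulary = new HashSet<>();
    int spamMsgs = 0, hamMsgs = 0, spamTokens = 0, hamTokens = 0;

    void train(String message, boolean spam) {
        if (spam) spamMsgs++; else hamMsgs++;
        for (String token : message.toLowerCase().split("\\s+")) {
            vocabulary.add(token);
            if (spam) { spamCounts.merge(token, 1, Integer::sum); spamTokens++; }
            else      { hamCounts.merge(token, 1, Integer::sum);  hamTokens++; }
        }
    }

    boolean isSpam(String message) {
        double spam = Math.log((double) spamMsgs / (spamMsgs + hamMsgs));
        double ham  = Math.log((double) hamMsgs  / (spamMsgs + hamMsgs));
        int v = vocabulary.size(); // Laplace smoothing avoids zero probabilities
        for (String token : message.toLowerCase().split("\\s+")) {
            spam += Math.log((spamCounts.getOrDefault(token, 0) + 1.0) / (spamTokens + v));
            ham  += Math.log((hamCounts.getOrDefault(token, 0)  + 1.0) / (hamTokens + v));
        }
        return spam > ham;
    }

    public static void main(String[] args) {
        NaiveBayesSketch nb = new NaiveBayesSketch();
        nb.train("mensa menu today pasta", false);
        nb.train("lecture networks moved to room d239", false);
        nb.train("cheap pills buy now", true);
        System.out.println(nb.isSpam("buy cheap pills")); // true
    }
}
```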
This dissertation introduces a methodology for formal specification and verification of user interfaces under security aspects. The methodology allows to use formal methods pervasively in the specification and verification of human-computer interaction. This work consists of three parts. In the first part, a formal methodology for the description of human-computer interaction is developed. In the second part, existing definitions of computer security are adapted for human-computer interaction and formalized. A generic formal model of human-computer interaction is developed. In the third part, the methodology is applied to the specification and verification of a secure email client.
In today's world, social networks play an ever-growing role; on the web, a new Web 2.0 application appears almost daily. For this reason, it is becoming increasingly important to understand the processes in social networks and to be able to simulate them for research purposes. Since all common social networks today operate only in a one-dimensional manner, this diploma thesis deals with multidimensional social networks. Multidimensional social networks offer the possibility of defining different types of relationships. For example, two actors need not only stand in a "knows" relationship; this relationship type could also be divided into various sub-types, such as actor A "is a colleague of" actor B or actor C "is the spouse of" actor D. In this way, arbitrarily many, completely different relationship types can exist side by side. The thesis examines to what degree the properties of one-dimensional social networks also hold for multidimensional ones. To find this out, existing metrics are developed further: these metrics were designed for one-dimensional social networks and can now also be used to assess multidimensional social networks. A central question is how well people who have something to say to each other actually find each other. To obtain results that are as exact as possible, real data are used; they are obtained from a Web 2.0 project in which users post links on various topics (see Chapter 4). The first practical step of this work therefore consists of reading in the social network and simulating communication on this network between two persons with similar topic areas. The results of the simulation are then evaluated with the help of the previously developed metrics.
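The following is a minimal, illustrative sketch (not the thesis' implementation) of the underlying data structure: a multigraph whose edges carry a relation type, so several kinds of relationships can coexist and metrics can be computed per relation type.

```java
// Illustrative sketch: a multidimensional social network as a typed-edge multigraph,
// with a per-relation degree as a trivial example of a "multidimensional" metric.
import java.util.ArrayList;
import java.util.List;

public class MultiRelationGraphSketch {
    record Edge(String from, String to, String relation) {}
    final List<Edge> edges = new ArrayList<>();

    void relate(String a, String b, String relation) { edges.add(new Edge(a, b, relation)); }

    long degree(String actor, String relation) {
        return edges.stream()
                .filter(e -> e.relation().equals(relation))
                .filter(e -> e.from().equals(actor) || e.to().equals(actor))
                .count();
    }

    public static void main(String[] args) {
        MultiRelationGraphSketch g = new MultiRelationGraphSketch();
        g.relate("A", "B", "colleagueOf");
        g.relate("A", "B", "knows");
        g.relate("C", "D", "marriedTo");
        System.out.println(g.degree("A", "knows")); // 1
    }
}
```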
In this work, a total of five related but variant definitions of the Hermite normal form were presented, starting with the original version by Charles Hermite. Subsequently, four basic algorithms for computing the Hermite normal form of integer matrices were presented in detail and with complete examples. Particular emphasis was placed on conveying the linear-algebraic foundations, such as the use of permutation matrices for unimodular column operations, in an understandable way.
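For illustration, the sketch below shows the three elementary unimodular column operations from which such Hermite normal form algorithms are built; it is an assumed example, not one of the four algorithms presented in the thesis.

```java
// Illustrative sketch: unimodular column operations on an integer matrix
// (each corresponds to right-multiplication with a matrix of determinant +/-1).
public class ColumnOpsSketch {
    static void swapColumns(long[][] m, int i, int j) {
        for (long[] row : m) { long t = row[i]; row[i] = row[j]; row[j] = t; }
    }
    static void negateColumn(long[][] m, int i) {
        for (long[] row : m) row[i] = -row[i];
    }
    // add k times column j to column i
    static void addMultiple(long[][] m, int i, int j, long k) {
        for (long[] row : m) row[i] += k * row[j];
    }

    public static void main(String[] args) {
        long[][] m = {{4, 6}, {2, 8}};
        addMultiple(m, 1, 0, -1); // reduce the entry in row 0, column 1 using column 0
        System.out.println(java.util.Arrays.deepToString(m)); // [[4, 2], [2, 6]]
    }
}
```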
This volume contains those research papers presented at the Second International Conference on Tests and Proofs (TAP 2008) that were not included in the main conference proceedings. TAP was the second conference devoted to the convergence of proofs and tests. It combines ideas from both areas for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. On the surface, the two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of their incompatibility and taking instead the best of what each of these software engineering domains has to offer. The first TAP conference (held at ETH Zurich in February 2007) was an attempt to provide a forum for the cross-fertilization of ideas and approaches from the testing and proving communities. For the 2008 edition we found the Monash University Prato Centre near Florence to be an ideal place providing a stimulating environment. We wish to sincerely thank all the authors who submitted their work for consideration. And we would like to thank the Program Committee members as well as additional referees for their great effort and professional work in the review and selection process. Their names are listed on the following pages. In addition to the contributed papers, the program included three excellent keynote talks. We are grateful to Michael Hennell (LDRA Ltd., Cheshire, UK), Orna Kupferman (Hebrew University, Israel), and Elaine Weyuker (AT&T Labs Inc., USA) for accepting the invitation to address the conference. Two very interesting tutorials were part of TAP 2008: "Parameterized Unit Testing with Pex" (J. de Halleux, N. Tillmann) and "Integrating Verification and Testing of Object-Oriented Software" (C. Engel, C. Gladisch, V. Klebanov, and P. Rümmer). We would like to express our thanks to the tutorial presenters for their contribution. It was a team effort that made the conference so successful. We are grateful to the Conference Chair and the Steering Committee members for their support. And we particularly thank Christoph Gladisch, Beate Körner, and Philipp Rümmer for their hard work and help in making the conference a success. In addition, we gratefully acknowledge the generous support of Microsoft Research Redmond, who financed an invited speaker.
The University of Koblenz-Landau would like to apply for participation in the RoboCup Mixed Reality League in Suzhou, China 2008. Our team is composed of ten team members and two supervisors. All members are graduate students of Computational Visualistics. Our supervisors are Ph.D. candidates currently researching in the working groups of artificial intelligence and computer graphics.
IPv6 autoconfiguration
(2008)
This student research project presents various ways of automatically configuring network nodes under IPv6, the designated successor of the Internet Protocol. It provides a fairly detailed introduction to IPv6 networks, assuming that the reader has knowledge of version 4 of the IP protocol. Both stateless autoconfiguration and DHCPv6 are presented in detail, in theory as well as in practical use. For this purpose, the VNUML system is used, which was developed at the Technical University of Madrid and makes it possible to simulate several Linux systems on one host machine.
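As a small illustration of stateless autoconfiguration (not taken from the thesis; the example prefix is the documentation prefix 2001:db8::/64), the sketch below derives the modified EUI-64 interface identifier from a MAC address: flip the universal/local bit of the first byte and insert 0xFF 0xFE in the middle.

```java
// Illustrative sketch: modified EUI-64 derivation used by IPv6 stateless
// address autoconfiguration to build an interface identifier from a 48-bit MAC.
public class Eui64Sketch {
    static String interfaceIdentifier(int[] mac) {   // mac: 6 bytes from the NIC
        int[] id = {mac[0] ^ 0x02, mac[1], mac[2], 0xFF, 0xFE, mac[3], mac[4], mac[5]};
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 8; i += 2) {
            if (i > 0) sb.append(':');
            sb.append(String.format("%02x%02x", id[i], id[i + 1]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] mac = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};
        // appended to a router-advertised /64 prefix this yields the autoconfigured address
        System.out.println("2001:db8:0:1:" + interfaceIdentifier(mac));
    }
}
```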
Within projects, the employees of a company often have to work on complex problems for which there are no objectively right or wrong solutions. Instead, several solution proposals are developed within the design and decision processes in order to find a solution that is as close to optimal as possible by weighing pro and con arguments.
CAMPUS NEWS - artificial intelligence methods combined for an intelligent information network
(2008)
In this paper we describe a network for distributing personalised information using artificial intelligence methods. Reception of this information should be possible with everyday mobile equipment. Intelligent filtering and spam protection aim at integrating this technology into our environment. Information on the system architecture and the usage of the installation is also presented.
Simulation with VNUML
(2008)
This student research project is intended as an introduction to the topic of network simulation and, among other things, as an entry-level reference for future participants of the computer networks courses at the University of Koblenz. The text begins with the basics of UML and VNUML and then describes the installation, configuration, and use of the network simulator as well as frequently used tools. After that, concrete fields of application are presented: the simulated use of the packet filter iptables to realize firewalls and NAT, various network services, and finally simulated routing with the quagga suite.
This work provides a tool that can simplify, speed up, and automate structured tests of the RIP-MTI algorithm. The formerly two dimensions, topology and update constellation, against which the MTI extension had to be tested, could be reduced to the variable part of the topology. The temporal order in which updates occur can be controlled centrally. Previously, tests had to be controlled manually and very laboriously via scripts on the console. The developed test environment "XTPeer" makes it possible to carry out the same tests and many more with little effort.
The RIP-MTI algorithm developed at the University of Koblenz-Landau is a modification of the routing algorithm RIP that is intended to enable RIP to reduce the frequency of the counting-to-infinity problem (CTI). In order to prove the correctness and reliability of this algorithm, but also to be able to uncover weaknesses, it must be possible to test the behavior of the algorithm. The goal of this work is to make the routing information, which is managed decentrally by RIP routers running under VNUML, usable for centrally logging and analyzing the occurrence of CTIs. For this purpose, a piece of software is developed that collects, manages, and analyzes information on the network configuration, reachability, and update volume. In this way, in addition to the already known problematic network topologies, further topologies that are problematic for the individual RIP variants can be identified.
The E-KRHyper system is a model generator and theorem prover for first-order logic with equality. It implements the new E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus. E-KRHyper extends our previous KRHyper system, which has been used in a number of applications in the field of knowledge representation. In contrast to most first order theorem provers, it supports features important for such applications, for example queries with predicate extensions as answers, handling of large sets of uniformly structured input facts, arithmetic evaluation and stratified negation as failure. It is our goal to extend the range of application possibilities of KRHyper by adding equality reasoning.
Hyper tableaux with equality
(2007)
In most theorem proving applications, a proper treatment of equational theories or equality is mandatory. In this paper we show how to integrate a modern treatment of equality in the hyper tableau calculus. It is based on splitting of positive clauses and an adapted version of the superposition inference rule, where equations used for paramodulation are drawn (only) from a set of positive unit clauses, the candidate model. The calculus also features a generic, semantically justified simplification rule which covers many redundancy elimination techniques known from superposition theorem proving. Our main results are soundness and completeness, but we briefly describe the implementation, too.
In this paper we describe a network for distributing personalized information in a metropolitan area. We discuss the system architecture of our Bluetooth-based information system as well as the reasoning process that matches users' needs with potential messages. We furthermore present our findings on parallelizing Bluetooth connection setup and on performance.
In this paper we describe a network for distributing personalized information within a pervasive university. We discuss the system architecture of our Bluetooth-based CampusNews system, both from the administrator's and the user's viewpoint. We furthermore present first statistical data about the usage of the partial installation at the Koblenz campus, together with an outlook on future work.
Probability propagation nets
(2007)
A class of high level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent the probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency directed backjumping and branch local operation. As an instance of that framework an adaption of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
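For orientation, the following is a plain textbook DPLL sketch, not the paper's tableau-based adaption with dependency-directed backjumping and branch-local operation; clause representation and the branching heuristic are assumptions.

```java
// Illustrative sketch: minimal DPLL. Clauses are lists of literals; a positive
// integer v denotes variable v, a negative integer denotes its negation.
import java.util.ArrayList;
import java.util.List;

public class DpllSketch {
    static boolean dpll(List<List<Integer>> clauses) {
        if (clauses.isEmpty()) return true;                          // all clauses satisfied
        if (clauses.stream().anyMatch(List::isEmpty)) return false;  // empty clause: conflict
        int literal = clauses.get(0).get(0);                         // naive branching choice
        return dpll(assign(clauses, literal)) || dpll(assign(clauses, -literal));
    }

    // simplify the clause set under the assignment "literal = true"
    static List<List<Integer>> assign(List<List<Integer>> clauses, int literal) {
        List<List<Integer>> result = new ArrayList<>();
        for (List<Integer> clause : clauses) {
            if (clause.contains(literal)) continue;          // clause satisfied, drop it
            List<Integer> reduced = new ArrayList<>(clause);
            reduced.remove(Integer.valueOf(-literal));       // falsified literal removed
            result.add(reduced);
        }
        return result;
    }

    public static void main(String[] args) {
        // (x1 or x2) and (not x1 or x2) and (not x2)  ->  unsatisfiable
        List<List<Integer>> cnf = List.of(List.of(1, 2), List.of(-1, 2), List.of(-2));
        System.out.println(dpll(cnf) ? "satisfiable" : "unsatisfiable");
    }
}
```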
Knowledge compilation is a common technique for propositional logic knowledge bases. The idea is to transform a given knowledge base into a special normal form ([MR03],[DH05]), for which queries can be answered efficiently. This precompilation step is very expensive but it only has to be performed once. We propose to apply this technique to knowledge bases defined in Description Logics. For this, we introduce a normal form, called linkless concept descriptions, for ALC concepts. Further we present an algorithm, based on path dissolution, which can be used to transform a given concept description into an equivalent linkless concept description. Finally we discuss a linear satisfiability test as well as a subsumption test for linkless concept descriptions.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specifying classes using an expressive logical language with highly flexible, dynamic and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models, combining the benefits of UML models and OWL ontologies.
This paper shows how multiagent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification on different levels of abstraction on the one hand, and expressing real-time system behavior with continuous variables on the other hand. It is shown not only how multi-robot systems can be modeled by a combination of hybrid automata and hierarchical state machines, but also how model checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronization which takes time and avoids state explosion to a certain extent.
In this paper we describe a series of projects on location-based and personalised information systems. We start with a basic research project and show how, with the help of two further, more application-oriented projects, we arrived at a product. This product is developed by a consortium of enterprises and is already in use in the city of Koblenz.
This paper offers an informal overview and discussion of first-order predicate logic reasoning systems, together with a description of applications which are carried out in the Artificial Intelligence Research Group of the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
The term "Augmented Reality (AR)" denotes the superposition of additional virtual objects and supplementary information over real images. The joint project Enhanced Reality (ER)1 aims at a generic AR-system. The ER-project is a cooperation of six different research groups of the Department of Computer Science at the University of Koblenz-Landau. According to Ronald Azuma an AR-system combines real and virtual environments, where the real and virtual objects are registered in 3-D, and it provides interactivity in real time [Azu97]. Enhanced Reality extends Augmented Reality by requiring the virtual objects to be seamlessly embedded into the real world as photo-realistic objects according to the exact lighting conditions. Furthermore, additional information supplying value-added services may be displayed and interaction of the user may even be immersive. The short-term goal of the ER-project is the exploration of ER-fundamentals using some specific research scenarios; the long-term goal is the development of a component-based ER-framework for the creation of ER-applications for arbitrary application areas. ER-applications are developed as single-user applications for users who are moving in a real environment and are wearing some kind of visual output device like see-through glasses and some mobile end device. By these devices the user is able to see reality as it is, but he can also see the virtual objects and the additional information about some value-added service. Furthermore he might have additional devices whereby he can interact with the available virtual objects. The development of a generic framework for ER-applications requires the definition of generic components which are customizable and composable to build concrete applications and it requires a homogeneous data model which supports all components equally well. The workgroup "Software Technology"2 is responsible for this subproject. This report gives some preliminary results concerning the derivation of a component-based view of ER. There are several augmented reality frameworks like ARVIKA, AMIRE, DWARF, MORGAN, Studierstube and others which offer some support for the development of AR-applications. All of them ease the use of existing subsystems like AR-Toolkit, OpenGL and others and leverage the generation process for realistic systems by making efficient use of those subsystems. Consequently, they highly rely on them.
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including: compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. We then present the developed ontology and discuss it with respect to our requirements.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or multiple other graphs. By this extension it becomes possible to define a graph including a view onto other graphs and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying by the set of constraints imposed on the underlying SPARQL queries, is investigated. We demonstrate the capabilities of networked graphs by a simple use case.
SNMP in VNUML simulations
(2007)
The Simple Network Management Protocol (SNMP) is regarded as the universal language of network management. As early as the beginning of the 1990s, the first version of SNMP was declared the standard Internet management protocol by the Internet Engineering Task Force (IETF) and became part of the TCP/IP protocol suite. SNMP implementations are available for most operating system platforms, and many network-capable devices and network management programs support SNMP, which has proven itself above all for managing cross-platform and vendor-independent networks. Virtual Network User Mode Linux (VNUML) is a powerful network simulation program for Linux with which virtual Linux computer networks can be built in order to simulate program runs. VNUML network simulations are primarily suited for developing, analyzing, and testing Linux network software such as network protocols. The simulation program was created as part of the Euro6IX project for the introduction of the IPv6 standard in Europe, at the Telematics Engineering Department of the Technical University of Madrid. The machines of the virtual networks built by VNUML are based on User Mode Linux (UML), a widely used virtualization of the Linux kernel. This student research project deals with the possibilities and the functioning of network management with SNMP. To this end, the SNMP software Net-SNMP is used in VNUML simulations in order to investigate the possibilities of configuring and working with SNMP in a practice-oriented environment. Using Net-SNMP in VNUML simulations can serve to prepare the integration of SNMP-based network management for relevant computer networks and to explore configuration and usage options, or to equip VNUML simulations in general with this proven network management system for support.
This student research project is intended as an introduction to working with Virtual Network User Mode Linux (VNUML). It focuses on version 1.6 of VNUML and shows the essential differences, advantages, and disadvantages compared to version 1.5. The next two chapters give a brief overview of VNUML and UML. The following chapter deals with the installation of VNUML 1.6, its prerequisites, and possible error messages. After that, VNUML 1.6 is presented in detail with its own examples, both practically and theoretically. Then the essential differences between VNUML 1.5 and VNUML 1.6 are described. The thesis concludes with a chapter of brief definitions of terms and an appendix containing all XML files. The structure of an XML file is not discussed further in this work; for that, the reader is referred to the work of Thomas Chmielowiec and Tim Keupen, in which the XML syntax and semantics are described in detail.
Problems in the analysis of requirements often lead to failures when developing software systems. This problem is nowadays addressed by requirements engineering. The early involvement of all kinds of stakeholders in the development of such a system and a structured process to elicit and analyse requirements have made it a crucial first step in software development. The increasing complexity of modern software systems, however, leads to a rising amount of information which has to be dealt with during analysis. Without the support of appropriate tools this would be almost impossible. Especially in bigger projects, which tend to be spatially distributed, effective requirements engineering could not be implemented without this kind of support. Today there is a wide range of tools dealing with this matter. They have been in use for some time now and, in their most recent versions, realize the most important aspects of requirements engineering. Within the scope of this thesis some of these tools will be analysed, focussing on both the major functionalities concerning the management of requirements and the repository of these tools. The results of this analysis will be integrated into a reference model.
The high cost of routing infrastructure makes checking theories about larger networks a very difficult and expensive task. One possible approach to this problem is the use of virtual instead of physical infrastructure. OPNET IT Guru is a software suite designed to simulate large networks and present the relevant information. This allows validating extensive changes before actually implementing them on a production network, and testing theories without the overhead of a physical infrastructure.