Modern software projects are composed of several software languages, software technologies and different kinds of artifacts. Understanding the software project at hand, including the semantic links between its different parts, therefore becomes a difficult challenge for a developer. One approach to address this issue is to document the software project with the help of a linguistic architecture. This kind of architecture can be described with the help of the MegaL ontology. A remaining challenge is its creation, since this requires different kinds of skills. Therefore, this paper proposes an approach for the automatic extraction of a linguistic architecture. The open source framework Apache Jena, which focuses on Semantic Web technologies like RDF and OWL, is used to define custom rules that are capable of inferring new knowledge based on the defined or already extracted RDF triples. The complete approach is tested in a case study on ten different open source projects. The aim of the case study is to extract a linguistic architecture that describes the use of Hibernate in the selected projects. In the end, the result is evaluated with the help of different metrics, using both an internal and an external approach.
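To make the rule-based inference step concrete, the following minimal sketch shows how a custom rule could be defined and applied with Apache Jena's GenericRuleReasoner. The namespace, the property names and the triple contents are illustrative assumptions, not the rules used in the thesis.

```java
// Hypothetical sketch: inferring a "uses Hibernate" link with a custom Jena rule.
// The ex: namespace and the property/resource names are illustrative assumptions.
import java.util.List;

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class HibernateRuleDemo {
    public static void main(String[] args) {
        String ex = "http://example.org/megal#";

        // Base model holding facts assumed to have been extracted from the project.
        Model base = ModelFactory.createDefaultModel();
        base.add(base.createResource(ex + "pom.xml"),
                 base.createProperty(ex + "declaresDependency"),
                 base.createResource(ex + "Hibernate"));

        // Custom rule: an artifact that declares the Hibernate dependency "uses" Hibernate.
        List<Rule> rules = Rule.parseRules(
            "[usesHibernate: (?a <" + ex + "declaresDependency> <" + ex + "Hibernate>) " +
            "-> (?a <" + ex + "uses> <" + ex + "Hibernate>)]");

        Reasoner reasoner = new GenericRuleReasoner(rules);
        InfModel inferred = ModelFactory.createInfModel(reasoner, base);

        // The inferred model now contains the derived ex:uses triple.
        inferred.listStatements(null, inferred.getProperty(ex + "uses"),
                                (org.apache.jena.rdf.model.RDFNode) null)
                .forEachRemaining(System.out::println);
    }
}
```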
E-learning applications are currently in vogue, but that is not the only reason they are of great interest: their didactic concept is the main focus. Today it is technically possible to involve participants interactively in an e-learning course, and the learning steps can be monitored by the application. In contrast to a lecture, theory can be linked directly with practice. The goal of this thesis is to design a concept and develop a prototype of an interactive development environment for Java aimed at programming beginners. Current e-learning platforms for Java were analyzed, and a concept for an e-learning platform was developed from the insights gained. This concept was successfully implemented as a prototype. Established technologies such as the Django framework and the JavaScript framework Vue.js were used for the implementation. The course was tested by nine participants, and a survey on the course application was conducted in addition. The test results confirm that the participants passed the course.
Scalability and guaranteed delivery are essential properties of any routing algorithm. In wireless ad hoc networks, the combination of greedy and face routing offers both, provided that a planar graph is available. However, it is precisely the error-free planarization of realistic networks that causes difficulties. This thesis therefore addresses the question of which error rate results when the graph is only partially planarized using purely local methods. To this end, a simulation environment was created that generates random connectivity graphs using the log-normal shadowing model. These graphs were then partially planarized by two different local strategies. Nine settings were defined, resulting from three different graph densities and three different values for the sigma parameter of the log-normal shadowing model. For each setting, the behavior of greedy, face, and combined greedy-face routing was examined and evaluated in 2000 simulation runs. Finally, the simulation results were assessed and discussed.
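As an illustration of the graph generation step, the sketch below decides whether two nodes are connected under the log-normal shadowing model; all parameter values (path-loss exponent, reference loss, sigma, receiver threshold) are assumptions chosen for the example, not the settings used in the thesis.

```java
// Minimal sketch (assumed parameters): link decision under the log-normal shadowing model.
// Two nodes are connected if the received power at distance d stays above a sensitivity threshold.
import java.util.Random;

public class LogNormalShadowingLink {
    static final Random RNG = new Random(42);

    static boolean isConnected(double d, double txPowerDbm, double pathLossExponent,
                               double refLossDb, double refDistance, double sigmaDb,
                               double thresholdDbm) {
        double pathLoss = refLossDb + 10.0 * pathLossExponent * Math.log10(d / refDistance);
        double shadowing = RNG.nextGaussian() * sigmaDb;   // X_sigma ~ N(0, sigma^2)
        double rxPower = txPowerDbm - pathLoss + shadowing;
        return rxPower >= thresholdDbm;
    }

    public static void main(String[] args) {
        // Example: 50 m link, 0 dBm transmit power, exponent 3.0, 40 dB loss at 1 m,
        // sigma 4 dB and a -90 dBm receiver threshold (all values illustrative).
        System.out.println(isConnected(50.0, 0.0, 3.0, 40.0, 1.0, 4.0, -90.0));
    }
}
```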
Research in model-based object recognition and object localization has a promising future; building recognition in particular offers a wide range of applications. Determining the position and orientation of the observer relative to a building is a central component of building recognition.
The core of this work is the development of a system for model-based pose estimation that operates independently of the application domain. Model-based pose estimation from photographs of buildings is chosen as the application domain. In preparation for pose estimation on building images, the model-based recognition of dominoes and poker cards is realized. An application-independent control strategy interprets application-specific models in order to both localize them in the image and determine the pose with the help of these models. Explicitly represented model knowledge is used, so that image features can be assigned to model components. These correspondences make it possible to recover the camera pose from a single monocular image. The method is independent of the use case and can also handle models of other rigid objects, provided they conform to the defined model representation. Determining the pose of a model from a single image, which may contain noise and occlusions, requires a systematic comparison of the model with the image data. Quantitative and qualitative evaluations confirm the accuracy of the estimated building poses.
In addition, this work presents a semi-automatic method for generating a building model. The building model used, which contains both semantic and geometric knowledge, satisfies the requirements of object recognition and pose estimation while still following existing standards, and is a prerequisite for the pose estimation method. The guiding principle of the model's representation is that it remains interpretable by humans. A semi-automatic approach was chosen because a fully automatic realization of this procedure can hardly achieve the necessary precision. The developed method achieves the precision required for pose estimation while reducing user interaction to a minimum. A qualitative evaluation confirms the precision achieved in generating the building model.
This work analyzes whether a hardware prototype for an ad hoc network can be created on an Arduino basis. The objective of the prototype development is the creation of a sensor node with a modular design whose components can be exchanged easily. Furthermore, the application area imposes requirements that the node must fulfill. These requirements are derived from the Tmote Sky sensor node; the newly created sensor node must therefore be a possible alternative to it and fulfill the same functions. For that purpose, this study reviews several available Arduino microcontrollers for their suitability as a sensor node. Later in the work, the composition of the sensor node is documented, and the hardware and its costs are presented. Thanks to easily exchangeable radio modules, the resulting hardware prototype covers the 433 MHz, 866 MHz and 2.4 GHz radio frequencies. At the end of the work, the sensor node prototype is used in an experiment to check its suitability for water monitoring; the experiment was performed on land and on water, and the results were evaluated. In the end, the prototype fulfilled most of the requirements, but its cost was slightly too high.
This thesis proposes the use of MSR (Mining Software Repositories) techniques to identify software developers with exclusive expertise about specific APIs and programming domains in software repositories. A pilot tool for finding such "Islands of Knowledge" in Node.js projects is presented and applied in a case study to the 180 most popular npm packages. It is found that on average each package has 2.3 Islands of Knowledge, which is possibly explained by the finding that npm packages tend to have only one main contributor. In a survey, the maintainers of 50 packages are contacted and asked for their opinions on the results produced by the tool. Together with their responses, this thesis reports on experiences gained with the pilot tool and on how future iterations could produce even more accurate statements about the distribution of programming expertise in developer teams.
Motion capture refers to the process of capturing, processing and translating real motions onto a 3D model. Motion capture lends human and animal movement an indispensable realism, not only in the movie and gaming industries; it is also used extensively in robotics, medical movement therapy, and AR and VR. In addition to the well-established optical systems, alternative systems based on inertial measurement units (IMUs) are increasingly being used, especially in the latter three areas, because they do not rely on external cameras and thus restrict the area of movement considerably less.
Rapid technical progress in the manufacturing of such IMUs allows small, body-worn sensors to be built that can transfer movements to a computer. The application of inertial systems to motion capture, however, is still at an early stage. Problems like drift can currently only be minimized by adding extra hardware that corrects the acquired data.
In this master's thesis, an IMU-based motion capture system is designed and constructed. This comprises the assembly of the hardware components as well as the software-side processing of the received movement data and its application to a 3D model.
Using semantic data from general-purpose programming languages does not provide the unified experience one would want for such an application. Static error checking is lacking, especially with regard to static typing of the data. Based on the previous work on λ-DL, which integrates semantic queries and concepts as types into a typed λ-calculus, this work takes its ideas a step further to meld them into a real-world programming language. This thesis explores how λ-DL's features can be extended and integrated into an existing language, researches an appropriate extension mechanism and produces Semantics4J, a JastAdd-based semantic-data language extension of Java for type-safe OWL programming, together with examples of its usage.
The aim of this work was to model accidents and traffic in Oman (Muscat) under theoretical assumptions by means of agent-based simulation, with TRASS serving as the basis for the simulation environment. In addition, possible countermeasures against the high accident rate, derived from different simulation variants, were to be proposed.
It was not part of this work to determine whether the theoretical assumptions underlying the simulation should actually be applied and implemented in reality.
It was also necessary to check whether three different simulation variants could be represented distinctly in TRASS. The three variants were: simulation with traffic-light control, without traffic-light control, and finally with traffic-light control that is ignored with a probability of 70%. All three variants produced different results, and each was shown to have advantages and disadvantages with respect to traffic flow and accident rate.
It should not be overlooked that this work was modeled, analyzed and evaluated exclusively with the TRASS framework, so all of its potentials and deficits flow into the overall results. After critical examination, the statements are therefore made from a single perspective only: the theoretical assumptions about Oman and the modeling limits of TRASS. In order to make a reliable statement about the actual implementation of the proposals, further comprehensive analyses and simulations over a much wider range of traffic variants for Oman are necessary.
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems have to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, like automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the problems which have to be overcome to reach the aforementioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive, but it has to be performed only once and pays off as soon as its result is used to answer many queries. In the thesis at hand, I develop a normal form suitable for knowledge compilation of description logic knowledge bases, called the linkless normal form. From a computational point of view, the linkless normal form has very favorable properties, which are introduced in this thesis.
For the second problem, I focus on changes occurring on the instance level of description logic knowledge bases. I introduce three change operators relevant to these knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that in all three cases the resulting knowledge base is guaranteed to be consistent and the changes performed to the knowledge base are minimal. This allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues I suggest adapting techniques successfully used in other logics to obtain promising methods for description logic knowledge bases.
The publication of open source software aims to support the reuse, the distribution and the general utilization of software. This can only be enabled by the correct usage of open source software licenses. Organizations therefore provide a multitude of open source software licenses with different features, from which a developer can choose in order to regulate the interaction with his or her software. Those licenses are the core theme of this thesis.
After an extensive literature review, two general research questions are elaborated in detail. First, a usage analysis of licenses in the open source sector is performed to identify current trends and statistics. This includes questions concerning the distribution of licenses, the consistency of their usage, their association over time, and their publication.
Afterwards, the recommendation of licenses for specific projects is investigated. To this end, a recommendation logic is presented that takes several influences on a suitable license choice into account in order to generate the most applicable recommendation possible. Besides the exact license features a user can choose from, different methods of ranking the recommendation results are proposed. This is based on an examination of the current situation of open source licensing and license suggestion. Finally, the logic is evaluated on the exemplary use case of the 101companies project.
Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
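For orientation, the following sketch illustrates the greedy forwarding step that underlies geographic routing: among the current node's neighbors, the one closest to the destination is chosen, and if no neighbor is closer than the current node, a recovery strategy such as face routing has to take over. The coordinate-array representation is an assumption made for this example only.

```java
// Bird's-eye illustration (assumptions only): greedy geographic forwarding.
public class GreedyForwarding {
    /** Returns the index of the chosen neighbour, or -1 if the packet is stuck in a local minimum. */
    static int nextHop(double[] current, double[][] neighbours, double[] destination) {
        double best = dist(current, destination);
        int choice = -1;
        for (int i = 0; i < neighbours.length; i++) {
            double d = dist(neighbours[i], destination);
            if (d < best) { best = d; choice = i; }   // strictly closer to the destination
        }
        return choice;                                 // -1 triggers a recovery scheme, e.g. face routing
    }

    private static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```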
This thesis contributes to reactive algorithm research in two ways, on an abstract level as well as through the introduction of novel algorithms:
For the very first time, reactive algorithms are considered as a whole and as an individual research area. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based, topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.
The present thesis deals with the realization of a stepper motor driver on an 8-bit microcontroller by the company Atmel. The focus is on the development of a current control scheme which allows microstepping in addition to the basic modes of operation such as full- and half-step. For this purpose, a PI controller is derived using physical and control-engineering principles and implemented on the microcontroller. In this context, essential knowledge for the practical implementation is discussed. In addition, the development of the hardware is documented, which is of great significance for the current measurement.
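A minimal sketch of such a discrete PI current control loop is shown below; the gains, the control period and the PWM duty-cycle output are assumptions for illustration and not the controller derived in the thesis.

```java
// Minimal sketch of a discrete PI current control loop: the PWM duty cycle of a motor coil
// is adjusted so that the measured coil current follows the (e.g. sinusoidal) microstep target.
public class PiCurrentController {
    private final double kp;      // proportional gain (assumed)
    private final double ki;      // integral gain (assumed)
    private final double dt;      // control period in seconds
    private double integral = 0.0;

    PiCurrentController(double kp, double ki, double dt) {
        this.kp = kp;
        this.ki = ki;
        this.dt = dt;
    }

    /** Returns a PWM duty cycle in [0,1] for one control step. */
    double step(double targetCurrent, double measuredCurrent) {
        double error = targetCurrent - measuredCurrent;
        integral += error * dt;
        // Crude anti-windup: keep the integral term within a bounded range.
        integral = Math.max(-1.0, Math.min(1.0, integral));
        double output = kp * error + ki * integral;
        // Clamp to the actuator range.
        return Math.max(0.0, Math.min(1.0, output));
    }
}
```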
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. The Semantic Web matches these characteristics as well, since it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore also important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism that implements these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each requirement and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
This thesis analyzes the online attention towards scientists and their research topics. The studies compare the attention dynamics towards the winners of important scientific prizes with those towards scientists who did not receive a prize. Web signals such as Wikipedia page views, Wikipedia edits, and Google Trends were used as a proxy for online attention. One study focused on the time between the creation of the article about a scientist and the articles about their research topics. It was discovered that articles about research topics were created closer in time to the articles of prize winners than to those of scientists who did not receive a prize. One possible explanation could be that the research topics are more closely related to the scientist who received an award; this supports the view that scientists who received the prize introduced the topics to the public. Another study considered the public attention trends towards the related research topics before and after a page about a scientist was created. It was observed that after a page about a scientist was created, research topics of prize winners received more attention than the topics of scientists who did not receive a prize. Furthermore, it was demonstrated that Nobel Prize winners receive less attention before receiving the prize than the potential nominees from the list of Citation Laureates of Thomson Reuters, and that their popularity declines faster after they receive it. It was also shown that it is difficult to predict the prize winners based on the attention dynamics towards them.
The provision of electronic participation services (e-participation) is a complex socio-technical undertaking that needs comprehensive design and implementation strategies. E-participation service providers, in most cases administrations and governments, struggle with changing requirements that demand more transparency, better connectivity and increased collaboration among different actors. At the same time, fewer staff are available. As a result, recent research assesses only a minority of e-participation services as successful. The challenge is that the e-participation domain lacks comprehensive approaches to design and implement (e-)participation services. Enterprise Architecture (EA) frameworks have evolved in information systems research as an approach to guide the development of complex socio-technical systems. This approach can guide the design and implementation of such services if the collection of organisations with the commonly held goal of providing participation services is understood as an E-Participation Enterprise (EE). However, research and practice in the e-participation domain have not yet exploited EA frameworks. Consequently, the problem scope that motivates this dissertation is the existing research gap concerning the deployment of EA frameworks in e-participation design and implementation. The research question that drives this research is: What methodical and technical guidance do architecture frameworks provide that can be used to design and implement better and more successful e-participation?
This dissertation presents a literature study showing that existing approaches have not yet covered the challenges of comprehensive e-participation design and implementation. Accordingly, the research moves on to investigate established EA frameworks such as the Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate for their use. While the application of these frameworks in e-participation design and implementation is possible, an integrated approach has been lacking so far. The synthesis of the literature review and of practical insights into the design and implementation of e-participation services from four projects shows the challenges of adapting architecture frameworks to this domain. However, the research also shows the potential of a combination of the different approaches. Consequently, the research moves on to develop the E-Participation Architecture Framework (EPART-Framework). To this end, the dissertation applies design science research including literature review and action research. Two independent settings test an initial version of the EPART-Framework. The results lead to the EPART-Framework presented in this dissertation.
The EPART-Framework comprises the EPART-Metamodel with six EPART-Viewpoints, which frame the stakeholder concerns: the Participation Scope, the Participant Viewpoint, the Participation Viewpoint, the Data & Information Viewpoint, the E-Participation Viewpoint, and the Implementation & Governance Viewpoint. The EPART-Method supports the stakeholders in designing the EE and implementing e-participation, and stores its output in an architecture description and a solution repository. It consists of five consecutive phases accompanied by requirements management: Initiation, Design, Implementation and Preparation, Participation, and Evaluation. The EPART-Framework fills the gap between the e-participation domain and the enterprise architecture framework domain. The evaluation gives reasonable evidence that the framework is a valuable addition both in academia and in practice to improve e-participation design and implementation. At the same time, it shows opportunities for future research to extend and advance the framework.
The aim of this thesis was to develop and evaluate a method that enables the use of traditional dialog marketing tools on the web. For this purpose, a prototype of a website with "extended real-time interaction (eEI)" capabilities was implemented and tested. The prototype was evaluated using a methodology based on the five-dimensional "e-service quality" measure according to Gwo-Guang Lee and Hsiu-Fen Lin, whose foundation is the SERVQUAL model. A statistical analysis of the user study results showed a significant correlation between eEI and user satisfaction. Prior to the actual realization of eEI, the "Technology Acceptance Model" by Fred D. Davis was used to investigate currently used real-time interaction systems.
The following thesis shows one way of localizing an object by means of ultrasound. Three to five sensors distributed in the room are used to determine the position of an object relative to the positions of the sensors from distance information. A special feature is that the sensors can be distributed almost arbitrarily in the plane; their arrangement is determined by the system during a calibration phase with the support of the user. Depending on the number of sensors, an equilateral triangle, a square or a pentagram should yield the best results. To convert the relative motion into an absolute one, the measurements are converted into meters based on the clock rate of the microcontrollers, the prescaler of the timer used, and the speed of sound.
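The sketch below illustrates the two computational steps just described under simplifying assumptions: converting timer ticks into a distance via clock rate, prescaler and speed of sound, and recovering a 2D position from three distance measurements by linearized trilateration. All concrete values, and the assumption of direct (non-echo) time of flight, are illustrative and not taken from the thesis.

```java
// Hedged sketch (own assumptions, not the thesis code): tick-to-metre conversion and
// 2D trilateration from three ultrasound distance measurements.
public class UltrasoundTrilateration {
    // Convert a timer tick count into a distance, given clock, prescaler and speed of sound.
    static double ticksToMetres(long ticks, double clockHz, int prescaler, double speedOfSound) {
        double seconds = ticks * prescaler / clockHz;
        return seconds * speedOfSound;
    }

    /** Solves the linearised trilateration system for anchors a1..a3 and distances d1..d3. */
    static double[] locate(double[] a1, double[] a2, double[] a3,
                           double d1, double d2, double d3) {
        double ax = 2 * (a2[0] - a1[0]), ay = 2 * (a2[1] - a1[1]);
        double bx = 2 * (a3[0] - a1[0]), by = 2 * (a3[1] - a1[1]);
        double c1 = d1 * d1 - d2 * d2 + a2[0] * a2[0] - a1[0] * a1[0] + a2[1] * a2[1] - a1[1] * a1[1];
        double c2 = d1 * d1 - d3 * d3 + a3[0] * a3[0] - a1[0] * a1[0] + a3[1] * a3[1] - a1[1] * a1[1];
        double det = ax * by - ay * bx;              // assumes non-collinear sensor positions
        return new double[] { (c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det };
    }

    public static void main(String[] args) {
        // 1160 ticks at 16 MHz with prescaler 8 and 343 m/s correspond to roughly 0.199 m.
        System.out.println(ticksToMetres(1160, 16_000_000, 8, 343.0));
        // Three sensors at (0,0), (1,0), (0,1), all ~0.7071 m away: the object is near (0.5, 0.5).
        double[] p = locate(new double[]{0, 0}, new double[]{1, 0}, new double[]{0, 1},
                            0.7071, 0.7071, 0.7071);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```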
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible for the sensors. Algorithms are tested and validated on different autonomous robot platforms and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
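As a rough illustration of the optimization step, the following sketch performs one Gibbs-sampling sweep over a grid of terrain labels with per-cell unary scores and a simple Potts-style smoothness term. The energy terms, the 4-neighborhood and the sequential sweep are simplifying assumptions; the thesis parallelizes this computation on CPU and GPU.

```java
// Sketch under strong simplifications: one Gibbs-sampling sweep over a label grid with a
// Potts-style smoothness prior and per-cell unary energies (e.g. derived from terrain features).
import java.util.Random;

public class GibbsSweep {
    static void sweep(int[][] labels, double[][][] unary, int numLabels,
                      double smoothness, Random rng) {
        int h = labels.length, w = labels[0].length;
        int[][] offsets = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};   // 4-neighbourhood
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double[] p = new double[numLabels];
                double sum = 0.0;
                for (int l = 0; l < numLabels; l++) {
                    double energy = unary[y][x][l];
                    for (int[] o : offsets) {
                        int ny = y + o[0], nx = x + o[1];
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && labels[ny][nx] != l) {
                            energy += smoothness;   // penalise label changes between neighbours
                        }
                    }
                    p[l] = Math.exp(-energy);
                    sum += p[l];
                }
                // Sample the new label from the conditional distribution.
                double r = rng.nextDouble() * sum, acc = 0.0;
                for (int l = 0; l < numLabels; l++) {
                    acc += p[l];
                    if (r <= acc) { labels[y][x] = l; break; }
                }
            }
        }
    }
}
```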
The publication of freely available and machine-readable information has increased significantly in recent years. Especially the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF), and anybody can simply publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected with links. Linked Data entities are identified by URIs which are dereferenceable to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints it is possible to create a federation of distributed RDF data sources which acts like one big data store.
In contrast to the federation of classical relational database systems, there are some differences for federated RDF data. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory which can be used to discover RDF data sources or to ask for sources which offer specific data. The federation of distributed and linked RDF data sources has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on query details and the information that is available about the data sources. Furthermore, the minimization of query execution time requires optimization techniques that take into account the execution cost of query operators and the network communication overhead for contacting individual data sources. In this thesis, solutions for these problems are discussed. Moreover, SPLENDID is presented, a new federation infrastructure for distributed RDF data sources which uses optimization techniques based on statistical information.
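For context, the snippet below shows the basic building block such a federation distributes and optimizes: a single SPARQL SELECT query sent to one endpoint via Jena's ARQ API. The DBpedia endpoint and the query itself are arbitrary examples; SPLENDID's contribution lies in selecting the right sources and ordering the joins, which is not shown here.

```java
// Illustrative only: querying one SPARQL endpoint with Jena ARQ.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;

public class EndpointQueryDemo {
    public static void main(String[] args) {
        Query query = QueryFactory.create(
            "SELECT ?s WHERE { ?s a <http://dbpedia.org/ontology/City> } LIMIT 5");
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                "https://dbpedia.org/sparql", query)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().get("s"));   // print each matching resource
            }
        }
    }
}
```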
Geographic cluster-based routing in ad hoc wireless sensor networks is a current field of research. Various algorithms for routing in wireless ad hoc networks based on position information already exist, among them algorithms that use the traditional beaconing approach as well as algorithms that work beaconlessly (requiring no information about the environment besides the node's own position and the destination). Geographic cluster-based routing with guaranteed message delivery can be carried out on overlay graphs as well. Until now, however, the required planar overlay graphs have not been constructed reactively.
This thesis proposes a reactive algorithm, the Beaconless Cluster Based Planarization (BCBP) algorithm, which constructs a planar overlay graph and noticeably reduces the number of messages required for that. Based on an algorithm for cluster-based planarization, it beaconlessly constructs a planar overlay graph in a unit disk graph (UDG). A UDG is a model for a wireless network in which every participant has the same sending radius. An evaluation of the algorithm shows it to be more efficient than the non-beaconless variant. Another result of this thesis is the Beaconless LLRAP (BLLRAP) algorithm, for which planarity, but not preserved connectivity, could be proven.
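For readers unfamiliar with local planarization, the following sketch shows a classical local rule, the Gabriel graph test, as a stand-in; it is not the BCBP construction itself. An edge survives only if no other neighbor lies inside the circle whose diameter is the edge, a decision each node can make from position information alone.

```java
// Classical Gabriel graph test, shown only as an assumption-laden stand-in for local planarisation.
// Edge (u,v) survives if no witness w lies inside the circle with diameter uv.
public class GabrielTest {
    static boolean keepEdge(double[] u, double[] v, double[][] neighbours) {
        double mx = (u[0] + v[0]) / 2.0, my = (u[1] + v[1]) / 2.0;     // circle centre
        double radiusSq = sq(u[0] - v[0], u[1] - v[1]) / 4.0;           // squared radius
        for (double[] w : neighbours) {
            if (w == u || w == v) continue;                             // skip the endpoints
            if (sq(w[0] - mx, w[1] - my) < radiusSq) return false;      // witness found: drop edge
        }
        return true;
    }

    private static double sq(double dx, double dy) { return dx * dx + dy * dy; }
}
```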
The identification of experts for a specific technology or framework produces a large benefit for collaborative software projects, since it reduces the communication overhead required to identify an expert on the fly. This thesis therefore describes a tool and an approach that can be used to identify an expert with a specific skill set. The focus is mainly on the skills and expertise of developers who use the Django framework. By adding more rules to our framework, the approach could easily be extended to other technologies or frameworks. The thesis closes with a case study on an open source project.
One task of executives and project managers in IT companies or departments is to hire suitable developers and to assign them to suitable problems. In this paper, we propose a new technique that directly leverages the previous work experience of developers in a systematic manner. Existing evidence of developer expertise based on the version history of existing projects is analyzed. More specifically, we analyze the commits to a repository in terms of affected API usage. On these grounds, we associate APIs with developers and thus assess the API experience of developers. By transitive closure, we also assess programming-domain experience.
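A hedged sketch of the core bookkeeping behind such an approach is given below: commits are reduced to (author, API) observations and aggregated into per-developer counts. How commits are mined and how API usage is extracted from diffs (for example from import statements) is assumed to happen elsewhere and is not part of this sketch.

```java
// Hypothetical bookkeeping for API experience: developers are mapped to the APIs their commits touch.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ApiExperienceIndex {
    // author -> (API name -> number of commits touching that API)
    private final Map<String, Map<String, Integer>> usage = new HashMap<>();

    /** Records that a commit by `author` touched code using `api` (e.g. "org.hibernate"). */
    void record(String author, String api) {
        usage.computeIfAbsent(author, a -> new HashMap<>())
             .merge(api, 1, Integer::sum);
    }

    /** APIs for which the author has at least `minCommits` recorded commits. */
    Set<String> experiencedApis(String author, int minCommits) {
        Set<String> result = new HashSet<>();
        usage.getOrDefault(author, Map.of()).forEach((api, count) -> {
            if (count >= minCommits) result.add(api);
        });
        return result;
    }
}
```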
The goal of this thesis is to develop a method that translates English, keyword-based queries into SPARQL and rates the results. From all generated SPARQL queries, the most relevant ones are to be determined and a favorite selected. The outcome is to be assessed in a user evaluation.
Traditional Driver Assistance Systems (DAS), such as Lane Departure Warning Systems or the well-known Electronic Stability Program, have in common that their system and software architectures are static. This means that neither the number and topology of Electronic Control Units (ECUs) nor the presence and functionality of software modules change after the vehicles leave the factory.
However, some future DAS do face changes at runtime. This is true, for example, for truck-and-trailer DAS, as their hardware components and software entities are spread over both parts of the combination. These new requirements cannot be met by state-of-the-art approaches to automotive software systems. Instead, a different technique for designing such Distributed Driver Assistance Systems (DDAS) needs to be developed. The main contribution of this thesis is the development of a novel software and system architecture for dynamically changing DAS, using the example of driving assistance for truck and trailer. This architecture has to be able to autonomously detect and handle changes within the topology. To do so, the system decides which degree of assistance and which types of HMI can be offered every time a trailer is connected or disconnected. To this end, an analysis of the available software and hardware components, a determination of the possible assistance functionality and a re-configuration of the system take place. Such adaptability is provided by the principles of Service-oriented Architecture (SOA). In this architectural style, all functionality is encapsulated in self-contained units, so-called Services. These Services offer their functionality through well-defined interfaces whose behavior is described in contracts. Using these Services, large-scale applications can be built and adapted at runtime. This thesis describes the research conducted to achieve these goals by introducing Service-oriented Architectures into the automotive domain. SOA deals with the high degree of distribution, the demand for reusability and the heterogeneity of the required components.
It also applies automatic re-configuration in the event of a system change. Instead of adapting one of the available frameworks to this scenario, the main principles of Service-orientation are picked up and tailored. This leads to the development of the Service-oriented Driver Assistance (SODA) framework, which implements the benefits of Service-orientation while ensuring compatibility and compliance with automotive requirements, best practices and standards. Within this thesis, several state-of-the-art Service-oriented frameworks are analyzed and compared. Furthermore, the SODA framework and all its aspects relevant to the automotive software domain are described in detail. These aspects include a well-defined reference model that introduces and relates terms and concepts and defines an architectural blueprint. Furthermore, some of the modules of this blueprint, such as the re-configuration module and the Communication Model, are presented in full detail. In order to prove the compliance of the framework with state-of-the-art automotive software systems, a development process respecting today's best practices in automotive design procedures as well as the integration of SODA into the AUTOSAR standard are discussed. Finally, the SODA framework is used to build a full-scale demonstrator in order to evaluate its performance and efficiency.
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines. Both approaches have advantages and disadvantages. In previous work, we and our collaborators proposed an idea which combines both approaches to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating essential metadata to enable a propagate operator, which implements a form of change propagation. For this we have developed a system to annotate code similarities which were extracted throughout the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or have to be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
The Real-Time Systems working group at the University of Koblenz has been dealing with the topics of autonomous and assisted driving for several years. A major challenge in this context are multi-unit vehicles, whose steering is very demanding for the driver when reversing. Electronic driver assistance systems can be used to enable precise maneuvers. Several prototypes have already been created in previous work, but none of them provides a suitable solution for modern two-axle trailers. In this thesis, a prototypical driver assistance system was developed; however, further research and development work is required to make the system roadworthy.
The Apple ][ was one of the first three complete computer systems on the market. From April 1977 onwards, it sold several million units over a period of roughly 16 years. This 8-bit home computer was developed by Steve Wozniak and Steve Jobs, who thereby paved the way for the Macintosh and for the company well known today as Apple.
This thesis describes the implementation of a software emulator for the complete Apple ][ computer system on a single Atmel AVR microcontroller. The biggest challenge is that the microcontroller's clock rate is only marginally higher than that of the hardware to be emulated. This requires an efficient emulation of the CPU and memory management, which is presented below together with the runtime environment for the emulation. Furthermore, the realization of the emulator with display and keyboard in hardware is explained in more detail.
This work thus presents the successful development of a portable Apple ][ emulator, from the software and the hardware through to a prototype.
This report investigates the use of wireless sensor networks for temperature measurement in flowing waters. It discusses to what extent such networks can serve as a link between remote sensing and stationary sensors. The requirements for sensor networks in the water monitoring application are determined, and a prototypical realization of nodes for such a sensor network is presented. As a result of this work, the accuracy of temperature measurements with these sensor nodes is reported in comparison to a temperature logger used as a reference system. The measurements show that comparatively good measurement accuracy can be achieved at low cost. By further developing the prototype presented here, a promising and inexpensive new measuring instrument becomes available for temperature monitoring in waters: it can measure water temperatures in deeper regions than is possible with remote sensing, while achieving a higher spatial resolution than stationary measuring stations. In addition, the literature review and the formulation of the criteria serve to delimit the scope of application for future work.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It makes a crucial contribution to in vivo examinations of neuronal connections: the local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention-related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool that takes neuronal pathways into consideration was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
This bachelor's thesis deals with the development and implementation of face recognition software that is able to recognize and log streams of people. Starting from the special requirements of image processing, the resulting software architecture and its implementation are presented. In addition, a web interface is developed that is intended to simplify the management of the data. Finally, further face recognition methods are presented and compared against the method used, and the implemented software is evaluated.
In recent years, software engineering research has shown a rising interest in empirical studies. Such studies are often based on empirical evidence derived from corpora - collections of software artifacts. While there are established forms of carrying out empirical research (experiments, case studies, surveys, etc.), the common task of preparing the underlying collection of software artifacts is typically addressed in an ad hoc manner.
In this thesis, we show by means of a literature survey how frequently software engineering research employs software corpora, and we discuss their characteristics using a classification scheme we developed. Addressing the lack of methodology, we suggest a method of corpus (re-)engineering and apply it to an existing collection of Java projects.
We report two extensive empirical studies, where we perform a broad and diverse range of analyses on the language for privacy preferences (P3P) and on object-oriented application programming interfaces (APIs). In both cases, we are driven by the data at hand, by the corpus itself, discovering the actual usage of the languages.
A Systematic Literature Survey of Beaconless Algorithms for Wireless Ad Hoc and Sensor Networks
(2014)
Wireless sensor networks (WSN) are computer networks consisting of sensor nodes. In position-based protocols, nodes periodically broadcast messages containing their position information. These messages are called beacons. The use of beacons, however, has downsides, which led to the emergence of beaconless algorithms. Despite their importance in the research area of computer networks, and to the best of our knowledge, beaconless algorithms have not yet been systematically surveyed.
The aim of this work is a comprehensive, systematic review of beaconless algorithms since the turn of the millennium. Relevant articles are categorized based on their similarities, and their differences are compared. Thus, this work serves as a reference to the current state of research on beaconless algorithms and as a basis for finding research gaps.
Community platforms on the Internet use code-based governance to manage their high volume of user contributions. This includes all kinds of functionality with which the community can directly or indirectly assess user contributions in some form. This thesis first explains the meaning of code-based governance and the various functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined with respect to code-based governance. The result reveals the relationships between the structure of a platform, the nature of its user contributions, and the code-based governance that can be exercised on them.
The increased networking of devices has established the use of distributed applications as a common method in software development. Despite the demand for anonymity in the use of distributed applications, software frameworks still lack appropriate support for developing them. Building anonymously communicating applications therefore often results in expensive individual solutions. This work integrates an approach for anonymous communication using remote procedure calls into a software framework for building distributed applications. This also includes the design, development, and prototypical implementation of a binding for the MAppLab Remote Procedure Call Framework on the basis of the mPartAPI Framework. Furthermore, the resulting prototype is tested in an exemplary scenario.
Web 2.0 provides technologies for the online collaboration of users as well as the creation, publication and sharing of user-generated content in an interactive way. Twitter, CNET, CiteSeerX, etc. are examples of Web 2.0 platforms which support users in these activities and are viewed as rich sources of information. In the platforms mentioned as examples, users can participate in discussions, comment on others, provide feedback on various issues, publish articles and write blogs, thereby producing a high volume of unstructured data which at the same time leads to an information overload. Satisfying the various types of human information needs arising from the purpose and nature of these platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help in satisfying user information needs in three scenarios.
To this end, we first look at two of the main challenges in Twitter, sparsity and content quality, and at how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high-quality information. Based on this analysis we introduce the concept of "interestingness" as a static quality measure, and we empirically show that our proposed measure helps in retrieving and filtering high-quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track which results from the social collaboration of people. For this purpose, we develop the FREuD approach, which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of the relations between authors, latent topics and temporal information. We present results of applying the model to scientific publication datasets from CiteSeerX, showing improved detection of semantically cohesive topics and the capture of shifts in authors' interests in relation to topic evolution.
In recent years, various projects for controlling a computer's peripheral devices have been realized under the supervision of Dr. Merten Joost, based on microcontrollers of the ATMega series. These include, among others, polling a keyboard, outputting audio and video signals, and a programming environment with its own programming language. The goal of this thesis is to combine these projects into a stand-alone "home computer" that can be controlled via a keyboard and provides audio and video output. An SD card serves as the storage medium, which can be managed via keyboard input through a kind of shell.
This dissertation investigates the usage of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers for questions phrased in a natural language. Commonly they utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This involves challenges in that the contrary approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and the quality of the knowledge bases in QA. The dissertation describes modifications to ATP systems in order to overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval and knowledge representation, together forming the QA system LogAnswer.
Also presented are additional extensions to the prover implementation and the underlying calculi which go beyond enhancing the reasoning strength of QA systems by giving access to external knowledge sources like web services. These allow the prover to fill gaps in the knowledge during the derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them can be useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
The amount of information on the Web is constantly increasing, and a wide variety of information is available, such as news, encyclopedia articles, statistics, survey data, stock information, events, and bibliographies. The information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity and quality, and by its distributed nature. The two primary techniques by which users on the Web look for information are (1) using Web search engines and (2) browsing the links between information. The dominant mode of information presentation is mainly static, in the form of text, images and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large and complex types of information, and (2) large amounts of data can be explored interactively using their attributes, which can support and expand the cognition process of the user. So far, interactive visualizations are still not an integral part of the Web search process. The technical standards and interaction paradigms that make interactive visualization usable by the masses are being introduced only slowly through standardization organizations. This work examines how interactive visualizations can be used for the linking and search process of heterogeneous information on the Web. Based on principles from the areas of information retrieval (IR), information visualization and information processing, a model is created which extends the existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application. In four different application scenarios, aspects of the model are instantiated and evaluated in user tests or examined by means of examples.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically with the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem and impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing native handling for inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
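The following small sketch only illustrates the blow-up that motivates the native handling: making n constants pairwise distinct with explicit facts requires n*(n-1)/2 inequality axioms, so 100 distinct object identifiers would already need 4950 facts. The string form of the facts is an arbitrary choice for the example.

```java
// Illustration of the quadratic blow-up: explicit pairwise inequality facts for n constants.
import java.util.ArrayList;
import java.util.List;

public class InequalityBlowup {
    static List<String> explicitInequalities(List<String> constants) {
        List<String> facts = new ArrayList<>();
        for (int i = 0; i < constants.size(); i++) {
            for (int j = i + 1; j < constants.size(); j++) {
                facts.add(constants.get(i) + " != " + constants.get(j));   // one fact per unordered pair
            }
        }
        return facts;
    }

    public static void main(String[] args) {
        // Three constants already need three facts; 100 constants would need 4950.
        System.out.println(explicitInequalities(List.of("a", "b", "c")));
    }
}
```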
This study thesis surveys and compares various routing-lookup algorithms with which a routing table can be built and maintained; only dynamic methods are considered. The general operation of a routing table is explained, and three methods or algorithms are analysed and evaluated. The algorithms are illustrated with examples and contrasted with each other in a concluding chapter, which lists the advantages and disadvantages of each method.
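As a generic illustration of what such a routing lookup computes (the abstract does not name the three algorithms, so this is not one of them), a minimal longest-prefix-match sketch in Python could look as follows; the table entries and next-hop names are invented:

    import ipaddress

    # Hypothetical routing table: (prefix, next hop)
    routing_table = [
        (ipaddress.ip_network("0.0.0.0/0"), "gw-default"),
        (ipaddress.ip_network("10.0.0.0/8"), "gw-core"),
        (ipaddress.ip_network("10.1.0.0/16"), "gw-branch"),
    ]

    def lookup(destination):
        # Return the next hop of the most specific (longest) matching prefix.
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routing_table if addr in net]
        best_net, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
        return best_hop

    print(lookup("10.1.2.3"))   # gw-branch: the /16 route wins over /8 and /0
    print(lookup("192.0.2.1"))  # gw-default: only the default route matches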
Education and training of the workforce have become an important competitive factor for companies because of the rapid technological changes in the economy and the correspondingly ever shorter innovation cycles. Traditional training methods, however, are limited in their ability to meet a company's resulting demand for education and training, which continues to grow and to change ever faster. Therefore, the use of technology-based training programs (that is, courseware) is increasing, because courseware enables self-organized and self-paced learning and, through integration into daily work routines, allows optimal transfer of knowledge and skills, resulting in high learning outcomes. To realize these prospects, high-quality courseware is required, with quality defined as supporting learners optimally in achieving their learning goals. Developing high-quality courseware, however, usually requires more effort and takes longer than developing other programs, which limits the timely availability of such courseware at the required quality.
This dissertation therefore deals with the research question of how courseware has to be developed in order to produce high-quality courseware with less development effort and shorter project duration. In addition to its high quality, this courseware should be optimally aligned to the characteristics and learning goals of the learners as well as to the planned usage scenarios for the knowledge and skills being trained. The IntView Method for the systematic and efficient development of high-quality courseware was defined to answer the research question of this dissertation. It aims at increasing the probability of producing courseware in time without exceeding project schedules and budgets while developing a high-quality product optimally focused on the target groups and usage scenarios.
The IntView Method integrates into a systematic process those execution variants of all activities and activity steps required to develop high-quality courseware that, in their interaction, constitute the most efficient way to develop courseware; these variants were identified in a detailed analysis of existing courseware development approaches as well as of production approaches from related fields such as multimedia, web, or software engineering. The main part of the proposed method is therefore a systematic process for engineering courseware that encompasses all courseware lifecycle phases and integrates the activities and methods of all disciplines involved in courseware engineering, including quality assurance across the whole lifecycle, into a consolidated process. This process is defined as a lifecycle model as well as a derived process model in the form of a dependency model, in order to optimally support courseware project teams in coordinating and synchronizing their project work. In addition to the models, comprehensive, ready-to-apply enactment support materials are provided, consisting of work sheets and document templates that include detailed activity descriptions and examples.
The evaluation of the IntView Method proved that the method together with the enactment support materials enables efficient as well as effective courseware development. The projects and case studies conducted in the context of this evaluation demonstrate that, on the one hand, the method is easily adaptable to the production of different kinds of courseware or to different project contexts, and, on the other hand, that it can be used efficiently and effectively.
Modern Internet and Intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data and provide improved flexibility. This gain in flexibility, however, also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperation between organizations, accounting for the processing of data and restricting the actions performed on the data may be legally and contractually required. In the Internet and Intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution that supports a data-centric view on distributed data processing. We present a solution for provenance-aware policies and a formal specification of obligations, offering a procedure to decide whether obligatory processing steps can still be met in the future.
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which, in a way, can be considered the maximum information that follows from the directed graph structure of Bayesian networks), probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality, there is also a semantic aspect: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
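For readers unfamiliar with the firing rule referred to above, the following minimal Python sketch shows it for an ordinary place/transition net with unit arc weights; the net itself is invented and not taken from the paper:

    # Tokens per place (the marking) and one transition with its input/output places.
    marking = {"p1": 1, "p2": 1, "p3": 0}
    transitions = {"t1": {"in": ["p1", "p2"], "out": ["p3"]}}

    def enabled(t, m):
        # A transition is enabled iff every input place carries at least one token.
        return all(m[p] >= 1 for p in transitions[t]["in"])

    def fire(t, m):
        # Firing removes one token from each input place and adds one to each output place.
        assert enabled(t, m), t + " is not enabled"
        m = dict(m)
        for p in transitions[t]["in"]:
            m[p] -= 1
        for p in transitions[t]["out"]:
            m[p] += 1
        return m

    print(fire("t1", marking))  # {'p1': 0, 'p2': 0, 'p3': 1}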
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A transition marked with a t-token strictly cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens that describe sequences of non-events. Their benefit for simulation is the possibility to model (and observe) causes and effects of non-events, e.g. when something has broken down.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
Cheops für VNUML - creating and observing a VNUML simulation with a mouse click. This work investigates how Virtual Network User Mode Linux (VNUML), a software for simulating computer networks built from virtualized Linux instances, can be made easier for the user to handle. With the Linux package VNUML, which allows the file-driven configuration of virtual operating system instances, the user is able to simulate complex network topologies. Various network monitoring programs are examined for their ability to capture a running VNUML simulation and to represent it in a meaningful way. The user should be able to gain a quick overview of the functioning of the simulated network environment and, where possible, of its topology. The network monitoring program Cheops is extended so that it can not only represent and observe a running simulation, but can also be used in every step of working with VNUML.
The extended program supports both the creation of the VNUML topology file and the starting and control of the simulation. The long command-line invocation and the editing of the configuration file are thus replaced by easy-to-use functions in a graphical user interface (GUI). No further input or configuration is needed to quickly verify that a started simulation is fully functional. Likewise, different network services can be monitored individually while the simulation is running. The tools required for this are also included in the Cheops für VNUML package and are preconfigured specifically for use with VNUML simulations.
The semantic web and model-driven engineering are changing the enterprise computing paradigm. By introducing technologies like ontologies, metadata and logic, the semantic web drastically improves how companies manage knowledge. In turn, model-driven engineering relies on the principle of using models to provide abstraction, enabling developers to concentrate on the system functionality rather than on technical platforms. The next enterprise computing era will rely on the synergy between both technologies. On the one side, ontology technologies organize system knowledge in conceptual domains according to its meaning; they address enterprise computing needs by identifying, abstracting and rationalizing commonalities and by checking for inconsistencies across system specifications. On the other side, model-driven engineering is closing the gap between business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics. In this scenario, the research question that arises is: what are the scientific and technical results around ontology technologies that can be used in model-driven engineering, and vice versa? The objective is to analyze approaches available in the literature that involve both ontologies and model-driven engineering. We therefore conduct a literature review that results in a feature model for classifying state-of-the-art approaches. The results show that using ontologies and model-driven engineering together serves multiple purposes: validation, visual notation, expressiveness and interoperability. While approaches involving both paradigms exist, an integrated approach for UML class-based modeling and ontology modeling has been lacking so far. Therefore, we investigate techniques and languages for designing integrated models. The objective is to provide an approach that supports the design of integrated solutions. Thus, we develop a conceptual framework comprising the structure and the notations of a solution to represent and query software artifacts using a combination of ontologies and class-based modeling. As proof of concept, we have implemented our approach as a set of open source plug-ins -- the TwoUse Toolkit. The hypothesis is that a combination of both paradigms yields improvements in both fields, ontology engineering and model-driven engineering. For MDE, we investigate the impact of using features of the Web Ontology Language in software modeling. The results are patterns and guidelines for designing ontology-based information systems and for supporting software engineers in modeling software. They include alternative ways of describing classes and objects and of querying software models and metamodels. Applications show improvements in changeability and extensibility. In the ontology engineering domain, we investigate the application of techniques from model-driven engineering to fill the abstraction gap between ontology specification languages and programming languages. The objective is to provide a model-driven platform that supports activities in the ontology engineering life cycle. To this end, we study the development of core ontologies in our department, namely the core ontology for multimedia (COMM) and the multimedia metadata ontology. The results are domain-specific languages that allow ontology engineers to abstract from implementation issues and to concentrate on the ontology engineering task, increasing productivity by filling the gap between domain models and source code.
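The general idea of representing a class-based software model as RDF and querying it with semantic web technology can be sketched as follows; this minimal Python example assumes the rdflib library, is not the TwoUse Toolkit, and uses an invented miniature vocabulary:

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/model#")
    g = Graph()

    # A tiny "class model": Customer and Order, linked by an association 'places'.
    g.add((EX.Customer, RDF.type, RDFS.Class))
    g.add((EX.Order, RDF.type, RDFS.Class))
    g.add((EX.places, RDFS.domain, EX.Customer))
    g.add((EX.places, RDFS.range, EX.Order))

    # Query the model: which properties connect which classes?
    results = g.query("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?prop ?from ?to WHERE {
            ?prop rdfs:domain ?from ;
                  rdfs:range  ?to .
        }
    """)
    for prop, source, target in results:
        print(prop, source, target)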
This thesis deals with OpenWrt, a Linux distribution developed for routers and small devices. The first part explains how software packages for OpenWrt are created and develops a display and configuration component for the files of the Quagga RIP daemon within the LuCI user interface. The second part documents the setup of a test network with OpenWrt routers as well as its connection to VNUML-virtualized networks.
Folksonomies are Web 2.0 platforms where users share resources with each other. Furthermore, users can assign keywords (called tags) to the resources in order to categorize and organize them. Numerous types of resources, such as websites (Delicious), images (Flickr), and videos (YouTube), are supported by different folksonomies. Folksonomies are easy to use and thus attract the attention of millions of users. Along with this ease of use, however, come some problems. This thesis addresses several problems of folksonomies and proposes solutions for them. The first problem occurs when users search for relevant resources in folksonomies: often, users are not able to find all relevant resources because they do not know which tags are relevant. The second problem is assigning tags to resources. Although many folksonomies (like Delicious) recommend tags for resources, others (like Flickr) do not recommend any tags; tag recommendation helps users to tag their resources easily. The third problem is that tags and resources lack semantics, which leads, for example, to ambiguous tags. The tags lack semantics because they are freely chosen keywords. Automatically identifying the semantics of tags and resources helps to reduce the problems that arise from this freedom of the users in choosing tags. This thesis proposes methods which exploit semantics to address the problems of search, tag recommendation, and the identification of tag semantics. The semantics are discovered from a variety of sources; in this thesis, we exploit web search engines, online social communities and the co-occurrences of tags as sources of semantics. Using different sources for discovering semantics reduces the effort needed to build systems which solve the problems mentioned earlier. The thesis evaluates the proposed methods on a large-scale data set. The evaluation results suggest that it is possible to exploit semantics for improving search, tag recommendation, and the automatic identification of the semantics of tags and resources.
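To illustrate only the general idea of tag co-occurrence as a source of semantics for tag recommendation, a simplified sketch with invented data (not the method developed in the thesis) could look like this:

    from collections import Counter
    from itertools import combinations

    # Hypothetical folksonomy posts: the tag set assigned to each resource.
    posts = [
        {"python", "programming", "tutorial"},
        {"python", "web", "django"},
        {"semanticweb", "rdf", "web"},
        {"python", "programming", "web"},
    ]

    # Count how often each ordered pair of tags co-occurs on a resource.
    cooccurrence = Counter()
    for tags in posts:
        for a, b in combinations(sorted(tags), 2):
            cooccurrence[(a, b)] += 1
            cooccurrence[(b, a)] += 1

    def recommend(assigned, k=3):
        # Suggest the k tags that co-occur most often with the tags already assigned.
        scores = Counter()
        for (a, b), n in cooccurrence.items():
            if a in assigned and b not in assigned:
                scores[b] += n
        return [tag for tag, _ in scores.most_common(k)]

    # 'programming' and 'web' each co-occur twice with 'python'; ties may order differently.
    print(recommend({"python"}))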