Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
This thesis contributes to reactive algorithm research in two ways: on an abstract level, as well as through the introduction of novel algorithms:
For the very first time, reactive algorithms are considered as a whole and as an individual research area. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based, topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.
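As an illustration of the kind of local planarization rule such algorithms build on, the sketch below applies the classic Gabriel graph test to a node's unit disk neighborhood (keep edge (u, v) only if no other node lies inside the disk with diameter uv). It is a simpler relative of the partial Delaunay triangulation discussed above, and the coordinates, radius, and helper names are made up for illustration.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def unit_disk_neighbors(nodes, u, radius=1.0):
    """Neighbors of u in the unit disk graph: all nodes within 'radius'."""
    return [v for v in nodes if v != u and dist(nodes[u], nodes[v]) <= radius]

def gabriel_edges(nodes, u, radius=1.0):
    """Local view of u on the Gabriel graph over the unit disk graph.

    Edge (u, v) survives if no other node lies strictly inside the circle
    whose diameter is the segment u-v (midpoint test).
    """
    kept = []
    for v in unit_disk_neighbors(nodes, u, radius):
        mid = ((nodes[u][0] + nodes[v][0]) / 2, (nodes[u][1] + nodes[v][1]) / 2)
        r = dist(nodes[u], nodes[v]) / 2
        blocked = any(w not in (u, v) and dist(nodes[w], mid) < r for w in nodes)
        if not blocked:
            kept.append((u, v))
    return kept

# Tiny example: node 'c' blocks the direct edge between 'a' and 'b'.
positions = {"a": (0.0, 0.0), "b": (0.9, 0.0), "c": (0.45, 0.1)}
print(gabriel_edges(positions, "a"))   # [('a', 'c')]
```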
The present thesis deals with the realization of a stepper motor driver on an 8-bit microcontroller by the company Atmel. The focus is on the development of a current control which allows microstepping in addition to the basic modes of operation like full- and half-step. For this purpose, a PI controller is derived using physical and control engineering principles, which is implemented on the microcontroller. In this context, essential knowledge for the practical implementation is discussed. In addition, the development of the hardware is documented, which is of great significance for the current measurement.
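To make the control part concrete, below is a minimal discrete-time PI current controller with simple anti-windup, simulated in Python against a first-order coil model; the gains, sampling time and motor constants are illustrative placeholders, not the values derived in the thesis.

```python
def make_pi_controller(kp, ki, dt, u_min=0.0, u_max=1.0):
    """Discrete PI controller with simple anti-windup clamping.

    Returns a function mapping (setpoint, measurement) -> duty cycle.
    """
    state = {"integral": 0.0}

    def update(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt
        u = kp * error + ki * state["integral"]
        # clamp to the admissible PWM duty-cycle range (conditional integration)
        if u > u_max:
            u, state["integral"] = u_max, state["integral"] - error * dt
        elif u < u_min:
            u, state["integral"] = u_min, state["integral"] - error * dt
        return u

    return update

# Toy closed loop with a first-order coil model i' = (duty * V - R * i) / L
# (all values are placeholders, not the thesis's parameters).
pi = make_pi_controller(kp=0.5, ki=200.0, dt=1e-4)
current, target = 0.0, 0.8            # amperes
V, R, L = 12.0, 2.0, 2e-3             # illustrative motor-phase constants
for _ in range(200):
    duty = pi(target, current)
    current += (duty * V - R * current) / L * 1e-4
print(round(current, 3))              # settles near the 0.8 A target
```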
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. The Semantic Web shares these characteristics, as it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore also important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism that implements these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each of the requirements and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext data without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
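As a rough illustration of the integrity mechanism, signing a graph typically means canonicalising it, hashing the canonical form, and signing the hash. The sketch below uses a naive canonicalisation (sorting N-Triples lines) and an HMAC as a stand-in for a real signature algorithm; it is not the configurable framework defined in the thesis, and real graph canonicalisation also has to handle blank nodes.

```python
import hashlib
import hmac

def canonicalize(ntriples: str) -> bytes:
    """Naive canonical form: strip, drop empty lines, sort the triples."""
    lines = sorted(line.strip() for line in ntriples.splitlines() if line.strip())
    return "\n".join(lines).encode("utf-8")

def sign_graph(ntriples: str, key: bytes) -> str:
    """Hash the canonical graph and 'sign' it (HMAC as signature stand-in)."""
    digest = hashlib.sha256(canonicalize(ntriples)).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_graph(ntriples: str, key: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_graph(ntriples, key), signature)

graph = """
<http://example.org/alice> <http://xmlns.com/foaf/0.1/knows> <http://example.org/bob> .
<http://example.org/bob> <http://xmlns.com/foaf/0.1/name> "Bob" .
"""
sig = sign_graph(graph, b"shared-secret")
# Reordering the triples does not invalidate the signature ...
reordered = "\n".join(reversed(graph.strip().splitlines()))
print(verify_graph(reordered, b"shared-secret", sig))   # True
# ... but changing a triple does.
print(verify_graph(graph.replace('"Bob"', '"Mallory"'), b"shared-secret", sig))  # False
```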
This thesis analyzes the online attention towards scientists and their research topics. The studies compare the attention dynamics towards the winners of important scientific prizes with scientists who did not receive a prize. Web signals such as Wikipedia page views, Wikipedia edits, and Google Trends were used as a proxy for online attention. One study focused on the time between the creation of the article about a scientist and the articles about their research topics. It was discovered that articles about research topics were created closer in time to the articles of prize winners than to those of scientists who did not receive a prize. One possible explanation could be that the research topics are more closely related to the scientist who received an award. This supports the view that scientists who received the prize introduced the topics to the public. Another study considered the public attention trends towards the related research topics before and after a page about a scientist was created. It was observed that after a page about a scientist was created, research topics of prize winners received more attention than the topics of scientists who did not receive a prize. Furthermore, it was demonstrated that Nobel Prize winners receive less attention before receiving the prize than the potential nominees from the list of Citation Laureates of Thomson Reuters. Also, their popularity declines faster after they receive the prize. It was also shown that it is difficult to predict the prize winners based on the attention dynamics towards them.
The provision of electronic participation services (e-participation) is a complex socio-technical undertaking that needs comprehensive design and implementation strategies. E-participation service providers, in most cases administrations and governments, struggle with changing requirements that demand more transparency, better connectivity and increased collaboration among different actors. At the same time, fewer staff are available. As a result, recent research assesses only a minority of e-participation services as successful. The challenge is that the e-participation domain lacks comprehensive approaches to design and implement (e-)participation services. Enterprise Architecture (EA) frameworks have evolved in information systems research as an approach to guide the development of complex socio-technical systems. This approach can guide the design and implementation of participation services if the collection of organisations with the commonly held goal to provide participation services is understood as an E-Participation Enterprise (EE). However, research and practice in the e-participation domain have not yet exploited EA frameworks. Consequently, the problem scope that motivates this dissertation is the existing gap in research on deploying EA frameworks in e-participation design and implementation. The research question that drives this research is: What methodical and technical guidance do architecture frameworks provide that can be used to design and implement better and successful e-participation?
This dissertation presents a literature study showing that existing approaches have not yet covered the challenges of comprehensive e-participation design and implementation. Accordingly, the research moves on to investigate established EA frameworks such as the Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate for their use. While the application of these frameworks in e-participation design and implementation is possible, an integrated approach is lacking so far. The synthesis of the literature review and of practical insights into the design and implementation of e-participation services from four projects shows the challenges of adapting architecture frameworks for this domain. However, the research also shows the potential of a combination of the different approaches. Consequently, the research moves on to develop the E-Participation Architecture Framework (EPART-Framework). To this end, the dissertation applies design science research including literature review and action research. Two independent settings test an initial EPART-Framework version. The results yield the EPART-Framework presented in this dissertation.
The EPART-Framework comprises the EPART-Metamodel with six EPART-Viewpoints, which frame the stakeholder concerns: the Participation Scope, the Participant Viewpoint, the Participation Viewpoint, the Data & Information Viewpoint, the E-Participation Viewpoint, and the Implementation & Governance Viewpoint. The EPART-Method supports the stakeholders in designing the EE and implementing e-participation and stores its output in an architecture description and a solution repository. It consists of five consecutive phases accompanied by requirements management: Initiation, Design, Implementation and Preparation, Participation, and Evaluation. The EPART-Framework fills the gap between the e-participation domain and the enterprise architecture framework domain. The evaluation gives reasonable evidence that the framework is a valuable addition in academia and in practice to improve e-participation design and implementation. At the same time, it shows opportunities for future research to extend and advance the framework.
The aim of this thesis was to develop and evaluate a method that enables the utilization of traditional dialog marketing tools through the web. For this purpose, a prototype of a website with "extended real-time interaction (eEI)" capabilities has been implemented and tested. The prototype was evaluated with a methodology based on the five-dimensional "e-service quality" measure by Gwo-Guang Lee and Hsiu-Fen Lin. The foundation of the "e-service quality" measure is the SERVQUAL model. A statistical analysis of the user study results showed a significant correlation between eEI and user satisfaction. Before the actual realization of eEI, the "Technology Acceptance Model" by Fred D. Davis was used to investigate currently used real-time interaction systems.
This thesis demonstrates a way to realize the localization of an object by means of ultrasound. Three to five sensors distributed in the room are used to determine the position of an object relative to the positions of the sensors based on distance information. A special feature is that the sensors can be distributed almost arbitrarily in the plane. Their arrangement is determined by the system in a calibration phase with the assistance of the user. Depending on the number of sensors, an equilateral triangle, a square, or a pentagon should yield the best results. To convert the relative movement into an absolute one, the measurements are converted into meters based on the clock rate of the microcontrollers, the prescaler of the timer used, and the speed of sound.
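The conversion from echo timer ticks to meters mentioned at the end follows directly from the microcontroller clock, the timer prescaler and the speed of sound. The sketch below shows that arithmetic with illustrative values (16 MHz clock, prescaler 8, 343 m/s), which are assumptions rather than the configuration used in the thesis.

```python
def ticks_to_meters(ticks: int, f_cpu_hz: float = 16_000_000.0,
                    prescaler: int = 8, speed_of_sound: float = 343.0,
                    round_trip: bool = True) -> float:
    """Convert an ultrasonic echo time, measured in timer ticks, to meters.

    One tick lasts prescaler / f_cpu seconds; for a round-trip echo the sound
    travels to the object and back, so the one-way distance is half the path.
    """
    seconds = ticks * prescaler / f_cpu_hz
    distance = seconds * speed_of_sound
    return distance / 2 if round_trip else distance

# Example: 11662 ticks at 16 MHz / prescaler 8 is about a 5.83 ms echo,
# which corresponds to an object roughly 1 m away.
print(round(ticks_to_meters(11662), 3))
```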
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as the primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible to the sensors. Algorithms are tested and validated on different autonomous robot platforms, and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
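To make the Markov-random-field step concrete, the following is a minimal Gibbs sampler for a binary "passable vs. obstacle" labelling on a grid, using per-cell evidence and a simple Ising-style smoothness term over the 4-neighbourhood; the energies and parameters are illustrative and far simpler than the model in the thesis.

```python
import math
import random

def gibbs_mrf(evidence, beta=1.5, sweeps=50, seed=0):
    """Binary MRF over a grid: label 1 = obstacle, 0 = passable.

    evidence[y][x] is the per-cell log-odds for 'obstacle' derived from the
    sensor features; beta rewards agreement with the 4-neighbourhood.
    """
    rng = random.Random(seed)
    h, w = len(evidence), len(evidence[0])
    labels = [[1 if evidence[y][x] > 0 else 0 for x in range(w)] for y in range(h)]
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                neigh = [labels[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w]
                # conditional log-odds of label 1 given evidence and neighbours
                log_odds = evidence[y][x] + beta * sum(2 * n - 1 for n in neigh)
                p_one = 1.0 / (1.0 + math.exp(-log_odds))
                labels[y][x] = 1 if rng.random() < p_one else 0
    return labels

# Noisy evidence: an obstacle block in the middle with one contradicting cell.
ev = [[3 if 1 <= x <= 3 and 1 <= y <= 3 else -3 for x in range(6)] for y in range(5)]
ev[2][2] = -3   # simulated sensor noise inside the obstacle block
for row in gibbs_mrf(ev):
    print(row)   # the smoothness term usually repairs the noisy cell
```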
The publication of freely available and machine-readable information has increased significantly in recent years. Especially the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF): anybody can simply publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected with links. Linked Data entities are identified by URIs which are dereferenceable to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints it is possible to create a federation of distributed RDF data sources which acts like one big data store.
In contrast to the federation of classical relational database systems, there are some differences for federated RDF data. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory which can be used to discover RDF data sources or to ask for sources which offer specific data. The federation of distributed and linked RDF data sources has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on query details and information that is available about the data sources. Furthermore, the minimization of query execution time requires optimization techniques that take into account the execution cost for query operators and the network communication overhead for contacting individual data sources. In this thesis, solutions for these problems are discussed. Moreover, SPLENDID is presented, a new federation infrastructure for distributed RDF data sources which uses optimization techniques based on statistical information.
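As an illustration of what such a federated query looks like, the sketch below uses the SPARQL 1.1 SERVICE keyword and Python's standard library to send it to a public endpoint. The endpoints, predicates and query are illustrative examples and have nothing to do with the SPLENDID evaluation setup; public endpoints may also rate-limit or restrict SERVICE calls.

```python
import json
import urllib.parse
import urllib.request

# Federated SPARQL 1.1 query: the outer pattern is evaluated at the endpoint
# the query is sent to, while the SERVICE block is delegated to Wikidata.
QUERY = """
SELECT ?city ?population WHERE {
  ?city <http://dbpedia.org/ontology/country> <http://dbpedia.org/resource/Germany> ;
        <http://www.w3.org/2002/07/owl#sameAs> ?wd .
  FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
  SERVICE <https://query.wikidata.org/sparql> {
    ?wd <http://www.wikidata.org/prop/direct/P1082> ?population .
  }
}
LIMIT 5
"""

def run_query(endpoint: str, query: str):
    """POST the query and return the SPARQL JSON result bindings."""
    data = urllib.parse.urlencode({"query": query, "format": "json"}).encode()
    req = urllib.request.Request(endpoint, data=data,
                                 headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["results"]["bindings"]

for row in run_query("https://dbpedia.org/sparql", QUERY):
    print(row["city"]["value"], row["population"]["value"])
```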
Geographic cluster-based routing in ad hoc wireless sensor networks is a current field of research. Various algorithms for routing in wireless ad hoc networks based on position information already exist. Among them are algorithms that use the traditional beaconing approach as well as beaconless algorithms, which require no information about the environment besides a node's own position and the destination. Geographic cluster-based routing with guaranteed message delivery can be carried out on overlay graphs as well. Until now, however, the required planar overlay graphs have not been constructed reactively.
This thesis proposes a reactive algorithm, the Beaconless Cluster Based Planarization (BCBP) algorithm, which constructs a planar overlay graph and noticeably reduces the number of messages required for that. Based on an algorithm for cluster-based planarization, it beaconlessly constructs a planar overlay graph in a unit disk graph (UDG). A UDG is a model for a wireless network in which every participant has the same sending radius. Evaluation of the algorithm shows it to be more efficient than the non-beaconless variant. Another result of this thesis is the Beaconless LLRAP (BLLRAP) algorithm, for which planarity but not continued connectivity could be proven.
The identification of experts for a specific technology or framework produces a large benefit for collaborative software projects, since it reduces the communication overhead that is required to identify an expert on the fly. Therefore, this thesis describes a tool and an approach that can be used to identify experts with a specific skill set. It mainly focuses on the skills and expertise of developers that use the Django framework. By adding more rules to our framework, the approach could easily be extended to different technologies or frameworks. The thesis closes with a case study on an open source project.
One task of executives and project managers in IT companies or departments is to hire suitable developers and to assign them to suitable problems. In this paper, we propose a new technique that directly leverages previous work experience of developers in a systematic manner. Existing evidence for developer expertise based on the version history of existing projects is analyzed. More specifically, we analyze the commits to a repository in terms of affected API usage. On these grounds, we associate APIs with developers and thus we assess API experience of developers. In transitive closure, we also assess programming domain experience.
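A rough sketch of this idea: count, for each author, the commits whose diffs change the number of occurrences of a given API token, using git's pickaxe search (-S). The repository path and the API token below are placeholders, and the heuristic is only a crude stand-in for the analysis described above.

```python
import collections
import subprocess

def api_experience(repo_path: str, api_token: str) -> collections.Counter:
    """Commits per author that added or removed occurrences of api_token.

    Uses 'git log -S' (pickaxe) as a crude proxy for API experience: an author
    who repeatedly touches code using the token is assumed to know the API.
    """
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-S", api_token, "--pretty=format:%an"],
        check=True, capture_output=True, text=True,
    ).stdout
    return collections.Counter(line for line in out.splitlines() if line)

# Example (placeholder repository and token): who has worked with Django models?
for author, commits in api_experience(".", "django.db.models").most_common(5):
    print(f"{commits:4d}  {author}")
```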
In this thesis, a methodology is to be developed that translates English keyword-based queries into SPARQL and rates them. From all generated SPARQL queries, the most relevant ones are to be determined and a favorite selected. The result is to be assessed in a user evaluation.
Traditional Driver Assistance Systems (DAS), such as Lane Departure Warning Systems or the well-known Electronic Stability Program, have in common that their system and software architecture is static. This means that neither the number and topology of Electronic Control Units (ECUs) nor the presence and functionality of software modules changes after the vehicles leave the factory.
However, some future DAS do face changes at runtime. This is true, for example, for truck and trailer DAS, as their hardware components and software entities are spread over both parts of the combination. These new requirements cannot be met by state-of-the-art approaches of automotive software systems. Instead, a different technique for designing such Distributed Driver Assistance Systems (DDAS) needs to be developed. The main contribution of this thesis is the development of a novel software and system architecture for dynamically changing DAS, using the example of driving assistance for truck and trailer. This architecture has to be able to autonomously detect and handle changes within the topology. In order to do so, the system decides which degree of assistance and which types of HMI can be offered every time a trailer is connected or disconnected. Therefore, an analysis of the available software and hardware components as well as a determination of possible assistance functionality and a re-configuration of the system take place. Such adaptation can be granted by the principles of Service-oriented Architecture (SOA). In this architectural style, all functionality is encapsulated in self-contained units, so-called Services. These Services offer their functionality through well-defined interfaces whose behavior is described in contracts. Using these Services, large-scale applications can be built and adapted at runtime. This thesis describes the research conducted in achieving these goals by introducing Service-oriented Architectures into the automotive domain. SOA deals with the high degree of distribution, the demand for re-usability and the heterogeneity of the needed components.
It also applies automatic re-configuration in the event of a system change. Instead of adapting one of the available frameworks to this scenario, the main principles of Service-orientation are picked up and tailored. This leads to the development of the Service-oriented Driver Assistance (SODA) framework, which implements the benefits of Service-orientation while ensuring compatibility and compliance with automotive requirements, best practices and standards. Within this thesis, several state-of-the-art Service-oriented frameworks are analyzed and compared. Furthermore, the SODA framework as well as all its different aspects regarding the automotive software domain are described in detail. These aspects include a well-defined reference model that introduces and relates terms and concepts and defines an architectural blueprint. Furthermore, some of the modules of this blueprint, such as the re-configuration module and the Communication Model, are presented in full detail. In order to prove the compliance of the framework with state-of-the-art automotive software systems, a development process respecting today's best practices in automotive design procedures as well as the integration of SODA into the AUTOSAR standard are discussed. Finally, the SODA framework is used to build a full-scale demonstrator in order to evaluate its performance and efficiency.
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines. Both approaches have advantages and disadvantages. In previous work, we and our collaborators proposed an idea which combines both approaches to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating essential metadata to enable a propagate operator, which implements a form of change propagation. For this we have developed a system to annotate code similarities which were extracted throughout the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or have to be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
The Real-Time Systems working group at the University of Koblenz has been working on the topic of autonomous and assisted driving for several years. Multi-part (articulated) vehicles pose a major challenge in this context, since steering them during reverse driving is very demanding for the driver. Electronic driver assistance systems can be employed to enable precise maneuvers. Several prototypes have emerged from previous work, but none of them represents a suitable solution for modern two-axle trailers. In this thesis, a prototypical driver assistance system was developed; however, further research and development work is needed to make the system roadworthy.
The Apple ][ was one of the first three complete computer systems on the market. From April 1977 onward, it was sold several million times over a period of roughly 16 years. This 8-bit home computer was developed by Steve Wozniak and Steve Jobs. With it, they paved the way for the Macintosh and for the company Apple as it is well known today.
This thesis describes the implementation of a software emulator for the complete Apple ][ computer system on a single Atmel AVR microcontroller. The biggest challenge is that the microcontroller has only a slightly higher clock rate than the hardware to be emulated. This requires an efficient emulation of the CPU and memory management, which is presented below together with the runtime environment for the emulation. Furthermore, the hardware realization of the emulator with display and keyboard is explained in more detail.
This thesis thus presents the successful development of a portable Apple ][ emulator, from the software through the hardware to a working prototype.
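The heart of such an emulator is the fetch-decode-execute loop over the 6502 instruction set. The Python fragment below sketches that loop for just three opcodes (LDA immediate, STA absolute, BRK) to illustrate the principle; it is not the AVR implementation developed in the thesis.

```python
def run_6502(program, memory_size=65536):
    """Minimal fetch-decode-execute loop for a tiny 6502 subset."""
    mem = bytearray(memory_size)
    mem[0:len(program)] = program
    pc, a = 0, 0                        # program counter, accumulator
    while True:
        opcode = mem[pc]; pc += 1
        if opcode == 0xA9:              # LDA #imm: load immediate into A
            a = mem[pc]; pc += 1
        elif opcode == 0x8D:            # STA abs: store A at a 16-bit address
            addr = mem[pc] | (mem[pc + 1] << 8); pc += 2
            mem[addr] = a
        elif opcode == 0x00:            # BRK: stop the toy interpreter
            return mem
        else:
            raise ValueError(f"unhandled opcode {opcode:#04x} at {pc - 1:#06x}")

# LDA #$41; STA $0400; BRK  ($0400 is the start of the Apple ][ text page)
result = run_6502(bytes([0xA9, 0x41, 0x8D, 0x00, 0x04, 0x00]))
print(chr(result[0x0400]))   # 'A'
```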
This report investigates the use of wireless sensor networks for temperature measurement in flowing waters. It describes to what extent such networks can be used as a link between remote sensing and stationary sensors. The requirements for sensor networks in the water monitoring application are determined, and a prototypical realization of sensor nodes for such a network is presented. As a result of this work, the accuracy of temperature measurements with these sensor nodes is reported in comparison to a temperature logger used as a reference system. The measurements show that comparatively good measurement accuracy can be achieved at low cost. By further developing the prototype presented here, a promising and inexpensive new measuring instrument becomes available for temperature monitoring in water bodies. On the one hand, it can measure water temperatures in deeper regions than is possible with remote sensing; on the other hand, it can achieve a higher spatial resolution than stationary measuring stations. In addition, the literature review and the formulation of the criteria serve to narrow down the scope of application for follow-up work.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It poses a crucial contribution to in vivo examinations of neuronal connections: The local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool with consideration of neuronal pathways was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
This bachelor's thesis deals with the development and implementation of face recognition software that is able to detect and log streams of people. Starting from the special requirements of image processing, the resulting software architecture and its implementation are presented. In addition, a web interface is developed which is intended to simplify the management of the data. Subsequently, further face recognition methods are presented and compared against the method used. Finally, the implemented software is evaluated.
In recent years, Software Engineering research has shown a rise of interest in empirical studies. Such studies are often based on empirical evidence derived from corpora - collections of software artifacts. While there are established forms of carrying out empirical research (experiments, case studies, surveys, etc.), the common task of preparing the underlying collection of software artifacts is typically addressed in an ad hoc manner.
In this thesis, by means of a literature survey we show how frequently software engineering research employs software corpora, and using a newly developed classification scheme we discuss their characteristics. Addressing the lack of methodology, we suggest a method of corpus (re-)engineering and apply it to an existing collection of Java projects.
We report two extensive empirical studies, where we perform a broad and diverse range of analyses on the language for privacy preferences (P3P) and on object-oriented application programming interfaces (APIs). In both cases, we are driven by the data at hand, by the corpus itself, discovering the actual usage of the languages.
A Systematic Literature Survey of Beaconless Algorithms for Wireless Ad Hoc and Sensor Networks
(2014)
Wireless sensor networks (WSN) are computer networks consisting of sensor nodes. In position-based protocols, nodes periodically broadcast messages containing their position information. These messages are called beacons. The use of beacons, however, has downsides, which led to the emergence of beaconless algorithms. Despite their importance in the research area of computer networks, beaconless algorithms have, to the best of our knowledge, not been systematically surveyed.
The aim of this work is a comprehensive, systematic review of beaconless algorithms since the turn of the millennium. Relevant articles are categorized based on their similarities and their differences are compared. Thus, this work serves as a reference to the current state of research of beaconless algorithms and as a basis for finding research gaps.
Community platforms on the Internet use code-based governance to manage their large number of user contributions. This includes all kinds of functionality with which the community can judge user contributions directly or indirectly in some form. This thesis first explains the meaning of code-based governance and of the various functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined with respect to code-based governance. The result shows the relationships between the structure of a platform, the nature of its user contributions, and the code-based governance that can be exercised on them.
The increased networking of devices has established the usage of distributed applications as a common method in software development. Despite the demand for anonymity in using distributed applications, software frameworks still lack appropriate support for developing them. Building anonymously communicating applications therefore often results in an expensive individual approach. This work integrates an approach for anonymous communication using remote procedure calls into a software framework for building distributed applications. This also includes the design, development, and prototypical implementation of a binding for the MAppLab Remote Procedure Call Framework on the basis of the mPartAPI Framework. Furthermore, the resulting prototype is tested in an exemplary scenario.
Web 2.0 provides technologies for the online collaboration of users as well as the creation, publication and sharing of user-generated content in an interactive way. Twitter, CNET, CiteSeerX, etc. are examples of Web 2.0 platforms which support users in these activities and are viewed as rich sources of information. In the platforms mentioned as examples, users can participate in discussions, comment on others, provide feedback on various issues, publish articles and write blogs, thereby producing a high volume of unstructured data, which at the same time leads to an information overload. Satisfying the various types of human information needs arising from the purpose and nature of the platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help in satisfying user information needs in three scenarios.
To this end, we first look at two of the main challenges of sparsity and content quality in Twitter and how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high quality information. Based on this analysis we introduce the concept of "interestingness" as a static quality measure. We empirically show that our proposed measure helps in retrieving and filtering high quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track which results from the social collaboration of the people. For this purpose, we develop the FREuD approach, which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of relations between authors, latent topics and temporal information. We present results of applying the model to scientific publication datasets from CiteSeerX, showing improved semantically cohesive topic detection and capturing shifts in authors' interests in relation to topic evolution.
Over the past years, various projects for controlling the peripherals of a computer were realized under the supervision of Dr. Merten Joost, based on microcontrollers of the ATMega series. These include, among others, reading a keyboard, outputting audio and video signals, and a programming environment with its own programming language. The goal of this work is to combine these projects into a self-contained "home computer" that can be controlled via a keyboard and provides audio and video output. An SD card serves as the storage medium, which can be managed via keyboard input through a kind of shell.
This dissertation investigates the usage of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers for questions phrased in a natural language. Commonly they utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This involves challenges in that the contrary approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and the quality of the knowledge bases in QA. The dissertation describes modifications to ATP systems in order to overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval and knowledge representation, together forming the QA system LogAnswer.
Also presented are additional extensions to the prover implementation and the underlying calculi which go beyond enhancing the reasoning strength of QA systems by giving access to external knowledge sources like web services. These allow the prover to fill gaps in the knowledge during the derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them can be useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
The amount of information on the Web is constantly increasing, and there is also a wide variety of information available, such as news, encyclopedia articles, statistics, survey data, stock information, events, bibliographies etc. The information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity and quality, and by its distributed nature. The two primary techniques by which users on the Web look for information are (1) using Web search engines and (2) browsing the links between information. The dominant mode of information presentation is mainly static in the form of text, images and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large and complex types of information and (2) large amounts of data can be explored interactively using their attributes, which can support and expand the cognition process of the user. So far, interactive visualizations are still not an integral part of the search process on the Web. The technical standards and interaction paradigms that would make interactive visualization usable by the masses are introduced only slowly by standardization organizations. This work examines how interactive visualizations can be used for the linking and search process of heterogeneous information on the Web. Based on principles from the areas of information retrieval (IR), information visualization and information processing, a model is created which extends the existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application. In four different application scenarios, aspects of the model are instantiated and evaluated in user tests or examined by means of examples.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically in the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and it impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing native handling for inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The resulting calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
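The blow-up can be made precise: asserting pairwise inequality of $n$ constants by explicit facts requires one fact per unordered pair, i.e.

\[
  \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\in\; \Theta(n^2)
\]

facts, for example 4,950 facts for only 100 distinct object identifiers, all of which become unnecessary once the unique name assumption is built into the calculus.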
In this study thesis, various routing lookup algorithms with which a routing table can be built and updated are listed and compared. Only dynamic methods are considered here. The general functioning of a routing table is explained, and three methods or algorithms are analyzed and evaluated. The algorithms are illustrated by examples and contrasted in a concluding chapter, where the advantages and disadvantages of the individual methods are listed.
Education and training of the workforce have become an important competitive factor for companies because of the rapid technological changes in the economy and the corresponding ever shorter innovation cycles. Traditional training methods, however, are limited in their ability to meet a company's resulting demand for education and training, which continues to grow and to change ever faster. Therefore, the use of technology-based training programs (that is, courseware) is increasing, because courseware enables self-organized and self-paced learning and, through integration into daily work routines, allows optimal transfer of knowledge and skills, resulting in high learning outcomes. To achieve these prospects, high-quality courseware is required, with quality being defined as supporting learners optimally in achieving their learning goals. Developing high-quality courseware, however, usually requires more effort and takes longer than developing other programs, which limits the availability of this courseware in time and with the required quality.
This dissertation therefore deals with the research question of how courseware has to be developed in order to produce high-quality courseware with less development effort and shorter project duration. In addition to its high quality, this courseware should be optimally aligned to the characteristics and learning goals of the learners as well as to the planned usage scenarios for the knowledge and skills being trained. The IntView Method for the systematic and efficient development of high-quality courseware was defined to answer the research question of this dissertation. It aims at increasing the probability of producing courseware in time without exceeding project schedules and budgets while developing a high-quality product optimally focused on the target groups and usage scenarios.
The IntView Method integrates into a systematic process those execution variants of all activities and activity steps required to develop high-quality courseware which, in their interaction, constitute the most efficient way to develop courseware; these variants were identified in a detailed analysis of existing courseware development approaches as well as of production approaches from related fields such as multimedia, web, or software engineering. The main part of the proposed method is therefore a systematic process for engineering courseware that encompasses all courseware lifecycle phases and integrates the activities and methods of all disciplines involved in courseware engineering, including lifecycle-encompassing quality assurance, into a consolidated process. This process is defined as a lifecycle model as well as a derived process model in the form of a dependency model in order to optimally support courseware project teams in coordinating and synchronizing their project work. In addition to the models, comprehensive, ready-to-apply enactment support materials are provided, consisting of work sheets and document templates that include detailed activity descriptions and examples.
The evaluation of the IntView Method proved that the method together with the enactment support materials enables efficient as well as effective courseware development. The projects and case studies conducted in the context of this evaluation demonstrate that, on the one hand, the method is easily adaptable to the production of different kinds of courseware or to different project contexts, and, on the other hand, that it can be used efficiently and effectively.
Modern Internet and Intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data providing improved flexibility. The gain in flexibility also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperations between organizations, accounting for the processing of data and restricting actions performed on the data may be legally and contractually required. In the Internet and Intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting the data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution supporting a data-centric view on the distributed data processing. We present a solution for provenance-aware policies and a formal specification of obligations offering a procedure to decide whether obligatory processing steps can be met in the future.
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter; common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition can strictly not be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit to simulation is the possibility to model (and observe) causes and effects of non-events, e.g., if something has broken down.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and is intended to show how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
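At its core, the probability/likelihood duality mentioned above is the interplay of prior and evidence in Bayes' rule. The toy diagnosis example below (with made-up numbers, not taken from the Peng and Reggia example) shows the two quantities whose flows PPNs make explicit as dual token flows.

```python
def posterior(prior, likelihood):
    """Bayes' rule for a set of mutually exclusive hypotheses.

    prior[h] is P(h); likelihood[h] is P(evidence | h). The forward flow
    (probabilities) and the backward flow (likelihoods) meet here.
    """
    joint = {h: prior[h] * likelihood[h] for h in prior}
    norm = sum(joint.values())
    return {h: p / norm for h, p in joint.items()}

# Made-up medical example: two candidate disorders and one observed symptom.
prior = {"disorder_a": 0.01, "disorder_b": 0.04, "healthy": 0.95}
likelihood = {"disorder_a": 0.90, "disorder_b": 0.30, "healthy": 0.05}  # P(symptom | h)
for h, p in posterior(prior, likelihood).items():
    print(f"{h:10s} {p:.3f}")
```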
Cheops for VNUML - Creating and Observing a VNUML Simulation with a Mouse Click
This work investigates how Virtual Network User Mode Linux (VNUML), a software package for simulating computer networks built from virtualized Linux instances, can be made easier for the user to handle. With the Linux package VNUML, which enables the file-driven configuration of virtual operating system instances, the user can simulate complex network topologies. Various network monitoring programs are examined for their ability to capture a running VNUML simulation and to display it in a meaningful way. The user should be able to gain a quick overview of the functioning of the simulated network environment and, where possible, of its topology as well. The network monitoring program Cheops is extended so that it can not only display and observe a running simulation, but can also be used in every step of working with VNUML.
The extended program allows both the creation of the VNUML topology file and the starting and controlling of the simulation. The long command-line invocation as well as the editing of the configuration file are thus replaced by easy-to-use functions in a graphical user interface (GUI). No further input or configuration is necessary to quickly verify that a started simulation is fully functional. Likewise, individual network services can be observed in a differentiated manner while the simulation is running. The tools required for this are also included in the Cheops for VNUML package and are preconfigured specifically for use with VNUML simulations.
The Semantic Web and model-driven engineering are changing the enterprise computing paradigm. By introducing technologies like ontologies, metadata and logic, the Semantic Web drastically improves how companies manage knowledge. In counterpart, model-driven engineering relies on the principle of using models to provide abstraction, enabling developers to concentrate on the system functionality rather than on technical platforms. The next enterprise computing era will rely on the synergy between both technologies. On the one side, ontology technologies organize system knowledge in conceptual domains according to its meaning. They address enterprise computing needs by identifying, abstracting and rationalizing commonalities, and by checking for inconsistencies across system specifications. On the other side, model-driven engineering is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics. In this scenario, the research question that arises is: What are the scientific and technical results around ontology technologies that can be used in model-driven engineering and vice versa?
The objective is to analyze approaches available in the literature that involve both ontologies and model-driven engineering. Therefore, we conduct a literature review that resulted in a feature model for classifying state-of-the-art approaches. The results show that the combined usage of ontologies and model-driven engineering serves multiple purposes: validation, visual notation, expressiveness and interoperability. While approaches involving both paradigms exist, an integrated approach for UML class-based modeling and ontology modeling has been lacking so far. Therefore, we investigate the techniques and languages for designing integrated models. The objective is to provide an approach to support the design of integrated solutions. Thus, we develop a conceptual framework involving the structure and the notations of a solution to represent and query software artifacts using a combination of ontologies and class-based modeling. As proof of concept, we have implemented our approach as a set of open source plug-ins, the TwoUse Toolkit.
The hypothesis is that a combination of both paradigms yields improvements in both fields, ontology engineering and model-driven engineering. For MDE, we investigate the impact of using features of the Web Ontology Language in software modeling. The results are patterns and guidelines for designing ontology-based information systems and for supporting software engineers in modeling software. The results include alternative ways of describing classes and objects and of querying software models and metamodels. Applications show improvements in changeability and extensibility. In the ontology engineering domain, we investigate the application of techniques used in model-driven engineering to fill the abstraction gap between ontology specification languages and programming languages. The objective is to provide a model-driven platform for supporting activities in the ontology engineering life cycle. Therefore, we study the development of core ontologies in our department, namely the core ontology for multimedia (COMM) and the multimedia metadata ontology. The results are domain-specific languages that allow ontology engineers to abstract from implementation issues and concentrate on the ontology engineering task. This increases productivity by filling the gap between domain models and source code.
This thesis deals with OpenWrt, a Linux distribution developed for routers and small devices. The first part of the thesis explains the creation of software packages for OpenWrt and develops a display and configuration component for the files of the Quagga RIP daemon for the LuCI user interface. The second part documents the setup of a test network with OpenWrt routers as well as its connection to VNUML-virtualized networks.
Folksonomies are Web 2.0 platforms where users share resources with each other. Furthermore, they can assign keywords (called tags) to the resources for categorizing and organizing the resources. Numerous types of resources like websites (Delicious), images (Flickr), and videos (YouTube) are supported by different folksonomies. Folksonomies are easy to use and thus attract the attention of millions of users. Together with the ease they offer, there are also some problems. This thesis addresses different problems of folksonomies and proposes solutions for them. The first problem occurs when users search for relevant resources in folksonomies. Often, the users are not able to find all relevant resources because they do not know which tags are relevant. The second problem is assigning tags to resources. Although many folksonomies (like Delicious) recommend tags for the resources, other folksonomies (like Flickr) do not recommend any tags. Tag recommendation helps the users to easily tag their resources. The third problem is that tags and resources lack semantics, which leads, for example, to ambiguous tags. The tags lack semantics because they are freely chosen keywords. The automatic identification of the semantics of tags and resources helps in reducing the problems that arise from this freedom of the users in choosing the tags. This thesis proposes methods which exploit semantics to address the problems of search, tag recommendation, and the identification of tag semantics. The semantics are discovered from a variety of sources. In this thesis, we exploit web search engines, online social communities and the co-occurrences of tags as sources of semantics. Using different sources for discovering semantics reduces the effort needed to build systems which solve the problems mentioned earlier. This thesis evaluates the proposed methods on a large-scale dataset. The evaluation results suggest that it is possible to exploit the semantics for improving search, the recommendation of tags, and the automatic identification of the semantics of tags and resources.
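One of the sources of semantics mentioned above, tag co-occurrence, can be illustrated in a few lines: tags assigned to the same resources by many users tend to be related, and a similarity over co-occurrence vectors captures this. The toy posts below are invented for illustration and are not the dataset evaluated in the thesis.

```python
from collections import defaultdict
from itertools import combinations
from math import sqrt

def cooccurrence(posts):
    """Count how often two tags are assigned to the same resource."""
    counts = defaultdict(int)
    for tags in posts:
        for a, b in combinations(sorted(set(tags)), 2):
            counts[(a, b)] += 1
    return counts

def cosine_similarity(counts, tag_a, tag_b):
    """Cosine similarity of two tags over their co-occurrence vectors."""
    def vector(tag):
        vec = {}
        for (a, b), n in counts.items():
            if a == tag:
                vec[b] = n
            elif b == tag:
                vec[a] = n
        return vec

    va, vb = vector(tag_a), vector(tag_b)
    dot = sum(va[t] * vb[t] for t in va if t in vb)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Invented folksonomy posts (each inner list: the tags of one bookmarked resource).
posts = [["python", "programming", "tutorial"],
         ["python", "programming", "web"],
         ["java", "programming", "tutorial"],
         ["cooking", "recipe"]]
print(round(cosine_similarity(cooccurrence(posts), "python", "java"), 2))  # 0.87
```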
Location Provider
(2011)
In this work, a system was developed according to the requirements defined in Chapter 1.3 to locate mobile devices with limited computing power, such as mobile phones and PDAs, both outdoors and indoors. The limited capabilities of these devices are accounted for by a client-server solution in which the client acts as a sensor carrier that registers data from its environment and transmits it over a communication network to a server for further processing. The data transmission is realized as a web service that is based on open standards such as XML and SOAP and can easily be extended with further functions.
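A minimal, hypothetical Python sketch of such a client request (not the thesis code; the operation name, measurement format, and server URL are invented) could look as follows:

# Hypothetical sketch: a client posting sensed data to a positioning server
# via a simple SOAP-style XML web service call.
import urllib.request
from xml.etree import ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(wifi_scans):
    """Wrap a list of (bssid, rssi) WLAN measurements in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, "LocateRequest")          # hypothetical operation name
    for bssid, rssi in wifi_scans:
        m = ET.SubElement(req, "Measurement")
        m.set("bssid", bssid)
        m.set("rssi", str(rssi))
    return ET.tostring(env, encoding="utf-8", xml_declaration=True)

def request_position(url, wifi_scans):
    """Send the measurements to the server, which computes and returns the position."""
    data = build_envelope(wifi_scans)
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()                               # XML response with coordinates

# request_position("http://location-server.example/ws", [("00:11:22:33:44:55", -62)])

The design choice of doing all position computation on the server keeps the client to little more than sensing and serialization, which matches the resource constraints described above.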
At the University of Koblenz, RMTI (RIP with Metric based Topology Investigation) was developed as an extension to RIP that is intended to resolve the counting-to-infinity problem. A working implementation of the RMTI algorithm, based on the RIP daemon of the Quagga Software Routing Suite, is available. OpenWrt is an open-source, Linux-based operating system for routers such as the Linksys WRT54G. The configuration options of routers running OpenWrt go far beyond those of the original firmware; in particular, the routers can be extended with software from the Linux world. The goal of this student research project is the installation of the RMTI daemon under OpenWrt on the Linksys WRT54G routers of the AG Rechnernetze. Subsequently, typical network situations are to be set up and the behavior of the routers examined.
Improvements to the RMTI network routing daemon implementation and preparation of a public release
(2011)
Routing with Metric based Topology Investigation (RMTI) is an algorithm intended to extend distance-vector routing protocols. It has been under research and development at the University of Koblenz-Landau since 1999 and is currently implemented on top of the well-known Routing Information Protocol (RIP). Around mid-2009, the latest implementation of RMTI still included a lot of deprecated functionality. Because of this, the first goal of this thesis was to reduce the codebase to a minimum. Besides extensive reorganization and a general cleanup, this mainly involved the removal of several modes that were no longer needed, as well as the separation of the formerly mandatory XTPeer test environment. During the second part, many test series were carried out in order to ensure the correctness of the latest RMTI implementation; a replacement for XTPeer was needed, and several new ways of testing were explored. In conjunction with this thesis, the RMTI source code was finally released to the public under a free software license.
Towards Improving the Understanding of Image Semantics by Gaze-based Tag-to-Region Assignments
(2011)
Eye-trackers have been used in the past to identify visual foci in images, find task-related image regions, or localize affective regions in images. However, they have not been used to identify specific objects in images. In this paper, we investigate whether it is possible to assign image regions showing specific objects to tags describing these objects by analyzing the users' gaze paths. To this end, we have conducted an experiment with 20 subjects viewing 50 image-tag pairs each. We have compared the tag-to-region assignments for nine existing and four new fixation measures. In addition, we have investigated the impact of extending region boundaries, of weighting small image regions, and of the number of subjects viewing the images. The paper shows that a tag-to-region assignment with an accuracy of 67% can be achieved using gaze information. In addition, we show that multiple regions within the same image can be differentiated with an accuracy of 38%.
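A minimal Python sketch of the underlying idea (not the study's code; the regions, fixations, and the simple fixation-count measure below are illustrative only) might look like this:

# Hypothetical sketch: assign a tag to the image region that attracts the most gaze,
# using "fixation count" as an example of a fixation measure.

def point_in_region(point, region):
    """Region given as an axis-aligned bounding box (x, y, w, h); point as (x, y)."""
    x, y, w, h = region
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def assign_tag_to_region(fixations, regions):
    """fixations: list of (x, y, duration_ms); regions: dict name -> bounding box.
    Returns the region with the highest fixation count (one simple measure; the
    paper compares several existing and new measures)."""
    counts = {name: 0 for name in regions}
    for fx, fy, _duration in fixations:
        for name, bbox in regions.items():
            if point_in_region((fx, fy), bbox):
                counts[name] += 1
    return max(counts, key=counts.get)

# Toy example: two annotated regions and a short gaze path.
regions = {"dog": (10, 10, 100, 80), "ball": (150, 60, 40, 40)}
fixations = [(30, 40, 180), (60, 50, 220), (160, 70, 200)]
print(assign_tag_to_region(fixations, regions))   # -> "dog"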
Model-Driven Engineering (MDE) aims to raise the level of abstraction in software system specifications and to increase automation in software development. Modelware technological spaces contain the languages and tools for MDE that software developers take into consideration to model systems and domains. Ontoware technological spaces contain ontology languages and technologies to design, query, and reason on knowledge. With the advent of the Semantic Web, ontologies are now being used within the field of software development as well. In this thesis, bridging technologies are developed to combine two technological spaces in general: transformation bridges translate models between spaces, mapping bridges relate different models across two spaces, and integration bridges merge spaces into new, all-embracing technological spaces. API bridges establish interoperability between the tools used in the spaces. In particular, this thesis focuses on the combination of modelware and ontoware technological spaces. Subsequent to a sound comparison of the languages and tools in both spaces, the integration bridge is used to build a common technological space, which allows for the hybrid use of languages and the interoperable use of tools. The new space allows for language and domain engineering. Ontology-based software languages may be designed in the new space, where syntax and formal semantics are defined with the support of ontology languages and the correctness of language models is ensured by the use of ontology reasoning technologies. These languages represent a core means for exploiting expressive ontology reasoning in the software modeling domain, while remaining flexible enough to accommodate the varying needs of software modelers. Application domains are conceptually described by languages that allow for defining domain instances and types within one domain model. Integrated ontology languages may provide formal semantics for domain-specific languages, and ontology technologies allow for reasoning over types and instances in domain models. A scenario in which configurations for network device families are modeled illustrates the approaches discussed in this thesis. Furthermore, the implementation of all bridging technologies for the combination of technological spaces, and of all tools for ontology-based language engineering and use, is illustrated.
This dissertation investigates the emergence of dialects in a multi-agent simulation model based on neural networks that is developed within this thesis. First, the linguistic foundations of language are presented. Besides discussing some important definitions of language, this is done by summarizing the evolutionary steps of language evolution, followed by an overview of the elements of modern human languages, including the ways in which language changes. Subsequently, some examples of the communicative behaviour of socially learning animals and the dialect formations it gives rise to are presented.
In the following, the computational and mathematical foundations are explained. Besides the terms model and simulation, these comprise the setup of multi-agent simulations and the functionality of artificial neural networks. Based on these foundations, the model of this dissertation is derived and described in detail. Results drawn from several hundred simulation runs are discussed thereafter. Each destructive factor defined in the model is examined separately, and its domain is divided into intervals with different effects on the outcome of the simulation.
Furthermore, the interdependencies between the individual factors are shown, as is the process of language merging after a prior dialect divide. Results and outlook are followed by the specification, design, architecture, a detailed description of the implementation, and a user guide of the tool named DiaLex. DiaLex is a Java-based tool that gives users the opportunity to simulate and analyse the influence of different destructive factors on dialect formation within a common language used by one or multiple communities of agents.
Nagios under VNUML
(2011)
This paper documents the development of an abstract physics layer (APL) for Simspark. After short introductions to physics engines and Simspark, the reasons why an APL was developed are explained. The largest part of this paper describes the new design and why certain design choices were made, based on requirements that arose during development. It concludes by explaining how the new design was eventually implemented and what future possibilities it holds.
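The core idea of such an abstraction layer can be sketched in Python (a hypothetical, minimal interface, not Simspark's actual API): simulation code programs against abstract types, so the concrete physics engine behind them can be swapped.

# Hypothetical sketch of an abstract physics layer: callers only see the interfaces.
from abc import ABC, abstractmethod

class RigidBody(ABC):
    @abstractmethod
    def add_force(self, fx: float, fy: float, fz: float) -> None: ...
    @abstractmethod
    def position(self) -> tuple: ...

class PhysicsWorld(ABC):
    @abstractmethod
    def create_body(self, mass: float) -> RigidBody: ...
    @abstractmethod
    def step(self, dt: float) -> None: ...

class DummyWorld(PhysicsWorld):
    """Trivial engine backend, used only to show that callers never depend on it directly."""
    def create_body(self, mass):
        class _Body(RigidBody):
            def __init__(self): self._pos = (0.0, 0.0, 0.0)
            def add_force(self, fx, fy, fz): pass   # a real backend would forward to the engine
            def position(self): return self._pos
        return _Body()
    def step(self, dt): pass

def simulate(world: PhysicsWorld):
    """Engine-agnostic simulation loop written purely against the abstract layer."""
    body = world.create_body(mass=1.0)
    body.add_force(0.0, 0.0, 9.81)
    world.step(0.02)
    return body.position()

print(simulate(DummyWorld()))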
In this work, several network protocols are to be observed with the Wireshark protocol analyzer, and the handling of the tool is described. Wireshark is an offshoot of "Ethereal", one of the most popular protocol analyzers. Wireshark captures network traffic, analyzes it, and presents it in a readable form. VNUML is used to simulate the network. Since VNUML can only be used under Linux, Linux runs in a virtual machine so that it is still possible to work under Windows.
This document proposes a concept for a personal key infrastructure in iCity. Certificates issued by a trust center (TC) guarantee a secure key exchange with verifiable authentication of the communication partner, protection against eavesdropping, and integrity and non-repudiation of the messages. The mutually trusted TC does not have to be reachable during the communication and only receives public information. The concept introduces several security levels that weight secure identification and anonymity differently.
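The certificate mechanism underlying such a concept can be sketched in miniature with Python (a hypothetical illustration, not the iCity design; it only shows offline certificate verification and message signing):

# Hypothetical sketch: a trust center (TC) signs a participant's public key offline;
# a peer that knows only the TC's public key can later verify both the key binding
# and the participant's message signatures without contacting the TC.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization

# Trust center key pair; the TC public key is distributed to all participants in advance.
tc_key = ed25519.Ed25519PrivateKey.generate()
tc_pub = tc_key.public_key()

# Participant "Alice": key pair plus a TC signature over her public key (her "certificate").
alice_key = ed25519.Ed25519PrivateKey.generate()
alice_pub_bytes = alice_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
certificate = tc_key.sign(alice_pub_bytes)

# Peer "Bob": verifies the certificate offline, then a signed message from Alice.
tc_pub.verify(certificate, alice_pub_bytes)                 # raises an exception if forged
alice_pub = ed25519.Ed25519PublicKey.from_public_bytes(alice_pub_bytes)
message = b"hello icity"
signature = alice_key.sign(message)
alice_pub.verify(signature, message)                        # authenticity and integrity
print("certificate and message verified")

Note that the verifying peer never contacts the TC during communication, which mirrors the offline property of the proposed concept; the actual design adds key exchange, eavesdropping protection, and graded security levels on top of this basic mechanism.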
This minor thesis shows a way to optimise a generated oracle in order to achieve shorter runtimes. Shorter runtimes of test cases allow more test cases to be executed in the same time, and executing more test cases leads to higher confidence in the software quality. Oracles can be derived from specifications. However, specifications are written for different purposes and are therefore not necessarily executable; even if they are executable, they may only be executable with a high runtime. Both issues mostly stem from the use of quantifiers in the logic. If the quantifier range is not bounded, or if the bounds lie outside the limits of the target language's datatypes, the specification is too expressive to be exported into a program. Even if the bounds lie within the datatype limits, the quantification is represented as a loop, which leads to a runtime blowup, especially if quantifiers are nested. This work explains four different possibilities for reducing the execution time of the oracle by manipulating the quantified formula; this approach is only applicable if the quantified variables are of type Integer.
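As a rough Python illustration (not necessarily one of the four manipulations from the thesis), the following sketch shows how a bounded quantifier becomes nested loops in a generated oracle and how even simple short-circuiting at the first counterexample reduces the runtime:

# Hypothetical sketch for the specification-style property:
#   forall i in [lo, hi]: forall j in [lo, hi]: p(i, j)

def oracle_naive(p, lo, hi):
    """Direct translation: nested loops over the full quantifier ranges."""
    result = True
    for i in range(lo, hi + 1):
        for j in range(lo, hi + 1):
            if not p(i, j):
                result = False      # keeps iterating even though the answer is already known
    return result

def oracle_short_circuit(p, lo, hi):
    """Optimised oracle: stop at the first violating assignment."""
    return all(p(i, j) for i in range(lo, hi + 1) for j in range(lo, hi + 1))

p = lambda i, j: i + j >= 0
print(oracle_naive(p, 0, 100), oracle_short_circuit(p, 0, 100))   # True True

The blowup the thesis describes comes from exactly this nesting: two quantifiers over a range of size n already cost n^2 evaluations, so shrinking or rewriting the quantified ranges pays off quickly.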