This dissertation provides an interdisciplinary contribution to the project ReGLaN-Health & Logistics, an international cooperation that benefits from the capabilities of scientists working in different fields. The aim of the project is the development of a so-called SDSS that supports decision makers working within health systems, with a special focus on rural areas. In this dissertation, one important component for the development of the DSS named EWARS is proposed and described in detail. This component, called SPATTB, is developed to deal with spatial data, i.e. data with additional geocoded information, with regard to the special requirements of the EWARS.

An important concept in the process of developing the EWARS is that of a geographic information system (GIS). Classically, geocoded information of a vectorial character, numerically describing spatial phenomena, is managed and processed in a GIS. For the development of the EWARS, the manageability of data of the type (x, y, o), with coordinates (x, y) and ozone concentration o, is not sufficient. It is shown that the manageable data has to be extended to data of type (x, y, f), where (x, y) is the geocoded information, but f is not merely a numerical value but a functional description of a certain phenomenon. An example of this type of data is geocoded information about the variation of the ozone concentration over time or depending on temperature. A knowledge base, an important subsystem of a DSS containing expert knowledge, is introduced. This expert knowledge can be made manageable using methods from the field of fuzzy logic. Thereby mappings, so-called fuzzy sets, are generated. Within the EWARS, these mappings are used with respect to additional geocoded data.
Knowing the geocoded mapping information only at a finite set of locations (x, y), each associated with mapping information f, is not sufficient in applications that need continuous statements over a certain geographical area. To contribute towards solving this problem, methods from the field of computational geometry and CAD, so-called Bezier methods, are used for interpolating this geocoded mapping information. Classically, these methods operate on vectors of a multidimensional vector space whose elements have real-valued components; for dealing with mapping information, however, they have to be extended to topological vector spaces, since mapping spaces can be defined as such spaces. This opens a new perspective on, and possibility for, the application of these methods. The corresponding algorithms therefore have to be extended; this work is presented. The field of artificial neural networks, in which features of biological processes and structures are modeled and implemented as algorithms, plays an important role for the processing and management of the data within the EWARS. Generally, the developed methods can be divided into those usable for the interpolation or approximation of functional relationships and those applicable to classification problems. In this dissertation, one method of each type is considered in more detail: the classical algorithms of the so-called backpropagation networks for approximation and the Kohonen networks for classification are described. An extension of these algorithms is then proposed, using results from mathematical measure theory and approximation theory. This extension is based on a preprocessing of the mapping data using integration methods from measure theory.
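The extension of Bezier methods from real-valued vectors to (sampled) function spaces can be illustrated with a minimal sketch, not taken from the dissertation itself: the classical de Casteljau algorithm, applied pointwise so that each "control point" is a function sampled on a fixed grid rather than a plain vector.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1].

    Each control point is a list of samples of a function f(x) on a fixed
    grid; the convex combinations are taken componentwise, which is exactly
    the step from R^n to a (sampled) function space.
    """
    points = [list(p) for p in control_points]
    n = len(points)
    for r in range(1, n):
        for i in range(n - r):
            points[i] = [(1 - t) * a + t * b
                         for a, b in zip(points[i], points[i + 1])]
    return points[0]

# Three "control functions" sampled at three grid locations:
f0 = [0.0, 0.0, 0.0]
f1 = [1.0, 2.0, 3.0]
f2 = [0.0, 0.0, 0.0]

mid = de_casteljau([f0, f1, f2], 0.5)  # quadratic Bezier at t = 0.5
```

At t = 0 and t = 1 the curve reproduces the first and last control function, as in the classical real-valued case.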
This thesis presents an analysis of API usage in a large corpus of Java software retrieved from the open source repositories hosted at SourceForge. Most larger software projects use software libraries, which offer a public "application programming interface" or API as an interface for the programmer. In order to facilitate the transition between different APIs, there are emerging research projects in the field of automated API migration. However, there is a lack of basic statistical background information about in-the-wild usage of APIs as such measurements have, until now, only been done on rather small corpora. We thus present an analysis method suitable for measurements with large corpora. First, we create a corpus of open source projects hosted on SourceForge, as well as a corpus of software libraries. Then, all projects in the corpus are compiled with an instrumented compiler. We use a compiler plugin for javac that gives detailed information about every method created by the compiler. This information is stored in a database and analyzed.
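Once the instrumented compiler has filled the database, the core analysis reduces to aggregation. A sketch under an assumed record layout (the thesis' actual schema is not given here): each record is a (project, library, method) triple emitted by the compiler plugin.

```python
# Hypothetical record layout: one triple per compiled method reference.
from collections import Counter

def api_usage(records):
    """Count references per (library, method) and track in how many
    distinct projects each API element is used."""
    usage = Counter()
    projects_per_api = {}
    for project, library, method in records:
        usage[(library, method)] += 1
        projects_per_api.setdefault((library, method), set()).add(project)
    return usage, projects_per_api

records = [
    ("proj-a", "java.util", "List.add"),
    ("proj-a", "java.util", "List.add"),
    ("proj-b", "java.util", "List.add"),
    ("proj-b", "java.io", "File.exists"),
]
usage, spread = api_usage(records)
```

Separating raw counts from project spread matters: an API called often in a single project tells a different story than one used across the whole corpus.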
Mobile payment has been a payment option in the market for a long time now and was predicted to become a widely used payment method. However, over the years, the market penetration rate of mPayments has been relatively low, despite it having all the characteristics required of a convenient payment method. The primary reason for this has been cited as a lack of customer acceptance, mainly caused by the lack of perceived security by the end user. Although biometric authentication is not a new technology, it is experiencing a revival in the light of present-day terror threats and increased security requirements in various industries. The application of biometric authentication in mPayments is analysed here, and a suitable biometric authentication method for use with mPayments is recommended. The issues of enrolment and the human and technical factors to be considered are discussed, and the STOF business model is applied to a BiMoP (biometric mPayment) application.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form, for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
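The trade-off behind knowledge compilation can be shown with a toy propositional example (the paper itself works on ALC concepts and a linkless normal form, not on model enumeration): spend effort once to precompile the knowledge base, then answer each entailment query cheaply against the compiled form.

```python
# Toy illustration only: the "compiled form" here is simply the set of all
# models of the KB over its variables; real compilers target normal forms
# such as the linkless normal form with far better query behavior.
from itertools import product

def compile_kb(kb, variables):
    """Expensive one-off step: collect every model (truth assignment)
    satisfying the knowledge base."""
    return [dict(zip(variables, bits))
            for bits in product([False, True], repeat=len(variables))
            if kb(dict(zip(variables, bits)))]

def entails(models, query):
    """Cheap repeated query: KB |= query iff query holds in every model."""
    return all(query(m) for m in models)

# KB: (a -> b) and (b -> c)
kb = lambda m: (not m["a"] or m["b"]) and (not m["b"] or m["c"])
models = compile_kb(kb, ["a", "b", "c"])
# entails(models, lambda m: not m["a"] or m["c"])  -> True (a -> c follows)
```

The compilation cost is exponential in the number of variables, but it is paid once; every subsequent query is linear in the size of the compiled structure.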
We propose a new approach for mobile visualization and interaction of temporal information by integrating support for time with today's most prevalent visualization of spatial information, the map. Our approach allows for an easy and precise selection of the time that is of interest and provides immediate feedback to the users when interacting with it. It has been developed in an evolutionary process gaining formative feedback from end users.
Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logic-based concepts of the ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., the structure, functionalities, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping only produces concept representations, but not an API at the level of granularity expected by an application developer. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology but also leverages additional information the ontology provides. This can be the inheritance structure of the ontology concepts, the scope of relevance of an ontology concept, or design patterns defined in the ontology.
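The core generation idea can be sketched in a few lines; the names below are illustrative and not the tool's actual API. An ontology concept with its properties is turned into object-oriented source code with proper accessors and an inheritance structure, rather than a bare one-to-one class.

```python
# Minimal, hypothetical sketch of ontology-to-API code generation.

def generate_class(concept, properties, superclass="object"):
    """Emit source for a class with accessors for each ontology property,
    mirroring the concept's place in the inheritance structure."""
    lines = [f"class {concept}({superclass}):",
             "    def __init__(self):"]
    for prop in properties:
        lines.append(f"        self._{prop} = None")
    for prop in properties:
        lines.append(f"    def get_{prop}(self):")
        lines.append(f"        return self._{prop}")
    return "\n".join(lines)

source = generate_class("Person", ["name", "knows"])
namespace = {}
exec(source, namespace)        # materialize the generated API
person = namespace["Person"]()
```

A real MDE pipeline would of course emit files through templates and consult the ontology's design patterns; the point here is only that generation goes beyond copying concept names into empty classes.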
In recent years, ontologies have become common on the WWW for providing high-level descriptions of specific domains. These descriptions can be used effectively to build applications with the ability to find implicit consequences of their represented knowledge. The W3C developed the Resource Description Framework (RDF), a language to describe the semantics of data on the web, and the Web Ontology Language (OWL), a family of knowledge representation languages for authoring ontologies. In this thesis we propose an ontology API engineering framework that makes use of state-of-the-art ontology modeling technologies as well as software engineering technologies. This framework simplifies the design and implementation of dedicated APIs for ontologies. Developers of semantic web applications usually face the problem of mapping entities or complex relations described in the ontology to object-oriented representations. Mapping the complex relationship structures that come with complex ontologies to a useful API requires more elaborate API representations than the mere mapping of concepts to classes. Implementing correct object persistence functions in such class representations also becomes quite complex.
In recent developments, attempts have been made to integrate UML and OWL into one hybrid modeling language, namely TwoUse. This aims at making use of the benefits of both modeling languages and overcoming the restrictions of each. In order to create a modeling language that will actually be used in software development, an integration with OCL is needed. This integration has already been described at the contextual level; however, an implementation is lacking so far. The scope of this paper is the programmatic implementation of the integration of TwoUse with OCL. To achieve this, two different OCL implementations that already provide parsing and interpretation functionality for expressions over regular UML are extended. This paper presents these two attempts to extend existing OCL implementations, as well as a comparison of the existing approaches.
Unlocking the semantics of multimedia presentations in the web with the multimedia metadata ontology
(2010)
The semantics of rich multimedia presentations in the web such as SMIL, SVG, and Flash cannot, or only to a very limited extent, be understood by search engines today. This hampers the retrieval of such presentations and makes their archival and management a difficult task. Existing metadata models and metadata standards are either conceptually too narrow, focus on a specific media type only, cannot be used and combined together, or are not practically applicable for the semantic description of rich multimedia presentations. In this paper, we propose the Multimedia Metadata Ontology (M3O) for annotating rich, structured multimedia presentations. The M3O provides a generic modeling framework for representing sophisticated multimedia metadata. It allows for integrating the features provided by existing metadata models and metadata standards. Our approach is based on Semantic Web technologies and can be easily integrated with multimedia formats such as the W3C standards SMIL and SVG. With the M3O, we unlock the semantics of rich multimedia presentations in the web by making the semantics machine-readable and machine-understandable. The M3O is used with our SemanticMM4U framework for the multi-channel generation of semantically rich multimedia presentations.
In this bachelor thesis, a back office for the electronic version of the European Accident Statement was developed. A mobile client running on a Windows Mobile phone, as well as a police client, had already been created in other theses. These access the back office to obtain data such as car data (make, model, year of construction, and images of a 3D model of the car) for a given license plate, or the personal data of the respective car owner. The mobile client also sends the accident file to the back office so that the data about an accident can be stored there and processed further. The goal of this thesis was to develop an extensible, modular system that can later be supplemented with further modules to provide new functions. Each of these modules can store arbitrary data in a database, and query and modify it again, without the relational schema of the database having to be changed.
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language that allow specifying conditions based on the history of data processing. We base the model and language on XACML.
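The notion of a history-dependent condition can be illustrated with a minimal sketch, which deliberately ignores the paper's XACML-based formalism: a policy decision consults the recorded processing history of a data item, e.g. "aggregation is only allowed after the data has been anonymized".

```python
# Toy policy check; "anonymize"/"aggregate" are illustrative action names.

def permits(action, history):
    """History-dependent policy: aggregating a data item is only
    permitted once an anonymization step is recorded for that item."""
    if action == "aggregate":
        return "anonymize" in history
    return True  # all other actions are unconditionally allowed in this toy

history = []                                      # processing log of one item
allowed_before = permits("aggregate", history)    # not anonymized yet
history.append("anonymize")
allowed_after = permits("aggregate", history)     # now permitted
```

A stateless policy language can express the condition on the current request but not the reference to past processing steps, which is exactly the gap the paper addresses.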
The development of a pan-European public E-Procurement system is an important target of the European Union to enhance the efficiency, transparency and competitiveness of public procurement procedures conducted within the European single market. A great obstacle for cross-border electronic procurement is the heterogeneity of national procurement systems in terms of technical, organizational and legal differences. To overcome this obstacle the European Commission funds several initiatives that contribute to the aim of achieving interoperability for pan-European public procurement. Pan European Public Procurement OnLine (PEPPOL) is one of these initiatives that aims at piloting an interoperable pan-European E-Procurement solution to support businesses and public purchasing entities from different member states to conduct their procurement processes electronically.

As interoperability and inter-connection of distributed heterogeneous information systems are the major requirements in the European procurement domain, and the VCD sub-domain in particular, service-oriented architecture (SOA) seems to provide a promising approach to realize such an architecture, as it promotes loose coupling and interoperability. This master thesis therefore discusses the SOA approach and how its concepts, methodologies and technologies can be used for the development of interoperable IT systems for electronic public procurement. This discussion is enhanced through a practical application of the discussed SOA methodologies by conceptualizing and prototyping of a sub-system derived from the overall system domain of the Virtual Company Dossier. For that purpose, important aspects of interoperability and related standards and technologies will be examined and put into the context of public electronic procurement. Furthermore, the paradigm behind SOA will be discussed, including the derivation of a top-down development methodology for service-oriented systems.
The goal of this minor thesis is to integrate a robotic arm into an existing robotics software stack. A robot built on top of this stack should be able to participate successfully in the RoboCup @Home league. The robot Lisa (Lisa is a service android) needs to manipulate objects, lifting them from shelves or handing them to people. Up to now, the only way to do this was a small gripper attached to the robot platform. For this thesis, a "Katana Linux Robot" by the Swiss manufacturer Neuronics has been added to the robot. This arm needs driver software and a path planner, so that the arm can reach its goal object "intelligently", avoiding obstacles and producing smooth, natural motions.
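The planning problem can be sketched in miniature (the thesis uses a dedicated driver and planner for the Katana arm; here a 2D grid stands in for the configuration space, and breadth-first search finds the shortest obstacle-free path).

```python
# Toy motion-planning sketch: BFS on a grid, 1 marks an obstacle cell.
from collections import deque

def plan(grid, start, goal):
    """Shortest obstacle-free path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = plan(grid, (0, 0), (0, 2))  # must detour around the obstacle column
```

Real arm planners work in a higher-dimensional joint space and additionally smooth the resulting trajectory, but the obstacle-avoiding search is the same in spirit.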
Texture-based text detection in digital images using wavelet features and support vector machines
(2010)
In this bachelor thesis, a new texture-based approach for the detection of text in digital images is presented. The procedure can essentially be divided into two main tasks, the detection of text blocks and the detection of individual words, whereby the individual words are extracted from the detected text blocks. Roughly, the developed method employs multiple support vector machines, which classify possible text regions of an image into real text regions using wavelet-based features. The possible text regions are defined by edge projections with different orientations. The results of the approach are the X/Y coordinates, width, and height of rectangular regions of an image that contain individual words. This information can be processed further, for example by optical character recognition software, to obtain the important and useful text information.
Hybrid automata are used as a standard means for the specification and analysis of dynamical systems. Several research efforts have used them to formally specify reactive multi-agent systems situated in a physical environment, where the agents react continuously to their environment. The specified systems, in turn, are formally checked with the help of existing hybrid automata verification tools. However, when dealing with multi-agent systems, two problems arise. The first is a state space problem caused by the composition process, where the agents have to be composed in parallel into one agent capturing all possible behaviors of the multi-agent system prior to the verification phase. The second concerns the expressiveness of verification tools when modeling and verifying certain behaviors. This paper tackles these problems by showing how multi-agent systems, specified as hybrid automata, can be modeled and verified using constraint logic programming (CLP). In particular, a CLP framework is presented to show how the composition of multi-agent behaviors can be captured dynamically during the verification phase. This can alleviate the state space complexity that may occur as a result of the composition process. Additionally, the expressiveness of the CLP model flexibly allows not only modeling multi-agent systems, but also checking various properties by means of reachability analysis. Experiments show the feasibility of our approach and are promising.
Specifying behaviors of multi-agent systems (MASs) is a demanding task, especially when applied in safety-critical systems. In the latter, the specification of behaviors has to be carried out carefully in order to avoid side effects that might cause unwanted or even disastrous behaviors. Thus, formal methods based on mathematical models of the system under design are helpful. They not only allow us to formally specify the system at different levels of abstraction, but also to verify the consistency of the specified systems before implementing them. The formal specification aims at a precise and unambiguous description of the behavior of MASs, whereas the verification aims at proving the satisfaction of specified requirements. A behavior of an agent can be described as discrete changes of its states with respect to external or internal actions. Whenever an action occurs, the agent moves from one state to another. Therefore, an efficient way to model this type of discrete behavior is to use a kind of state transition diagram such as a finite automaton. One remarkable advantage of such transition diagrams is that they lend themselves to formal analysis techniques using model checking. The latter is an automatic verification technique which determines whether given properties are satisfied within a model underlying a particular system. In realistic physical environments, however, it is necessary to consider continuous behaviors in addition to the discrete behaviors of MASs. Examples of these types of behaviors include the movement of a soccer agent to kick off or to go to the ball, the process of putting out a fire by a fire brigade agent in a rescue scenario, or any other behavior that depends on a timed physical law. Traditional state transition diagrams are not sufficient to combine these types of behaviors. Hybrid automata offer an elegant method to capture them.
Hybrid automata extend regular state transition diagrams with means to deal with such continuous actions: the state transition diagrams are used to model the discrete changes of behaviors, while differential equations are used to model the continuous changes. The semantics of hybrid automata make them accessible to formal verification by means of model checking. The main goal of this thesis is to use hybrid automata for specifying and verifying behaviors of MASs. However, specifying and verifying behaviors of MASs by means of hybrid automata raises several issues that should be considered, including the complexity, modularity, and expressiveness of MAS models. This thesis addresses these issues and provides possible solutions to tackle them.
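The discrete core of such model checking is reachability analysis on the transition system. A minimal sketch (only the discrete skeleton of a hybrid automaton; the continuous dynamics and their differential equations are abstracted away here):

```python
# Reachability analysis on the discrete skeleton of an automaton:
# a safety property holds if no "unsafe" state is reachable.
from collections import deque

def reachable(transitions, start):
    """Breadth-first exploration of all states reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for succ in transitions.get(state, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Illustrative soccer-agent skeleton with state names made up for this sketch:
transitions = {
    "idle":       ["go_to_ball"],
    "go_to_ball": ["kick", "idle"],
    "kick":       ["idle"],
    "unsafe":     [],
}
# reachable(transitions, "idle") == {"idle", "go_to_ball", "kick"}
```

For hybrid automata proper, each exploration step additionally propagates constraints on the continuous variables, which is where the state space problem discussed above originates.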
Conventional security infrastructures in the Internet cannot be directly adopted for ambient systems, especially if these are based on short-range communication channels: personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a trust center, certified data has to be verified locally. Security techniques have to be adjusted to this special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiation, and user or device authentication. It supports three certificate levels with different balances between authenticity and anonymity. This PKI is currently being implemented as part of the iCity project.
Investigating the environmental fate of iodinated X-ray contrast media in the urban water cycle
(2010)
Iodinated X-ray contrast media (ICM) are a group of emerging contaminants which have been detected at elevated concentrations in the aquatic environment. These compounds are excreted unmetabolized into hospital wastewater and eventually treated at wastewater treatment plants (WWTPs). The removal of ICM in WWTPs has not been very effective, and therefore the ICM enter the aquatic environment via WWTP effluent discharges. Research has investigated the removal of selected ICM via abiotic and biotic processes; however, limited work has attempted to determine the fate of these compounds once released into the environment. This thesis investigates the biotransformation of four selected ICM (diatrizoate, iohexol, iomeprol, and iopamidol) in aerobic soil-water and sediment-water systems as well as in different environmental matrices. Iohexol, iomeprol and iopamidol were biotransformed to several transformation products (TPs) in the aerobic batch systems, while no biotransformation was observed for the ionic ICM, diatrizoate. In total, 34 biotransformation products of the three non-ionic ICM were identified. The combination of semi-preparative HPLC-UV and hybrid triple quadrupole-linear ion trap mass spectrometry (Qq-LIT-MS) was found to be an accurate approach for the structural elucidation of ICM TPs. The ICM TPs resulted from microbial transformation occurring at the side chains of the parent ICM, with the iodinated aromatic ring remaining unmodified.
The development of algorithms in the sense of algorithm engineering proceeds in cycles. The designed algorithm is analyzed theoretically and then implemented. After practical evaluation, the design is developed further based on the insights gained. Formal verification of the implementation, alongside the practical evaluation, can improve this development process. With the Java Modeling Language (JML) and the KeY tool, a simple specification language and a user-friendly, automated verification tool are available. This thesis examines to what extent the KeY tool is suitable for verifying more complex algorithms and what feedback the verification can yield for algorithm designers. The investigation is carried out on Dijkstra's algorithm for computing shortest paths in a graph. A concrete implementation of the standard algorithm and then implementations of further variants are to be verified. This imitates the development process of the algorithm, in order to look for possible feedback in each iteration. During the verification of the concrete implementation, we notice that it is necessary to first verify a more abstract implementation with simpler data structures. With the insights gained there, we can then continue the verification of the concrete implementation. The variants of the algorithm can also be verified thanks to the preceding verifications. The complexity of Dijkstra's algorithm causes the KeY tool some difficulties regarding performance, which is why we have to reduce the degree of automation somewhat during verification. On the other hand, it turns out that several kinds of feedback can be derived from the verification.
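For reference, the algorithm under verification is, in essence, the following (a Python sketch, not the thesis' Java/JML artifact; the asserts stand in loosely for the proof obligations a JML specification would state):

```python
# Dijkstra's single-source shortest paths with a binary heap.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}; weights must be >= 0."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already settled with smaller d
        # Key invariant established in the verification: at this point,
        # d is the final, minimal distance of `node` from `source`.
        for neighbor, weight in graph[node]:
            assert weight >= 0, "precondition: non-negative edge weights"
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
# dijkstra(g, "a") == {"a": 0, "b": 1, "c": 3}
```

The "more abstract implementation with simpler data structures" mentioned above would replace the heap with, say, a linear scan over unsettled nodes, which makes the loop invariants considerably easier to discharge.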