Model-Driven Engineering (MDE) aims to raise the level of abstraction in software system specifications and to increase automation in software development. Modelware technological spaces contain the languages and tools for MDE that software developers use to model systems and domains. Ontoware technological spaces contain ontology languages and technologies to design, query, and reason on knowledge. With the advent of the Semantic Web, ontologies are now being used within the field of software development as well. In this thesis, bridging technologies are developed to combine two technological spaces in general. Transformation bridges translate models between spaces, mapping bridges relate different models across two spaces, and integration bridges merge spaces into new, all-embracing technological spaces. API bridges establish interoperability between the tools used in the spaces. In particular, this thesis focuses on the combination of modelware and ontoware technological spaces. Subsequent to a sound comparison of languages and tools in both spaces, the integration bridge is used to build a common technological space, which allows for the hybrid use of languages and the interoperable use of tools. The new space allows for language and domain engineering. Ontology-based software languages may be designed in the new space, where syntax and formal semantics are defined with the support of ontology languages, and the correctness of language models is ensured by the use of ontology reasoning technologies. These languages represent a core means for exploiting expressive ontology reasoning in the software modeling domain, while remaining flexible enough to accommodate the varying needs of software modelers. Application domains are conceptually described by languages that allow for defining domain instances and types within one domain model. Integrated ontology languages may provide formal semantics for domain-specific languages, and ontology technologies allow for reasoning over types and instances in domain models. A scenario in which configurations for network device families are modeled illustrates the approaches discussed in this thesis. Furthermore, the implementation of all bridging technologies for the combination of technological spaces and of all tools for ontology-based language engineering and use is illustrated.
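To make the bridging idea more tangible, the following minimal Python sketch illustrates a transformation bridge in the spirit described above: a metamodel class from the modelware space is translated into OWL-style axioms in the ontoware space. The dictionary-based model representation, the triple notation, and the Router example are illustrative assumptions, not the implementation developed in the thesis.

    # Minimal sketch of a transformation bridge: a modelware element (a class
    # with typed attributes) is translated into ontoware axioms (OWL triples).
    # All names and the data layout are illustrative assumptions.

    def transform_class(model_class):
        """Translate one metamodel class into a list of OWL-style triples."""
        cls = model_class["name"]
        triples = [(cls, "rdf:type", "owl:Class")]
        # Each attribute becomes a datatype property with domain and range.
        for attr, xsd_type in model_class.get("attributes", {}).items():
            prop = "has" + attr.capitalize()
            triples.append((prop, "rdf:type", "owl:DatatypeProperty"))
            triples.append((prop, "rdfs:domain", cls))
            triples.append((prop, "rdfs:range", xsd_type))
        return triples

    # Example echoing the network device configuration scenario.
    router = {"name": "Router", "attributes": {"hostname": "xsd:string"}}
    for triple in transform_class(router):
        print(triple)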
Traditional Driver Assistance Systems (DAS), such as Lane Departure Warning Systems or the well-known Electronic Stability Program, have in common that their system and software architecture is static. This means that neither the number and topology of Electronic Control Units (ECUs) nor the presence and functionality of software modules changes after the vehicles leave the factory.
However, some future DAS do face changes at runtime. This is true, for example, for truck-and-trailer DAS, as their hardware components and software entities are spread over both parts of the combination. These new requirements cannot be met by state-of-the-art approaches to automotive software systems. Instead, a different technique for designing such Distributed Driver Assistance Systems (DDAS) needs to be developed. The main contribution of this thesis is the development of a novel software and system architecture for dynamically changing DAS, using the example of driving assistance for truck and trailer. This architecture has to be able to autonomously detect and handle changes within the topology. In order to do so, the system decides which degree of assistance and which types of HMI can be offered every time a trailer is connected or disconnected. Therefore, an analysis of the available software and hardware components, as well as a determination of possible assistance functionality and a re-configuration of the system, takes place. Such adaptation can be achieved by the principles of Service-oriented Architecture (SOA). In this architectural style, all functionality is encapsulated in self-contained units, so-called Services. These Services offer their functionality through well-defined interfaces whose behavior is described in contracts. Using these Services, large-scale applications can be built and adapted at runtime. This thesis describes the research conducted towards achieving these goals by introducing Service-oriented Architectures into the automotive domain. SOA deals with the high degree of distribution, the demand for re-usability, and the heterogeneity of the needed components.
It also applies automatic re-configuration in the event of a system change. Instead of adapting one of the available frameworks to this scenario, the main principles of Service-orientation are picked up and tailored. This leads to the development of the Service-oriented Driver Assistance (SODA) framework, which implements the benefits of Service-orientation while ensuring compatibility and compliance with automotive requirements, best practices, and standards. Within this thesis, several state-of-the-art Service-oriented frameworks are analyzed and compared. Furthermore, the SODA framework as well as all its different aspects regarding the automotive software domain are described in detail. These aspects include a well-defined reference model that introduces and relates terms and concepts and defines an architectural blueprint. Furthermore, some of the modules of this blueprint, such as the re-configuration module and the Communication Model, are presented in full detail. In order to demonstrate the compliance of the framework with state-of-the-art automotive software systems, a development process respecting today's best practices in automotive design procedures, as well as the integration of SODA into the AUTOSAR standard, are discussed. Finally, the SODA framework is used to build a full-scale demonstrator in order to evaluate its performance and efficiency.
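As a hedged illustration of the Service-orientation principle described above, the following Python sketch shows self-contained services with declared provided and required capabilities, and a re-configuration step that recomputes the offered assistance whenever the topology changes (trailer connected or disconnected). The service names, capability strings, and matching rule are assumptions for illustration, not the SODA framework's actual interfaces or contracts.

    # Sketch: services declare provided and required capabilities; on every
    # topology change, re-configuration determines which assistance functions
    # can currently be offered. Names and the matching rule are illustrative.

    class Service:
        def __init__(self, name, provides, requires=frozenset()):
            self.name = name
            self.provides = set(provides)   # capabilities this service offers
            self.requires = set(requires)   # capabilities it needs from others

    def reconfigure(available_services):
        """Return the services whose requirements are met by the current topology."""
        offered = set()
        for service in available_services:
            offered |= service.provides
        return [s.name for s in available_services if s.requires <= offered]

    truck = [Service("BrakeControl", {"braking"}),
             Service("LaneKeeping", {"steering_hint"}, {"braking"})]
    trailer = [Service("TrailerAngleSensor", {"articulation_angle"}),
               Service("ReverseAssist", {"reverse_guidance"},
                       {"articulation_angle", "braking"})]

    print(reconfigure(truck))            # trailer disconnected
    print(reconfigure(truck + trailer))  # trailer connected: more assistance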
The term “Software Chrestomathy” is defined as a collection of software systems meant to be useful in learning about or gaining insight into software languages, software technologies, software concepts, programming, and software engineering. The 101companies software chrestomathy is a community project with the attributes of a Research 2.0 infrastructure for various stakeholders in software language and technology communities. The core of 101companies combines a semantic wiki and confederated open-source repositories. We designed and developed an integrated ontology-based knowledge base about software languages and technologies. The knowledge is created by the community of contributors and supported with a running example and structured documentation. The complete ecosystem is exposed by using Linked Data principles and equipped with additional metadata about individual artifacts. Within the context of the software chrestomathy, we explored a new type of software architecture, the linguistic architecture, which targets the language and technology relationships within a software product and is based on megamodels. Our approach to documenting software systems is highly structured and makes use of the concepts of the newly developed megamodeling language MegaL. We “connect” an emerging ontology with the megamodeling artifacts to raise the cognitive value of the linguistic architecture.
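The following small Python sketch illustrates the megamodeling idea behind a linguistic architecture: typed entities (languages, technologies, artifacts) and relationships between them. It mirrors the flavour of MegaL declarations but does not use MegaL syntax; the concrete entities and relations below are assumptions for illustration, not taken from 101companies.

    # Toy megamodel: typed entities and relationships capture how languages,
    # technologies, and artifacts of a software product relate to each other.

    entities = {
        "Java": "Language",
        "HelloWorld.java": "Artifact",
        "javac": "Technology",
    }

    relationships = [
        ("HelloWorld.java", "elementOf", "Java"),  # the file is written in Java
        ("javac", "implements", "Java"),           # the compiler implements Java
    ]

    def related(entity):
        """List all relationships in which the given entity participates."""
        return [r for r in relationships if entity in (r[0], r[2])]

    print(related("Java"))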
The initial goal of this dissertation was the determination of image-based biomarkers sensitive to neurodegenerative processes in the human brain. One such process is the demyelination of neural cells, characteristic of Multiple Sclerosis (MS), the most common neurological disease in young adults, for which there is no cure yet. Conventional MRI techniques are very effective in localizing areas of brain tissue damage and are thus a reliable tool for the initial MS diagnosis. However, a mismatch between the clinical findings and the visualized areas of damage is observed, which renders the use of standard MRI difficult for objective disease monitoring and therapy evaluation. To address this problem, a novel algorithm for the fast mapping of myelin water content using standard multi-echo gradient echo acquisitions of the human brain is developed in the current work. The method extends a previously published approach for the simultaneous measurement of brain T1, T2*, and total water content. Employing the multiexponential T2* decay signal of myelinated tissue, myelin water content is measured based on the quantification of two water pools (myelin water and rest) with different relaxation times. Whole-brain in vivo myelin water content maps are acquired in 10 healthy controls and one subject with MS. The in vivo results obtained are consistent with previous reports. The acquired quantitative data have a high potential in the context of MS. However, the parameters estimated in a multiparametric acquisition are correlated and therefore constitute an ill-posed, nontrivial data analysis problem. Motivated by this specific problem, a new data clustering approach called Nuclear Potential Clustering (NPC) is developed. It is suitable for the explorative analysis of arbitrary-dimensional and possibly correlated data without a priori assumptions about its structure. The developed algorithm is based on a concept adapted from nuclear physics. To partition the data, the dynamic behavior of electrically equally charged nucleons interacting in a d-dimensional feature space is modeled. An adaptive nuclear potential, comprised of a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb potential), is assigned to each data point. Thus, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single-point clusters are repelled (noise). The algorithm is optimized and tested in an extensive study with a series of synthetic datasets as well as the Iris data. The results show that it can robustly identify clusters even when complex configurations and noise are present. Finally, to address the initial goal, quantitative MRI data of 42 patients are analyzed employing NPC. A series of experiments with different sets of image-based features shows a consistent grouping tendency: younger patients with a low disease grade are recognized as cohesive clusters, while those of higher age and impairment are recognized as outliers. This allows for the definition of a reference region in a feature space associated with phenotypic data. Tracking of an individual's position therein can disclose patients at risk and be employed for therapy evaluation.
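The two-pool signal model mentioned above can be written compactly; the following LaTeX form is a generic formulation of a biexponential T2* decay with a fast-relaxing myelin water pool and a slower residual pool, the myelin water content being proportional to the relative amplitude of the fast pool. The notation is standard and not necessarily the exact parametrization used in the thesis.

    % Biexponential T2* decay of myelinated tissue (generic notation):
    S(t) = A_{\mathrm{my}}\, e^{-t/T^{*}_{2,\mathrm{my}}} + A_{\mathrm{rest}}\, e^{-t/T^{*}_{2,\mathrm{rest}}},
    \qquad
    \mathrm{MWC} \propto \frac{A_{\mathrm{my}}}{A_{\mathrm{my}} + A_{\mathrm{rest}}}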
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems have to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, like automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the problems which have to be overcome to reach the afore-mentioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive, but it has to be performed only once, and as soon as its result is used to answer many queries, the expensive compilation step pays off. In the thesis at hand, I develop a normal form, called linkless normal form, suitable for knowledge compilation for description logic knowledge bases. From a computational point of view, the linkless normal form has very nice properties, which are introduced in this thesis.
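As a toy illustration of the compile-once/query-many idea from propositional knowledge compilation (not the linkless normal form itself), the following Python sketch performs the expensive step once by enumerating all models of a small propositional knowledge base and then answers entailment queries for literals by a cheap scan over the stored models. The clause encoding and the example knowledge base are assumptions for illustration.

    # Toy knowledge compilation: enumerate all models of a propositional KB once
    # (expensive), then answer many literal-entailment queries cheaply.
    from itertools import product

    def compile_models(kb_clauses, variables):
        """Expensive one-off step: enumerate all satisfying assignments of the KB."""
        result = []
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(any(assignment[var] == sign for var, sign in clause)
                   for clause in kb_clauses):
                result.append(assignment)
        return result

    def entails(compiled, literal):
        """Cheap query step: a literal is entailed if it holds in every model."""
        var, sign = literal
        return all(model[var] == sign for model in compiled)

    # KB: a AND (not a OR c); clauses are lists of (variable, polarity) pairs.
    kb = [[("a", True)], [("a", False), ("c", True)]]
    compiled = compile_models(kb, ["a", "b", "c"])
    print(entails(compiled, ("c", True)))  # True: c follows from the KB
    print(entails(compiled, ("b", True)))  # False: b is not determined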
For the second problem, I focus on changes occurring on the instance level of description logic knowledge bases. I introduce three change operators of interest for these knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that, in all three cases, the resulting knowledge base is guaranteed to be consistent and the changes performed to the knowledge base are minimal. This allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues, I suggest adapting techniques successfully used in other logics in order to obtain promising methods for description logic knowledge bases.
The provision of electronic participation services (e-participation) is a complex socio-technical undertaking that needs comprehensive design and implementation strategies. E-participation service providers, in most cases administrations and governments, struggle with changing requirements that demand more transparency, better connectivity, and increased collaboration among different actors. At the same time, fewer staff are available. As a result, recent research assesses only a minority of e-participation services as successful. The challenge is that the e-participation domain lacks comprehensive approaches to design and implement (e-)participation services. Enterprise Architecture (EA) frameworks have evolved in information systems research as an approach to guide the development of complex socio-technical systems. This approach can guide the design and implementation of such services if the collection of organisations with the commonly held goal of providing participation services is understood as an E-Participation Enterprise (EE). However, research and practice in the e-participation domain have not yet exploited EA frameworks. Consequently, the problem scope that motivates this dissertation is the existing gap in research on deploying EA frameworks in e-participation design and implementation. The research question that drives this research is: What methodical and technical guides do architecture frameworks provide that can be used to design and implement better and successful e-participation?
This dissertation presents a literature study showing that existing approaches have not yet covered the challenges of comprehensive e-participation design and implementation. Accordingly, the research moves on to investigate established EA frameworks such as the Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate for their use. While the application of these frameworks in e-participation design and implementation is possible, an integrated approach has been lacking so far. The synthesis of the literature review and practical insights into the design and implementation of e-participation services from four projects shows the challenges of adapting architecture frameworks to this domain. However, the research also shows the potential of a combination of the different approaches. Consequently, the research moves on to develop the E-Participation Architecture Framework (EPART-Framework). To this end, the dissertation applies design science research, including literature review and action research. Two independent settings test an initial EPART-Framework version. The results yield the EPART-Framework presented in this dissertation.
The EPART-Framework comprises the EPART-Metamodel with six EPART-Viewpoints, which frame the stakeholder concerns: the Participation Scope, the Participant Viewpoint, the Participation Viewpoint, the Data & Information Viewpoint, the E-participation Viewpoint, and the Implementation & Governance Viewpoint. The EPART-Method supports the stakeholders in designing the EE and implementing e-participation, and it stores its output in an architecture description and a solution repository. It consists of five consecutive phases accompanied by requirements management: Initiation, Design, Implementation and Preparation, Participation, and Evaluation. The EPART-Framework fills the gap between the e-participation domain and the enterprise architecture framework domain. The evaluation gives reasonable evidence that the framework is a valuable addition in academia and in practice to improve e-participation design and implementation. At the same time, it shows opportunities for future research to extend and advance the framework.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It makes a crucial contribution to in vivo examinations of neuronal connections: the local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered intervention-related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments to model the MRI signal because of its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions has given rise to an advanced reconstruction scheme known as high angular resolution diffusion imaging (HARDI). HARDI has received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
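For reference, the standard single-tensor signal model underlying DTI can be stated as follows; this is the textbook formulation, not a result specific to this thesis. A single symmetric, positive-definite tensor D per voxel cannot represent crossing fiber populations, which is the limitation that motivates HARDI.

    % Standard DTI signal model for diffusion weighting b and gradient direction g:
    S(b, \mathbf{g}) = S_{0}\, \exp\!\left(-b\, \mathbf{g}^{\top} \mathbf{D}\, \mathbf{g}\right)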
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool with consideration of neuronal pathways was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
Modern Internet and intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data, providing improved flexibility. The gain in flexibility also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperation between organizations, accounting for the processing of data and restricting actions performed on the data may be legally and contractually required. In the Internet and intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting the data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution supporting a data-centric view on the distributed data processing. We present a solution for provenance-aware policies and a formal specification of obligations offering a procedure to decide whether obligatory processing steps can be met in the future.
This dissertation investigates the usage of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers for questions phrased in a natural language. Commonly, they utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This involves challenges in that the contrary approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and the quality of the knowledge bases in QA. The dissertation describes modifications to ATP systems in order to overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval, and knowledge representation, together forming the QA system LogAnswer.
Also presented are additional extensions to the prover implementation and the underlying calculi which go beyond enhancing the reasoning strength of QA systems by giving access to external knowledge sources like web services. These allow the prover to fill gaps in the knowledge during the derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them can be useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
In recent years, software engineering research has shown a rising interest in empirical studies. Such studies are often based on empirical evidence derived from corpora - collections of software artifacts. While there are established forms of carrying out empirical research (experiments, case studies, surveys, etc.), the common task of preparing the underlying collection of software artifacts is typically addressed in an ad hoc manner.
In this thesis, by means of a literature survey, we show how frequently software engineering research employs software corpora and, using a developed classification scheme, we discuss their characteristics. Addressing the lack of methodology, we suggest a method of corpus (re-)engineering and apply it to an existing collection of Java projects.
We report two extensive empirical studies, where we perform a broad and diverse range of analyses on the language for privacy preferences (P3P) and on object-oriented application programming interfaces (APIs). In both cases, we are driven by the data at hand, by the corpus itself, discovering the actual usage of the languages.