Nowadays, almost any IT system involves personal data processing. In such systems, many privacy risks arise when privacy concerns are not properly addressed from the early phases of the system design. The General Data Protection Regulation (GDPR) prescribes the Privacy by Design (PbD) principle. At its core, PbD obliges protecting personal data from the onset of the system development, by effectively integrating appropriate privacy controls into the design. To operationalize the concept of PbD, a set of challenges emerges: First, we need a basis to define privacy concerns. Without such a basis, we are not able to verify whether personal data processing is authorized. Second, we need to identify where precisely in a system the controls have to be applied. This calls for a system analysis concerning privacy concerns. Third, with a view to selecting and integrating appropriate controls based on the results of the system analysis, a mechanism to identify the privacy risks is required. Mitigating privacy risks is at the core of the PbD principle. Fourth, choosing and integrating appropriate controls into a system are complex tasks that, besides the risks, have to consider potential interrelations among privacy controls and the costs of the controls.
This thesis introduces a model-based privacy by design methodology to handle the above challenges. Our methodology relies on a precise definition of privacy concerns and comprises three sub-methodologies: model-based privacy analysis, model-based privacy impact assessment, and privacy-enhanced system design modeling. First, we introduce a definition of privacy preferences, which provides a basis to specify privacy concerns and to verify whether personal data processing is authorized. Second, we present a model-based methodology to analyze a system model. The results of this analysis denote a set of privacy design violations. Third, taking into account the results of the privacy analysis, we introduce a model-based privacy impact assessment methodology to identify concrete privacy risks in a system model. Fourth, concerning the risks, and taking into account the interrelations and the costs of the controls, we propose a methodology to select appropriate controls and integrate them into a system design. Using various practical case studies, we evaluate our concepts, showing a promising outlook on the applicability of our methodology in real-world settings.
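As a rough, hypothetical illustration of the first step (the thesis defines privacy preferences on system models, not in code), the following Python sketch shows how an authorization check against a data subject's stated preferences might look. All names, data categories and purposes here are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a privacy preference states which purposes a
# data subject permits for each category of personal data.
@dataclass
class PrivacyPreference:
    subject: str
    allowed: dict[str, set[str]] = field(default_factory=dict)  # category -> permitted purposes

def is_authorized(pref: PrivacyPreference, category: str, purpose: str) -> bool:
    """A processing step is authorized only if the subject's preference
    explicitly permits this purpose for this data category."""
    return purpose in pref.allowed.get(category, set())

pref = PrivacyPreference("alice", {"contact-data": {"billing"}})
print(is_authorized(pref, "contact-data", "billing"))    # True
print(is_authorized(pref, "contact-data", "marketing"))  # False -> design violation
```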
Software systems have an increasing impact on our daily lives. Many systems process sensitive data or control critical infrastructure, so providing secure software is indispensable. Due to the high costs and effort involved, such systems are rarely replaced. Oftentimes, systems that were planned and implemented to be secure become insecure because their context evolves. These systems are connected to the Internet and are therefore constantly subject to new types of attacks. The security requirements of these systems remain unchanged, while, for example, the discovery of a vulnerability in an encryption algorithm previously assumed to be secure requires a change to the system design. Some security requirements cannot be checked against the system's design but only at run time. Furthermore, the sudden discovery of a security violation requires an immediate reaction to prevent a system shutdown. Knowledge regarding security best practices, attacks, and mitigations is generally available, yet it is rarely an integral part of software development and rarely covers evolution.
This thesis examines how the security of long-living software systems can be preserved while taking the influence of context evolutions into account. The goal of the proposed approach, S²EC²O, is to recover the security of model-based software systems using co-evolution.
An ontology-based knowledge base is introduced, capable of managing common as well as system-specific knowledge relevant to security. A model transformation connects the knowledge base to the UML system model. Context knowledge evolutions are detected by means of semantic differences, knowledge inference, and the detection of inconsistencies in the knowledge base.
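As a rough sketch of the detection idea (not the thesis's actual implementation), two snapshots of the security context knowledge can be represented as RDF graphs and compared; triples that appear or disappear indicate context evolutions. Python with rdflib and an invented namespace are assumed here purely for illustration.

```python
from rdflib import Graph, Namespace, Literal

SEC = Namespace("http://example.org/security#")  # hypothetical namespace

# Two snapshots of the context knowledge base: before and after an evolution.
old, new = Graph(), Graph()
old.add((SEC.SHA1, SEC.hasStatus, Literal("secure")))
new.add((SEC.SHA1, SEC.hasStatus, Literal("broken")))  # e.g., a newly published attack

# A semantic difference at the triple level reveals context knowledge evolutions.
added = set(new) - set(old)
removed = set(old) - set(new)
for s, p, o in added:
    print(f"context evolution detected: {s} {p} {o}")  # would trigger co-evolution rules
```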
A catalog of rules for managing and recovering security requirements uses the detected context evolutions to propose potential co-evolutions of the system model that re-establish compliance with the security requirements.
S²EC²O uses security annotations to link models and executable code, and provides support for run-time monitoring. The adaptation of running systems is considered, as is round-trip engineering, which feeds insights gained at run time back into the system model.
S²EC²O is complemented by prototypical tool support. This tool is used to show S²EC²O's applicability in a case study targeting the medical information system iTrust.
The thesis at hand contributes to the development and maintenance of long-living software systems with regard to their security. The proposed approach aids security experts: it detects security-relevant changes to the system context, determines the impact on the system's security, and facilitates co-evolutions that recover compliance with the security requirements.
Development of a Path-Following Control Method for a Model Vehicle with a Semi-Trailer
(2009)
Besides the progressive automation of internal goods traffic, there is another important area to consider: the carriage of goods in selected external areas. The use of driverless trucks in logistics centers can yield economic benefits. Such precise control procedures in particular require that trucks drive on predetermined paths. The general aim of this work is the adaptation and evaluation of a path-following control method for articulated vehicles. The differences in kinematic behavior between trucks with a one-axle trailer and semi-trailer vehicles are emphasized, and the characteristic kinematic properties of semi-trailers are considered when adapting the control procedure, which was originally designed for trucks with a one-axle trailer. It must work for both forward and backward movements. The control process is integrated as a self-contained component into the control software of the model vehicle. To this end, the geometry of the model vehicle is specified, and the possible special cases of the control process are identified. The work also documents the most relevant software components of the implemented control process.
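For illustration only, a minimal kinematic model of a tractor with a single trailer (on-axle hitch) can be simulated as below. The semi-trailer case studied in the thesis differs mainly in the hitch placement (fifth wheel ahead of the rear axle), which this sketch does not model; all parameter values are assumed.

```python
import math

def step(state, v, phi, L0=3.0, L1=6.0, dt=0.01):
    """One Euler step of a kinematic tractor-trailer model (on-axle hitch).
    state = (x, y, theta0, theta1): tractor pose and trailer orientation;
    v = longitudinal speed, phi = steering angle; L0, L1 are wheelbases."""
    x, y, th0, th1 = state
    x += v * math.cos(th0) * dt
    y += v * math.sin(th0) * dt
    th0 += v / L0 * math.tan(phi) * dt
    th1 += v / L1 * math.sin(th0 - th1) * dt  # trailer swings toward tractor heading
    return (x, y, th0, th1)

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(1000):  # 10 s of driving with constant steering
    state = step(state, v=1.0, phi=0.1)
print(tuple(round(s, 2) for s in state))
```

Backward motion (v < 0) makes the trailer angle dynamics unstable, which is why dedicated path-following control, as addressed in the work, is required for reversing.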
The STOR project aims at the development of a scientific component system employing models and knowledge for object recognition in images. This interim report elaborates on the requirements for such a component system, structures the application area by identifying a large set of basic operations, and shows how a set of appropriate data structures and components can be derived. A small case study exemplifies the approach.
101worker is the modular knowledge engineering component of the 101companies project. It has developed maintainability and performance problems due to growing organically rather than following best software design practices. This thesis lays out these problems, drafts a set of requirements for refactoring the system, and then describes and analyzes the resulting implementation. The solution involves the collation of scattered and redundant information, the setup of unit and functional test suites, and the incrementalization of the bus architecture of 101worker.
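The thesis details the actual incrementalization; purely as a sketch of the underlying idea, the snippet below re-runs a bus module only when its input files have changed since the last pass, under the assumption that modules are pure functions of their inputs. All names are hypothetical.

```python
import hashlib
import json
import pathlib

STATE = pathlib.Path("bus_state.json")  # hypothetical cache of input fingerprints

def fingerprint(paths):
    """Hash the contents of a module's input files."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(pathlib.Path(p).read_bytes())
    return h.hexdigest()

def run_incrementally(module_name, inputs, run):
    """Run a bus module only if its inputs changed since the previous pass."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    fp = fingerprint(inputs)
    if state.get(module_name) == fp:
        return  # inputs unchanged: skip this module entirely
    run()
    state[module_name] = fp
    STATE.write_text(json.dumps(state))
```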
Over the past few decades, society's dependence on software systems has grown significantly. These systems are utilized in nearly every aspect of life today and often handle sensitive, private data. This situation has turned software security analysis into an essential and widely researched topic in the field of computer science. Researchers in this field tend to assume that the quality of a software system's code directly affects the possibility of security gaps arising in it. Because this assumption is based on properties of the code, proving it true would mean that security assessments can be performed on software even before a certain version of it is released.
A previous study has already attempted to mathematically assess the existence of such a correlation, studying it based on quality and security metric calculations. The present study builds upon that work by finding an automatic method for choosing well-fitted software projects as a sample for this correlation analysis, and it extends the variety of projects considered for it. This thesis also introduces the automatic generation of graphical representations, both for the correlations between the metrics and for their evolution. With these improvements, this thesis verifies the results of the previous study with a different and broader project input. It further analyzes the correlations between the quality and security metrics and real-world vulnerability data metrics. The data is extracted and evaluated from dedicated software vulnerability information sources and serves to represent the existence of proven security weaknesses in the studied software. The study discusses some of the difficulties that arise when trying to gather such information and link it to the information contained in the repositories of the studied projects.
This thesis confirms the significant influence that quality metrics have on each other. It also shows that it is important to view them together as a whole, and it suggests that their correlation could influence the appearance of unwanted vulnerabilities as well. One of the important conclusions I can draw from this thesis is that visualizing metric evolution graphs helps in understanding the values, as well as their connection to each other, in a more meaningful way; it allows a better grasp of their influence on each other than studying their correlation values alone. This study confirms that studying metric correlations and evolution trends can help developers improve their projects and keep them from becoming difficult to extend and maintain, increasing the potential for good quality as well as more secure software code.
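The abstract does not name the statistical method used; as an illustrative assumption, a rank correlation between a per-release quality metric and a vulnerability-count metric could be computed as below (scipy's spearmanr is a common choice for such ordinal, non-normal data). The numbers are invented.

```python
from scipy.stats import spearmanr

# Hypothetical per-release series for one project: a quality metric
# (e.g., average cyclomatic complexity) and the number of reported CVEs.
complexity = [4.1, 4.3, 4.8, 5.2, 5.9, 6.4]
cve_count = [0, 1, 1, 2, 4, 5]

rho, p_value = spearmanr(complexity, cve_count)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```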
Mapping ORM to TGraph
(2017)
Object Role Modeling (ORM) is a semantic modeling language used to describe objects and the relations among them. Both objects and relations may be subject to rules, so-called ORM constraints.
TGraphs are ordered, attributed, typed and directed graphs. The type of a TGraph and its components, the edges and vertices, is defined using the schema language graph UML (grUML), a profiled version of UML class diagrams. The goal of this thesis is to map ORM schemas to grUML schemas in order to be able to represent ORM schema instances as TGraphs.
Up to this point, the preferred representation for ORM schema instances has been in the form of relational tables. Though mappings from ORM schemas to relational schemas exist, those publicly available do not support most of the constraints ORM has to offer.
Constraints can be added to grUML schemas using the TGraph query language GReQL, which can efficiently check whether a TGraph satisfies a constraint or not. The graph library JGraLab provides efficient implementations of TGraphs and their query language GReQL, and supports the generation of grUML schemas.
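As a rough illustration of the idea (not actual GReQL or JGraLab code), the Python sketch below checks an ORM-style uniqueness constraint over a toy edge list standing in for a TGraph: each object may play a given role at most once. In the thesis, this kind of check would be expressed as a GReQL query over the graph.

```python
from collections import Counter

# Toy stand-in for a TGraph: fact edges as (subject, role, object) triples.
facts = [
    ("alice", "hasPassport", "P123"),
    ("bob", "hasPassport", "P456"),
    ("alice", "hasPassport", "P789"),  # violates the uniqueness constraint below
]

def uniqueness_violations(facts, role):
    """ORM-style uniqueness constraint: each subject plays `role` at most once."""
    counts = Counter(s for s, r, _ in facts if r == role)
    return [s for s, n in counts.items() if n > 1]

print(uniqueness_violations(facts, "hasPassport"))  # ['alice']
```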
The first goal of this work is to perform a complete mapping from ORM schemas to grUML schemas, using GReQL to specify constraints. The second goal is to represent ORM instances in the form of TGraphs.
This work gives an overview of ORM, TGraphs, grUML and GReQL, and presents the theoretical mapping from ORM schemas to grUML schemas. It also describes the implementation of this mapping, deals with the representation of ORM schema instances as TGraphs, and discusses how grUML constraints can be validated.
Probability propagation nets
(2008)
This work introduces a Petri net representation for the propagation of probabilities and likelihoods, which can be applied to probabilistic Horn abduction, fault trees, and Bayesian networks. These so-called "probability propagation nets" increase the transparency of propagation processes by integrating structural and dynamical aspects into one homogeneous representation. It is shown by means of popular examples that probability propagation nets improve the understanding of propagation processes - especially with respect to the Bayesian propagation algorithms - and thus are well suited for the analysis and diagnosis of probabilistic models. Representing fault trees with probability propagation nets transfers these possibilities to the modeling of technical systems.
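To make the propagation idea concrete, consider a generic two-node Bayesian network (fault -> symptom; this example is not from the thesis, and all probabilities are invented). The snippet below propagates the observed evidence through the likelihood P(symptom | fault) to obtain a posterior via Bayes' rule; a probability propagation net would represent exactly these multiplications and summations as Petri net transitions.

```python
# Generic two-node Bayesian network: fault -> symptom.
p_fault = 0.01                          # prior P(fault)
p_sym_given = {True: 0.9, False: 0.1}   # likelihood P(symptom | fault)

# Likelihood propagation: observe the symptom, compute posterior P(fault | symptom).
joint_fault = p_sym_given[True] * p_fault
joint_no_fault = p_sym_given[False] * (1 - p_fault)
posterior = joint_fault / (joint_fault + joint_no_fault)
print(f"P(fault | symptom) = {posterior:.3f}")  # ~ 0.083
```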
Data-minimization and fairness are fundamental data protection requirements to avoid privacy threats and discrimination. Violations of data protection requirements often result from: first, conflicts between security, data-minimization and fairness requirements; second, data protection requirements for the organizational and technical aspects of a system that are currently dealt with separately, giving rise to misconceptions and errors; third, hidden data correlations that might introduce bias against protected characteristics of individuals, such as ethnicity, in decision-making software. For the effective assurance of data protection needs, it is important to avoid sources of violations right from the design modeling phase. However, a model-based approach that addresses the issues above is missing.
To handle the issues above, this thesis introduces a model-based methodology called MoPrivFair (Model-based Privacy & Fairness). MoPrivFair comprises three sub-frameworks: first, a framework that extends the SecBPMN2 approach to detect conflicts between security, data-minimization and fairness requirements; second, a framework for enforcing integrated data-protection management throughout the development process, based on a business process model (i.e., a SecBPMN2 model) and a software architecture model (i.e., a UMLsec model) annotated with data protection requirements, while establishing traceability; third, the UML extension UMLfair to support individual fairness analysis and the reporting of discriminatory behaviors. Each of the proposed frameworks comes with automated tool support.
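The actual conflict detection operates on annotated SecBPMN2 models; purely as a sketch of the idea, the snippet below flags a task whose annotations demand both accountability (a security requirement) and anonymity (a data-minimization requirement), a classic conflicting pair. The conflict table and all task names are hypothetical.

```python
# Hypothetical pairs of requirement annotations that conflict when
# attached to the same business process task.
CONFLICTS = {("accountability", "anonymity"), ("auditability", "data-minimization")}

tasks = {
    "SubmitClaim": {"accountability", "anonymity"},  # conflicting pair
    "ReviewClaim": {"confidentiality"},
}

def detect_conflicts(tasks):
    """Report tasks annotated with a known-conflicting requirement pair."""
    for task, anns in tasks.items():
        for a, b in CONFLICTS:
            if a in anns and b in anns:
                yield task, (a, b)

for task, pair in detect_conflicts(tasks):
    print(f"conflict in {task}: {pair[0]} vs. {pair[1]}")
```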
We validated the applicability of our conflict detection technique with a health care management case study, and its usability with an experimental user study. Based on an air traffic management case study, we reported on the applicability of our technique for enforcing integrated data-protection management. We validated the applicability of our individual fairness analysis technique using three case studies featuring a school management system, a delivery management system and a loan management system. The results show a promising outlook on the applicability of our proposed frameworks in real-world settings.