We examine the systematic under-recognition of female scientists (the Matilda effect) by exploring the citation network of papers published in the American Physical Society (APS) journals. Our analysis shows that articles written by men (as first author, last author, or the dominant gender among authors) receive more citations than similar articles written by women, after controlling for the journal of publication, year of publication, and content of the publication. We used the statistical significance of the overlap between reference lists as the measure of similarity between articles. In addition, we found that men are less likely to cite articles written by women, and women are less likely to cite articles written by men. Because the majority of authors who published in APS journals are male (85%), this pattern leads to articles written by men receiving more citations than similar articles written by women. We also observed that the Matilda effect is reduced when articles are published in the journals with the highest impact factors; in other words, readers' evaluation of articles published in these journals is not significantly affected by the gender of the authors. Finally, we suggest a method that editors of academic journals can apply to reduce this evaluation bias to some extent: editors can use our proposed method to identify missing citations and complete bibliographies. This policy can reduce the bias because we observed that papers written by female scholars (first author, last author, dominant gender of authors) miss more citations than articles written by male scholars.
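One natural formalization of the reference-overlap similarity measure described above is a hypergeometric test: how surprising is it that two papers, each citing a few dozen references out of a large pool of citable papers, share a given number of references? The sketch below is illustrative; the pool size, list sizes, and function name are assumptions, not details from the paper.

```python
from math import comb

def overlap_p_value(pool, k1, k2, shared):
    """P(X >= shared) where X ~ Hypergeom(pool, k1, k2): the chance that two
    random reference lists of sizes k1 and k2, drawn from a pool of `pool`
    citable papers, share at least `shared` items."""
    total = comb(pool, k2)
    p = 0.0
    for m in range(shared, min(k1, k2) + 1):
        p += comb(k1, m) * comb(pool - k1, k2 - m) / total
    return p

# Two papers with 30 references each, drawn from a pool of 10,000 papers:
# even 3 shared references is already highly unlikely by chance.
print(overlap_p_value(10_000, 30, 30, 3))
```

A small p-value flags the pair as "similar" in the sense used above: their bibliographies overlap far more than chance would predict.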
The development of a game engine is considered a non-trivial problem [3]. The architecture of such simulation software must be able to manage large numbers of simulation objects in real time while dealing with "crosscutting concerns" [3, p. 36] between subsystems. The use of object-oriented paradigms to model simulation objects in class hierarchies has been reported as incompatible with constantly changing demands during game development [2, p. 9], resulting in anti-patterns and eventual, messy refactoring [13].
Alternative architectures using data-oriented paradigms revolving around object composition and aggregation have been proposed as a result [13, 9, 1, 11].
This thesis describes the development of such an architecture with the explicit goals of being simple, being inherently compatible with data-oriented design, and making reasoning about performance characteristics possible. Concepts are formally defined to help analyze the problem and evaluate results. A functional implementation of the architecture is presented together with use cases common to simulation software.
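The composition-based alternative to class hierarchies can be illustrated with a minimal entity-component sketch: entities are plain ids, data lives in per-component stores, and systems iterate over whichever entities carry the components they need. All names here are illustrative, not the thesis's actual design.

```python
class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component name -> {entity id -> data}

    def spawn(self, **components):
        """Create an entity by composing components, not by subclassing."""
        eid = self.next_id
        self.next_id += 1
        for name, data in components.items():
            self.components.setdefault(name, {})[eid] = data
        return eid

    def query(self, *names):
        """Yield (entity id, data...) for entities owning all named components."""
        stores = [self.components.get(n, {}) for n in names]
        for eid in stores[0]:
            if all(eid in s for s in stores[1:]):
                yield (eid, *(s[eid] for s in stores))

def movement_system(world, dt):
    # Operates on any entity with position and velocity, whatever else it has.
    for eid, pos, vel in world.query("position", "velocity"):
        pos[0] += vel[0] * dt
        pos[1] += vel[1] * dt

world = World()
player = world.spawn(position=[0.0, 0.0], velocity=[1.0, 2.0])
scenery = world.spawn(position=[5.0, 5.0])  # no velocity: skipped by movement
movement_system(world, dt=0.5)
print(world.components["position"][player])  # → [0.5, 1.0]
```

Adding a new kind of object is a matter of composing a different set of components at `spawn` time, which is the flexibility the cited data-oriented proposals aim for.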
In this thesis, the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is multiple times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
For a comprehensive understanding of evolutionary processes and for providing reliable prognoses about the future consequences of environmental change, it is essential to reveal the genetic basis underlying adaptive responses. The importance of this goal increases in light of ongoing climate change, which confronts organisms worldwide with new selection pressures and requires rapid evolutionary change to avoid local extinction. Freshwater ectotherms such as daphnids are particularly threatened. Unraveling the genetic basis of local adaptation is complicated by the interplay of forces affecting patterns of genetic divergence among populations. Due to their key position in freshwater communities, their cyclic parthenogenetic mode of reproduction, and their resting propagules (which form biological archives), daphnids are particularly suited for this purpose.
The aim of this thesis was to assess the impact of local thermal selection on the Daphnia longispina complex and to reveal the underlying genetic loci. To this end, I compared genetic differentiation among populations containing Daphnia galeata, Daphnia longispina and their interspecific hybrids across time, space, and species boundaries. I revealed strongly contrasting patterns of genetic differentiation between selectively neutral and functional candidate gene markers, between the two species, and among samples from different lakes, suggesting (together with a correlation with habitat temperatures) local thermal selection acting on candidate gene TRY5F and indicating adaptive introgression. To reveal the candidate genes' impact on fitness, I performed association analyses between genotype data and phenotypic traits of D. galeata clones from seven populations. The tests revealed a general temperature effect as well as inter-population differences in phenotypic traits and imply a possible contribution of the candidate genes to life-history traits. Finally, utilizing a combined population transcriptomic and reverse ecology approach, I introduced a methodology with a wide range of applications in evolutionary biology and revealed that local thermal selection was probably a minor force in shaping sequence and gene expression divergence among four D. galeata populations, but contributed to sequence divergence between two populations. I identified many transcripts possibly under selection or contributing strongly to population divergence, a large proportion of them putatively under local thermal selection, and showed that genetic and gene expression variation is not depleted specifically in temperature-related candidate genes.
In conclusion, I detected signs of local adaptation in the D. longispina complex across space, time, and species barriers. Populations and species remained genetically divergent, although increased gene flow possibly contributed, together with genotypes recruited from the resting egg bank, to the maintenance of standing genetic variation. Further work is required to accurately determine the influence of introgression and the effects of candidate genes on individual fitness. While I found no evidence suggesting a response to intense local thermal selection, the high resilience and adaptive potential regarding environmental change I observed suggest positive future prospects for the populations of the D. longispina complex. However, overall, due to the continuing environmental degradation, daphnids and other aquatic invertebrates remain vulnerable and threatened.
In this thesis, we examined whether personality traits of early child care workers influence process quality in preschool.
Research has shown that in educational settings such as preschool, pedagogical quality affects children's developmental outcomes (e.g., NICHD, 2002; Peisner-Feinberg et al., 1999). A substantial part of pedagogical quality known to be vital in this respect is the interaction between teacher and children (e.g., Tietze, 2008). Results of prior classroom research indicate that the teacher's personality might be an important factor for good teacher-child interaction (Mayr, 2011). Thus, personality traits might play a vital role for interaction in preschool. Therefore, the aims of this thesis were to a) identify pivotal personality traits of child care workers, b) assess ideal levels of the identified personality traits, and c) examine the relationship between pivotal personality traits and process quality. To this end, we conducted two requirement analyses and a video study. The results of these studies showed that subject matter experts (parents, child care workers, lecturers) partly agreed as to which personality traits are pivotal for child care workers. Furthermore, the experts showed high consensus with regard to the minimum, ideal and maximum personality trait profiles. Moreover, child care workers whose profiles lay closer to the experts' ideal also showed higher process quality. In addition, regression analyses showed that the child care workers' levels of the Big Two (Communion and Agency) related significantly to their process quality.
Microbial pollution of surface waters poses substantial risks for public health, amongst others during recreational use. Microbial pollution was studied at selected sampling sites in the rivers Rhine, Moselle and Lahn (Germany) on the basis of commonly used fecal indicator organisms (FIO) indicating bacterial (Escherichia coli, intestinal enterococci) and viral (somatic coliphages) fecal contamination. In addition, blaCTX-M antibiotic resistance genes (ARG) were quantified at two sites in the river Lahn and used as markers for tracking the spread of antibiotic resistance in the aquatic environment. The impact of changes in climate-related parameters on FIO was examined by studying monitoring results of contrasting flow conditions at the rivers Rhine and Moselle. Analyses at all studied river sites clearly indicate that high discharge and precipitation enhance the influx of FIO, ARG and thus potentially (antibiotic-resistant) pathogens into rivers. In contrast, a decrease in hygienic microbial pollution was observed under high solar irradiation and increasing water temperatures. Based on the identified key factors, multiple linear regression (MLR) models for five sites along a stretch of the river Lahn were established that allow a timely assessment of fecal indicator abundances. An interaction between abiotic and biotic factors (i.e., enhanced grazing pressure) considerably contributed to the formation of seasonal patterns among FIO abundances. This was enhanced during extraordinarily low flow conditions in rivers with pronounced trophic interactions, clearly hampering a transfer of model approaches between rivers of different biological and hydrological characteristics. Bacterial indicators were more strongly influenced by grazing pressure than phages; hence, bacterial indicators alone do not sufficiently describe viral pollution in rivers. BlaCTX-M genes were omnipresent in Lahn river water and corresponded to distribution patterns of FIO, indicating fecal sources.
Agriculture and wastewater treatment plant effluents contributed to ARG loads, and participants in non-bathing water sports were found to be at risk of ingesting antibiotic-resistant bacteria (ARB) including ARG, bearing the risk of infection or colonization. Results of the present study highlight the need to be aware of such risks not only in designated bathing waters. ARG abundance at both riverine sampling sites could largely be explained by E. coli abundance and may thus also be incorporated into multiple regression models using E. coli-specific environmental predictors. It can be expected that the frequency of short-term microbial pollution events will increase over the next decades due to climate change. Several challenges were identified with regard to the implementation of early warning systems to protect the public from exposure to pathogens in rivers. Most importantly, the concept of the Bathing Water Directive (Directive 2006/7/EC) itself as well as the lack of harmonization in the regulatory framework at European Union (EU) level are major drawbacks and require future adjustments to reliably manage health risks related to microbial pollution in waters used in multifunctional ways.
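The multiple linear regression approach above can be sketched in miniature: regress log-transformed indicator abundance on environmental predictors such as discharge and water temperature. The data and coefficients below are synthetic and noise-free for illustration only; the thesis's actual models use site-specific predictors and monitoring data.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination; each row of X is [1, x1, x2, ...]."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    for col in range(n):                      # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n                          # back substitution
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Synthetic illustration: log10(E. coli) rises with discharge anomaly q
# and falls with water temperature t (signs mirror the findings above).
rows = [(q, t) for q in (0.0, 1.0, 2.0, 3.0) for t in (10.0, 15.0, 20.0)]
X = [[1.0, q, t] for q, t in rows]
y = [2.0 + 0.5 * q - 0.05 * t for q, t in rows]
print(ols(X, y))  # recovers [2.0, 0.5, -0.05] up to rounding
```

In practice the fitted coefficients would be re-estimated per site, which is why the abstract notes that the models do not transfer between rivers with different characteristics.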
Water scarcity is already an omnipresent problem in many parts of the world, especially in sub-Saharan Africa. The dry years 2018 and 2019 showed that water resources are finite in Germany as well. Projections and predictions for the next decades indicate that renewal rates of existing water resources will decline due to the growing influence of climate change, but that water extraction rates will increase due to population growth. It is therefore important to find alternative and sustainable methods to make optimal use of the water resources currently available. For this reason, the reuse of treated wastewater for irrigation and recharge purposes has become one focus of scientific research in this field. However, it must be taken into account that wastewater contains so-called micropollutants, i.e., substances of anthropogenic origin. These are, e.g., pharmaceuticals, pesticides and industrial chemicals which enter the wastewater, but also metabolites that are formed in the human body from pharmaceuticals or personal care products. Through the treatment in wastewater treatment plants (WWTPs) as well as through chemical, biological and physical processes in the soil passage during the reuse of water, these micropollutants are transformed into new substances, known as transformation products (TPs), which further broaden the number of contaminants that can be detected within the whole water cycle.
Despite the fact that the presence of human metabolites and environmental TPs in untreated and treated wastewater has been known for many years, they are rarely included in common routine analysis methods. Therefore, a first goal of this thesis was the development of an analysis method based on liquid chromatography - tandem mass spectrometry (LC-MS/MS) covering a broad spectrum of frequently detected micropollutants including their known metabolites and TPs. The developed multi-residue analysis method contained a total of 80 precursor micropollutants and 74 metabolites and TPs of different substance classes. The method was validated for the analysis of different water matrices (WWTP influent and effluent, surface water and groundwater from a bank filtration site). The influence of the MS parameters on the quality of the analysis data was studied. Despite the high number of analytes, a sufficient number of data points per peak was maintained, ensuring high sensitivity and precision as well as good recovery for all matrices. The selection of the analytes proved to be relevant, as 95% of the selected micropollutants were detected in at least one sample. Several micropollutants were quantified that are not in the focus of other current multi-residue analysis methods (e.g., oxypurinol). The relevance of including metabolites and TPs was demonstrated by the frequent detection of, e.g., clopidogrel acid and valsartan acid at higher concentrations than their precursors, the latter even being detected in samples of bank filtrate water.
By integrating metabolites, which are produced in the body by biological processes, as well as biological and chemical TPs, the multi-residue analysis method is also suitable for elucidating degradation mechanisms in treatment systems for water reuse that, e.g., use a soil passage for further treatment. In the second part of the thesis, samples from two treatment systems based on natural processes were analysed: a pilot-scale above-ground sequential biofiltration system (SBF) and a full-scale soil aquifer treatment (SAT) site. In the SBF system, mainly biological degradation was observed, which was clearly demonstrated by the detection of biological TPs after the treatment. The efficiency of the degradation was improved by an intermediate aeration, which created oxic conditions in the upper layer of the following soil passage. In the SAT system, a combination of biodegradation and sorption processes occurred. The different behaviour of some biodegradable micropollutants compared to the SBF system revealed the influence of redox conditions and the microbial community. An advantage of the SAT system over the SBF system was found in the sorption capacity of the natural soil. Especially positively charged micropollutants showed attenuation due to ionic interactions with negatively charged soil particles. Based on the physicochemical properties at ambient pH, the degree of removal in the investigated systems and the occurrence in the source water, a selection of process-based indicator substances was proposed.
Within the first two parts of this thesis, a micropollutant not previously in the focus of environmental research was frequently detected at elevated concentrations in WWTP effluents: the antidiabetic drug sitagliptin (STG). STG showed low degradability in biological systems, and thus it was investigated to what extent chemical treatment by ozonation can ensure its attenuation. STG contains an aliphatic primary amine as the principal point of attack for the ozone molecule. There is only limited information about the behaviour of this functional group during ozonation, and thus STG served as an example for other micropollutants containing aliphatic primary amines. A pH-dependent degradation kinetic was observed due to the protonation of the primary amine at lower pH values. At pH values in the range 6-8, which is typical for the environment and for WWTPs, STG showed degradation rate constants in the range of 10^3 M^-1 s^-1 and thus belongs to the group of readily degradable substances. However, complete degradation can only be expected at significantly higher pH values (> 9). The transformation of the primary amine moiety into a nitro group was observed as the major degradation mechanism for STG during ozonation. Other mechanisms involved the formation of a diketone, bond breakages and the formation of trifluoroacetic acid (TFA). Investigations at a pilot-scale ozonation plant using the effluent of the biological stage of a municipal WWTP as source water confirmed the results of the laboratory studies: STG could not be removed completely even at high ozone doses, and the nitro compound was formed as the main TP and remained stable during further ozonation and subsequent biological treatment.
It can therefore be assumed that, under realistic conditions, a residual concentration of STG, the main TP formed, and other stable TPs such as TFA can be detected in the effluents of a WWTP consisting of conventional biological treatment followed by ozonation and subsequent biological polishing steps.
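The pH dependence described above follows from amine speciation: essentially only the neutral (deprotonated) amine reacts quickly with ozone, so the apparent rate constant scales with the neutral fraction given by the Henderson-Hasselbalch relation. The pKa and species-specific rate constant below are assumed illustrative values, not measurements from the thesis.

```python
def neutral_fraction(pH, pKa):
    """Fraction of a primary amine present in the reactive neutral form
    (Henderson-Hasselbalch speciation)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def apparent_rate_constant(pH, pKa, k_neutral):
    """k_app = k_neutral * neutral fraction; the protonated amine is assumed
    unreactive toward ozone (a common simplification)."""
    return k_neutral * neutral_fraction(pH, pKa)

# Assumed values: pKa ~ 8.8 for an aliphatic primary amine and
# k_neutral ~ 1e5 M^-1 s^-1 for the neutral species.
for pH in (6.0, 7.0, 8.0, 9.0):
    print(pH, apparent_rate_constant(pH, pKa=8.8, k_neutral=1e5))
```

With these assumed values, k_app at pH 7 lands in the 10^3 M^-1 s^-1 range quoted above, and only well above the pKa does the full species rate constant become effective, matching the observation that complete degradation requires pH > 9.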
We consider variational discretization of three different optimal control problems.
The first is a parabolic optimal control problem governed by space-time measure controls. This problem has a nice sparsity structure, which motivates our aim to achieve maximal sparsity on the discrete level. Due to the measures on the right-hand side of the partial differential equation, we consider a very weak solution theory for the state equation and need an embedding into the continuous functions for the pairings to make sense. Furthermore, we employ Fenchel duality to formulate the predual problem and give results on the solution theory of both the predual and the primal problem. Later on, the duality is also helpful for the derivation of algorithms, since the predual problem can be differentiated twice, so that we can apply a semismooth Newton method. We then retrieve the optimal control by duality relations.
For the state discretization we use a Petrov-Galerkin method employing piecewise constant states and piecewise linear and continuous test functions in time. For the space discretization we choose piecewise linear and continuous functions. As a result, the controls are composed of Dirac measures in space-time, centered at points on the discrete space-time grid. We prove that the optimal discrete states and controls converge strongly in L^q and weakly-* in the space of measures, respectively, to their smooth counterparts, where q ∈ (1, min{2, 1 + 2/d}] and d is the spatial dimension. The variational discrete version of the state equation with the above choice of spaces yields a Crank-Nicolson time stepping scheme with half a Rannacher smoothing step.
Furthermore, we compare our approach to a full discretization of the corresponding control problem, specifically a discontinuous Galerkin method for the state discretization, where the discrete controls are piecewise constant in time and Dirac measures in space. Numerical experiments highlight the sparsity features of our discrete approach and verify the convergence results.
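For readers less familiar with the time stepping named above, a Crank-Nicolson step for the 1D heat equation u_t = u_xx averages the implicit and explicit spatial operators. The sketch below uses plain finite differences with homogeneous Dirichlet boundaries, not the thesis's Petrov-Galerkin setting, and the grid sizes are arbitrary illustrative choices.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a, diagonal b,
    superdiagonal c, and right-hand side d (Thomas algorithm)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_heat(u, dt, dx, steps):
    """March u_t = u_xx: (I - dt/2 A) u^{n+1} = (I + dt/2 A) u^n,
    with A the standard 1D discrete Laplacian and zero boundary values."""
    n = len(u)
    r = dt / (2.0 * dx * dx)
    a = [-r] * n; b = [1.0 + 2.0 * r] * n; c = [-r] * n
    a[0] = c[-1] = 0.0
    for _ in range(steps):
        rhs = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            rhs.append(r * left + (1.0 - 2.0 * r) * u[i] + r * right)
        u = thomas(a, b, c, rhs)
    return u

n = 49; dx = 1.0 / (n + 1)
u0 = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]  # Laplacian eigenmode
u = crank_nicolson_heat(u0, dt=1e-3, dx=dx, steps=100)
# sin(pi x) decays like exp(-pi^2 t); compare the midpoint value at t = 0.1.
print(u[n // 2], math.exp(-math.pi ** 2 * 0.1))
```

The scheme is second-order in time and unconditionally stable, which is why the half Rannacher smoothing step mentioned above matters: it damps the non-smooth components that plain Crank-Nicolson propagates undamped.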
The second problem we analyze is a parabolic optimal control problem governed by bounded initial measure controls. Here, the cost functional consists of a tracking term corresponding to the observation of the state at final time. Instead of a regularization term for the control in the cost functional, we consider a bound on the measure norm of the initial control. As in the first problem we observe a sparsity structure, but here the control resides only in space at initial time, so we focus on the space discretization to achieve maximal sparsity of the control. Again, due to the initial measure in the partial differential equation, we rely on a very weak solution theory of the state equation.
We employ a dG(0) approximation of the state equation, i.e., we choose piecewise linear and continuous functions in space, which are piecewise constant in time, for our ansatz and test space. Then, the variational discretization of the problem together with the optimality conditions induce maximal discrete sparsity of the initial control, i.e., Dirac measures in space. We present numerical experiments to illustrate our approach and investigate the sparsity structure.
As the third problem, we choose an elliptic optimal control problem governed by functions of bounded variation (BV) in one space dimension. The cost functional consists of a tracking term for the state and a BV-seminorm in terms of the derivative of the control. We derive a sparsity structure for the derivative of the BV control. Additionally, we utilize the mixed formulation for the state equation.
A variational discretization approach with piecewise constant discretization of the state and piecewise linear and continuous discretization of the adjoint state yields that the derivative of the control is a sum of Dirac measures. Consequently the control is a piecewise constant function. Under a structural assumption we even get that the number of jumps of the control is finite. We prove error estimates for the variational discretization approach in combination with the mixed formulation of the state equation and confirm our findings in numerical experiments that display the convergence rate.
In summary we confirm the use of variational discretization for optimal control problems with measures that inherit a sparsity. We are able to preserve the sparsity on the discrete level without discretizing the control variable.
The erosion of the closed innovation paradigm in conjunction with increasing competitive pressure has boosted the interest of both researchers and organizations in open innovation. Despite such rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for such reservation are the pertinent complexity of transitioning toward open innovation and a lack of understanding of the procedures required for such endeavors. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving, and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. As a result of this structured depiction of the transition process, a guideline is derived. This guideline includes the commonly employed actions of successful practitioners of open innovation, which may serve as a baseline for parties interested in the paradigm. With the derivation of the guideline and concise depiction of the individual transitional phases, this thesis consequently reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
The goal of this Bachelor thesis is to implement and evaluate the "Simulating of Collective Misbelief" model in the NetLogo programming language. To this end, the model requirements have to be specified and implemented in the NetLogo environment. Further tool-related requirements have to be specified to enable the model to work in NetLogo. After implementation, several simulations are conducted to answer the stated research question.
The formulation of the decoding problem for linear block codes as an integer program (IP) with a rather tight linear programming (LP) relaxation has made a central part of channel coding accessible to the theory and methods of mathematical optimization, especially integer programming, polyhedral combinatorics and also algorithmic graph theory, since the important class of turbo codes exhibits an inherent graphical structure. We present several novel models, algorithms and theoretical results for error-correction decoding based on mathematical optimization. Our contribution includes a partly combinatorial LP decoder for turbo codes, a fast branch-and-cut algorithm for maximum-likelihood (ML) decoding of arbitrary binary linear codes, a theoretical analysis of the LP decoder's performance for 3-dimensional turbo codes, compact IP models for various heuristic algorithms as well as ML decoding in combination with higher-order modulation, and, finally, first steps towards an implementation of the LP decoder in specialized hardware. The scientific contributions are presented in the form of seven revised reprints of papers that appeared in peer-reviewed international journals or conference proceedings. They are accompanied by an extensive introductory part that reviews the basics of mathematical optimization, coding theory, and the previous results on LP decoding that we rely on afterwards.
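On a binary symmetric channel, ML decoding reduces to finding the codeword at minimum Hamming distance from the received word; for large codes this is the hard combinatorial problem that the branch-and-cut and LP methods above attack. A brute-force sketch over the small [7,4] Hamming code (chosen here only as a running example) makes the objective concrete:

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code in systematic form [I | P].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Codeword = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def ml_decode(received):
    """Exhaustive ML decoding on a binary symmetric channel: the most
    likely codeword is the one at minimum Hamming distance."""
    best, best_dist = None, len(received) + 1
    for msg in product((0, 1), repeat=len(G)):
        cw = encode(msg)
        dist = sum(r != c for r, c in zip(received, cw))
        if dist < best_dist:
            best, best_dist = cw, dist
    return best

cw = encode([1, 0, 1, 1])
noisy = cw[:]; noisy[2] ^= 1       # flip one bit
print(ml_decode(noisy) == cw)      # minimum distance 3: single errors corrected
```

The enumeration is exponential in the code dimension; replacing it with an LP over a relaxed codeword polytope, tightened where needed by cutting planes or branching, is exactly the move the thesis studies.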
We are entering the 26th year since the World Wide Web (WWW) became reality. Since the birth of the WWW in 1990, the Internet and, with it, websites have changed the way businesses compete, shifting products, services and even entire markets.
Gathering and analysing visitor traffic on websites can therefore provide crucial information for understanding customer behavior and numerous other aspects.
Web Analytics (WA) tools offer a wide range of functionality, which calls for complex decision-making in information management. Website operators implement Web Analytics tools such as Google Analytics to analyse their websites, identifying web usage in order to optimise website design and management. The gathered data leads to emergent knowledge, which provides new marketing opportunities and can be used to improve business processes and understand customer behavior to increase profit. Moreover, Web Analytics plays a significant role in measuring performance and has therefore become an important component of web-based environments for making business decisions.
However, many small and medium-sized enterprises try to keep up with the competition in web business but do not have the equivalent resources in manpower and knowledge to keep pace; some therefore forgo Web Analytics entirely.
This research project aims to develop a Web Analytics framework to assist small and medium-sized enterprises in making better use of Web Analytics. By identifying business requirements of SMEs and connecting them to the functionality of Google Analytics, a Web Analytics framework with accompanying guidelines is developed, which guides SMEs on how to proceed in using Google Analytics to achieve actionable outcomes.
This paper documents the development of an abstract physics layer (APL) for Simspark. After short introductions to physics engines and Simspark, the reasons why an APL was developed are explained. The biggest part of this paper describes the new design and why certain design choices were made based on requirements that arose during development. It concludes by explaining how the new design was eventually implemented and what future possibilities the new design holds.
Augmented reality means extending a real environment with virtual, mostly graphical, content. Often, however, the virtual content is only an overlay on the scene and does not interact with the scene's real components, which creates an authenticity problem for augmented reality applications. This thesis considers augmented reality in a special environment that makes a more authentic presentation possible. The goal of this thesis was to create a system that augments drawings with virtual content using augmented reality techniques. By building an internal representation of the drawing, the application is able to let the virtual scene elements interact with it. To this end, various methods from the fields of pose tracking and sketch recognition were discussed and selected for implementation in a prototype system. The target hardware is an Android smartphone. The drawings depict a dungeon map as found in role-playing games. The virtual content takes the form of inhabitants of the dungeon, which are managed by an agent simulation; the agent simulation is the subject of a separate diploma thesis [18]. For pose tracking, ARToolkitPlus was used, an optical tracking system based on markers. The sketch recognition component detects and interprets the contents of the drawing; for this, a custom approach was implemented that combines techniques from several sketch recognition systems. The evaluation focuses on the technical aspects of the system that are important for an authentic augmentation of the drawing with virtual content.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting
existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions that we later refine: What knowledge resources can be systematically reused for recovering structured knowledge and how? What vocabulary already exists in literature that is used to express conceptual knowledge? How can we reuse the
online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying literature, ii.) mining Wikipedia, and iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of the vocabulary for describing software technology usage with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns, and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
Ontologies are valuable tools for knowledge representation and important building blocks of the Semantic Web. They are not static and can change over time. Changing an ontology can be necessary for various reasons: the domain that is represented by an ontology can change, or an ontology is reused and must be adapted to the new context. In addition, modeling errors may have been introduced into the ontology and must be found and removed. The non-triviality of the change process has led to the emergence of ontology change as a field of research in its own right. The removal of knowledge from ontologies is an important aspect of this change process, because even the addition of new knowledge to an ontology potentially requires the removal of older, conflicting knowledge. Such a removal must be performed in a well-thought-out way. A naïve change of concepts within the ontology can easily remove other, unrelated knowledge or alter the semantics of concepts in an unintended way [2]. For these reasons, this thesis introduces a formal operator for the fine-grained retraction of knowledge from EL concepts, which is partially based on the postulates for belief set contraction and belief base contraction [3, 4, 5] and the work of Suchanek et al. [6]. For this, a short introduction to ontologies and OWL 2 is given and the problem of ontology change is explained. It is then argued why a formal operator can support this process and why the Description Logic EL provides a good starting point for the development of such an operator. After this, a general introduction to Description Logic is given. This includes its history, an overview of its applications and common reasoning tasks in this logic. Following this, the logic EL is defined. In a next step, related work is examined and it is shown why the recovery postulate and the relevance postulate cannot be naïvely employed in the development of an operator that removes knowledge from EL concepts.
Following this, the requirements for the operator are formulated and properties are given which are mainly based on the postulates for belief set and belief base contraction. Additional properties are developed that compensate for the non-applicability of the recovery and relevance postulates. After this, a formal definition of the operator is given and it is shown that the operator is applicable to the task of a fine-grained removal of knowledge from EL concepts. In a next step, it is proven that the operator fulfills all the previously defined properties. It is then demonstrated how the operator can be combined with laconic justifications [7] to assist a human ontology editor by automatically removing unwanted consequences from an ontology. Building on this, a plugin for the ontology editor Protégé is introduced that is based on algorithms derived from the formal definition of the operator. The content of this work is then summarized and a final conclusion is drawn. The thesis closes with an outlook on possible future work.
In Western personnel psychology, competence and control beliefs (CCB) are widely used to predict typical work-related outcomes such as well-being, achievement motivation and job performance. The predictive value and comprehension of CCB in East Africa are examined, comparing a Kenyan target sample with a German source sample (N=143). Responses to personality tests were complemented by qualitative interviews on items capturing control orientations (self-concept of ability, internality, powerful others, and chance). Linear regression analyses,
explorative factor analyses, and a procrustean target rotation showed comparable, but not fully congruent, predictability for the connection of CCB with outcome variables. The factor structures of the control responses did not resemble each other sufficiently. Content analyses drawing on scale intercorrelations as well as quantitative and qualitative item information served to explain this predictability gap, specifying differences between the German and Kenyan samples that are associated with the social-relational domain of personality. Results
fit the picture painted by the African Ubuntu philosophy and the South African Personality Inventory project (SAPI), both of which emphasize social-relational aspects. In particular, the powerful others control orientation diverges most between the cultures: while it is perceived as a negative, external factor in the German sample with its individualistic culture, powerful others is of mixed emotional quality and just as often internal in the Kenyan sample with its Ubuntu worldview. An uncritical transfer of CCB measures from one culture to another is therefore assumed to be inappropriate. More emic-etic based research on the intra- and intercultural variability of CCB is needed to arrive at a transculturally applicable model.
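The procrustean target rotation used to compare the two factor structures can be illustrated with SciPy's orthogonal Procrustes solver. This is a minimal sketch on synthetic loading matrices that stand in for the German (source) and Kenyan (target) factor solutions; all dimensions and values are made up for the example.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Synthetic loading matrices: rows = items, columns = control-orientation
# factors (e.g. internality, powerful others, chance). Purely illustrative.
rng = np.random.default_rng(0)
target = rng.normal(size=(12, 3))                 # "source" factor loadings
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]      # some orthogonal rotation
comparison = target @ Q                           # "target sample" loadings

# Find the orthogonal matrix R that best maps `comparison` onto `target`.
R, _ = orthogonal_procrustes(comparison, target)
rotated = comparison @ R

def congruence(a, b):
    # Tucker's congruence coefficient, computed per factor (columnwise).
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))
```

In this exact toy case the rotation is recovered perfectly and all congruence coefficients equal 1; with real data, clearly lower coefficients indicate factor structures that do not sufficiently resemble each other, as reported for the control responses above.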
Scientific and public interest in epidemiology and in the mathematical modelling of disease spread has increased significantly due to the current COVID-19 pandemic. Political action is influenced by the forecasts and evaluations of such models, and the whole of society is affected by the resulting containment measures. But how are these models structured?
Which methods can be used to apply them to the respective regions, based on real data sets? These questions are certainly not new. Mathematical modelling in epidemiology using differential equations has been researched for quite some time now and is carried out mainly by means of numerical computer simulations. These models are constantly being refined and adapted to the corresponding diseases. However, it should be noted that the more complex a model is, the more unknown parameters it includes. A meaningful data adaptation thus becomes very difficult. The goal of this thesis is to design applicable models using the examples of COVID-19 and dengue, to adapt them adequately to real data sets and thus to perform numerical simulations. For this purpose, first the mathematical foundations are presented and a theoretical outline of ordinary differential equations and optimization is provided. The parameter estimations are performed by means of adjoint functions. This procedure represents a combination of static and dynamic optimization. The objective function corresponds to a least squares method with L2 norm, which depends on the parameters sought. This objective function is coupled to constraints in the form of ordinary differential equations and numerically minimized, using Pontryagin's maximum (minimum) principle and optimal control theory. In the case of dengue, due to the transmission path via mosquitoes, a model reduction of an SIRUV model to an SIR model with time-dependent transmission rate is performed by means of time-scale separation. The SIRUV model includes uninfected (U) and infected (V) mosquito compartments in addition to the susceptible (S), infected (I) and recovered (R) human compartments known from the SIR model. The unknown parameters of the reduced SIR model are estimated using data sets from Colombo (Sri Lanka) and Jakarta (Indonesia). Based on this parameter estimation, the predictive power of the model is checked and evaluated.
In the case of Jakarta, the model is additionally provided with a mobility component between the individual city districts, based on commuter data. The transmission rates of the SIR models also depend on meteorological data, as correlations between these and dengue outbreaks have been demonstrated in previous data analyses. For the modelling of COVID-19 we use several SEIRD models which, in comparison to the SIR model, also take into account the latency period and the number of deaths via exposed (E) and deaths (D) compartments. Based on these models, a parameter estimation with adjoint functions is performed for Germany. This is possible because, since the beginning of the pandemic, the cumulative numbers of infected persons and deaths
are published daily by Johns Hopkins University and the Robert Koch Institute. Here, an SEIRD model with a time delay regarding the deaths proves to be particularly suitable. In the next step, this model is used to compare the parameter estimation via adjoint functions with a Metropolis algorithm, taking analytical effort, accuracy and calculation speed into account. In all data fits, one additional parameter is estimated to assess the number of unreported cases.
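The basic pipeline of simulating a compartment model and fitting its parameters to data can be sketched in a few lines. This is a minimal illustration on synthetic data using an SIR model and SciPy's generic L-BFGS-B optimizer in place of the adjoint-based gradient computation used in the thesis; all parameter values and names are made up for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def sir_rhs(t, y, beta, gamma):
    # Classic SIR dynamics: transmission rate beta, recovery rate gamma.
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def simulate(beta, gamma, y0, t_eval):
    sol = solve_ivp(sir_rhs, (t_eval[0], t_eval[-1]), y0,
                    args=(beta, gamma), t_eval=t_eval, rtol=1e-8)
    return sol.y[1]  # infected compartment I(t)

# Synthetic "observed" infection counts generated with known parameters.
t = np.linspace(0, 60, 61)
y0 = [990.0, 10.0, 0.0]
true_beta, true_gamma = 0.3, 0.1
data = simulate(true_beta, true_gamma, y0, t)

def objective(p):
    # L2 least-squares misfit between model output and data, analogous to
    # the thesis's objective function (but minimised here with a generic
    # bounded quasi-Newton solver rather than adjoint functions).
    return np.sum((simulate(p[0], p[1], y0, t) - data) ** 2)

res = minimize(objective, x0=[0.5, 0.2], bounds=[(1e-3, 2.0), (1e-3, 2.0)])
beta_hat, gamma_hat = res.x
```

With noiseless synthetic data the optimizer recovers the true parameters closely; the adjoint approach replaces the finite-difference gradients used implicitly here with gradients obtained from the adjoint ODE system.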
We are living in a world where environmental crises are coming to a head. To curb the aggravation of these problems, a socio-ecological transformation within society is needed, going along with changes in human behavior. How to encourage such behavior changes on an individual level is the core issue of this dissertation. It takes a closer look at the role of individuals as consumers whose purchase decisions have a more or less harmful impact on the environment. Using the example of plastic pollution, it takes up a current environmental problem and focuses on an understudied behavioral response to this problem, namely reduction behavior. More concretely, this dissertation examines which psychological factors can encourage the mitigation of plastic packaging consumption. Plastic packaging accounts for the largest share of current plastic production and is associated with products of daily relevance. Despite growing awareness of plastic pollution in society, behavioral responses do not follow accordingly and plastic consumption is still very high. As habits are often a pitfall when implementing more resource-saving behavior, this dissertation further examines whether periods of discontinuity can open a ‘window of opportunity’ to break old habits and facilitate behavior change. Four manuscripts approach this matter from the general to the specific. Starting with a literature review, a summary of 187 studies addresses the topic of plastic pollution and human behavior from a societal-scientific perspective. Based on this, a cross-sectional study (N = 648) examines the determinants of plastic-free behavior intentions in the private sphere and public sphere by structural equation modeling. Two experimental studies in pre-post design build upon this by integrating the determinants in intervention studies.
In addition, it was evaluated whether an intervention presented during Lent (N = 140) or during the action month ‘Plastic Free July’ (N = 366) can create a ‘window of opportunity’ to mitigate plastic packaging consumption. The literature review emphasized the need for research on behavioral solutions to reduce plastic consumption. The empirical results revealed moral and control beliefs to be the main determinants of reduction behavior. Furthermore, the time point of an intervention influenced the likelihood of trying out the new behavior. The studies provided initial evidence that a ‘window of opportunity’ can facilitate change towards pro-environmental behavior within the application field of plastic consumption. Theoretical and practical implications of creating the right opportunity for individuals to contribute to a socio-ecological transformation are finally discussed.
Mathematical models of species dispersal and the resilience of metapopulations against habitat loss
(2021)
Habitat loss and fragmentation due to climate and land-use change are among the biggest threats to biodiversity, as the survival of species relies on suitable habitat area and the possibility to disperse between different patches of habitat. To predict and mitigate the effects of habitat loss, a better understanding of species dispersal is needed. Graph theory provides powerful tools to model metapopulations in changing landscapes with the help of habitat networks, where nodes represent habitat patches and links indicate the possible dispersal pathways between patches.
This thesis adapts tools from graph theory and optimisation to study species dispersal on habitat networks as well as the structure of habitat networks and the effects of habitat loss. In chapter 1, I give an introduction to the thesis and the topics it covers. Chapter 2 then gives a brief summary of the tools used in the thesis.
In chapter 3, I present our model of possible range shifts for a generic species. Based on a graph-based dispersal model for a generic aquatic invertebrate with a terrestrial life stage, we developed an optimisation model that models dispersal directed to predefined habitat patches and yields the minimum time until these patches are colonised, given the landscape structure and the species' dispersal capabilities. We created a time-expanded network based on the original habitat network and solved a mixed integer program to obtain the minimum colonisation time. The results provide maximum possible range shifts and can be used to estimate how fast newly formed habitat patches can be colonised. Although specific to this simulation model, the general idea of deriving such a surrogate can in principle be adapted to other simulation models.
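The core idea of the time-expanded network, unrolling the habitat graph over discrete time steps and searching for the earliest reachable copy of a target patch, can be sketched as follows. This is a strongly simplified surrogate (no population dynamics, exactly one dispersal step per time unit) rather than the mixed integer program of the thesis; the toy graph and function names are illustrative.

```python
import networkx as nx

def min_colonisation_time(G, sources, target, max_steps=50):
    """Toy time-expanded-network surrogate: create layered copies (patch, t)
    of the habitat graph, connect (u, t) -> (v, t+1) for every dispersal link
    u-v plus a 'stay' edge (u, t) -> (u, t+1), then find the earliest layer
    in which the target patch is reachable from any source patch."""
    T = nx.DiGraph()
    for t in range(max_steps):
        for u in G.nodes:
            T.add_edge((u, t), (u, t + 1))          # population persists
            for v in G.neighbors(u):
                T.add_edge((u, t), (v, t + 1))      # one dispersal step
    best = None
    for s in sources:
        for t in range(max_steps + 1):
            if nx.has_path(T, (s, 0), (target, t)):
                best = t if best is None else min(best, t)
                break
    return best

# A chain of patches 0-1-2-3: colonising patch 3 from patch 0 takes 3 steps.
G = nx.path_graph(4)
```

In this simplified setting the answer coincides with the graph distance; the value of the time-expanded construction in the thesis is that it also accommodates richer, time-dependent dispersal constraints within an optimisation model.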
Next, in chapter 4, I present our model to evaluate the robustness of metapopulations. Based on a variety of habitat networks and different generic species characterised by their dispersal traits and habitat demands, we modelled the permanent loss of habitat patches and the subsequent metapopulation dynamics. The results show that species with short dispersal ranges and high local-extinction risks are particularly vulnerable to the loss of habitat across all types of networks. On this basis, we then investigated how well different graph-theoretic metrics of habitat networks can serve as indicators of metapopulation robustness against habitat loss. We identified the clustering coefficient of a network as the only good proxy for metapopulation robustness across all types of species, networks, and habitat loss scenarios.
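As an illustration of the proxy, the clustering coefficient of a toy habitat network can be computed directly with NetworkX; the example network below is made up.

```python
import networkx as nx

# A small toy habitat network: nodes are patches, edges are dispersal links.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # a triangle of well-linked patches
                  (2, 3), (3, 4)])          # a sparse chain attached to it

# Local clustering: fraction of a patch's neighbour pairs that are linked.
local = nx.clustering(G)
# Average clustering coefficient of the whole network, the robustness proxy.
avg = nx.average_clustering(G)
```

Patches inside the triangle have a local clustering of 1, while the chain contributes 0, so the network average here is 7/15. Intuitively, high clustering means a locally extinct patch can be recolonised from several mutually connected neighbours.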
Finally, in chapter 5, I utilise the results obtained in chapter 4 to identify the areas in a network that should be restored to maximise metapopulation robustness under limited resources. More specifically, we exploit our finding that a network's clustering coefficient is a good indicator of metapopulation robustness and develop two heuristics, a greedy algorithm and a Lazy Greedy algorithm derived from it, that aim at maximising the clustering coefficient of a network. Both algorithms can be applied to any network and are not specific to habitat networks.
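A minimal sketch of the greedy idea, assuming the goal is simply to add a fixed budget of new links so as to maximise the network's average clustering coefficient (the thesis's actual algorithms, including the Lazy Greedy variant, are more refined than this brute-force version):

```python
import itertools
import networkx as nx

def greedy_clustering_boost(G, budget):
    """Greedily add `budget` edges; in each round, pick the non-edge whose
    addition yields the largest average clustering coefficient."""
    G = G.copy()
    for _ in range(budget):
        best_edge, best_score = None, nx.average_clustering(G)
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)                 # tentatively add the edge
            score = nx.average_clustering(G)
            G.remove_edge(u, v)              # undo the tentative addition
            if score > best_score:
                best_edge, best_score = (u, v), score
        if best_edge is None:                # no improving edge left
            break
        G.add_edge(*best_edge)
    return G

# Example: a 5-node path has clustering 0; one greedy step closes a triangle.
P = nx.path_graph(5)
improved = greedy_clustering_boost(P, budget=1)
```

Each round re-evaluates every candidate edge, which is expensive on large networks; a lazy variant avoids most re-evaluations by caching candidate gains and only refreshing the current best candidate.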
In chapter 6, I summarize the main findings of this thesis, discuss their limitations and give an outlook on future research topics.
Overall, this thesis develops frameworks to study the behaviour of habitat networks and introduces mathematical tools to ecology, thus narrowing the gap between mathematics and ecology. While all models in this thesis were developed with a focus on aquatic invertebrates, they can easily be adapted to other metapopulations.