Distance vector routing protocols are interior gateway protocols in which every router sets up a routing table with the help of the information it receives from its neighboring routers. The routing table contains the next hops and associated distances on the shortest paths to every other router in the network. The security mechanisms implemented in distance vector routing protocols are insufficient; instead, the environment is assumed to be trustworthy. However, routers can be malicious for several reasons and can manipulate routing by injecting false routing updates. Authenticity and integrity of transmitted routing updates have to be guaranteed, while keeping performance costs and security benefits well-balanced.
In this paper, several approaches that aim at meeting these conditions are examined, and their advantages and disadvantages are compared.
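To illustrate the table-update mechanism that all of the examined protocols share, the following toy sketch (Python; not taken from the paper, router names are illustrative) implements the basic distance-vector relaxation rule:

    # Toy distance-vector update (Bellman-Ford relaxation).
    INF = float("inf")

    def process_update(table, neighbor, neighbor_table, link_cost):
        """Merge a routing update received from `neighbor` into `table`.

        table:          dest -> (next_hop, distance) currently known
        neighbor_table: dest -> distance, as advertised by the neighbor
        link_cost:      cost of the direct link to the neighbor
        Returns True if the table changed (i.e., we should re-advertise).
        """
        changed = False
        for dest, advertised in neighbor_table.items():
            candidate = link_cost + advertised
            _, current = table.get(dest, (None, INF))
            if candidate < current:              # shorter path via neighbor
                table[dest] = (neighbor, candidate)
                changed = True
        return changed

    # Example: router A learns a route to C through its neighbor B.
    table_a = {"A": ("A", 0), "B": ("B", 1)}
    process_update(table_a, "B", {"A": 1, "B": 0, "C": 2}, link_cost=1)
    print(table_a["C"])  # ('B', 3): next hop B, distance 3

Note that nothing in this update rule authenticates the received `neighbor_table`; a malicious B could advertise arbitrary distances, which is precisely the attack surface the compared approaches try to close.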
The trends of Industry 4.0 and the further development toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With this higher need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in this setting is the demand for wireless Ultra-Reliable Low-Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Working in this setting, this thesis provides insights regarding the whole network stack, with the focus always on industrial applications. Starting at the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed, and based on that data an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks are highly dependent on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. From an application perspective, latency and reliability are key performance parameters. Thus, measurements of these two parameters in factory environments are performed using Wireless Local Area Network (WLAN, IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals based on auto-correlated data containing rare events is developed. Subsequently, several performance improvements for wireless networks on the factory floor are proposed: three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and one optimizes the usage of information within cellular networks.
Finally, this thesis is concluded by an outlook on open research questions. These include questions remaining in the context of Industry 4.0 as well as those around 6G. Among the 6G research topics, the two most relevant concern the idea of a network of networks and overcoming best-effort IP.
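The abstract does not spell out the developed confidence-interval method; purely as background on why auto-correlation matters, a standard moving-block bootstrap for a high latency quantile might look like the following sketch (all names and parameter values are illustrative):

    # Moving-block bootstrap for a latency quantile -- a standard technique
    # for auto-correlated data, shown only to illustrate the problem setting;
    # it is NOT the confidence-interval method developed in the thesis.
    import numpy as np

    def block_bootstrap_ci(latencies, q=0.999, block_len=50,
                           n_boot=2000, alpha=0.05, rng=None):
        # Assumes len(latencies) is much larger than block_len.
        rng = rng or np.random.default_rng(0)
        x = np.asarray(latencies)
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        estimates = np.empty(n_boot)
        for b in range(n_boot):
            # Resample whole blocks so short-range correlation is preserved.
            starts = rng.integers(0, n - block_len + 1, size=n_blocks)
            sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
            estimates[b] = np.quantile(sample, q)
        lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
        return lo, hi

Resampling whole blocks preserves the short-range correlation that an i.i.d. bootstrap would destroy; handling rare events on top of this is where the thesis's own method goes further.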
Engineering criminal agents
(2019)
This PhD thesis with the title "Engineering Criminal Agents" demonstrates the interplay of three different research fields captured in the title: in the centre are Engineering and Simulation, both set in relation to the application field of Criminology and the social science aspects of the latter. More precisely, this work intends to show how specific agent-based simulation models can be created using common methods from software engineering.
Agent-based simulation has proven to be a valuable method for social science for decades, and the trend toward increasingly complex simulation models is apparent, not least due to advancing computational and simulation techniques. An important cause of complexity is the inclusion of evidence as the basis of simulation models. Evidence can be provided by various stakeholders, reflecting their different viewpoints on the topic to be modeled.
This poses a particular burden by interrelating the two relevant perspectives on the topic of simulation: on the one hand the user of the simulation model, who provides the requirements and is interested in the simulation results; on the other hand the developer of the simulation model, who has to program a verified and validated formal model. In order to methodically link these two perspectives, substantial efforts in research and development are needed, to which this PhD thesis aims to contribute.
The practical results - in terms of software - were achieved using the multi-faceted approach mentioned above: methods from software engineering are used in order to be able to apply methods from the computational social sciences, which in turn are used to gain insights into social systems, such as the internal dynamics of criminal networks.
The PhD thesis shows the research involved to create these practical results, and gives technical details and specifications of the developed software.
The frame for research and development to achieve these results was provided mainly by two research projects: OCOPOMO and GLODERS.
Rivers play an important role in the global water cycle and support biodiversity and ecological integrity. However, river flow and thermal regimes are heavily altered in dammed rivers. These impacts are exacerbated and become more apparent in rivers fragmented by multiple dams. Recent studies mainly focused on evaluating the cumulative impact of cascade reservoirs on flow or thermal regimes, but the role of upstream reservoirs in shaping the hydrology and hydrodynamics of downstream reservoirs remains poorly understood. To improve the understanding of the hydrodynamics in cascade reservoirs, long-term observational data are used in combination with numerical modeling to investigate the changes in flow and thermal regime in three cascade reservoirs at the upper reach of the Yangtze River. The three studied reservoirs are Xiluodu (XLD), Xiangjiaba (XJB) and Three Gorges Reservoir (TGR). In addition, the effects of single reservoir operation (at seasonal/daily time scale) on hydrodynamics are examined in a large tributary of TGR. The results show that the inflow of TGR has been substantially altered by the two upstream reservoirs, with a higher discharge in spring and winter and a reduced peak flow in summer. XJB made no obvious contribution to the variations in inflow of TGR. The seasonal water temperature of TGR was also markedly affected by the two upstream reservoirs, i.e., an increase in winter and a decrease in spring, associated with a delay in water temperature rise and fall. These effects will probably be intensified in the coming years due to the construction of new reservoirs. The study also underlines the importance of reservoir operation in shaping the hydrodynamics of TGR. The seasonal dynamics of density currents in a tributary bay of TGR are closely related to seasonal reservoir operations. In addition, high-frequency water level fluctuations and flow velocity variations were observed in response to periodic tributary bay oscillations, which are driven by the diurnal discharge variations caused by the operation of TGR. As another consequence of the operation of cascade reservoirs, the changes in TGR inflow weakened spring thermal stratification and caused warming in spring, autumn and winter. In response to this change, intrusions from TGR occurred more frequently as overflow and earlier in spring, which caused a sharp reduction in the biomass and frequency of phytoplankton blooms in tributary bays of TGR. This study suggests that high-frequency bay oscillations can potentially be used as an efficient management strategy for controlling algal blooms, which can be included in future multi-objective ecological conservation strategies.
Virtual reality is a growing field of interest as it provides a particularly intuitive way of user interaction. However, there are still open technical issues regarding latency — the delay between interaction and display reaction — and the trade-off between visual quality and frame-rate of real-time graphics, especially when taking visual effects like specular and semi-transparent surfaces and volumes into account. One solution, a distributed rendering setup, is presented in this thesis, in which the image synthesis is divided into an accurate but costly physically based rendering thread with a low refresh rate and a fast reprojection thread that retains responsive interactivity at a high frame-rate. Two novel reprojection techniques are proposed that cover reflections and refractions produced by surface ray-tracing as well as volumetric light transport generated by volume ray-marching. The introduced setup can enhance the VR experience within several domains. In this thesis, three innovative training applications have been realized to investigate the added value of virtual reality for the three learning stages of observation, interaction and collaboration. For each stage, an interdisciplinary curriculum, currently taught with traditional media, was transferred to a VR setting in order to investigate how virtual reality is capable of providing a natural, flexible and efficient learning environment.
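As background for the reprojection thread, the following sketch shows generic depth-based reprojection of the last fully rendered frame into a new camera pose; the two novel techniques of the thesis handle ray-traced reflections/refractions and volume ray-marching, which go well beyond this baseline (all names are illustrative):

    # Generic depth-based reprojection -- the standard building block behind
    # reprojection threads, not the thesis's novel techniques.
    import numpy as np

    def reproject_points(depth, K, T_old_to_new):
        """depth: HxW depth map of the last full render,
        K: 3x3 camera intrinsics, T_old_to_new: 4x4 relative pose."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
        # Unproject to 3-D camera space of the old frame ...
        pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
        pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
        # ... move into the new frame and project back to pixels.
        pts_new = (T_old_to_new @ pts_h)[:3]
        proj = K @ pts_new
        return (proj[:2] / proj[2]).T   # new pixel coordinates per point

This simple warp breaks down exactly where the thesis contributes: reflected, refracted and volumetrically scattered light does not move with the surface depth, so it needs dedicated reprojection rules.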
Lakes and reservoirs are important sources of methane, a potent greenhouse gas. Although freshwaters cover only a small fraction of the global surface, their contribution to global methane emission is significant and is expected to increase as a positive feedback to climate warming and exacerbated eutrophication. Yet, global estimates of methane emission from freshwaters are often based on point measurements that are spatio-temporally biased. To better constrain the uncertainties in quantifying methane fluxes from inland waters, a closer examination of the processes transporting methane from sediment to atmosphere is necessary. Among these processes, ebullition (bubbling) is an important transport pathway and a primary source of uncertainty in quantifying methane emissions from freshwaters. This thesis aims to improve our understanding of ebullition in freshwaters by studying the processes of methane bubble formation, storage and release in aquatic sediments. The laboratory experiments demonstrate that aquatic sediments can store up to ~20% gas (volumetric content) and that the storage capacity varies with sediment properties. The methane produced is stored as gas bubbles in the sediment, with minimal ebullition until the storage capacity is reached. Once the sediment void spaces are created by gas bubble formation, they are stable and available for future bubble storage and transport. Controlled water level drawdown experiments showed that the amounts of gas released from the sediment scaled with the total volume of sediment gas storage and correlated linearly with the drop in hydrostatic pressure. It was hypothesized that not only the timing of ebullition is controlled by sediment gas storage, but also its spatial distribution. A newly developed freeze corer, capable of characterizing sediment gas content under in situ conditions, made it possible to test this hypothesis in a large subtropical lake (Lake Kinneret, Israel). The results showed that gas content was variable both vertically and horizontally in the lake sediment. Sediment methane production rate and sediment characteristics could explain these variabilities. The spatial distribution of ebullition was generally in good agreement with the horizontal distribution of depth-averaged (surface 1 m) sediment gas content. While discrepancies were found between sediment depth-integrated methane production and the snapshot ebullition rate, they were consistent in the long term (multiyear average). These findings provide a solid basis for the future development of a process-based ebullition model. By coupling a sediment transport model with a sediment diagenetic model, general patterns of ebullition hotspots can be predicted at a system level and the uncertainties in ebullition flux measurements can be better constrained on both long-term (months to years) and short-term (minutes to hours) scales.
Object recognition is a well-investigated area in image-based computer vision and several methods have been developed. Approaches based on Implicit Shape Models, which separate objects into fundamental visual object parts and the spatial relationships between the individual parts, have recently become popular for recognizing objects in 2D images. This knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing methods based on Implicit Shape Model approaches to the task of 3D object recognition.
The mitral valve is one of the four valves in the human heart. It is located in the left heart chamber and its function is to control the blood flow from the left atrium to the left ventricle. Pathologies can lead to malfunctions of the valve so that blood can flow back into the atrium. Patients with a faulty mitral valve function may suffer from fatigue and chest pain. The functionality can be surgically restored, which is often a long and exhausting intervention. Thorough planning is necessary to ensure a safe and effective surgery. This can be supported by creating pre-operative segmentations of the mitral valve; a post-operative analysis can determine the success of an intervention. This work will combine existing and new ideas to propose a new approach to (semi-)automatically create such valve models. The manual part can guarantee a high-quality model and reliability, whereas the automatic part contributes to saving valuable labour time.
The main contributions of the automatic algorithm are an estimated semantic separation of the two leaflets of the mitral valve and an optimization process that is capable of finding a coaptation line and coaptation area between the leaflets. The segmentation method can perform a fully automatic segmentation of the mitral leaflets if the annulus ring is already given. The intermediate steps of this process will be integrated into a manual segmentation method so that a user can guide the whole procedure. The quality of the valve models generated by the method proposed in this work will be measured by comparing them to completely manually segmented models. This will show that commonly used methods to measure the quality of a segmentation are too general and do not suffice to reflect the real quality of a model. Consequently, the work at hand will introduce a set of measurements that can qualify a mitral valve segmentation in more detail and with respect to anatomical landmarks. Besides the intra-operative support for a surgeon, a segmented mitral valve provides additional benefits. The ability to obtain and objectively describe the valve anatomy patient-specifically may form the basis for future medical research in this field, and automation allows large data sets to be processed with reduced expert dependency. Further, simulation methods that use the segmented models as input may predict the outcome of a surgery.
Bio-medical data comes in various shapes and with different representations. Domain experts use such data for analysis or diagnosis, during research or clinical applications. As the opportunities to obtain or to simulate bio-medical data become more complex and productive, the experts face the problem of data overflow. Providing a reduced, uncluttered representation of data that maintains the data's features of interest falls into the area of Data Abstraction. Via abstraction, undesired features are filtered out to give space - concerning the cognitive and visual load of the viewer - to more interesting features, which are therefore accentuated. To address this challenge, the dissertation at hand will investigate methods that deal with Data Abstraction in the fields of liver vasculature, molecular and cardiac visualization. Advanced visualization techniques will be applied for this purpose. This usually requires some pre-processing of the data, which will also be covered by this work. Data Abstraction itself can be implemented in various ways. The morphology of a surface may be maintained, while abstracting its visual cues. Alternatively, the morphology may be changed to a more comprehensive and tangible representation. Further, spatial or temporal dimensions of a complex data set may be projected to a lower space in order to facilitate processing of the data. This thesis will tackle these challenges and therefore provide an overview of Data Abstraction in the bio-medical field, and associated challenges, opportunities and solutions.
Coordination and awareness mechanisms are important in systems for Computer-Supported Cooperative Work (CSCW) and traditional groupware systems. They have been a key focus of research into collaborative groupware and its capability to enable people to efficiently collaborate and coordinate work. Until now, no classification of these mechanisms has been undertaken to identify commonalities and differences in coordination and awareness mechanisms and to show their significance in collaborative environments. In addition, there has been little investigation of coordination and awareness mechanisms in new forms of groupware such as socially enabled Enterprise Collaboration Systems (ECS). Indeed, both in science and in practice, ECS incorporating social software have become increasingly important. Based on the combination of traditional groupware and social software, ECS also include coordination and awareness mechanisms that may simplify collaboration, but these have not yet been investigated.
Therefore, the aim of this thesis is to identify coordination and awareness mechanisms in the academic literature in order to provide a general overview of examples of those mechanisms. Additionally, this thesis aims to classify the mechanism examples. Based on a deep literature analysis, concepts described in the literature are chosen and applied with the intention of analysing the mechanisms and reaching a classification. Based on the classification of the identified mechanisms, their commonalities and differences are examined and described to gain a better understanding of them. For illustration purposes, examples of coordination and awareness mechanisms and their application are portrayed. The mechanism examples refer to the derived classification groups. The selection of the mechanisms for the visualization is based on significant differences in their functionality. Subsequently, the selected mechanisms, which are based more on traditional groupware, are checked to a limited extent as to whether they can be found in socially enabled ECS. The collaborative platform IBM Connections serves as a practical example of ECS incorporating social software. IBM Connections is used at the University of Koblenz to run the platform "UniConnect". On this platform, it is investigated which of the mechanism examples identified in the literature are applied in IBM Connections and which additional mechanisms are created by users. This work is a first step in the study of coordination and awareness mechanisms in socially enabled ECS. In addition, it is expected to detect new mechanisms, since the social factor in collaborative work is new.
The purpose of this thesis is thus to examine and collect examples of coordination and awareness mechanisms in the literature and to analyse them, to provide a first overview of these mechanisms, and to classify them by investigating their commonalities. Beyond this, the thesis should give an incentive for further research into coordination and awareness mechanisms in socially integrated ECS.
With 47% land coverage in 2016, agricultural land was one of the largest terrestrial biomes in Germany. About 70% of the agricultural land was cropped area with associated pesticide applications. Agricultural land also represents an essential habitat for amphibians. Therefore, exposure of amphibians to agrochemicals, such as fertilizers and pesticides, seems likely. Pesticides can be highly toxic for amphibians, even a fraction of the original application rate may result in high amphibian mortality.
To evaluate the potential risk of pesticide exposure for amphibians, the temporal coincidence of amphibian presence on agricultural land and pesticide applications (N = 331) was analyzed for the fire-bellied toad (Bombina bombina), moor frog (Rana arvalis), spadefoot toad (Pelobates fuscus) and crested newt (Triturus cristatus) during spring migration. In 2007 and 2008, up to 80% of the migrating amphibians temporally coincided with pesticide applications in the study area of Müncheberg, about 50 km east of Berlin. Pesticide interception by plants ranged from 50 to 90% in winter cereals and from 80 to 90% in winter rape. The highest coincidence was observed for the spadefoot toad, where 86.6% of the reproducing population was affected by a single pesticide in winter rape during stem elongation with 80% pesticide interception by plants. Late migrating species, such as the fire-bellied toad and the spadefoot toad, overlapped more with pesticide applications than early migrating species, such as the moor frog. Under favorable circumstances, the majority of early migrants may not coincide with the pesticide applications on arable fields during spring migration.
To evaluate the potential effect of pesticide applications on populations of the common frog (Rana temporaria), a landscape genetic study was conducted in the vinicultural area of Southern Palatinate. Due to small sample sizes at breeding sites within viniculture, several DNA sampling methods were tested. Furthermore, the novel repeated randomized selection of genotypes approach was developed to utilize genetic data from siblings for more reliable estimates of genetic parameters. Genetic analyses highlighted three of the breeding site populations located in viniculture as isolated from the meta-population. Genetic differentiation among breeding site populations in the viniculture (median pairwise FST=0.0215 at 2.34 km to 0.0987 at 2.39 km distance) was higher compared to genetic differentiation among breeding site populations in the Palatinate Forest (median pairwise FST=0.0041 at 5.39 km to 0.0159 at 9.40 km distance).
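For orientation, Wright's classic fixation index for two populations at a single biallelic locus can be computed as in the toy sketch below; the study itself presumably used a bias-corrected multi-locus estimator on the microsatellite data, so this is only a conceptual illustration:

    # Wright's fixation index, F_ST = (H_T - H_S) / H_T, for two populations
    # at one biallelic locus -- a simplified textbook version, not the
    # estimator used in the study.
    def wright_fst(p1, p2):
        """p1, p2: frequencies of one allele in the two populations."""
        h1 = 2 * p1 * (1 - p1)           # expected heterozygosity, pop 1
        h2 = 2 * p2 * (1 - p2)           # expected heterozygosity, pop 2
        hs = (h1 + h2) / 2               # mean within-population diversity
        p_bar = (p1 + p2) / 2
        ht = 2 * p_bar * (1 - p_bar)     # total-population diversity
        return (ht - hs) / ht if ht > 0 else 0.0

    print(round(wright_fst(0.2, 0.4), 4))  # 0.0476: modest differentiation

Values near zero indicate freely mixing populations; the higher pairwise FST found within the viniculture, despite much shorter distances, is what marks those breeding sites as isolated.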
The presented studies add valuable information about the risk of pesticide exposure for amphibians in the terrestrial life stage and possible effects of agricultural land on amphibian meta-populations. To conserve endemic amphibian species and their (genetic) diversity in the long run, the risk assessment of pesticides and the applied agricultural management measures need to be adjusted to protect amphibians adequately. In addition, other conservation measures, such as the creation of new suitable breeding sites, should be considered to improve connectivity between breeding site populations and ensure the persistence of amphibians in agricultural land.
Graph-based data formats are flexible in representing data. In particular semantic data models, where the schema is part of the data, gained traction and commercial success in recent years. Semantic data models are also the basis for the Semantic Web - a Web of data governed by open standards in which computer programs can freely access the provided data. This thesis is concerned with the correctness of programs that access semantic data. While the flexibility of semantic data models is one of their biggest strengths, it can easily lead to programmers accidentally not accounting for unintuitive edge cases. Often, such exceptions surface during program execution as run-time errors or unintended side-effects. Depending on the exact condition, a program may run for a long time before the error occurs and the program crashes.
This thesis defines type systems that can detect and avoid such run-time errors based on schema languages available for the Semantic Web. In particular, this thesis uses the Web Ontology Language (OWL) and its theoretic underpinnings, i.e., description logics, as well as the Shapes Constraint Language (SHACL) to define type systems that provide type-safe data access to semantic data graphs. Providing a safe type system is an established methodology for proving the absence of run-time errors in programs without requiring execution. Both schema languages are based on possible-world semantics but differ in the treatment of incomplete knowledge. While OWL allows for modelling incomplete knowledge through an open-world semantics, SHACL relies on a fixed domain and closed-world semantics. We provide the formal underpinnings for type systems based on each of the two schema languages. In particular, we base our notion of types on sets of values, which allows us to specify a subtype relation based on subset semantics. In the case of description logics, subsumption is a routine problem. For the type system based on SHACL, we are able to translate subtyping into a description logic subsumption problem.
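The notion of types as sets of values, with subtyping as subset inclusion, can be illustrated by a deliberately simplified sketch that ignores the OWL/SHACL machinery of the thesis (all names are invented for illustration):

    # Minimal illustration of subtyping as subset semantics: a type denotes
    # a set of values, and S is a subtype of T iff every value of S is a
    # value of T.
    class SetType:
        def __init__(self, name, members):
            self.name = name
            self.members = frozenset(members)

        def is_subtype_of(self, other):
            # S <: T  iff  [[S]] is a subset of [[T]]
            return self.members <= other.members

    employees = SetType("Employee", {"alice", "bob", "carol"})
    managers  = SetType("Manager",  {"alice"})

    assert managers.is_subtype_of(employees)       # Manager <: Employee
    assert not employees.is_subtype_of(managers)

    # A type-safe access only needs to check, before execution, that the
    # static type of an expression is a subtype of what the context expects.
    def requires(expected, actual):
        if not actual.is_subtype_of(expected):
            raise TypeError(f"{actual.name} is not a subtype of {expected.name}")

    requires(employees, managers)  # accepted: every Manager is an Employee

In the thesis, of course, types are not finite enumerations but descriptions (OWL classes or SHACL shapes), so the subset check becomes a logical subsumption problem rather than a set comparison.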
Ponds in agricultural landscapes are often used by amphibians as breeding habitat. However, the characteristics of agricultural ponds, and especially of the surrounding area, are usually said to be suboptimal for many amphibian species. Using suboptimal habitats might allow a species' survival and reproduction, but can have negative consequences at the individual and population level. In the present study, we investigated Palmate Newt (Lissotriton helveticus) populations from an intensive wine-growing region in southern Germany and compared them with populations located in a nearby forested area in terms of biometric traits, age and genetic structure. By analyzing over 900 adult newts from 11 ponds, we could show that newts reproducing in forest ponds were larger than newts reproducing in agricultural ponds. We did not find differences in newt age and growth rate between habitat types. Therefore, differences in the body size of newts might have already existed in larvae and/or juveniles, which might be related to a lower habitat quality for larvae and/or juveniles in the agricultural landscape. Body mass, body condition and sexually dimorphic traits (length of the caudal filament and maximum height of the tail) correlated with body size, but no additional effect of the habitat type was found. The analysis of microsatellites revealed a higher genetic diversity in forest ponds. However, no clear sign of inbreeding was observed in any agricultural population, suggesting some degree of gene flow between them. We conclude that agricultural ponds can be suitable habitats for the Palmate Newt and that conservation efforts should aim to preserve them. The observed effects on body size indicate the need to increase the quality of the aquatic and terrestrial habitat for early life stages of this newt species in agricultural landscapes.
In the last decades, it became evident that the world is facing an unprecedented, human-induced global biodiversity crisis, with amphibians being one of the most threatened species groups. About 41% of amphibian species are classified as endangered by the IUCN, but even in amphibian species that are listed as "least concern", population declines can be observed on a local level. With land-use change and agrochemicals (i.e. pesticides), two of the main drivers of this amphibian decline are directly linked to intensive agriculture, which is the dominant landscape type in large parts of Europe. Thus, understanding the situation of amphibians in the agricultural landscape is crucial for conservation measures. In the present thesis, I investigated the effects of viticulture on amphibian populations around Landau in der Pfalz (Germany) in terms of habitat use, pesticide exposure, biometric traits as well as genetic and age structure. From the perspective of amphibians, land-use change usually means the destruction of habitats in agricultural landscapes, which often leads to landscape fragmentation. Thus, I investigated whether vineyards also lead to the fragmentation of the landscape and whether pesticides that are frequently used in viticulture have to be considered as a factor too, i.e., whether there is a chemical landscape fragmentation. Using telemetry, I could show that common toads (Bufo bufo) can be found directly in vineyards, but that they tend to avoid them as habitat. Analysing the genetic structure of common frogs (Rana temporaria) revealed that vineyards have to be considered a barrier for amphibians. To identify whether pesticides contribute to the resulting landscape fragmentation, I conducted an arena choice experiment in the laboratory in which I found evidence for an avoidance of pesticide-contaminated soil. Such an avoidance could be one of the underlying reasons for a potential chemical landscape fragmentation. By combining telemetry data with information about pesticide applications from local wine growers, I could show that a large part of the common toads is likely to come into contact with pesticides. Further, I demonstrated that the agricultural landscape, probably due to the application of pesticides, can have negative effects on the reproductive capacity of common toads. By studying palmate newts (Lissotriton helveticus), I found that adult newts from agricultural ponds are smaller than those from forest ponds. As I did not find differences in age structure and growth, these differences might be carry-over effects from earlier life stages. While agricultural ponds might be suitable habitats for adult palmate newts, the potential carry-over effect indicates suboptimal conditions for larvae and/or juveniles. I conclude that the best management measure for sustaining amphibians in the agricultural landscape would be a heterogeneous cultural landscape with a mosaic of different habitat patches that are managed without pesticides, or at least with a reduced amount. Green corridors between populations and different habitats would allow migrating individuals to avoid agricultural and thus pesticide-contaminated areas. This would reduce the pesticide exposure risk of amphibians while preventing the fragmentation of the landscape and thus the isolation of populations.
This thesis examined two specific cases of point and diffuse pollution, pesticides and salinisation, which are two of the most concerning stressors of Germany’s freshwater bodies. The findings of this thesis were organized into three major components, of which the first component presents the contribution of WWTPs to pesticide toxicity (Chapter 2). The second component focuses on the current and future background salt ion concentrations under climate change with the absence of anthropogenic activities (Chapter 3). Finally, the third major component shows the response of invertebrate communities in terms of species turnover to levels of salinity change, considered as a proxy for human-driven salinisation (Chapter 4).
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles which are resolved by the conditioning technique [3].
Probability propagation nets
(2007)
A class of high level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent the probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition is strictly not enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit for simulation is the possibility to model (and observe) causes and effects of non-events, e.g. when something has broken down.
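As a toy illustration of this dualization (not of the folded, high-level PPN formalism itself), the following sketch plays the token game on a dualized net in which places fire and t-tokens move between transitions; the net and its names are invented:

    # Toy token game on a dualized net: transitions carry t-tokens and
    # places fire, moving t-tokens along.
    marking = {"t1": 1, "t2": 1, "t3": 0}          # t-tokens on transitions

    # Each place lists the transitions it consumes t-tokens from and the
    # transitions it produces t-tokens on (roles of places and transitions
    # are swapped relative to an ordinary net).
    places = {
        "p1": {"pre": ["t1", "t2"], "post": ["t3"]},
    }

    def place_enabled(p):
        return all(marking[t] >= 1 for t in places[p]["pre"])

    def fire_place(p):
        assert place_enabled(p)
        for t in places[p]["pre"]:
            marking[t] -= 1
        for t in places[p]["post"]:
            marking[t] += 1

    fire_place("p1")
    print(marking)  # {'t1': 0, 't2': 0, 't3': 1}

Read as non-events: the t-token arriving on t3 records that t3 did not occur as a consequence of t1 and t2 not occurring.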
The paper is devoted to solving the problem of assessing the quality of medical electronic services. A variety of quality dimensions and factors, as well as methods and models applied in different fields of activity for assessing service quality, is reviewed. The basic aspects, requirements and peculiarities of implementing medical electronic services are investigated. The results of this analysis, together with the set of information models developed for this paper to describe the processes of assessing the quality of the electronic service "Booking an appointment with a physician", allowed us to describe the methodology and to state the problem of assessing the quality of this service.
The paper is devoted to the development of websites for Russian municipal polyclinics and provides a selection of elements which should be posted on such a website. These elements are necessary to provide citizens with correct and ergonomic e-services. The insufficient development of the infrastructure of institutions of public and municipal administration (particularly healthcare institutions) in Russia made it necessary to analyze web resources used in different countries at different levels of providing medical services. The information resources of medical treatment facilities of the United Kingdom, of the United States of America and of the Federal Republic of Germany were researched separately for the three existing economic models of healthcare. A set of criteria for the assessment of medical web resources was developed.
The aim of this dissertation was to examine physiological parameters (heart rate variability measures) and biomechanical parameters (step features) as possible anticipating indicators of psychological mood states. 420 participants (275 male and 145 female, age: M = 34.7 years ± 9.7) engaged in a 60-minute slow endurance run while they were asked questions via a mobile answering and recording device. We measured several mood states, physiological measures, and biomechanical parameters. We used a latent growth curve analysis to examine the cross-lagged effects. Results demonstrated significant (p ≤ .05) relationships between biomechanical shoe features anticipating psychological mood states, as well as psychological mood states anticipating physiological parameters.
Next word prediction is the task of suggesting the most probable word a user will type next. Current approaches are based on the empirical analysis of corpora (large text files) resulting in probability distributions over the different sequences that occur in the corpus. The resulting language models are then used for predicting the most likely next word. State-of-the-art language models are based on n-grams and use smoothing algorithms like modified Kneser-Ney smoothing in order to reduce the data sparsity by adjusting the probability distribution of unseen sequences. Previous research has shown that building word pairs with different distances by inserting wildcard words into the sequences can result in better predictions by further reducing data sparsity. The aim of this thesis is to formalize this novel approach and implement it by also including modified Kneser-Ney smoothing.
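The wildcard idea can be sketched as counting word pairs at varying distances and combining their relative frequencies into a next-word score; the sketch below uses a naive uniform interpolation and omits the modified Kneser-Ney smoothing that the thesis incorporates:

    # Sketch of wildcard word pairs: count "w1 _ w2", "w1 _ _ w2", ...
    # at distances 1..max_distance and score candidate next words.
    from collections import defaultdict

    def count_skip_pairs(tokens, max_distance=3):
        counts = defaultdict(lambda: defaultdict(int))
        for i, w1 in enumerate(tokens):
            for d in range(1, max_distance + 1):   # d=1 is a normal bigram
                if i + d < len(tokens):
                    counts[(w1, d)][tokens[i + d]] += 1
        return counts

    def predict_next(counts, history, max_distance=3):
        scores = defaultdict(float)
        for d in range(1, max_distance + 1):
            if d <= len(history):
                w1 = history[-d]                   # word at distance d back
                pairs = counts.get((w1, d), {})
                total = sum(pairs.values())
                for w2, c in pairs.items():
                    scores[w2] += c / total        # naive uniform weighting
        return max(scores, key=scores.get) if scores else None

    tokens = "the cat sat on the mat the cat lay on the mat".split()
    counts = count_skip_pairs(tokens)
    print(predict_next(counts, ["on", "the"]))     # 'mat'

Longer-distance pairs contribute evidence even when the exact n-gram was never seen, which is how the approach reduces data sparsity beyond what smoothing alone achieves.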
We present the conceptual and technological foundations of a distributed natural language interface employing a graph-based parsing approach. The parsing model developed in this thesis generates a semantic representation of a natural language query in a 3-staged, transition-based process using probabilistic patterns. The semantic representation of a natural language query is modeled in terms of a graph, which represents entities as nodes connected by edges representing relations between entities. The presented system architecture provides the concept of a natural language interface that is both independent in terms of the included vocabularies for parsing the syntax and semantics of the input query, as well as the knowledge sources that are consulted for retrieving search results. This functionality is achieved by modularizing the system's components, addressing external data sources by flexible modules which can be modified at runtime. We evaluate the system's performance by testing the accuracy of the syntactic parser, the precision of the retrieved search results as well as the speed of the prototype.
The structure of soil organic matter (SOM) has been a hot topic of discussion in the scientific community for several decades. The most discussed models, among many, are the polymer model and the supramolecular model. While the former considers SOM as macromolecules consisting of amorphous and crystalline domains, the latter explains SOM as a physicochemical entity dominated by weak hydrophobic and H-bond interactions at the secondary level, which hold individual molecules of the primary structure together. The weak forces at the secondary level impart the characteristic mobility of SOM. A very important consequence of this multidimensional formulation is that the physicochemical structure plays a crucial role in most biogeochemical functions of SOM, apart from the chemical composition. The recently introduced concept of cation- and water-molecule-mediated bridges between OM molecular segments (CaB and WaMB, respectively) evolved from this physicochemical understanding of SOM structure. Even though several pieces of indirect evidence for CaB and WaMB were produced during the last years, no clear-cut understanding of these processes has been achieved yet. Experimental difficulty due to the overlapping effects of equally important CaB-governing parameters such as pH and competing cations poses a huge challenge in investigating CaB-related influences. This thesis, therefore, aims to validate an experimental set-up for inducing CaB within OM structures and assessing it from various chemical and physicochemical aspects.
The method involved the removal of omnipresent cations and adjustment of pH before cation addition. This helped to separate pH effects and cation effects. Based on the results obtained for two different types of organic matter, it can be deduced that multivalent cations can cross-link SOM, given that the functional group density of the SOM material is high enough for the functional groups to be arranged in sufficient spatial proximity to each other. Physicochemical structural reorganisation during aging causes the formation of more and/or stronger CaB and WaMB. As with inducing CaB directly after cation treatment, cation size and valency were found to be determinant for the aging effect as well. A strongly cross-linked system in the beginning is less vulnerable to structural changes and undergoes aging with lower intensity than an initially weakly cross-linked system. Responsible for the structural changes is the inherent mobility of SOM within its physicochemical assemblage. Thus, the information on the structural requirements of CaB and its consequences for OM matrix rigidity will help to gain insight into the physicochemical SOM structure. Additionally, organic matter quality (assessed by thermal analysis) and the pore structure of SOM formed in a set of artificial soils showed that mineral materials are important for the chemical nature of SOM molecules, but not for the physical structure of organo-mineral associations, at least after several months of SOM development.
Furthermore, nanothermal analysis using atomic force microscopy (AFM-nTA) was applied to soils for the first time to reveal nanoscale thermal properties and their spatial distribution on nano- and micrometer scales. This helped to identify physicochemical processes, such as the disruption of WaMB, in low-organic soils, in which bulk methods fail due to their low sensitivity. Further, various types of materials occurring in soils were distinguished with high resolution by advanced application of the method in combination with other AFM parameters. Attempts were made to identify various materials with the use of defined test materials. Above all, the method is capable of revealing microspatial heterogeneity on sample surfaces, which could, for example, help in understanding process-relevant hotspots.
This thesis thus contributes to the scientific understanding of physicochemical structural dynamics via cross-linking by cations and via nanoscale thermal properties. The direct investigation of CaB demonstrated here will potentially help to make a big leap in knowledge about this interaction. The observed aging effects add to the supramolecular understanding of SOM. By introducing nanothermal analysis to the field of soil science, it is made possible to face the problem of heterogeneity and spatial distribution of thermal characteristics. Another important achievement of AFM-nTA is that it can be used to detect physicochemical processes of low intensity.
In this thesis, I study the spectral characteristics of large dynamic networks and formulate the spectral evolution model. The spectral evolution model applies to networks that evolve over time, and describes their spectral decompositions such as the eigenvalue and singular value decomposition. The spectral evolution model states that over time, the eigenvalues of a network change while its eigenvectors stay approximately constant.
I validate the spectral evolution model empirically on over a hundred network datasets, and theoretically by showing that it generalizes a certain number of known link prediction functions, including graph kernels, path counting methods, rank reduction and triangle closing. The collection of datasets I use contains 118 distinct network datasets. One dataset, the signed social network of the Slashdot Zoo, was specifically extracted during work on this thesis. I also show that the spectral evolution model can be understood as a generalization of the preferential attachment model, if we consider growth in latent dimensions of a network individually. As applications of the spectral evolution model, I introduce two new link prediction algorithms that can be used for recommender systems, search engines, collaborative filtering, rating prediction, link sign prediction and more.
The first link prediction algorithm reduces to a one-dimensional curve fitting problem from which a spectral transformation is learned. The second method uses extrapolation of eigenvalues to predict future eigenvalues. As special cases, I show that the spectral evolution model applies to directed, undirected, weighted, unweighted, signed and bipartite networks. For signed graphs, I introduce new applications of the Laplacian matrix for graph drawing and spectral clustering, and describe new Laplacian graph kernels. I also define the algebraic conflict, a measure of the conflict present in a signed graph based on the signed graph Laplacian. I describe the problem of link sign prediction spectrally, and introduce the signed resistance distance. For bipartite and directed graphs, I introduce the hyperbolic sine and odd Neumann kernels, which generalize the exponential and Neumann kernels for undirected unipartite graphs. I show that the problems of directed and bipartite link prediction are related by the fact that both can be solved by considering spectral evolution in the singular value decomposition.
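A compressed sketch of the first algorithm's idea, with a plain polynomial fit standing in for the learned spectral transformation and a simplified one-step extrapolation in place of the second algorithm, might look as follows (illustrative only):

    # Link prediction via spectral evolution: keep the eigenvectors of an
    # old snapshot fixed, fit a function of the eigenvalues to a newer
    # snapshot, and extrapolate to score future links. Both snapshots are
    # symmetric adjacency matrices over the same node set.
    import numpy as np

    def spectral_link_scores(A_old, A_new, rank=10, degree=3):
        # Reduced eigendecomposition of the old snapshot.
        vals, vecs = np.linalg.eigh(A_old)
        idx = np.argsort(-np.abs(vals))[:rank]
        lam, U = vals[idx], vecs[:, idx]

        # Project the new snapshot onto the old eigenvectors: under the
        # spectral evolution model only these eigenvalues change.
        lam_new = np.array([U[:, k] @ A_new @ U[:, k] for k in range(rank)])

        # One-dimensional curve fit lam_new ~= f(lam), then apply f once
        # more as a crude one-step extrapolation into the future.
        coeffs = np.polyfit(lam, lam_new, degree)
        lam_pred = np.polyval(coeffs, lam_new)

        return U @ np.diag(lam_pred) @ U.T    # score matrix for future links

High-scoring node pairs that are not yet linked are the predicted links; the same scheme carries over to directed and bipartite graphs by replacing the eigendecomposition with the singular value decomposition.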
The purpose of this research is to examine various existing cloud-based Internet of Things (IoT) development platforms and to evaluate one platform (IBM Watson IoT) in detail using a use case scenario. The Internet of Things is an emerging technology with the vision of interconnecting the virtual world (e.g. clouds, social networks) and the physical world (e.g. devices, cars, fridges, people, animals) through Internet technology. For example, the IoT concept of smart cities, which has the objective of improving the efficiency and development of business, social and cultural services in the city, can be realized by using sensors, actuators, clouds and mobile devices (IEEE, 2015). A sensor (e.g. a temperature sensor) in a building (physical world) can send real-time data to the IoT cloud platform (virtual world), where it can be monitored, stored, analysed, or used to trigger some action (e.g. turn on the cooling system in the building if the temperature exceeds a threshold). Although the IoT creates vast opportunities in different areas (e.g. transportation, healthcare, the manufacturing industry), it also brings challenges such as standardisation, interoperability, scalability, security and privacy. In this research report, IoT concepts and related key issues are discussed.
The focus of this research is to compare various cloud-based IoT platforms in order to understand the business and technical features they offer. The cloud-based IoT platforms from IBM, Google, Microsoft, PTC and Amazon have been studied.
To design the research, the Design Science Research (DSR) methodology has been followed, and to model the real-time IoT system the IOT-A modelling approach has been used.
The comparison of different cloud-based IoT development platforms shows that all of the studied platforms provide basic IoT functionalities such as connecting IoT devices to the cloud-based IoT platform, collecting data from the IoT devices, data storage and data analytics. However, IBM's IoT platform appears to have an edge over the other platforms studied in this research because of its integrated run-time environment, which also makes it more developer-friendly. Therefore, IBM Watson IoT for Bluemix was selected for further examination of its capabilities. The IBM Watson IoT for Bluemix offerings include analytics, risk management, connect and information management. A use case was implemented to assess the capabilities that the IBM Watson IoT platform offers. Digital artifacts (i.e. applications) were produced to evaluate IBM's IoT solution. The results show that IBM offers a very scalable, developer- and deployment-friendly IoT platform. Its cognitive, contextual and predictive analytics provide promising functionality that can be used to gain insights from the IoT data transmitted by sensors and other IoT devices.
Larvae of Cx. pipiens co-occurred with Cladocera, but the latter established later in time. Biotope structure influenced the time of species occurrence, with ponds in reed-covered wetlands favouring crustacean development, while ponds in grassland biotopes favoured colonization by mosquito larvae. The mechanisms driving the negative effect of crustaceans on mosquito larvae were investigated in an experiment under artificial conditions. Crustacean communities were found to reduce both oviposition and larval development of Cx. pipiens. Crustacean communities of high taxa diversity, including both predatory and competing crustaceans, were more effective than crustacean communities dominated by single taxa. The presence of crustacean communities characterised by high taxa diversity increased the sensitivity of Cx. pipiens larvae towards Bti and prolonged the time of recolonization. In a final step, the combined approach, using Bti and crustaceans, was evaluated under field conditions. The joint application of Bti and crustaceans was found to reduce mosquito larval populations over the whole observation period, while single application of Bti caused only a short-term reduction of mosquito larvae. Single application of crustaceans had no significant effect, because high abundances of previously established mosquito larvae impeded the propagation of crustaceans. In the combined treatment, mosquito larvae were reduced by the Bti application and hence crustaceans were able to proliferate without disturbance by interspecific competition. In conclusion, natural competitors were found to have a strong negative impact on mosquito larval populations. However, a time span of about two weeks has to be bridged before crustacean communities reach a level sufficient for mosquito control. The results of a combined approach, complementing the short-term effect of the biological insecticide Bti with the long-term effect of crustaceans, were promising. Using natural competitors within an integrated control strategy could be an important tool for an effective, environmentally friendly and sustainable mosquito management.
This paper describes results of the simulation of social objects: the dependence of schoolchildren's professional abilities on their personal characteristics. The simulation tool is artificial neural network (ANN) technology. Results of a comparison of the time expenditure for training the ANN and for calculating the weight coefficients with sequential and parallel algorithms, respectively, are presented.
An estimation of the number of multiplication and addition operations for training artificial neural networks by means of sequential and parallel algorithms on a computer cluster is carried out. An evaluation of the efficiency of these algorithms is developed. The multilayer perceptron, the Volterra network and the cascade-correlation network are used as structures of the artificial neural networks. Different methods of non-linear programming, such as gradient and non-gradient methods, are used for the calculation of the weight coefficients.
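As a back-of-the-envelope illustration of such operation counting (not the paper's exact accounting), the multiply-accumulate operations for training a multilayer perceptron can be estimated as follows:

    # Rough count of multiply-accumulate operations (MACs) for MLP training.
    def mlp_training_macs(layer_sizes, n_samples, n_epochs):
        # Forward pass: each weight contributes one multiply-add per sample.
        macs_forward = sum(n_in * n_out
                           for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
        # Backpropagation roughly doubles this again (delta propagation
        # plus gradient accumulation), giving ~3x the forward cost.
        macs_per_sample = 3 * macs_forward
        return macs_per_sample * n_samples * n_epochs

    # Example: a 10-20-5 perceptron, 1000 samples, 100 epochs.
    print(mlp_training_macs([10, 20, 5], 1_000, 100))  # 90,000,000 MACs

Comparing such counts against the measured wall-clock time of the serial and parallel implementations is one way to judge how close an algorithm comes to the cluster's attainable throughput.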
Nanotemplates for the combined structural and functional analysis of membrane-associated proteins
(2019)
Plasma membranes are essential for life because they give cells an identity. Plasma membranes are almost impermeable to fluids and substances; still, transport between inside and outside needs to be possible. An important transport mechanism is endocytosis. This mechanism relies on membrane-associated proteins that sense and induce curvature in the plasma membrane. However, the physics and structural dynamics behind proteins acting on membranes are not well understood. There is a standard in vitro method to investigate membrane-associated proteins sensing spherical geometries: they are incubated on unilamellar vesicles. This procedure allows these proteins to be analyzed in their bound state. This approach is inappropriate for GRAF1 (GTPase Regulator Associated with Focal Adhesion Kinase-1), a key player in endocytosis, because it senses tubular geometries instead. However, GRAF1 extrudes lipid tubes from vesicles that can be analyzed. Still, this is a limited method, because these tubes suffer from inhomogeneity and do not enable the observation of intermediate and lower-concentration binding states. To overcome this issue, the protein can be incubated on pre-tubular structures called nanotemplates. There have been studies using carbon nanotubes and Galactosylceramide lipid tubes as nanotemplates, but these approaches require complex chemical modifications or expensive components and are not necessarily flexible. In this work we present a simple and easy new approach to preparing nanotemplates using a Folch lipid mixture. We show, on the basis of BPG, a truncate of GRAF1, that our nanotemplates are suitable for Cryo-EM and that it is possible to use IHRSR (Iterative Helical Real Space Reconstruction) to analyze the structure of BPG in its bound state. Moreover, the suitability for Cryo-EM allows plunge freezing to be used to interrupt the incubation on our nanotemplates abruptly. This enables the analysis of intermediate binding states in order to understand the binding process.
Large amounts of qualitative data make the utilization of computer-assisted methods for their analysis inevitable. In this thesis, Text Mining as an interdisciplinary approach, as well as the methods established in the empirical social sciences for analyzing written utterances, are introduced. On this basis, a process of extracting concept networks from texts is outlined and the possibilities of utilizing natural language processing methods within it are highlighted. The core of this process is text processing, for whose execution software solutions supporting manual as well as automated work are necessary. The requirements to be met by these solutions, against the background of the initiating project GLODERS, which is devoted to investigating extortion racket systems as part of the global financial system, are presented, and their fulfilment by the two most prominent candidates is reviewed. The gap between theory and practical application is closed by a prototypical application of the method to a data set of the research project utilizing the two given software solutions.
Molecular dynamics (MD), as a field of molecular modelling, has great potential to revolutionize our knowledge and understanding of complex macromolecular structures. Its field of application is huge, reaching from computational chemistry and biology over material sciences to computer-aided drug design. On the one hand, this thesis provides insights into the underlying physical concepts of molecular dynamics simulations and how they are applied in the MD algorithm, and it also briefly illustrates different approaches, for instance the molecular mechanics and molecular quantum mechanics approaches.
On the other hand, a custom all-atom MD algorithm is implemented, utilizing and simplifying a version of the molecular mechanics based AMBER force field published by Cornell et al. (1995). This simulation algorithm is then used to show, by the example of oxytocin, how individual energy terms of a force field function. As a result it has been observed that applying the bond stretch forces alone caused the molecule to be compacted, first in certain regions and then as a whole, and that with the addition of more energy terms the molecule moved with increasing flexibility.
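The bond-stretch term referred to here has the harmonic AMBER form E = K_r (r - r_eq)^2; a minimal sketch of the energy and the resulting pair forces, with placeholder parameter values rather than actual AMBER constants, could look like this:

    # Harmonic bond-stretch term in the AMBER functional form,
    # E = K_r * (r - r_eq)^2, and the pair forces it produces.
    import numpy as np

    def bond_stretch(pos_i, pos_j, k_r, r_eq):
        """Return (energy, force_on_i, force_on_j) for one bond."""
        d = pos_j - pos_i
        r = np.linalg.norm(d)
        energy = k_r * (r - r_eq) ** 2
        # F_i = -dE/dpos_i = 2 * k_r * (r - r_eq) * d/r: pulls atom i
        # toward j when the bond is stretched, pushes it away when
        # compressed -- which is why this term alone compacts a molecule
        # whose bonds start out longer than their equilibrium lengths.
        f_i = 2.0 * k_r * (r - r_eq) * d / r
        return energy, f_i, -f_i

    pos_i = np.array([0.0, 0.0, 0.0])
    pos_j = np.array([1.2, 0.0, 0.0])        # stretched beyond r_eq = 1.0
    e, f_i, f_j = bond_stretch(pos_i, pos_j, k_r=300.0, r_eq=1.0)
    print(e, f_i)  # positive energy; force on i points toward j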
In order to enhance a company's appeal for potential employees and improve the satisfaction of already salaried workers, it is necessary to offer a variety of work-life balance measures. But as their implementation causes time and financial costs, a prioritization of measures is needed. To derive a recommendation for companies, this study is guided by the questions of whether there are work-life balance measures which have more impact on employee satisfaction than others, how big the relative impact of work-life balance measures on job satisfaction is in comparison to other work and private life variables, whether there is a relation between the effectiveness of measures and their use, and whether there is a difference between the measures which are most important from the employees' perspective and the companies' offers.
These questions are formulated in eight research hypotheses, which are examined in a quantitative research design with online survey data from 289 employees of fifteen different German companies. A hierarchy of the effectiveness of measures with respect to job satisfaction is formed, and the relative impact in comparison to other variables is investigated, using a multiple regression analysis, whilst the differences between employees' expectations and the availability of offers are examined with t-tests.
Support in childcare, support in voluntary activities and teambuilding events have a significantly higher impact on job satisfaction than other work-life balance measures, and their potential use is higher than their actual use, which leads to the conclusion that there is still potential for companies to improve their employees' satisfaction by implementing these measures. In addition, flexible work hours, flexible work locations and free time and overtime accounts are the most important measures from the employees' point of view and are already widely offered by the surveyed companies. In general, the overall use of the available measures and the quantity of offered measures are more important with regard to job satisfaction than the specific kind of measure. Furthermore, work-life balance measures are more important for the job satisfaction of younger people.
The present thesis investigates attitudes and prosocial behavior between workgroups from a social identity and intergroup contact perspective. Based on the Common In-group Identity Model (CIIM; Gaertner & Dovidio, 2000), it is hypothesized that "optimal" conditions for contact (Allport, 1954) create a common identity at the organizational level which motivates workgroups to cooperate and show organizational citizenship behavior (OCB) rather than intergroup bias. Predictions based on the CIIM are extended with hypotheses derived from the In-group Projection Model (IPM; Mummendey & Wenzel, 1999) and the Self-Categorization Model of Group Norms (Terry & Hogg, 1996). Hypotheses are tested with data from N1 = 281 employees of N2 = 49 different workgroups and their workgroup managers of a German mail-order company (Study 1). Results indicate that group- and individual-level contact conditions are predictive of lower levels of intergroup bias and higher levels of cooperation and helping behavior. A common in-group representation mediates the effect on out-group attitudes and intergroup cooperation. In addition, the effect of a common in-group representation on intergroup bias is moderated by relative prototypicality, as predicted by the IPM, and the effect of prosocial group norms on helping behavior is moderated by workgroup identification, as predicted by the Self-Categorization Model of Group Norms. A longitudinal study with Ntotal = 57 members of different student project groups replicates the finding that contact under "optimal" conditions reduces intergroup bias and increases prosocial behavior between organizational groups. However, a common in-group representation is not found to mediate this effect in Study 2. Initial findings also indicate that individual-level variables, such as helping behavior toward members of another workgroup, may be better accounted for by variables at the same level of categorization (cf. Haslam, 2004). Thus, contact in a context that makes personal identities of workgroup members salient (i.e., decategorization) may be more predictive of interpersonal prosocial behavior, while contact in a context that makes workgroup identities salient (i.e., categorization) may be more predictive of intergroup prosocial behavior (cf. Tajfel, 1978). Further data from Study 1 support such a context-specific effect of contact between workgroups on interpersonal and intergroup prosocial behavior, respectively. In the last step, a temporal integration of the contact contexts that lead to decategorization, categorization, or recategorization is examined based on the Longitudinal Contact Model (Pettigrew, 1998). A first indication that a temporal sequence from decategorization via categorization to recategorization may be particularly effective in fostering intergroup cooperation is obtained with data from Study 2. In order to provide a heuristic model for research on prosocial behavior between workgroups, findings are integrated into a Context-Specific Contact Model. The model proposes specific effects of contact in different contexts on prosocial behavior at different levels of categorization. Possible mediator and moderator processes are suggested. A number of implications for theory, future research and the management of relations between workgroups are discussed.
This thesis addresses reduced basis methods for parametrized quasilinear elliptic and parabolic partial differential equations with a strongly monotone differential operator. It presents all of the ingredients of the reduced basis method: basis generation for the reduced basis approximation, certification of the approximation error by suitable a-posteriori error control, and an Offline-Online decomposition. The methodology is further applied to the magnetostatic and magnetoquasistatic approximations of Maxwell's equations, and its validity is confirmed by numerical examples.
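For orientation, the generic shape of such a problem and its residual-based a-posteriori bound can be sketched as follows; this is the standard textbook form for strongly monotone operators, not a reproduction of the thesis's specific estimators.

```latex
% Parametrized weak form: for each parameter \mu, find u(\mu) \in V with
%   a(u(\mu), v; \mu) = f(v; \mu)   for all v \in V,
% approximated by u_N(\mu) in a low-dimensional reduced space V_N \subset V.
% Strong monotonicity (with constant \nu(\mu) > 0) yields the residual bound
\[
  \| u(\mu) - u_N(\mu) \|_V \;\le\; \frac{\| r(\cdot\,;\mu) \|_{V'}}{\nu(\mu)},
  \qquad r(v;\mu) := f(v;\mu) - a(u_N(\mu), v;\mu).
\]
```

The Offline-Online decomposition then precomputes all parameter-independent quantities so that, online, both u_N(mu) and the bound are available at a cost independent of the original discretization size.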
Investigating the environmental fate of iodinated X-ray contrast media in the urban water cycle
(2010)
Iodinated X-ray contrast media (ICM) are a group of emerging contaminants which have been detected at elevated concentrations in the aquatic environment. These compounds are excreted unmetabolized into hospital wastewater and eventually treated at wastewater treatment plants (WWTPs). The removal of ICM in WWTPs has not been very effective, and the ICM therefore enter the aquatic environment via WWTP effluent discharges. Research has investigated the removal of selected ICM via abiotic and biotic processes; however, limited work has attempted to determine the fate of these compounds once released into the environment. This thesis investigates the biotransformation of four selected ICM (diatrizoate, iohexol, iomeprol, and iopamidol) in aerobic soil-water and sediment-water systems as well as in different environmental matrices. Iohexol, iomeprol and iopamidol were biotransformed to several transformation products (TPs) in the aerobic batch systems, while no biotransformation was observed for the ionic ICM, diatrizoate. In total, 34 TPs of the three non-ionic ICM were identified. The combination of semi-preparative HPLC-UV and hybrid triple quadrupole-linear ion trap mass spectrometry (Qq-LIT-MS) was found to be an accurate approach for the structural elucidation of ICM TPs. The TPs resulted from microbial transformation occurring at the side chains of the parent ICM, with the iodinated aromatic ring remaining unmodified.
The objective of this contribution is to conceptually analyze the potential of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting respective action fields of entrepreneurial learning. In addition, we reveal that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Diet-related effects of antimicrobials in aquatic decomposer-shredder and periphyton-grazer systems
(2022)
Leaf-associated microbial decomposers as well as periphyton serve as important food sources for detritivorous and herbivorous macroinvertebrates (shredders and grazers) in streams. Shredders and grazers, in turn, not only provide collectors with food but also serve as prey for predators. Therefore, decomposer-shredder and periphyton-grazer systems (here summarized as freshwater biofilm-consumer systems) are highly important for the energy and nutrient supply in heterotrophic and autotrophic stream food webs. However, both systems can be affected by chemical stressors, amongst which antimicrobials (e.g., antibiotics, fungicides and algaecides) are of particular concern. Antimicrobials can impair shredders and grazers not only via waterborne exposure (waterborne effect pathway) but also through dietary exposure and microorganism-mediated alterations in the food quality of their diet (dietary effect pathway). Even though the relevance of the latter pathway has received more attention in recent years, little is known about the mechanisms that are responsible for the observed effects in shredders and grazers. Therefore, the first objective of this thesis was to broaden the knowledge of indirect antimicrobial effects in a model shredder and grazer via the dietary pathway. Moreover, although freshwater biofilm-consumer systems are most likely exposed to antimicrobial mixtures composed of different stressor groups, virtually nothing is known about these mixture effects in both systems. Therefore, the second objective was to assess and predict diet-related antimicrobial mixture effects in a model freshwater biofilm-consumer system. In this thesis, positive diet-related effects of a model antibiotic on the energy processing and physiology of the shredder Gammarus fossarum were observed. They were probably triggered by shifts in the leaf-associated microbial community in favor of aquatic fungi, which increased the food quality of leaves for the shredder. In contrast, a model fungicide induced negative effects on the energy processing of G. fossarum via the dietary pathway, which can be explained by negative impacts on the microbial decomposition efficiency leading to a reduced food quality of leaf litter for gammarids. Regarding diet-related antimicrobial effects in periphyton-grazer systems, a model algaecide altered the periphyton community composition by increasing nutritious and palatable algae. This resulted in enhanced consumption and physiological fitness of the grazer Physella acuta. Finally, it was shown that complex horizontal interactions among leaf-associated microorganisms are involved, making diet-related antimicrobial mixture effects in the shredder G. fossarum difficult to predict. Thus, this thesis provides new insights into indirect diet-related effects of antimicrobials on shredders and grazers and demonstrates the uncertainties of antimicrobial mixture effect predictions for freshwater biofilm-consumer systems. Moreover, the findings of this thesis are not only informative for regulatory authorities, as indirect effects and effects of mixtures across chemical classes are not considered in the environmental risk assessment of chemical substances, but also stimulate future research to close the knowledge gaps identified during this work.
Wild boars are among the most widespread ungulates in the world. They are characterized by a remarkable adaptation to their environment, mainly due to their omnivorous diet. The wild boar population in Germany has increased during the past three decades. Nowadays their high density leads to problems in agricultural areas due to crop damage, and wild boar play a significant role as a vector for diseases such as classical swine fever. For effective population management, information on population size is of crucial importance. Different traditional methods exist to estimate population sizes, such as direct sightings, faecal counts or hunting harvests, which provide only relative estimates and population trends. Absolute population sizes can be obtained with a Capture-Mark-Recapture (CMR) approach. However, capturing wild boars is difficult to realize and costly in terms of personnel and field effort.
Furthermore, capture probabilities are heterogeneous due to the variable behaviour of individuals, influenced by the age, sex, and experience of the animals. Non-invasive genetic methods are a promising complement to the traditional methods of population size estimation, particularly for wild boar. These methods reduce stress and capture bias and increase the number of re-captures. Faeces proved to be a suitable DNA source for wild boar genotyping, due to almost equal capture probability across individuals. However, working with faeces entails difficulties such as low DNA quality and quantity and genotyping errors such as allelic dropout and false alleles.
The main aim of the present study was to develop a reliable, cost-efficient, reproducible and practicable method for wild boar genotyping. This method should provide a reliable dataset of genotypes obtained from the collected faeces samples. Individual identification forms the basis for an improved mark-recapture approach. As there is no sound method for absolute population counts in free-living wild boar, reference values for the validation of this new approach are missing. Therefore, different routines to reduce and to assess genotyping errors were compared within this thesis. To maximize the amplification rate, the storage, the extraction methods and the PCR procedure were optimised. A step-by-step procedure was evaluated to determine the minimum number of microsatellites (MS) required for reliable individual identification, including a test with family groups (female and embryo tissue) to distinguish even between close relatives. A multiple-tubes approach, post-amplification checking and different correction procedures were applied to reduce genotyping errors. In order to quantify the real genotyping error rates (GER) of datasets derived from sampling in the Palatinate Forest in western Germany, different methods for GER determination were compared with each other, yielding GERs between 0% and 57.5%. As a consequence, stricter criteria for the multiple-tubes approach and an increased number of repetitions for homozygous samples were used. As an additional method validation, a blind test was implemented to verify the reliability of the genotyping and error checking procedure. Finally, a strict and practicable protocol for the lab procedure was developed, beginning with faecal sample collection and ending with a reliable dataset with genotypes for each sample.
The results of the presented method were derived from two sampling periods in a 4000 ha area in the Palatinate Forest in Rhineland-Palatinate in December 2006 and 2007. Both yielded wide confidence intervals (CI) and thus imprecise estimates (e.g., the 2006 population size was estimated at 215 individuals with a 95% CI of 156-314, and the 2007 population size at 415 with a 95% CI of 318-561) due to small sample sizes (n = 141 in 2006 and n = 326 in 2007), numbers of successfully analysed samples (n = 89 in 2006 and n = 156 in 2007) and recapture numbers (n = 12 in 2006 and n = 24 in 2007). Furthermore, the population estimates, even at their lowest values, were considerably higher than previously assumed from hunting statistics, which implies an ineffective hunting regime in the study area. To obtain more precise population size estimates in the future, an increase in sample size is essential, because absolute and reliable estimates are highly desirable for wildlife management and the control of disease transmission. Nevertheless, the method for individual genotyping of wild boars evaluated in this thesis could be successfully established, resulting in reliable datasets for population estimation modelling with sufficiently low GER.
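To illustrate why small sample and recapture numbers translate into wide confidence intervals, here is the classical Chapman variant of the Lincoln-Petersen CMR estimator in Python; the input numbers are invented, and the thesis's actual estimation model is not reproduced here.

```python
import math

def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator with ~95% CI."""
    m, c, r = marked, captured, recaptured
    n_hat = (m + 1) * (c + 1) / (r + 1) - 1
    # Approximate variance of the Chapman estimator (Seber-style formula).
    var = ((m + 1) * (c + 1) * (m - r) * (c - r)) / ((r + 1) ** 2 * (r + 2))
    half_width = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half_width, n_hat + half_width)

# Few recaptures (r) relative to the sample sizes produce wide intervals:
print(chapman_estimate(marked=90, captured=90, recaptured=12))
```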
The establishment of aquatic alien species can strongly affect community and food web structure of the invaded systems and thus represents a major threat to native biodiversity. One of the most important aquatic invasive species in European rivers is the Ponto-Caspian amphipod Dikerogammarus villosus. The species invaded most of the major European waterways within two decades, often associated with a decline of many other macroinvertebrate species, including other amphipods. Based on laboratory results, predation by the so-called 'killer shrimp' is often regarded as the key driver of the observed displacement effects, but recent studies indicated a minor relevance of predation by D. villosus in the field. To allow the determination of exact predator-prey interactions from field samples, I established 22 group-specific rDNA primers for freshwater taxa suitable for prey species identification in dietary samples (Chapter II) and an approach for screening D. villosus gut contents using 16 of these primers. Combining genetic gut content analyses based on these primers with stable isotope analyses, I examined the importance of intraguild predation (IGP) by D. villosus, which is often assumed to be the key driver of the displacement of native amphipod species, at an invasion front of the species in Switzerland (Chapter III). The results of this study revealed a low importance of IGP during this particular D. villosus invasion and indicated an overall sparsely predacious feeding behaviour of the species. As the feeding behaviour of D. villosus is thought to differ between habitats and this study was only conducted at a few sampling sites of one river, I also investigated the role of predation by D. villosus at multiple sites of the River Rhine system, covering a broad range of microhabitats (Chapter IV). In keeping with the results from the invasion front, the results of this study strongly indicated a sparsely predacious but flexible feeding behaviour of D. villosus, even within the same microhabitat.
However, established populations of D. villosus have changed aquatic food webs and can be expected to affect aquatic-terrestrial energy fluxes. In Chapter V of my thesis, I present a field study investigating the impact of D. villosus on the diet of two riparian spider taxa. The results of this study indicate an effect of D. villosus on the terrestrial food web via cross-ecosystem resource flow.
In conclusion, D. villosus influences terrestrial food webs by altering cross-ecosystem resource fluxes, but it is rather an opportunistic omnivore than a predator in the field.
Biodiversity is not only threatened by habitat loss, climate change and pollution, but also by invasive species. The impact of introduced species is immense and causes substantial ecological and economic costs worldwide. With the start of the domestication of the African wildcat (Felis lybica) in the Near East, the transport of house cats (Felis catus) around the world as a commensal and domesticate began. The general aim of my thesis was to investigate the impact of invasive feral cats on native species as well as the underlying population genetic structures, diversity and phylogeography. This was studied in the context of the demographic history in Australia and Hawai'i. My studies confirmed that the main introductions of cats to Australia began in the 19th century via the ships of European settlers, traders and workers. Similarly, I was able to confirm cat introductions to Hawai'i by European traders and explorers, which have to the present day a devastating effect on Hawaiian endemic species. Likewise, cats are widespread across Australia, can be found on most islands and are recognized as one of the major threats to Australian native species. A selective feeding behaviour by invasive predators was found in one of my studies. This study additionally gives an indication of possible population recovery of small Western Australian vertebrate species after predator removal. Advancement and the combination of various management techniques allow, if adequately funded, a more efficient planning and implementation of eradication campaigns. Population genetic approaches are able to give insights into population genetic structure, diversity and kinship, thereby enabling management campaigns to be more cost-effective and successful. The absence of a pattern of isolation by distance between populations of Hawai'i and Australia indicated that trade routes, such as the "Golden Round" of the maritime fur trade, facilitated a link between far-off global cat populations. Multiple introductions to Australia and intermixing with domestic breed cats resulted in feral cat populations which show no signs of reduced genetic variability. My studies also revealed the advantages of bioproxies in combination with phylogeography, which enable the inference and reconstruction of introduction routes, history and origin of invasive species. Genetic signals of historically introduced genotypes are still discernible on islands with a low number of introductions over time and thereby low intermixing with domestic fancy breeds. Feral cats' adaptability as an invader was reconfirmed and possible underlying genetic mechanisms enabling their success as a global invader ("global supercat") are discussed. Research into the feralisation process of cats will provide new information regarding the domestication of cats and the genetic basis of feralisation, and allow additional insights into cats' adaptive potential.
Data visualization is an effective way to explore data. It helps people gain valuable insight into data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job. Users have to manually explore all possible visualizations and decide upon the ones that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and appealing appearance. To address these challenges we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a data set, creates a list of combinations of data attributes for scoring; and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated against these criteria. Only those visualizations that received sufficiently high scores are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed, executed and the results presented.
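The ranking module's core idea can be sketched as a weighted criteria score with a cut-off; the three criteria, weights and threshold below are invented stand-ins for the tool's actual ten-criteria utility metric.

```python
# Toy scoring of candidate charts: each satisfied criterion adds its weight,
# and only charts above a threshold are shown to the user.
def score_chart(chart):
    criteria = {
        "has_clear_focus": 2.0,
        "easy_to_decode":  1.5,
        "shows_trend":     1.0,
    }
    return sum(weight for name, weight in criteria.items() if chart.get(name))

candidates = [
    {"name": "bar: sales by region", "has_clear_focus": True, "easy_to_decode": True},
    {"name": "3d pie: all columns", "shows_trend": False},
]
THRESHOLD = 2.5
good = [c["name"] for c in candidates if score_chart(c) >= THRESHOLD]
print(good)  # only sufficiently scored visualizations survive the ranking
```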
The application of pesticides to agricultural areas can result in transport to adjacent non-target environments. In particular, surface water systems are likely to receive agricultural pesticide input. When pesticides enter aquatic environments, they may pose a substantial threat to the ecological integrity of surface water systems. To minimize the risk to non-target ecosystems the European Union prescribes an ecotoxicological risk assessment within the registration procedure of pesticides, which consists of an effect and an exposure assessment.
This thesis focuses on the evaluation of the exposure assessment and the implications to the complete regulatory risk assessment, and is based on four scientific publications. The main part of the thesis focuses on evaluation of the FOCUS modelling approach, which is used in regulatory risk assessment to predict pesticide surface water concentrations. This was done by comparing measured field concentrations (MFC) of agricultural insecticides (n = 466) and fungicides (n = 417) in surface water to respective predicted environmental concentrations (PEC) calculated with FOCUS step 1 to step 4 at two different levels of field relevance. MFCs were extracted from the scientific literature and were measured in field studies conducted primarily in Europe (publications 1 and 3).
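The core of this comparison can be pictured as a simple protectiveness check: count how often measured concentrations exceed the predictions. The values below are invented; the thesis works with hundreds of published MFCs.

```python
# Toy protectiveness check: a protective exposure model should rarely
# predict less than what is actually measured in the field.
import numpy as np

mfc = np.array([0.10, 0.25, 0.05, 0.80, 0.30])  # measured, e.g. in µg/L
pec = np.array([0.20, 0.15, 0.10, 0.50, 0.45])  # model predictions

underpredicted = mfc > pec
print(f"underpredicted: {underpredicted.mean():.0%} of samples")
# A protective exposure model would keep this fraction near zero.
```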
In addition, an alternative fugacity-based multimedia mass-balance model, which needs fewer input parameters and less computing effort, was used to calculate PECs for the same insecticide MFC dataset and compared to the FOCUS predictions (publication 3). Furthermore, FOCUS predictions were also conducted for veterinary pharmaceuticals in runoff from an experimental plot study, to assess the FOCUS predictions for a different class of chemicals with a different relevant entry pathway (publication 2).
In publication 4, the FOCUS step-3 approach was used to determine relevant insecticide exposure patterns. These patterns were then analysed with regard to different monitoring strategies and their implications for the environmental risk assessment.
The outcome of this thesis showed that the FOCUS modelling approach is neither protective nor appropriate in predicting insecticide and fungicide field concentrations. Up to one third of the MFCs were underpredicted by the model calculations, which means that the actual risk might be underestimated. Furthermore, the results show that a higher degree of realism even reduces the protectiveness of model results and that the model predictions are worse for highly hydrophobic and toxic pyrethroids.
In addition, the absence of any relationship between measured and predicted concentrations calls the general model performance into question (publications 1 and 3). Further analyses revealed that the deficiencies in protectiveness and predictiveness of the environmental exposure assessment might be even greater than shown in this thesis, because actual short-term peak concentrations are only detectable with an event-related sampling strategy (publication 4). However, it was shown that the PECs of a much simpler modelling approach are considerably more appropriate for the prediction of insecticide MFCs, especially for calculations with a higher field relevance (publication 3). The FOCUS approach also failed to predict concentrations of veterinary pharmaceuticals in runoff water (publication 2). In conclusion, the findings of this thesis show that there is an urgent need to improve the exposure predictions conducted in the environmental risk assessment of pesticides, as a group of highly relevant environmental chemicals, to ensure that their increasing use does not lead to further harmful effects in aquatic ecosystems.
Despite widespread plans of big companies like Amazon and Google to develop unmanned delivery drones, scholarly research in this field is scarce, especially in the information systems field. From technical and legal perspectives, drone delivery in last-mile scenarios is in a fairly mature state. However, estimates of user acceptance vary between high skepticism and exaggerated optimism. This research follows a mixed-method approach consisting of both qualitative and quantitative research to identify and test determinants of consumer delivery drone service adoption. The qualitative part rests on ten interviews with average consumers who use delivery services on a regular basis. Insights gained from the qualitative part were used to develop an online survey and to assess the influence of associated risks on adoption intentions. The quantitative results show that especially financial and physical risks impede drone delivery service adoption. Delivery companies that are currently thinking about providing a delivery drone service may find these results useful when evaluating usage behaviors in the future market for delivery drones.
API migration refers to the change of a used API to a different API in a program. A special case is wrapper-based API migration, in which the API change is done without touching the program: the old API is reimplemented in terms of the new one. This so-called wrapper has the interface of the reimplemented API but uses the implementation of the new one.
This is an interesting approach because the wrapper can be reused in every program that uses the old API.
To make this approach reproducible, we study a ranking-based method for implementing a wrapper, in which we always implement the method with the highest priority, where priorities are derived from failing test cases. Thus, we can reconstruct each implementation step.
We first develop an infrastructure to run and log test suites of Java projects that use an API, which we want to change.
We then build a wrapper for a given API using the ranking-based approach.
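The wrapper idea itself is language-agnostic and can be sketched briefly; the thesis targets Java, but for brevity the sketch below uses Python, and all class and method names are invented.

```python
# Wrapper-based API migration: the legacy interface is re-implemented on top
# of the new API, so client code keeps compiling and running unchanged.
class NewLogger:                        # the API we migrate to
    def write(self, level, message):
        print(f"[{level}] {message}")

class OldLoggerWrapper:                 # same interface as the legacy API
    def __init__(self):
        self._delegate = NewLogger()

    def info(self, message):            # legacy method, re-implemented
        self._delegate.write("INFO", message)

    def error(self, message):
        self._delegate.write("ERROR", message)

# Client code keeps calling the old interface without modification:
log = OldLoggerWrapper()
log.info("migration works without touching the client")
```

In the ranking-based method, such legacy methods would be implemented one at a time, in an order prioritized by the test cases that currently fail, so that each implementation step is reconstructible.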
The history of humankind is characterized by social conflict. Every conflict can be the starting point of social change or escalate into more destructive forms. The social conflict regarding the rising numbers of refugees and their acceptance that arose in most host countries in 2015 already took on destructive forms: in Germany, right-wing extremists attacked refugee shelters and even killed multiple people, including political leaders who openly supported refugees. Thus, incompatible expectancies and values of different parts of society led to violent action tendencies, which tremendously threaten intergroup relations. Psychological research has developed several interventions in past decades to improve intergroup relations, but they fall short, for example, when it comes to the inclusion of people with extreme attitudes and to precisely differentiating the potential prosocial outcomes of the interventions. Thus, this dissertation aimed to a) develop psychological interventions that can also be applied to people with more extreme attitudes, thereby putting a special emphasis on collecting a diverse sample; b) gain knowledge about target- and outcome-specific effects: who benefits from which intervention, and how can specific prosocial actions be predicted in order to develop interventions that guide needs-based actions; and c) shed light on potential underlying mechanisms of the interventions.
The dissertation will be introduced by the socio-political background that motivated the line of research pursued, before providing an overview of the conceptualization of social conflicts and potential psychological inhibitors and catalyzers for conflict transformation. Based on past research on socio-psychological interventions and their limitations, the aims of the dissertation will be presented in more detail, followed by a short summary of each manuscript. Overall, the present thesis comprises four manuscripts that were summarized in the general discussion into a road map for social-psychological interventions to put them into a broader perspective. The road map aspires to provide recommendations for increasing – either approach-oriented or support-oriented actions – by the socio-psychological interventions for a variety of host society groups depending on their pre-existing attitude towards refugees.
A Paradoxical Intervention targeting central beliefs of people with negative attitudes towards refugees influenced inhibitory and catalyzing factors for conflict transformation over the course of three experiments, thereby providing an effective tool to establish approach-oriented action tendencies, such as the willingness to get in contact with refugees. Further, the dissertation presents a novel mechanism, namely Cognitive Flexibility, which could explain the effects of Paradoxical Interventions reported in past research. By positively affecting a context-free mindset, the Paradoxical Intervention could promote more flexible thought processes in general, irrespective of the topic tackled in the Paradoxical Intervention itself. For people with rather positive attitudes, addressing emotions may increase specific support-oriented action tendencies. The dissertation provides evidence of a positive relation between moral outrage and hierarchy-challenging actions, such as solidarity-based collective action, and between sympathy and prosocial hierarchy-maintaining support-oriented actions, such as dependency-oriented helping. These exclusive relations between specific emotions and action intentions provide important implications for the theorizing of emotion-behavior relations, as well as for practical considerations. In addition, a diversity workshop conducted with future diplomats showed indirect effects on solidarity-based collective action via diversity perception and superordinate group identification, thereby extending past research by including action intentions and going beyond the focus on grassroots initiatives by presenting an implementable intervention for future leaders in a real-world context.
Taken together, this dissertation provides important insights for the development of socio-psychological interventions. By integrating a diverse sample, including members of institutions on meso- and macro-levels (non-governmental organizations and future politicians) of our society, this dissertation presents a unique multi-perspective of host society members on the social conflict of refugee acceptance and support. Thereby, this work contributes to theoretical and practical advancement of how social psychology can contribute not only to negative peace – by for example (indirectly) reducing support of violence against refugees – but also to positive peace – by for example investigating precursors of hierarchy-challenging actions that enable equal rights.
This thesis presents novel approaches for integrating context information into probabilistic models. Data from social media is typically associated with metadata, which includes context information such as timestamps, geographical coordinates or links to user profiles. Previous studies showed the benefits of using such context information in probabilistic models, e.g., improved predictive performance. In practice, probabilistic models which account for context information still play a minor role in data analysis. There are multiple reasons for this. Existing probabilistic models are often complex, their implementation is difficult, implementations are not publicly available, or the parameter estimation is computationally too expensive for large datasets. Additionally, existing models are typically created for a specific type of content and context and lack the flexibility to be applied to other data.
This thesis addresses these problems by introducing a general approach for modelling multiple, arbitrary context variables in probabilistic models and by providing efficient inference schemes and implementations.
In the first half of this thesis, the importance of context and the potential of context information for probabilistic modelling is shown theoretically and in practical examples. In the second half, the example of topic models is employed to introduce a novel approach to context modelling based on document clusters and adjacency relations in the context space. These models can cope with areas of sparse observations and allow, for the first time, the efficient, explicit modelling of arbitrary context variables, including cyclic and spherical context (such as temporal cycles or geographical coordinates). Using the novel three-level hierarchical multi-Dirichlet process presented in this thesis, the adjacency of context clusters can be exploited and multiple contexts can be modelled and weighted at the same time. Efficient inference schemes are derived which yield interpretable model parameters that allow analysing the relation between observations and context.
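Why cyclic context needs this special treatment can be seen with a toy example: mapped onto a circle, 23:00 and 01:00 are close neighbours, which a naive linear distance would miss. The snippet is purely illustrative and not the thesis's model.

```python
import math

def cyclic_embedding(value, period):
    """Map a cyclic variable (e.g. hour of day) onto the unit circle."""
    angle = 2.0 * math.pi * value / period
    return (math.cos(angle), math.sin(angle))

def euclidean(a, b):
    return math.dist(a, b)

h23, h01 = cyclic_embedding(23, 24), cyclic_embedding(1, 24)
print(abs(23 - 1))                    # linear distance: 22 (misleading)
print(round(euclidean(h23, h01), 3))  # circular distance: small, as desired
```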
We present the user-centered, iterative design of Mobile Facets, a mobile application for the faceted search and exploration of a large, multi-dimensional data set of social media on a touchscreen mobile phone. Mobile Facets provides retrieval of resources such as places, persons, organizations, and events from an integration of different open social media sources and professional content sources, namely Wikipedia, Eventful, Upcoming, geo-located Flickr photos, and GeoNames. The data is queried live from the data sources. Thus, in contrast to other approaches we do not know in advance the number and type of facets and data items the Mobile Facets application receives in a specific contextual situation. While developing Mobile Facets, we have continuously evaluated it with a small group of five users. We have conducted a task-based, formative evaluation of the final prototype with 12 subjects to show the applicability and usability of our approach for faceted search and exploration on a touchscreen mobile phone.
Software is vital for modern society. The efficient development of correct and reliable software is of ever-growing importance. An important technique to achieve this goal is deductive program verification: the construction of logical proofs that programs are correct. In this thesis, we address three important challenges for deductive verification on its way to a wider deployment in the industry: (1) verification of thread-based concurrent programs, (2) correctness management of verification systems, and (3) change management in the verification process. These are consistently brought up by practitioners when applying otherwise mature verification systems. The three challenges correspond to the three parts of this thesis (not counting the introductory first part, which provides technical background on the KeY verification approach). In the first part, we define a novel program logic for specifying correctness properties of object-oriented programs with unbounded thread-based concurrency. We also present a calculus for the above logic, which allows verifying actual Java programs. The calculus is based on symbolic execution, resulting in good understandability for the user. We describe the implementation of the calculus in the KeY verification system and present a case study. In the second part, we provide a first systematic survey and appraisal of factors involved in the reliability of formal reasoning. We elucidate the potential and limitations of self-application of formal methods in this area and give recommendations based on our experience in the design and operation of verification systems. In the third part, we show how the technique of similarity-based proof reuse can be applied to the problems of the industrial verification life cycle. We address issues (e.g., coping with changes in the proof system) that are important in verification practice, but have been neglected by research so far.
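The flavour of such a program logic can be indicated with a single schematic formula; this is the generic dynamic-logic shape of a partial-correctness obligation, not the exact calculus defined in the thesis.

```latex
% "Started from any state satisfying pre, every terminating run of
%  program p ends in a state satisfying post":
\[
  \mathit{pre} \;\rightarrow\; [\,p\,]\,\mathit{post}
\]
```

Symbolic execution discharges such obligations by stepping through p statement by statement, which is what makes the resulting proofs readable for the user.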
Algorithm development in the sense of algorithm engineering proceeds in cycles. The designed algorithm is analysed theoretically and then implemented. After practical evaluation, the design is refined based on the insights gained. Formal verification of the implementation, alongside practical evaluation, can improve this development process. With the Java Modeling Language (JML) and the KeY tool, a simple specification language and a user-friendly, automated verification tool are available. This thesis investigates to what extent the KeY tool is suitable for verifying more complex algorithms and what feedback algorithm designers can gain from the verification. The investigation is carried out on Dijkstra's algorithm for computing shortest paths in a graph. A concrete implementation of the standard algorithm and, subsequently, implementations of further variants are to be verified. This mimics the development process of the algorithm, looking for possible feedback in each iteration. While verifying the concrete implementation, we notice that it is necessary to first verify a more abstract implementation with simpler data structures. With the insights gained there, we can then continue the verification of the concrete implementation. The variants of the algorithm can also be verified thanks to the preceding verifications. The complexity of Dijkstra's algorithm causes the KeY tool some performance difficulties, which is why we have to reduce the degree of automation somewhat during the verification. On the other hand, it turns out that several useful pieces of feedback can be derived from the verification.
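For orientation, a plain textbook version of the verified algorithm is given below in Python; the thesis itself verifies Java implementations against JML specifications with KeY.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:            # stale heap entry, node already settled
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:    # relaxation step
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```

The loop invariants implicit in this code (settled nodes carry final distances; the heap orders the frontier) are exactly the kind of properties that have to be made explicit in a JML specification before a tool like KeY can verify the implementation.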
With the increasing importance and urgency of climate change, companies are challenged to contribute to sustainable development, especially by younger generations. However, existing corporate contributions have been criticized as insufficient, which could be caused in particular by a lack of employee engagement in corporate sustainability. In this context, gamification has been proposed and increasingly investigated in recent years as a promising, innovative tool to motivate sustainable employee behaviors in the workplace. However, there are few studies and applicable gamification solutions that address more than one specific sustainability issue and thus take a holistic perspective on sustainable behaviors in the workplace. Moreover, previous research lacks a comprehensive understanding of how different gamification elements elicit specific psychological effects, how these manifest in behavioral changes, and how these, in turn, cumulatively result in measurable corporate outcomes. The path from gamification as "input" to corporate sustainability as "output" thus remains unexplored.
This dissertation fills this gap by conceptualizing, designing, and evaluating a holistic gamified intervention that supports employees in various sustainable behaviors in their daily activities. The project uses a design science research approach that closely involves employees in the incremental development of the solution. As part of the iterative design process, this dissertation presents six studies to extend the theoretical understanding of gamification for sustainable employee behaviors. First, a comprehensive review of existing research on gamification for sustainable employee behavior is provided, analyzing gamification designs and results of previous studies and outlining an agenda for further research (Study 1). Theoretical foundations of research on gamification, serious games, and game-based learning (Study 2) and empirical design principles for gamification and persuasive systems (Study 3) are then systematically reviewed as a basis for the successful design of gamified applications. Subsequently, empirical studies explore employees’ motivations for sustainable behavior and illuminate their expectations for design features (Study 4), and identify contextual challenges and design dilemmas when implementing gamification in an organizational context (Study 5). Finally, a quantitative field study (Study 6) explores how different gamification designs influence sustainable employee behavior and corporate sustainability in organizations. Based on the findings, this dissertation presents a comprehensive framework of gamification for sustainable employee behavior that incorporates design, individual behavior, and organizational perspectives. Finally, building on these insights, it provides practical recommendations for designing gamification to encourage sustainable employee behavior at work.
Research has shown that people recognize personality, gender, inner states and many other items of information by simply observing human motion. Expressive human motion therefore seems to be a valuable non-verbal communication channel. On the quest for more believable characters in virtual three-dimensional simulations, a great amount of visual realism has been achieved during the last decades. However, while interacting with synthetic characters in real-time simulations, human users often still sense an unnatural stiffness. This disturbance in believability is generally caused by a lack of human behavior simulation. Expressive motions which convey personality and emotional states can be of great help in creating more plausible and life-like characters. This thesis explores the feasibility of automatically generating emotionally expressive animations from given neutral character motions. Such research is required since common animation methods, such as manual modeling or motion capturing techniques, are too costly to create all possible variations of motions needed for interactive character behavior. To investigate how emotions influence human motion, relevant literature from various research fields has been reviewed and certain motion rules and features have been extracted. These movement domains were validated in a motion analysis and implemented, in an exemplary manner, in a system capable of automating the expression of angry, sad and happy states in a virtual character through its body language. Finally, the results were evaluated in a user test.
Studies on the toxicity of chemical mixtures find that components at levels below no-observed-effect concentrations (NOECs) may cause toxicity resulting from the combined effects of mixed chemicals. However, chemical risk assessment frequently focuses on individual chemical substances, although most living organisms are substantially exposed to chemical mixtures rather than single substances. The concept of additive toxicity, in the form of the concentration addition (CA) and independent action (IA) models, is often applied to predict the mixture toxicity of similarly and dissimilarly acting chemicals, respectively. However, living organisms and the environment may be exposed to both types of chemicals at the same time and location. In addition, experimental acquisition of toxicity data for every conceivable mixture is infeasible, since the number of chemical combinations is extremely large. Therefore, an integrated model to predict mixture toxicity on the basis of single mixture components having various modes of toxic action (MoAs) needs to be developed. The objectives of the present study were to analyze the challenges in predicting mixture toxicity in the environment, and to develop integrated models that overcome the limitations of the existing prediction models for estimating the toxicity of non-interactive mixtures through computational models. For these goals, four sub-topics were generated in this study. Firstly, the applicable domains and limitations of existing integrated models were analyzed and grouped into three categories. There are current approaches used to assess mixture toxicity; however, there is a need for a new research concept to overcome challenges associated with such approaches, which recent studies have addressed. These approaches are discussed with particular emphasis on those studies involving computational approaches to predict the toxicity of chemical mixtures based on the toxicological data of individual chemicals. Secondly, through a case study and a computational simulation, it was found that the Key Critical Component (KCC) and Composite Reciprocal (CR) methods (as described in the European Union (EU) draft technical guidance notes for calculating the Predicted No Effect Concentration (PNEC) and Derived No Effect Level (DNEL) of mixtures) could derive significantly different results. As the third and fourth sub-topics of this study, the following two integrated addition models were developed and successfully applied to overcome the inherent limitations of the CA and IA models, which can theoretically be used only for either similarly or dissimilarly acting chemicals: i) a Partial Least Squares-Based Integrated Addition Model (PLS-IAM), and ii) a Quantitative Structure-Activity Relationship-Based Two-Stage Prediction (QSAR-TSP) model. In this study, it was shown that the PLS-IAM might be useful to estimate mixture toxicity when toxicity data of similar mixtures having the same compositions are available. The QSAR-TSP model showed the potential to overcome the critical limitation of the conventional TSP model, which requires knowledge of the MoAs of all chemicals. Therefore, this study demonstrated good potential for the advanced integrated models (e.g., PLS-IAM and QSAR-TSP), while considering various non-interactive constituents that have different MoAs, in order to increase the reliability of conventional models and simplify the procedure for the risk assessment of mixtures.
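The two non-interactive reference models can be stated compactly. The following sketch implements the standard CA and IA formulas; the log-logistic dose-response curve and all parameter values are invented examples, not data from the study.

```python
import math

def effect(conc, ec50, slope=1.0):
    """Illustrative log-logistic dose-response; returns an effect in [0, 1]."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope) if conc > 0 else 0.0

def ca_ec50_mix(fractions, ec50s):
    """Concentration addition: EC50_mix = 1 / sum(p_i / EC50_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(concs, ec50s):
    """Independent action: E_mix = 1 - prod(1 - E_i)."""
    unaffected = math.prod(1.0 - effect(c, e) for c, e in zip(concs, ec50s))
    return 1.0 - unaffected

print(ca_ec50_mix([0.5, 0.5], [2.0, 8.0]))  # 3.2 (toxic units add up)
print(ia_effect([1.0, 1.0], [2.0, 8.0]))    # effects combine multiplicatively
```

An integrated model, as developed in the thesis, has to bridge these two formulas when a mixture contains both similarly and dissimilarly acting components.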
Ray tracing acceleration through dedicated data structures has long been an important topic in computer graphics. In general, two different approaches are proposed: spatial and directional acceleration structures. The thesis at hand presents an innovative combined approach of these two areas, which enables further acceleration of the ray tracing process. State-of-the-art spatial data structures are used as base structures and enhanced by precomputed directional visibility information based on a sophisticated abstraction concept of shafts within an original structure, the Line Space.
In the course of the work, novel approaches for the precomputed visibility information are proposed: a binary value that indicates whether a shaft is empty or non-empty, as well as a single representative candidate approximating the actual surface. It is shown how the binary value is used in a simple but effective empty-space skipping technique, which allows a performance gain in ray tracing of up to 40% compared to the pure base data structure, regardless of the spatial structure that is actually used. In addition, it is shown that this binary visibility information provides a fast technique for calculating soft shadows and ambient occlusion based on blocker approximations. Although the results contain a certain inaccuracy error, which is also presented and discussed, it is shown that a further tracing acceleration of up to 300% compared to the base structure is achieved. As an extension of this approach, the representative candidate precomputation is demonstrated, which is used to accelerate the indirect lighting computation, resulting in a significant performance gain at the expense of image errors. Finally, techniques based on two-stage structures and a usage heuristic are proposed and evaluated. These reduce memory consumption and approximation errors while maintaining the performance gain, and also enable further possibilities with object instancing and rigid transformations.
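The binary empty-space skipping idea can be sketched as follows; the two-shaft layout and all names are a toy stand-in for the actual Line Space, which subdivides each node's bounding region into many shafts.

```python
# Toy empty-space skipping: a precomputed occupancy bit per shaft lets a ray
# skip a node entirely without traversing its contents.
class Node:
    def __init__(self, shaft_occupied, primitives):
        self.shaft_occupied = shaft_occupied  # one boolean per shaft
        self.primitives = primitives

    def shaft_index(self, ray_dir_x):
        # Toy binning: two shafts, split by the sign of the ray direction.
        return 0 if ray_dir_x < 0 else 1

    def intersect(self, ray_dir_x):
        shaft = self.shaft_index(ray_dir_x)
        if not self.shaft_occupied[shaft]:    # precomputed binary visibility
            return None                       # empty shaft: skip traversal
        return f"traverse {len(self.primitives)} primitives"

node = Node(shaft_occupied=[False, True], primitives=["tri0", "tri1"])
print(node.intersect(-0.5))  # None: skipped via the occupancy bit
print(node.intersect(+0.5))  # falls back to the full traversal
```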
All performance and memory values as well as the approximation errors are measured, presented and discussed. Overall, the Line Space is shown to yield a considerable improvement in ray tracing performance at the cost of higher memory consumption and possible approximation errors. The presented findings thus demonstrate the capability of the combined approach and open up further possibilities for future work.
These proceedings contain 6 papers presented at the 1st Interdisciplinary Conference on Gamification and Entrepreneurship (StartPlay) 2022. The conference was held at the University of Koblenz-Landau in Koblenz, Germany, on August 05-06, 2022.
Game-Balance Simulation as a Tool for the Evaluation of Systematically Designed Gamification Strategies
Authors: David Kessing, Manuel Löwer

A Canvas Framework for Gameful Design Concepts
Authors: Max Höllen, Thomas Voit

Gamified Sustainable Entrepreneurship Education – A digital Educational Escape Room for economy classes in German High Schools
Authors: Jürgen Frentz, Marie Tuchscherer, Claudia Wiepcke

Playing Positive Psychology: The Development of a Positive-Psychological Board Game for Team Building
Authors: Leonie Kloep, Anna-Lena Helten, Corinna Peifer

Gamification Design for Goal Activation and Goal Striving in Digital Marketing and Innovation Management
Authors: Jenny V. Bittner, Christian Wellmann

Gamification of Assembly Routines: Planned User Study Evaluating a Level System with Customized Feedback Elements
Authors: Jessica Ulmer, Sebastian Braun, Jörg Wollert
Inland waters play an active role in the global carbon cycle. They collect carbon from upstream landmasses and transport it downstream until it finally reaches the ocean. Along this path, manifold processing steps are evident, resulting in (permanent) retention of carbon by sediment burial as well as loss by evasion to the atmosphere. Constraining these carbon fluxes and their anthropogenic perturbation is an urgent need. In this context, attention needs to be paid to a widespread feature of inland waters: their partial desiccation. This results in the emergence of formerly inundated sediments to the atmosphere, referred to as dry inland waters. One observed feature of dry inland waters is disproportionately high carbon dioxide (CO2) emissions. However, this observation was so far based on local case studies, and knowledge of the global prevalence and fundamental mechanisms of these emissions is lacking. Against this background, this thesis aims to provide a better understanding of the magnitude and mechanisms of carbon emissions from dry inland waters on the global and local scale and to assess the impact of dry inland waters on the global carbon cycle. The specific research questions of this thesis were: (1) How do gaseous carbon emissions from dry inland waters integrate into the global carbon cycle and into global greenhouse gas (GHG) budgets? (2) What effect do seasonal and long-term drying have on the carbon cycling of inland waters? The thesis revealed that dry inland waters emit disproportionately large amounts of CO2 on a global scale and that these emissions share common drivers across ecosystems. Quantifying global reservoir drawdown and upscaling carbon fluxes to the global scale suggests that reservoirs emit more carbon than they bury, challenging the current understanding of reservoirs as net carbon sinks. On the local scale, this thesis revealed that both heterogeneous emission patterns between different habitats and the seasonal variability of carbon emissions from the drawdown area need to be considered. Further, this thesis showed that re-mobilization of buried carbon upon permanent desiccation of water bodies can explain the observed emission rates, supporting the hypothesis of a positive feedback loop between climate change and the desiccation of inland waters. Overall, the present thesis highlights the importance of adding emissions from dry inland waters as a pathway to the global carbon cycle of inland waters.
How entrepreneurs become successful with their business ideas has been a topic of discussion within entrepreneurial research circles for many years. Business success has been related to psychological characteristics, and many research findings have explained how people become entrepreneurs and, furthermore, how they find success with regard to their business ventures. People have always wanted to know what defines an entrepreneur and, more importantly, what is needed to become a successful entrepreneur. Curious minds began to search for the recipe for success. Personality traits have been researched for years, leading to the discovery of many diverse and distinct forms which are still recognized today. Researchers have also related behavioral approaches and the analysis of demographic characteristics to success.
This raises the question of what an entrepreneur needs to become successful, which is made more difficult by the fact that entrepreneurship, and who an entrepreneur is, can be difficult to define exactly. Entrepreneurs are diverse and distinct individuals who each possess their own characteristics, behavior and reasons for starting a business. Many people assume that an entrepreneur is simply someone who starts their own business. While partially correct, this only scratches the surface of what it is to be an entrepreneur.
This thesis answers the following three research questions: 1. Which personality traits have been found in almost sixty years of research? 2. Which triggers do entrepreneurs need to start a new business? 3. Which factors make an entrepreneur successful with their business idea? In addition, the thesis offers a better understanding of what entrepreneurship is, what defines an entrepreneur and what one needs to become a successful entrepreneur.
In this thesis, the literature review method developed by Wolfswinkel et al. (2011) was used to identify suitable literature and references for the main chapters. For this, the five stages of the grounded-theory method for reviewing literature were used in an iterative fashion. In step one, inclusion and exclusion criteria for literature were defined. Then literature was searched that matched the criteria of step one. In step three, the sample was refined. In step four, the literature found was analyzed and coded, before the results of this research were presented in the following text. Many entrepreneurial personality traits were discovered and confirmed in the studies conducted over the last sixty years; these include, but are not limited to, need for achievement, locus of control, and risk-taking propensity. Motivation is clearly the main trigger for starting one's own business. The studies also revealed reasons why an entrepreneur may become successful beyond possessing the right combination of traits, for example: having human, financial and social capital, entrepreneurial behavior, and the right economic conditions to start a business that will survive for more than five years.
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
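A minimal sketch of the underlying pipeline (canonicalize, hash order-independently, then sign the digest) is given below. hashlib stands in for a real signature scheme, blank-node handling is omitted, and the framework itself is algorithm-agnostic, so this is only one possible configuration.

```python
import hashlib

def graph_digest(triples):
    """Order-independent digest of an RDF-like graph given as (s, p, o) triples."""
    # Canonicalization: hash each triple, then sort the hashes, so the digest
    # does not depend on serialization order.
    triple_hashes = sorted(
        hashlib.sha256(" ".join(t).encode()).hexdigest() for t in triples
    )
    return hashlib.sha256("".join(triple_hashes).encode()).hexdigest()

g1 = [("ex:a", "ex:knows", "ex:b"), ("ex:b", "ex:knows", "ex:c")]
g2 = list(reversed(g1))                 # same graph, different triple order
assert graph_digest(g1) == graph_digest(g2)
# A real deployment would sign graph_digest(...) with an asymmetric key and
# attach the result as a separate signature graph.
print(graph_digest(g1)[:16])
```

Because the digest covers a set of triples, a signature graph produced this way can itself become part of a larger graph and be signed again, mirroring the iterative signing described above.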
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. These characteristics apply to the Semantic Web as well, since it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore equally important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism that implements these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each of the requirements and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext data without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
In this work a framework is developed and used to create an evaluation scheme for text processing tools. The evaluation scheme is developed using a model-dependent software evaluation approach; the focus of the model-dependent part is the text-processing process, which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data a German court document is used, containing two incidents of extortion racketeering which happened in 2011 and 2012. The evaluation of six different tools shows that one tool delivers very good results for the given dataset when compared to manual results: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also deliver good results with minor drawbacks. The biggest drawback of some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be extended with self-written code, which allows users with development experience to apply additional methods.
Different techniques (weight loss, electrochemical, and spray corrosion measurements) were used to evaluate four sarcosine derivatives as corrosion inhibitors and one commercial compound as a synergist. The base metal was low carbon steel CR4, tested under different conditions. As working media, mainly neutral water and 0.1 M NaCl were applied. The protective film was formed on the steel surface via direct adsorption of the tested substances during the immersion process. A highly improved corrosion protection with a direct correlation to the molecular weight and carbon chain length of the tested compounds was detected. The protection of steel CR4 against corrosion in 0.1 M NaCl was enhanced with increasing concentration of the selected sarcosine compounds. The best inhibitor across all tested concentrations and all evaluation systems was Oleoylsarcosine (O), with efficiencies up to 97 % in potentiodynamic polarization (PP), 83 % in electrochemical impedance spectroscopy (EIS), and 85 % in weight loss (WL) at 100 mmol/L, the highest concentration tested here. The second best inhibitor was Myristoylsarcosine (M), with efficiencies up to 82 % in PP, 69 % in EIS, and 75 % in WL at the highest concentration. The inhibitor with the shortest hydrocarbon chain in this series, Lauroylsarcosine (L), showed the lowest corrosion-inhibiting effect compared to O and M: the efficiencies of L were slightly above 50 % at 75 and 100 mmol/L and below 50 % at 25 and 50 mmol/L in all evaluation systems used. Furthermore, the overall efficiency improved with longer dip coating times during the immersion of steel CR4, as shown at 50 mmol/L for all derivatives investigated; this survey indicated 10 min as the best time with respect to cost and protection efficiency. The commercial inhibitor Oley-Imidazole (OI) significantly improved the effectiveness of the compound Cocoylsarcosine (C), which contains the natural mixture of carbon chain lengths from coconut oil (C8 - C18), and enhanced protection when used in combination (C+OI, 1:1 molar ratio). In this system the efficiency increased from 47 % to 91 % in PP, from 40 % to 84 % in EIS, and from 45 % to 82 % in WL at the highest concentration. Spray corrosion tests were used to evaluate all sarcosine substances on steel CR4 in a more realistic system. The best inhibitor after a 24 h test was O, followed by the combination C+OI and M, with efficiencies up to 99 %, 80 %, and 79 %, respectively. The obtained results indicate a good stability of the protective film formed by these inhibitors even after 24 h. All evaluation systems used in the current investigation were in good agreement and yielded the same inhibitor ranking. Furthermore, the adsorption process of the tested compounds is assumed to follow a Langmuir-type isotherm. Response surface methodology (RSM), an optimization method based on a Box-Behnken design (BBD), was used to find the optimum efficiency of inhibitor O for protecting steel CR4 against corrosion in salt water. Four independent variables were used: inhibitor concentration (A), dip coating time (B), temperature (C), and NaCl concentration (D), each with three levels: lower (-1), mid (0), and upper (+1). According to the present results, temperature has the greatest effect on the protection process as an individual parameter, followed by the inhibitor concentration itself. In this investigation an optimum efficiency of 99 % was calculated for the following parameter and level combination: upper level (+1) for inhibitor concentration, dip coating time, and NaCl concentration, and lower level (-1) for temperature.
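As an illustration of the Langmuir-type isotherm check mentioned above, the following sketch fits the linearized Langmuir relation C/θ = 1/K + C, with the surface coverage θ approximated by the inhibition efficiency. All numbers are invented placeholders, not the thesis' measurements.

```python
import numpy as np

# Hypothetical data: inhibitor concentration (mmol/L) and efficiency fraction.
C = np.array([25.0, 50.0, 75.0, 100.0])
eff = np.array([0.62, 0.78, 0.90, 0.97])

theta = eff                               # coverage approximated by efficiency
# Langmuir linearization: C/theta = 1/K_ads + C  (slope ~ 1 supports Langmuir)
slope, intercept = np.polyfit(C, C / theta, 1)
K_ads = 1.0 / intercept                   # adsorption equilibrium constant
print(f"slope = {slope:.3f} (Langmuir expects ~1), K_ads = {K_ads:.4f} L/mmol")
```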
A taxonomic revision of the genus Pteris in tropical Africa revealed 26 species. An identification key to the species is provided. Morphological characters were used to prepare a cladistic analysis of the relevant taxa. Each species was evaluated with respect to its IUCN Red List status: only Pteris mkomaziensis was assessed as Near Threatened, all other species as Least Concern. An inventory of the ferns of Kakamega Forest / Kenya and Budongo Forest / Uganda revealed 85 species in Kakamega and 66 species in Budongo. Life form spectra were analysed and the ferns were studied for their value for bioindication.
Homonegative discrimination, such as the denial of leadership qualities and of higher salaries, concerns not only lesbians and gay men but also individuals who are merely perceived as lesbian or gay (Fasoli et al., 2017). Hence, it is assumed that straight people in particular can become victims of homonegative discrimination (Plöderl, 2014). The perception of sexual orientation is indeed stereotype-driven (e.g., Cox et al., 2015), but there is a lack of knowledge on how accurate those stereotypes are, particularly those referring to speech. Despite a variety of sociophonetic and social psychological research related to sexual orientation and gender, an encompassing understanding of how sexual orientation is expressed and perceived is missing.
The present thesis aims to fill these gaps. Its two major aims are a) the examination of the accuracy of speech stereotypes in the context of sexual orientation and b) the development of a model of how sexual orientation is interpersonally construed. Overall, the present thesis comprises five manuscripts with the following aspects in common: they integrate social psychological and sociophonetic perspectives, share a social identity approach, and primarily center on speech rather than facial appearance. Moreover, mostly German participants and native German speakers, respectively, were investigated.
Manuscript 1 establishes the Traditional Masculinity/Femininity-Scale as a reliable and valid instrument for assessing gender-role self-concept. Its development was necessary because existing scales insufficiently represented self-ascribed masculinity/femininity (e.g., Abele, 2003; Evers & Sieverding, 2014). Manuscripts 2, 3, and 4 address the (in)accuracy of speech stereotypes regarding stereotype content and the suggested within-group homogeneity, applying different methodological approaches. On the one hand, relevant acoustic parameters of lesbian/gay and straight women and men were averaged for each group. On the other hand, voice morphing was applied in order to create prototypical and naturally sounding voice averages (Kawahara et al., 2008). Lesbian and straight women differed in none of the analyzed acoustic parameters, gay and straight men in only one. In contrast, a fine-grained psychological analysis yielded various evidence of acoustic within-group heterogeneity. In particular, the exclusivity of sexual orientation and gender-role self-concept was acoustically indexicalized, which suggests that speech stereotypes are inaccurate. However, voice averages do carry perceivable sexual orientation information. Hence, speech stereotypes can be considered exaggerations of tiny kernels of truth. In Manuscript 5, previous literature on the interpersonal construction of sexual orientation is integrated into a model: the Expression and Perception of Sexual Orientation Model (EPSOM). This model postulates an indirect route and describes how sexual orientation information is transmitted from producer to perceiver by proposing three mediating components. Thereby, the model is able to offer an explanation of why sexual orientation can be perceived with above-chance but far-from-perfect accuracy.
Overall, the present thesis provides meaningful impulses for research on social markers of sexual orientation and gender. It offers a model of how sexual orientation is expressed and perceived, shows the benefits of combining sociophonetic and social psychological approaches, and points out the value of applying novel methods and technologies. Beyond that, the present thesis offers useful implications for practice. Speech stereotypes in the context of sexual orientation can be rejected as inaccurate; for example, native German straight men do not nasalize more or less than gay men. Thereby, the present thesis contributes to an erosion of stereotypes and a potential reduction of homonegative discrimination.
Blockchain in Healthcare
(2020)
The underlying characteristics of blockchain can facilitate data provenance, data integrity, data security, and data management, and have the potential to transform the healthcare sector. Since the introduction of Bitcoin in the fintech industry, blockchain technology has been gaining traction, and its purpose is not limited to finance. This thesis highlights the inner workings of blockchain technology and its application areas together with possible existing solutions. Blockchain could lay the path for a new revolution in conventional healthcare systems. We present how individual sectors within the healthcare industry could use blockchain and what solutions exist. We also present our own concept, based on the Hyperledger framework, for improving the existing paper-based prescription management system. The results of this work suggest that healthcare can benefit from blockchain technology through the new ways in which patients can be treated.
Taxonomy and Systematics of Spiny-Backed Treefrogs, Genus Osteocephalus (Amphibia: Anura: Hylidae)
(2015)
The pan-Amazonian treefrog genus Osteocephalus is poorly understood both on a taxonomic and on a phylogenetic level. The status of several frogs already or not yet referred to the genus is unclear, and the relationships within the genus and with respect to related genera are not understood. In this work O. cabrerai (Cochran and Goin, 1970) from Colombia and Peru is redescribed and O. festae (Peracca, 1904) from the foothills of the Andes in Ecuador is revalidated. Hyla inframaculata Boulenger, 1882, from the lower Amazon in Brazil, is reallocated to Osteocephalus, and O. elkejungingerae (Henle, 1981) from the Andean foothills in Peru is shown to be a synonym of Hyla mimetica (Melin, 1941), the valid name being O. mimeticus. Hyla vilarsi Melin, 1941 is considered a valid species in the genus Osteocephalus and revalidated from the synonymies of several other frogs. Three new species, O. castaneicola from northern Bolivia and southern Peru, O. duellmani from a sub-Andean mountain range in southern Ecuador, and O. camufatus from central Amazonian Brazil, are described. A phylogenetic analysis based on up to nine mitochondrial genes and one nuclear gene reveals the paraphyly of the genus as previously understood with respect to the genus Tepuihyla. A new taxonomy is proposed, securing the monophyly of Osteocephalus and Tepuihyla by rearranging and redefining the content of both genera. A new genus, Dryaderces, is erected for the sister group of Osteocephalus. The colouration of newly metamorphosed individuals is proposed as a morphological synapomorphy for Osteocephalus. Five monophyletic species groups within Osteocephalus are recognized, three species of Osteocephalus (O. germani, O. phasmatus, O. vilmae) and three species of Tepuihyla (T. celsae, T. galani, T. talbergae) are synonymized, and three species are reallocated (Hyla helenae to Osteocephalus, O. exophthalmus to Tepuihyla, and O. pearsoni to Dryaderces gen. n.). Furthermore, nine putative new species are flagged (an increase to 138% of the current diversity), an indication that species numbers are largely underestimated, with most hidden diversity centred on widespread and polymorphic nominal species. The evolutionary origin of breeding strategies within Osteocephalus is discussed in the light of this new phylogeny, and a novel type of amplexus (gular amplexus) is described.
Over the last three decades, researchers in Cognitive Metaphor Theory have shown conclusively that metaphor is motivated rather than arbitrary and often used to systematically map out conceptual territory. This cognitive semantic proposal holds potential for alternative L2 teaching strategies. As an abstract domain, business discourse is naturally rich in metaphors and is additionally filled with consciously used metaphorical language employed to strategically influence clients and business partners. Business English courses especially stand to profit from metaphor-oriented language teaching, as (future) managers aim to quickly improve their language performance to be prepared for international business communication. In using metaphors, speakers as well as hearers conceptualize and thus experience one thing in terms of another. Having been made aware of the conceptual linkage, students are immediately equipped with a whole set of vocabulary they may already have learned for a concrete domain and are then able to elaborate in the more abstract area of business discourse. Enhanced metaphor awareness may thus prove to be a valuable vehicle for vocabulary acquisition as well as for vocabulary retention. This thesis is subdivided into ten chapters. With each successive chapter, the focus increasingly sharpens on the main hypothesis that metaphor awareness raising and explicit teaching in the business English classroom assist the students to 'dip into their savings' and transfer already acquired vocabulary to abstract business discourse, and thus to become more proficient business communicators. After an introduction to the main objectives, chapter two critically looks at the different strands of Cognitive Linguistic contributions to metaphor theory made within the last three decades and discusses the structure, function and processing of figurative language to single out the aspects relevant for language classroom applications. Chapter three narrows the perspective to socio-economic discourse as the target domain in focus and surveys the conceptual metaphors that have been identified for this target domain, namely the source domains most productive for the target and therefore most valuable for the language classroom. In chapter four, Cognitive Linguistic findings are brought into contact with language didactics; i.e., the Cognitive Linguistic basis is discussed in the context of language teaching and learning theories, and a first classification of metaphor teaching within the theoretical framework of language didactics is proposed. Ten cornerstones summarize the theoretical output of the previous chapters, and the respective didactic consequences are considered. Theories of cognitive psychology pertaining to noticing, processing, and storing metaphors are systematically revisited and expanded to formulate further didactic implications for metaphor teaching. The consequences drawn from both linguistic and didactic theory are translated into a list of ten short guidelines identifying essentials for the explicit integration of metaphors into the language classroom. In chapter five, those experimental studies that have already been conducted in the field of Cognitive Linguistic-inspired figurative language teaching are systematically summarized and their possible contributions to a didactic framework for metaphor teaching are investigated. Chapters six to nine then present a piece of original research.
Starting out from five research questions tackling receptive and productive vocabulary acquisition and retention as well as the influence of, and on, the learners' level of language proficiency, a three-fold study was designed and conducted in a regular business English classroom, and the results are discussed in detail. The last chapter deals again with specific implications for teaching. Earlier statements about and claims for the language classroom are revisited and refined on the basis of the theoretical linguistic, didactic and empirical findings, and an agenda for further empirical investigations is sketched out.
The presence of anthropogenic chemicals in the natural environment may impact both habitats and human use of natural resources. In particular the contamination of aquatic resources by organic compounds used as pharmaceuticals or household chemicals has become evident. The newly identified environmental pollutants, also known as micropollutants, often have i) unknown ecotoxicological impacts, ii) unknown partitioning mechanisms, e.g. sorption to sediments, and iii) limited regulation to control their emission. Furthermore, like any compound, micropollutants can be transformed while in the environmental matrix to unknown transformation products (TPs), which add to the number of unknown chemicals to consider and thus increase the complexity of risk management. Transformation is at the same time a natural mechanism for the removal of anthropogenic compounds, either by complete degradation (mineralisation) or to innocuous TPs. However, how transformation occurs in real-world conditions is still largely unknown. During the transport of micropollutants from household wastewater to surface water, a large amount of transformation can occur during wastewater treatment—specifically during biological nitrifying–denitrifying treatment processes. The thesis considers the systematic optimisation of laboratory investigative techniques, application of sensitive mass-spectrometry-based analysis techniques and the monitoring of full-scale wastewater treatment plants (WWTPs) to elucidate transformation processes of five known micropollutants.
The first of the five compounds investigated was the antibiotic trimethoprim. Incubation experiments were conducted at different analyte spike concentrations and different sludge to wastewater ratios. Using high-resolution mass spectrometry, a total of six TPs were identified from trimethoprim. The types of TPs formed were clearly influenced by the spike concentration. To the best of our knowledge, such effects have not been previously described in the literature. At the lower spike concentration, a relatively stable final TP was formed (2,4-diaminopyrimidine-5-carboxylic acid, DAPC), which could account for almost all of the transformed trimethoprim. The results were compared to the process in a reference reactor. Both the detection of TPs (e.g., DAPC) and the modelling of the removal kinetics led to the conclusion that only the experimental results at the low spike concentrations mirrored the real reactor. This demonstrates the limits of using elevated spike concentrations in incubation experiments.
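Removal kinetics of this kind are commonly modelled with a pseudo-first-order decay, C(t) = C0 · exp(-kt). The following sketch fits such a rate constant on invented data; it illustrates the modelling step in general terms, not the thesis' actual kinetic model or measurements.

```python
import numpy as np

# Hypothetical incubation data: time (h) and normalized concentration.
t = np.array([0.0, 2.0, 4.0, 8.0, 24.0])
C = np.array([1.00, 0.81, 0.66, 0.44, 0.09])

# Pseudo-first-order fit: ln(C) = ln(C0) - k*t, so k is the negative slope.
k = -np.polyfit(t, np.log(C), 1)[0]
half_life = np.log(2) / k
print(f"k = {k:.3f} 1/h, half-life = {half_life:.1f} h")
```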
Three phenolic micropollutants, the antiseptic ortho-phenylphenol (OPP), the plastics additive bisphenol A (BPA) and the psychoactive drug dextrorphan, were investigated with regard to the formation of potentially toxic, nitrophenolic TPs. Nitrite is an intermediate in the nitrification-denitrification process occurring in activated sludge and was found to cause nitration of these phenols. To elucidate the processes, incubation experiments were conducted in purified water in the presence of nitrite with OPP as the test substance. The reactive species HNO2, N2O3 and the radicals ·NO and ·NO2 were likely involved, as indicated by scavenger experiments. Under conditions found at WWTPs, the wastewater is usually at neutral pH and nitrite, being an intermediate, is usually present at low concentration. By conducting incubation experiments inoculated with sludge from a conventional WWTP, it was found that the three phenolic micropollutants OPP, BPA and dextrorphan were quickly transformed to biological TPs. Nitrophenolic TPs were only formed after an artificial increase of the nitrite concentration or a lowering of the pH. However, nitrophenolic TPs can be formed as sample preparation artefacts through acidification or freezing for preservation, which creates optimal conditions for the reaction to take place.
The final micropollutant to be studied was the pain reliever diclofenac, a micropollutant on the EU watch list due to its ecotoxicological effects on rainbow trout. The transformation was compared in two different treatment systems, one employing a reactor with suspended carriers as a biofilm growth surface, while the other employed conventional activated sludge. In the biofilm-based system, the pathway produced many TPs, each at relatively low concentration; many of them were intermediate TPs that were further degraded to unknown tertiary TPs. In the conventional activated sludge system some of the same reactions took place, but all at much slower rates. The main difference between the two systems was thus due to different reaction rates rather than different transformation pathways. The municipal WWTPs were monitored to verify these results. In the biofilm system, a 10-day monitoring campaign confirmed an 88% removal of diclofenac and the formation of the same TPs as those observed in the laboratory experiments. The proposed environmental quality standard of 0.05 μg/L might thus be met without the need for additional treatment processes such as activated carbon filtration or ozonation.
This paper introduces Vocville, a casual online game for learning vocabulary. I created this application as my master's thesis in computational visualistics ("Computervisualistik") at the University of Koblenz-Landau. The application is an online browser game based on the idea of the highly successful Facebook game FarmVille. The application is separated into two parts: a Grails application manages a database which holds the game objects such as vocabulary, while a Flex/Flash application generates the actual game from these data. The user can create his own home with everything in it. To create an object, the user has to give the correct translation of the object he wants to create several times. After every query he has to wait a certain amount of time before being queried again. When the correct answer has been given sufficiently often, the object is built. After building one object the user is allowed to build others. After building enough objects in one area (i.e. a room, a street etc.) the user can activate other areas by translating all the vocabulary of the previous area. Users can also interact with other users by adding them as neighbors and then visiting their homes or sending them gifts, for which they have to fill in the correct word in a given sentence.
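The described build mechanic (repeated correct translations with a cooldown between queries) can be sketched as follows. This is an illustrative Python reconstruction of the game logic only; the class, field names, and numbers are invented, and the actual game is implemented in Grails and Flex/Flash.

```python
import time

class BuildableObject:
    """Toy model of Vocville's build mechanic: an object is built once the
    correct translation has been given often enough, with a cooldown
    between queries."""

    def __init__(self, word, translation, required=3, cooldown=60):
        self.word, self.translation = word, translation
        self.required, self.cooldown = required, cooldown
        self.correct, self.last_query = 0, 0.0

    def answer(self, attempt):
        now = time.time()
        if now - self.last_query < self.cooldown:
            return "please wait before the next query"
        self.last_query = now
        if attempt.strip().lower() == self.translation:
            self.correct += 1
        return "built!" if self.correct >= self.required else "not yet built"

sofa = BuildableObject("Sofa", "sofa", required=2, cooldown=0)
print(sofa.answer("sofa"), sofa.answer("sofa"))  # -> not yet built built!
```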
Little information is available about the diffusion of cloud computing in German higher educational institutions. A better understanding of the state of the art in this field would support the modernization of higher educational institutions in Germany and allow the development of more adequate cloud products and more appropriate business models for this niche. For this purpose, a literature review on cloud computing and IT diffusion is carried out, and an empirical investigation with an online questionnaire addressed to higher educational institutions in Germany is performed. The goal is to illustrate the state of the art of cloud computing in German higher educational institutions as well as the threats and opportunities that employees of the institutions' data centers associate with the usage of the cloud.
In addition, different experts from universities and businesses are interviewed to complement the knowledge and information collected through the online questionnaire and during the research phase. The expected results will serve to create a recommendation for higher educational institutions in Germany on whether they should migrate to the cloud or not, and to introduce a list of guiding questions on critical issues to consider before using cloud computing technologies.
This dissertation deals with the opportunities and restrictions that parties face in an election campaign at the supranational level of the EU. Using the communication science concepts of agenda-setting (focus: media) and agenda-building (focus: political parties), the first part of the study is based on the election campaign for the European Parliament (EP) in 2014. It analyses to what extent political parties put the EU on the agenda. Second, it is examined whether parties used their structural advantage of being able to influence the media agenda at the supranational level during the EP election campaign. Third, it is examined whether parties can gain an advantage for the visibility of their campaigns by rejecting EU integration and the associated conflictual communication. Fourth and finally, it is explored whether agenda-building can influence the ranking of specific policy issues on the media agenda in the European context.
First, the analyses show that a European political focus of election campaign communication can no longer be found only among the small (eurosceptic) parties. Second, parties have a good chance of being present in media coverage if they pursue a European political focus in their campaign communication. Third, a negative tone in party communication turns out not to be decisive for the parties' visibility in the election campaign. Fourth, a clear positioning on political issues also prepares parties for restrictions on the further development of a European thematic agenda. After a discussion of these results, the paper concludes with an assessment of the limitations of the analysis and an outlook on further research approaches.
The distributed setting of RDF stores in the cloud poses many challenges. One such challenge is how the data placement on the compute nodes can be optimized to improve the query performance. To address this challenge, several evaluations in the literature have investigated the effects of existing data placement strategies on the query performance. A common drawback in these evaluations is that it is unclear whether the observed behaviors were caused by the data placement strategies (if different RDF stores were evaluated as a whole) or whether they reflect the behavior of distributed RDF stores in general (if cloud processing frameworks like Hadoop MapReduce are used for the evaluation). To overcome these limitations, this thesis develops a novel benchmarking methodology for data placement strategies that uses a data-placement-strategy-independent distributed RDF store to analyze the effect of the data placement strategies on query performance.
With this evaluation methodology the frequently used data placement strategies have been evaluated. This evaluation challenged the commonly held belief that data placement strategies that emphasize local computation, such as minimal edge-cut cover, lead to faster query executions. The results indicate that queries with a high workload may be executed faster on hash-based data placement strategies than on, e.g., minimal edge-cut covers. The analysis of the additional measurements indicates that vertical parallelization (i.e., a well-distributed workload) may be more important than horizontal containment (i.e., minimal data transport) for efficient query processing.
Moreover, to find a data placement strategy with a high vertical parallelization, the thesis tests the hypothesis that collocating small connected triple sets on the same compute node while balancing the amount of triples stored on the different compute nodes leads to a high vertical parallelization. Specifically, the thesis proposes two such data placement strategies. The first strategy, called overpartitioned minimal edge-cut cover, was found in the literature; the second strategy is the newly developed molecule hash cover. The evaluation revealed a balanced query workload and a high horizontal containment, which led to a high vertical parallelization. As a result, these strategies showed a better query performance than the frequently used data placement strategies.
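For illustration, the simplest hash-based data placement strategy assigns each triple to a compute node by hashing its subject, as in the sketch below. This is only the plain hash strategy discussed above; the molecule hash cover proposed in the thesis additionally collocates small connected triple sets, which this sketch does not do.

```python
import hashlib

def node_for_triple(subject: str, num_nodes: int) -> int:
    """Assign a triple to a compute node by hashing its subject IRI."""
    digest = hashlib.md5(subject.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:carol", "foaf:name", '"Carol"'),
]
# All triples sharing a subject land on the same node; subjects spread
# roughly evenly across nodes, which balances the query workload.
placement = {t: node_for_triple(t[0], num_nodes=4) for t in triples}
```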
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and a significant driver of employment. Yet the literature suggests that creative workers are one of the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success are believed to be the byproduct of individual ability and commitment, rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods.
By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequality among cohorts of scientists that started their careers at different points in time. Furthermore, structural inequality in the music industry is assessed by analyzing the social and cultural relations that emerge from live performances and music releases. The findings hint at the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality, such as the Matthew effect and the Hipster paradox, in creative careers. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
Over the past few decades, society's dependence on software systems has grown significantly. These systems are utilized in nearly every matter of life today and often handle sensitive, private data. This situation has turned software security analysis into an essential and widely researched topic in the field of computer science. Researchers in this field tend to make the assumption that the quality of a software system's code directly affects the possibility for security gaps to arise in it. Because this assumption is based on properties of the code, proving it true would mean that security assessments can be performed on software even before a certain version of it is released. A previous study has already attempted to mathematically assess the existence of such a correlation, studying it based on quality and security metric calculations. The present study builds upon that study by finding an automatic method for choosing well-fitted software projects as a sample for this correlation analysis and extends the variety of projects considered for it. In this thesis, the automatic generation of graphical representations, both of the correlations between the metrics and of their evolution, is also introduced. With these improvements, this thesis verifies the results of the previous study with a different and broader project input. It also focuses on analyzing the correlations between the quality and security metrics and real-world vulnerability data metrics. The data is extracted and evaluated from dedicated software vulnerability information sources and serves to represent the existence of proven security weaknesses in the studied software. The study discusses some of the difficulties that arise when trying to gather such information and link it to the information contained in the repositories of the studied projects. This thesis confirms the significant influence that quality metrics have on each other. It also shows that it is important to view them together as a whole, and it suggests that their correlation could influence the appearance of unwanted vulnerabilities as well. One of the important conclusions I can draw from this thesis is that the visualization of metric evolution graphs helps the understanding of the values as well as their connection to each other in a more meaningful way: it allows for a better grasp of their influence on each other, as opposed to only studying their correlation values. This study confirms that studying metric correlations and evolution trends can help developers improve their projects and prevent them from becoming difficult to extend and maintain, increasing the potential for good quality as well as more secure software code.
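A correlation analysis of the kind described might, in its simplest form, look like the following sketch using Spearman's rank correlation; the metric values are invented placeholders, and the study's actual pipeline is considerably more elaborate.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-release metric values for one project.
complexity = np.array([12, 30, 7, 45, 22, 18])   # e.g., cyclomatic complexity
vulns      = np.array([1, 4, 0, 6, 2, 2])        # reported vulnerabilities

# Rank correlation is robust to non-linear but monotone relationships.
rho, p_value = spearmanr(complexity, vulns)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```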
Manmade dams have been constructed for centuries for multiple purposes, and in the past decades they have been constructed at a fast pace, with hotspots in tropical and subtropical regions. However, studies that explore hydrodynamics in these areas are scarce, and the rich literature available is biased toward temperate regions. Lakes and reservoirs have the same controlling mechanisms for physical processes and primary production; hence, analyses that were initially conceptualized for lakes are frequently applied to reservoirs. Nevertheless, longitudinal gradients in reservoirs challenge the application of these approaches.
Degradation of water quality in reservoirs is a major concern, and it is expected to be aggravated by climate change. Therefore, studies that explore the mechanisms controlling water quality are essential for the maintenance of these systems, especially in tropical and subtropical regions. The aim of this thesis is to understand the role of hydrodynamic processes in the fate of nutrients in reservoirs and its implications for water quality in a subtropical region, with a focus on the relevance of different density current patterns. To this end, analyses combining field measurements and numerical simulations were performed in a small to medium size subtropical drinking water reservoir over a complete seasonal cycle. Measurements were conducted combining several approaches: traditional sampling, sensors with high temporal and spatial resolution, and remote sensing. In addition, hydrodynamic models were set up and calibrated to reproduce the observations and to simulate scenarios that assisted in the analysis.
Results showed that the different flow paths of density currents did not influence phytoplankton dynamics. In the regions where the main nutrient supply was the river inflow (upstream), the density currents did not vary, the euphotic zone usually covered the entire depth, and vertical mixing occurred on a daily basis, rendering the flow path of the density currents irrelevant. In downstream regions, the remobilization of nutrients from the sediment was the main source for primary production. Even though density currents followed a seasonal pattern in the downstream region, thermal stratification conditions were the driver of variations in chlorophyll-a concentrations, with peaks after vertical mixing. This mechanism was favored by the frequent anoxic conditions in the hypolimnion, which enhanced the dissolution of reactive phosphorus from the sediment. Anoxic conditions were easily reached because the sediment in the downstream area was rich in organic matter. Phytoplankton produced in the upstream area was transported by the density currents, and for this reason large concentrations of chlorophyll-a were observed below the euphotic zone. Furthermore, the extensive measurements of temperature and flow velocities, together with the hydrodynamic models, provided insights into the hydrodynamics of reservoirs: the relevant processes occurred along the longitudinal axis, and mixing conditions varied along it. The relevance of inflow conditions, regarding the presence of structures such as forebays and pre-dams and the degree of stream shading in the catchment, was assessed, and turbulence and internal waves showed features different from those documented for high latitudes. These findings can assist in the management of reservoirs, based on an understanding of the physical processes.
Computed tomography (CT) and magnetic resonance imaging (MRI) in the medical area deliver huge amounts of data, which doctors have to handle in a short time. These data can be visualised efficiently with direct volume rendering. Consequently, most direct volume rendering applications on the market are specialised for medical tasks or integrated in medical visualisation environments. Highly evolved applications for tasks like diagnosis or surgery simulation are available in this area. In recent years, however, another area has been making increasing use of computed tomography. Companies like phoenix|x-ray, founded in 1999, produce CT scanners especially dedicated to industrial applications like non-destructive material testing (NDT). Of course, an application like NDT has different demands on the visualisation than a typical medical application. For example, a typical task in non-destructive testing would be to highlight air inclusions (pores) in a casting. These inclusions usually cover a very small area and are very hard to classify based only on their density value, as this would also highlight the air around the casting. This thesis presents multiple approaches to improve the rendering of industrial CT data, most of them based on higher dimensional transfer functions. To this end, the existing volume renderer application of VRVis was extended with a user interface to create such transfer functions, and existing render modes were adapted to profit from the new transfer functions. These approaches are especially suited to improve the visualisation of surfaces and material boundaries as well as pores. The resulting renderings make it very easy to identify these features while preserving interactive framerates.
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as the primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible for the sensors. Algorithms are tested and validated on different autonomous robot platforms, and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
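To illustrate the Gibbs-sampling step on a grid-structured Markov random field, the following toy sketch alternates between two labels (passable/obstacle) using invented unary costs and a Potts smoothness term. It shows the general technique only; the thesis' parallel CPU/GPU implementation and feature-based unary terms are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, BETA = 32, 32, 1.5                 # grid size, smoothness weight
unary = rng.random((H, W, 2))            # invented per-cell label costs
labels = rng.integers(0, 2, size=(H, W)) # 0 = passable, 1 = obstacle

def neighbor_labels(i, j):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield labels[i + di, j + dj]

for _ in range(20):                      # Gibbs sweeps over the grid
    for i in range(H):
        for j in range(W):
            # Energy = unary cost + Potts penalty for disagreeing neighbors.
            energies = np.array([
                unary[i, j, l] + BETA * sum(n != l for n in neighbor_labels(i, j))
                for l in (0, 1)
            ])
            probs = np.exp(-energies)
            labels[i, j] = rng.choice(2, p=probs / probs.sum())
```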
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers' everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results. Results are statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose heterogeneous analyses, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision count in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method by benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof-of-concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them, and making the implications for a valid data analysis methodology explicit by using simulations. We encourage researchers to accompany their work with 'fake' simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats and the used data analysis methodologies. We demonstrate that such simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
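The incrementalization idea can be illustrated as follows: if the per-revision abstraction is combined with an associative operation (a monoid), new revisions can be folded into an existing summary without reprocessing the whole history. The sketch uses invented names and is not the thesis' actual interface.

```python
from functools import reduce

def abstract(revision: dict) -> int:
    """Map one revision to a summary value (here: lines added)."""
    return revision["lines_added"]

def combine(a: int, b: int) -> int:
    """Associative combination with neutral element 0 (a monoid)."""
    return a + b

history = [{"lines_added": 10}, {"lines_added": 4}]
summary = reduce(combine, map(abstract, history), 0)

# Incremental update: only the new revisions are processed and folded in.
new_revisions = [{"lines_added": 7}]
summary = reduce(combine, map(abstract, new_revisions), summary)
```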
The following thesis analyses the functionality and programming capabilities of compute shaders. For this purpose, chapter 2 gives an introduction to compute shaders by showing how they work and how they can be programmed. In addition, the interaction of compute shaders and OpenGL 4.3 is shown through two introductory examples. Chapter 3 describes an N-body simulation that has been implemented in order to show the computational power of compute shaders and the use of shared memory. Chapter 4 then shows how compute shaders can be used for physical simulations and where problems may arise. In chapter 5 a specially conceived and implemented algorithm for detecting lines in images is described and then compared with the Hough transform. Lastly, a final conclusion is drawn in chapter 6.
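For illustration, the following CPU sketch performs the O(n²) gravity update that an N-body compute shader typically computes per work item (one body per invocation); constants, data layout, and the softening parameter are illustrative, not taken from the thesis.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, eps=1e-3):
    """One explicit Euler step of softened pairwise gravity (G = 1)."""
    # diff[i, j] = pos[j] - pos[i]: pairwise displacement vectors
    diff = pos[None, :, :] - pos[:, None, :]
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
    acc = np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)
    vel += dt * acc
    pos += dt * vel
    return pos, vel

n = 256
pos = np.random.rand(n, 3).astype(np.float32)
vel = np.zeros((n, 3), dtype=np.float32)
mass = np.ones(n, dtype=np.float32)
pos, vel = nbody_step(pos, vel, mass)
```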
This work covers techniques for interactive and physically-based rendering of hair for computer generated imagery (CGI). To this end, techniques for the simulation and approximation of the interaction of light with hair are derived and presented. Furthermore, it is described how hair, despite such computationally expensive algorithms, can be rendered interactively. Techniques for computing the shadowing in hair as well as approaches to render hair as transparent geometry are also presented. A main focus of this work is the DBK-Buffer, which was conceived, implemented and evaluated. Using the DBK-Buffer, it is possible to render thousands of hairs as transparent geometry without being dependent on either the newest GPU hardware generation or a great amount of video memory. Moreover, a comprehensive evaluation of all the techniques described was conducted with respect to visual quality, performance and memory requirements. This revealed that hair can be rendered physically-based at interactive or even real-time frame rates.
The largest population of the anadromous Allis shad (A. alosa) in the 19th century was found in the River Rhine and has to be considered extinct today. To facilitate the return of A. alosa to the River Rhine, an EU LIFE project was initiated in 2007. The overall objective of this thesis was to assist aquaculture and stocking measures at the River Rhine, as well as to support the restoration and conservation of populations of Allis shad in Europe.
By culturing the free-swimming nematode T. aceti in a solution of cider vinegar, we developed a cost-effective live food organism for the larviculture of fish. As indicated by experiments with C. maraena, T. aceti cannot be regarded as an alternative to Artemia nauplii. However, it can be considered a suitable supplemental feed in the early rearing of C. maraena, providing essential fatty acids and thereby optimizing growth.
Mass-marking practices with oxytetracycline, as applied in the restocking of Allis shad, were also evaluated. In experiments with D. rerio we demonstrated that water hardness can detrimentally affect mortality during marking and has to be considered a crucial factor in the development of marking protocols for freshwater fish.
In order to become independent of wild spawners, an ex-situ broodstock facility for Allis shad was established in 2011. Upon examination of two complete year classes of this broodstock, we found a high prevalence of various malformations, which could be traced back to distinct cysts developing one month post hatch. Despite applying a variety of clinical tests, we could not identify any infectious agents causing these malformations. The observed malformations are probably a consequence of suboptimal feeding practices or of the physico-chemical rearing environment.
The decline of stocks of A. alosa in Europe has largely been explained by the increase of river temperatures as a consequence of global warming. By investigating the temperature physiology of larval Allis shad, we demonstrated that A. alosa ranges among the most thermo-tolerant species in Europe and that correlations between rising temperatures and the disappearance of this species have to be understood in a synecological context, integrating a variety of stressors other than temperature. By capturing and examining juvenile and adult Allis shad from the River Rhine, we demonstrated the first natural reproduction of A. alosa in the River Rhine in nearly 100 years and the success of the stocking measures within the framework of the LIFE project.
In this work the Navier-Stokes equations for non-stationary incompressible flow of a Newtonian fluid in a time-dependent domain are studied. The geometry of the flow domain changes in time according to fluid properties such as the stress tensor. The motivation for our study comes from medicine: the simulation of blood flow in arteries and veins.
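For reference, a standard strong formulation of this model class on the moving domain reads as follows; the thesis' precise model, with viscoelastic wall coupling, adds boundary and structure equations.

```latex
% Non-stationary incompressible Navier-Stokes equations on a time-dependent
% domain \Omega(t); u: velocity, p: pressure, \nu: kinematic viscosity,
% f: body force, density normalized to 1.
\begin{aligned}
  \partial_t u + (u \cdot \nabla)u - \nu \Delta u + \nabla p &= f
    && \text{in } \Omega(t),\ t \in (0,T),\\
  \operatorname{div} u &= 0
    && \text{in } \Omega(t),\ t \in (0,T).
\end{aligned}
```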
After choosing an appropriate mathematical model of the flow in a domain with viscoelastic compliant walls, we deal with its theoretical analysis. We prove the existence of a weak solution using a weak compressible approximation in a moving domain with a given deformation function. In our approach the fluid-structure interface condition is treated using a permeable-wall approach, decoupling the fluid and the deformable structure.
Finally we present some numerical experiments illustrating the convergence of the iteration with respect to the domain deformation function as well as the behavior of the moving wall for decreased permeability.
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment is introduced into the model. This is especially relevant for risks to the protection of personal data and privacy. The paper applies the method of IT risk analysis in order to select those dimensions of the Six/Gimmler media competence model that are appropriate for describing privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will lead to a measurement of the existing privacy awareness in secondary schools, which, in turn, can serve as the basis for a didactically well-reasoned design of informatics modules in secondary schools. This paper provides the privacy-extended competence model, while empirical measurement and module design are planned as further research activities.
Interactive video retrieval
(2006)
The goal of this thesis is to develop a video retrieval system that supports relevance feedback. One research approach of the thesis is to find out whether a combination of implicit and explicit relevance feedback returns better retrieval results than a system using explicit feedback only. Another approach is to identify a model to weight existing feature categories. For this purpose, a state-of-the-art analysis is presented and two systems are implemented, which run under the conditions of the international TRECVID workshop. The system will serve as a basis for further research in the field of interactive video retrieval. Among other things, it shall participate in the 2006 search task of the mentioned workshop.
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially in medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts. Unfortunately, the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, the majority of SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. The training data is assumed to be the result of any segmentation algorithm or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. In order to account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM. Therefore, the probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering many more variations of shape examples. To assess the quality of the models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data. The results show that especially WRPCA is a powerful tool to handle corrupted parts and yields reasonable models which have a higher quality than the initial segmentations.
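The classical SSM construction underlying this work can be sketched with plain PCA as follows; this is the standard textbook procedure on invented data, not the proposed imputation or WRPCA variants, which additionally down-weight or reconstruct corrupted boundary parts.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented training set: 20 shapes, each 30 2D landmark points flattened,
# assumed already aligned and in point correspondence.
shapes = rng.normal(size=(20, 60))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# Modes of variation from the SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenmodes = Vt                          # rows: modes of variation
variances = S**2 / (len(shapes) - 1)

# New shapes are synthesized as the mean plus a weighted sum of modes,
# with weights typically bounded by a few standard deviations per mode.
b = np.zeros(len(variances))
b[0] = 2 * np.sqrt(variances[0])
new_shape = mean_shape + b @ eigenmodes
```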
We examine the systematic under-recognition of female scientists (the Matilda effect) by exploring the citation network of papers published in the American Physical Society (APS) journals. Our analysis shows that articles written by men (first author, last author and dominant gender of authors) receive more citations than similar articles written by women (first author, last author and dominant gender of authors) after controlling for the journal of publication, year of publication and content of the publication. The statistical significance of the overlap between the lists of references was used as the measure of similarity between articles in our analysis. In addition, we found that men are less likely to cite articles written by women and women are less likely to cite articles written by men. This pattern leads to articles written by men receiving more citations than similar articles written by women, because the majority of authors who published in APS journals are male (85%). We also observed that the Matilda effect is reduced when articles are published in the journals with the highest impact factors; in other words, the evaluation of articles published in these journals is not significantly affected by the gender of the authors. Finally, we suggest a method that can be applied by editors of academic journals to reduce this evaluation bias to some extent. Editors can identify missing citations using our proposed method to complete bibliographies. This policy can reduce the evaluation bias, because we observed that papers written by female scholars (first author, last author, dominant gender of authors) miss more citations than articles written by male scholars (first author, last author, dominant gender of authors).
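The significance of a reference-list overlap can be assessed, for instance, with a hypergeometric test, as in the sketch below. The numbers are invented, and this is only one plausible way to operationalize the similarity measure described above, not necessarily the paper's exact procedure.

```python
from scipy.stats import hypergeom

N = 50000          # hypothetical size of the citable corpus
n1, n2 = 40, 35    # reference list lengths of the two papers
overlap = 12       # shared references observed

# P(overlap >= 12) if paper 2's references were drawn uniformly at random;
# a small p-value indicates the two papers are unusually similar.
p_value = hypergeom.sf(overlap - 1, N, n1, n2)
print(f"p = {p_value:.2e}")
```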
The development of a game engine is considered a non-trivial problem [3]. The architecture of such simulation software must be able to manage large numbers of simulation objects in real time while dealing with "crosscutting concerns" [3, p. 36] between subsystems. The use of object-oriented paradigms to model simulation objects in class hierarchies has been reported to be incompatible with constantly changing demands during game development [2, p. 9], resulting in anti-patterns and eventual messy refactoring [13].
As a result, alternative architectures using data-oriented paradigms revolving around object composition and aggregation have been proposed [13, 9, 1, 11].
This thesis describes the development of such an architecture with the explicit goals of being simple, being inherently compatible with data-oriented design, and making it possible to reason about performance characteristics. Concepts are formally defined to help analyze the problem and evaluate the results. A functional implementation of the architecture is presented together with use cases common to simulation software.
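As an illustration of the composition-and-aggregation idea (a toy sketch in Python, not the thesis' actual architecture or type system), entities can be reduced to plain IDs whose components are stored in flat per-type tables, and systems become free functions over those tables:

```python
class World:
    """Entities are plain IDs; components are plain data aggregated per entity."""

    def __init__(self):
        self._next_id = 0
        self._stores = {}  # component type -> {entity id: component}

    def create_entity(self, *components):
        eid = self._next_id
        self._next_id += 1
        for comp in components:
            self._stores.setdefault(type(comp), {})[eid] = comp
        return eid

    def query(self, *types):
        """Yield component tuples for every entity owning all given types."""
        ids = set.intersection(*(set(self._stores.get(t, {})) for t in types))
        for eid in sorted(ids):
            yield tuple(self._stores[t][eid] for t in types)


class Position:
    def __init__(self, x, y): self.x, self.y = x, y

class Velocity:
    def __init__(self, dx, dy): self.dx, self.dy = dx, dy

def movement_system(world, dt):
    # Systems operate on flat component data -- no class hierarchies needed.
    for pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt
```

Adding a new behaviour then means adding a component type and a system, without refactoring an inheritance tree.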
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is several times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Furthermore, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
For a comprehensive understanding of evolutionary processes and for reliable prognoses of the future consequences of environmental change, it is essential to reveal the genetic basis underlying adaptive responses. This goal has become even more important in light of ongoing climate change, which confronts organisms worldwide with new selection pressures and requires rapid evolutionary change to avoid local extinction. Freshwater ectotherms such as daphnids are particularly threatened. Unraveling the genetic basis of local adaptation is complicated by the interplay of forces shaping patterns of genetic divergence among populations. Due to their key position in freshwater communities, their cyclic parthenogenetic mode of reproduction and their resting propagules (which form biological archives), daphnids are particularly well suited for this purpose.
The aim of this thesis was to assess the impact of local thermal selection on the Daphnia longispina complex and to reveal the underlying genetic loci. To this end, I compared genetic differentiation among populations containing Daphnia galeata, Daphnia longispina and their interspecific hybrids across time, space and species boundaries. I found strongly contrasting patterns of genetic differentiation between selectively neutral markers and functional candidate gene markers, between the two species, and among samples from different lakes, suggesting (together with a correlation with habitat temperatures) local thermal selection acting on the candidate gene TRY5F and indicating adaptive introgression. To assess the candidate genes' impact on fitness, I performed association analyses between genotypes and phenotypic traits of D. galeata clones from seven populations. The tests revealed a general temperature effect as well as inter-population differences in phenotypic traits, and they imply a possible contribution of the candidate genes to life-history traits. Finally, using a combined population transcriptomic and reverse ecology approach, I introduced a methodology with a wide range of applications in evolutionary biology and showed that local thermal selection was probably a minor force in shaping sequence and gene expression divergence among four D. galeata populations, although it contributed to sequence divergence between two of them. I identified many transcripts possibly under selection or contributing strongly to population divergence, a large fraction thereof putatively under local thermal selection, and showed that genetic and gene expression variation is not depleted specifically in temperature-related candidate genes.
In conclusion, I detected signs of local adaptation in the D. longispina complex across space, time and species barriers. Populations and species remained genetically divergent, although increased gene flow, together with genotypes recruited from the resting egg bank, possibly contributed to the maintenance of standing genetic variation. Further work is required to accurately determine the influence of introgression and the effects of the candidate genes on individual fitness. While I found no evidence of a response to intense local thermal selection, the high resilience and adaptive potential regarding environmental change that I observed suggest positive future prospects for the populations of the D. longispina complex. Overall, however, due to continuing environmental degradation, daphnids and other aquatic invertebrates remain vulnerable and threatened.
In this thesis we examined the question of whether the personality traits of early child care workers influence process quality in preschool.
Research has shown that in educational settings such as preschool, pedagogical quality affects children's developmental outcomes (e.g. NICHD, 2002; Peisner-Feinberg et al., 1999). A substantial part of pedagogical quality known to be vital in this respect is the interaction between teacher and children (e.g., Tietze, 2008). Results of prior classroom research indicate that teachers' personality might be an important factor for good teacher-child interaction (Mayr, 2011). Thus, personality traits might play a vital role in preschool interaction. The aims of this thesis were therefore to a) identify pivotal personality traits of child care workers, b) assess ideal levels of the identified personality traits and c) examine the relationship between pivotal personality traits and process quality. To this end, we conducted two requirement analyses and a video study. The results showed that subject matter experts (parents, child care workers, lecturers) partly agreed on which personality traits are pivotal for child care workers. The experts showed high consensus with regard to the minimum, ideal and maximum personality trait profiles. Moreover, child care workers whose profiles lay closer to the experts' ideal also showed higher process quality. In addition, regression analyses showed that the child care workers' levels of the Big Two (Communion and Agency) were significantly related to their process quality.
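The final regression step can be sketched in Python with entirely hypothetical data (the study's real variables, scales and sample are not reproduced here); process quality is regressed on the two Big Two trait scores:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: one row per child care worker (n = 50)
rng = np.random.default_rng(42)
communion = rng.normal(size=50)   # Big Two trait score (illustrative)
agency = rng.normal(size=50)      # Big Two trait score (illustrative)
quality = 0.4 * communion + 0.3 * agency + rng.normal(scale=0.5, size=50)

# OLS regression of process quality on Communion and Agency
X = sm.add_constant(np.column_stack([communion, agency]))
fit = sm.OLS(quality, X).fit()
print(fit.params, fit.pvalues)    # coefficients and their significance
```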
Microbial pollution of surface waters poses substantial risks to public health, among others during recreational use. Microbial pollution was studied at selected sampling sites in the rivers Rhine, Moselle and Lahn (Germany) on the basis of commonly used fecal indicator organisms (FIO) indicating bacterial (Escherichia coli, intestinal enterococci) and viral (somatic coliphages) fecal contamination. In addition, blaCTX-M antibiotic resistance genes (ARG) were quantified at two sites in the river Lahn and used as markers for tracking the spread of antibiotic resistance in the aquatic environment. The impact of changes in climate-related parameters on FIO was examined by studying monitoring results from contrasting flow conditions in the rivers Rhine and Moselle. Analyses at all studied river sites clearly indicate that high discharge and precipitation enhance the influx of FIO, ARG and thus potentially (antibiotic-resistant) pathogens into rivers. In contrast, a decrease in hygienic microbial pollution was observed under high solar irradiation and increasing water temperatures. Based on the identified key factors, multiple linear regression (MLR) models were established for five sites along a stretch of the river Lahn that allow a timely assessment of fecal indicator abundances. An interaction between abiotic and biotic factors (i.e. enhanced grazing pressure) contributed considerably to the formation of seasonal patterns in FIO abundances. This effect was amplified during extraordinarily low flow conditions in rivers with pronounced trophic interactions, clearly hampering the transfer of model approaches between rivers with different biological and hydrological characteristics. Bacterial indicators were more strongly influenced by grazing pressure than phages; hence, bacterial indicators alone do not sufficiently describe viral pollution in rivers. blaCTX-M genes were omnipresent in Lahn river water and corresponded to the distribution patterns of FIO, indicating fecal sources. Agriculture and wastewater treatment plant effluents contributed to ARG loads, and participants in non-bathing water sports were found to be at risk of ingesting antibiotic-resistant bacteria (ARB) including ARG, bearing the risk of infection or colonization. The results of the present study highlight the need to be aware of such risks not only in designated bathing waters. ARG abundance at both riverine sampling sites could largely be explained by E. coli abundance and may thus also be incorporated into multiple regression models using E. coli-specific environmental predictors. It can be expected that the frequency of short-term microbial pollution events will increase over the next decades due to climate change. Several challenges were identified with regard to the implementation of early warning systems to protect the public from exposure to pathogens in rivers. Most importantly, the concept of the Bathing Water Directive (Directive 2006/7/EC) itself as well as the lack of harmonization in the regulatory framework at European Union (EU) level are major drawbacks and require future adjustments to reliably manage health risks related to microbial pollution in waters used in multifunctional ways.
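An MLR model of this kind can be sketched in Python as follows; the predictors mirror the key factors named above (discharge, precipitation, solar irradiation, water temperature), but all numbers are hypothetical and the thesis' actual model terms may differ:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: rows are sampling dates at one river site.
# Columns: discharge [m^3/s], precipitation [mm], irradiation [W/m^2], T [deg C]
X = np.array([
    [120.0,  5.2,  80.0, 12.5],
    [310.0, 18.4,  40.0, 10.1],
    [ 95.0,  0.0, 210.0, 19.3],
    [150.0,  2.1, 160.0, 16.8],
    [280.0, 11.0,  60.0, 11.2],
])
# E. coli abundances [CFU/100 mL], log10-transformed as is common for FIO
y = np.log10([2400.0, 9800.0, 310.0, 900.0, 5600.0])

mlr = LinearRegression().fit(X, y)
# Timely assessment for a new day's conditions, back-transformed to CFU/100 mL
forecast = 10 ** mlr.predict([[200.0, 8.0, 100.0, 13.0]])
```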
Water scarcity is already an omnipresent problem in many parts of the world, especially in sub-Saharan Africa. The dry years 2018 and 2019 showed that water resources are finite in Germany as well. Projections and predictions for the coming decades indicate that the renewal rates of existing water resources will decline due to the growing influence of climate change, while water extraction rates will increase due to population growth. It is therefore important to find alternative and sustainable methods to make optimal use of the water resources currently available. For this reason, the reuse of treated wastewater for irrigation and recharge purposes has become one focus of scientific research in this field. However, it must be taken into account that wastewater contains so-called micropollutants, i.e. substances of anthropogenic origin. These include pharmaceuticals, pesticides and industrial chemicals that enter the wastewater, but also metabolites that are formed in the human body from pharmaceuticals or personal care products. Through treatment in wastewater treatment plants (WWTPs) as well as through chemical, biological and physical processes during the soil passage in the course of water reuse, these micropollutants are transformed into new substances, known as transformation products (TPs), which further broaden the range of contaminants that can be detected within the whole water cycle.
Although the presence of human metabolites and environmental TPs in untreated and treated wastewater has been known for many years, they are rarely included in routine analysis methods. A first goal of this thesis was therefore the development of an analysis method based on liquid chromatography - tandem mass spectrometry (LC-MS/MS) covering a broad spectrum of frequently detected micropollutants, including their known metabolites and TPs. The developed multi-residue analysis method comprised a total of 80 precursor micropollutants and 74 metabolites and TPs from different substance classes. The method was validated for the analysis of different water matrices (WWTP influent and effluent, surface water and groundwater from a bank filtration site). The influence of the MS parameters on the quality of the analysis data was studied. Despite the high number of analytes, a sufficient number of data points per peak was maintained, ensuring high sensitivity and precision as well as good recovery for all matrices. The selection of analytes proved to be relevant, as 95% of the selected micropollutants were detected in at least one sample. Several micropollutants were quantified that are not in the focus of other current multi-residue analysis methods (e.g. oxypurinol). The relevance of including metabolites and TPs was demonstrated by the frequent detection of, e.g., clopidogrel acid and valsartan acid at higher concentrations than their precursors, the latter even being detected in samples of bank filtrate water.
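Two of the validation figures of merit mentioned here, recovery and precision, are simple to compute from replicate analyses of spiked samples; a minimal Python sketch (hypothetical function name and example values):

```python
import numpy as np

def recovery_and_rsd(measured, spiked_conc):
    """Relative recovery [%] and precision (relative standard deviation, %)
    from replicate analyses of one spiked water sample."""
    m = np.asarray(measured, dtype=float)
    recovery = m.mean() / spiked_conc * 100.0   # mean found vs. spiked
    rsd = m.std(ddof=1) / m.mean() * 100.0      # sample RSD of replicates
    return recovery, rsd

# Hypothetical replicate results for a 100 ng/L spike in WWTP effluent
rec, rsd = recovery_and_rsd([92.0, 97.5, 88.3, 95.1], 100.0)
```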
Because it integrates metabolites, which are produced in the body by biological processes, as well as biological and chemical TPs, the multi-residue analysis method is also suitable for elucidating degradation mechanisms in treatment systems for water reuse that, e.g., use a soil passage for further treatment. In the second part of the thesis, samples from two treatment systems based on natural processes were analysed: a pilot-scale above-ground sequential biofiltration system (SBF) and a full-scale soil aquifer treatment (SAT) site. In the SBF system mainly biological degradation was observed, clearly demonstrated by the detection of biological TPs after treatment. The efficiency of degradation was improved by an intermediate aeration step, which created oxic conditions in the upper layer of the subsequent soil passage. In the SAT system a combination of biodegradation and sorption processes occurred. The different behaviour of some biodegradable micropollutants compared to the SBF system revealed the influence of redox conditions and the microbial community. An advantage of the SAT system over the SBF system was found in the sorption capacity of the natural soil: especially positively charged micropollutants were attenuated through ionic interactions with negatively charged soil particles. Based on the physicochemical properties at ambient pH, the degree of removal in the investigated systems and the occurrence in the source water, a selection of process-based indicator substances was proposed.
Within the first two parts of this thesis, a micropollutant not previously in the focus of environmental research was frequently detected at elevated concentrations in WWTP effluents: the antidiabetic drug sitagliptin (STG). STG showed low degradability in biological systems, and it was therefore investigated to what extent chemical treatment by ozonation can ensure its attenuation. STG contains an aliphatic primary amine as the principal point of attack for the ozone molecule. Since only limited information is available on the behaviour of this functional group during ozonation, STG also served as an example for other micropollutants containing aliphatic primary amines. A pH-dependent degradation kinetic was observed, caused by the protonation of the primary amine at lower pH values. At pH values in the range of 6 - 8, which is typical for the environment and for WWTPs, STG showed second-order rate constants on the order of 10^3 M^-1 s^-1 and thus belongs to the group of readily degradable substances. However, complete degradation can only be expected at significantly higher pH values (> 9). The transformation of the primary amine moiety into a nitro group was observed as the major degradation mechanism of STG during ozonation. Other mechanisms involved the formation of a diketone, bond breakages and the formation of trifluoroacetic acid (TFA). Investigations at a pilot-scale ozonation plant, using the effluent of the biological stage of a municipal WWTP as source water, confirmed the laboratory results: STG could not be removed completely even at high ozone doses, and the nitro compound was formed as the main TP and remained stable during further ozonation and subsequent biological treatment. It can therefore be assumed that under realistic conditions both a residual concentration of STG and the main TP, as well as other stable TPs such as TFA, can be detected in the effluents of a WWTP consisting of conventional biological treatment followed by ozonation and subsequent biological polishing steps.
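The pH dependence follows from acid-base speciation: only the neutral (deprotonated) amine reacts appreciably with ozone, so the apparent rate constant scales with the neutral fraction. A short Python sketch of this standard relationship; the pKa and the species-specific rate constant below are illustrative assumptions, not measured values from the thesis:

```python
import numpy as np

def k_apparent(pH, pKa=9.0, k_neutral=1.0e5):
    """Apparent second-order rate constant [M^-1 s^-1] for ozone + primary amine.

    Only the neutral amine is assumed reactive; pKa and k_neutral are
    illustrative placeholder values.
    """
    # Henderson-Hasselbalch: fraction of the amine present in neutral form
    frac_neutral = 1.0 / (1.0 + 10.0 ** (pKa - np.asarray(pH)))
    return k_neutral * frac_neutral

# With these values, at pH 7 only ~1% of the amine is neutral, giving an
# apparent rate constant on the order of 10^3 M^-1 s^-1 as in the abstract.
print(k_apparent([6.0, 7.0, 8.0, 9.0]))
```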
We consider variational discretization of three different optimal control problems.
The first is a parabolic optimal control problem governed by space-time measure controls. This problem has an inherent sparsity structure, which motivates our aim to achieve maximal sparsity on the discrete level. Due to the measures on the right-hand side of the partial differential equation, we consider a very weak solution theory for the state equation and need an embedding into the continuous functions for the pairings to make sense. Furthermore, we employ Fenchel duality to formulate the predual problem and give results on the solution theory of both the predual and the primal problem. The duality is also helpful later for the derivation of algorithms, since the predual problem can be differentiated twice, so that we can apply a semismooth Newton method. We then retrieve the optimal control via the duality relations.
For the state discretization we use a Petrov-Galerkin method employing piecewise constant states and piecewise linear, continuous test functions in time. For the space discretization we choose piecewise linear, continuous functions. As a result the controls are composed of Dirac measures in space-time, centered at points of the discrete space-time grid. We prove that the optimal discrete states and controls converge strongly in L^q and weakly-* in the space of measures M, respectively, to their smooth counterparts, where q ∈ (1, min{2, 1 + 2/d}] and d is the spatial dimension. The variational discrete version of the state equation with the above choice of spaces yields a Crank-Nicolson time stepping scheme with half a Rannacher smoothing step.
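Spelled out in LaTeX (with notation assumed, since the abstract does not fix symbols), the convergence statement reads:

```latex
% Notation assumed: \bar y_\sigma, \bar u_\sigma denote the discrete optimal
% state and control, \bar y, \bar u their continuous counterparts, and d the
% spatial dimension.
\bar y_\sigma \longrightarrow \bar y \quad \text{strongly in } L^q, \qquad
\bar u_\sigma \overset{*}{\rightharpoonup} \bar u \quad \text{in } \mathcal{M}, \qquad
q \in \Bigl( 1,\; \min\Bigl\{ 2,\, 1 + \tfrac{2}{d} \Bigr\} \Bigr].
```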
Furthermore, we compare our approach to a full discretization of the corresponding control problem, namely a discontinuous Galerkin method for the state discretization in which the discrete controls are piecewise constant in time and Dirac measures in space. Numerical experiments highlight the sparsity features of our discrete approach and verify the convergence results.
The second problem we analyze is a parabolic optimal control problem governed by bounded initial measure controls. Here, the cost functional consists of a tracking term corresponding to the observation of the state at final time. Instead of a regularization term for the control in the cost functional, we impose a bound on the measure norm of the initial control. As in the first problem we observe a sparsity structure, but here the control resides only in space at the initial time, so we focus on the space discretization to achieve maximal sparsity of the control. Again, due to the initial measure in the partial differential equation, we rely on a very weak solution theory for the state equation.
We employ a dG(0) approximation of the state equation, i.e. for our ansatz and test space we choose functions that are piecewise linear and continuous in space and piecewise constant in time. The variational discretization of the problem together with the optimality conditions then induces maximal discrete sparsity of the initial control, i.e. Dirac measures in space. We present numerical experiments to illustrate our approach and investigate the sparsity structure.
As the third problem we choose an elliptic optimal control problem governed by functions of bounded variation (BV) in one space dimension. The cost functional consists of a tracking term for the state and a BV seminorm in terms of the derivative of the control. We derive a sparsity structure for the derivative of the BV control. Additionally, we utilize the mixed formulation of the state equation.
A variational discretization approach with a piecewise constant discretization of the state and a piecewise linear, continuous discretization of the adjoint state yields that the derivative of the control is a sum of Dirac measures. Consequently the control is a piecewise constant function. Under a structural assumption we even obtain that the number of jumps of the control is finite. We prove error estimates for the variational discretization approach in combination with the mixed formulation of the state equation and confirm our findings in numerical experiments that display the convergence rate.
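The step from the Dirac-measure derivative to a piecewise constant control can be written out as follows (notation assumed, on an interval [a, b] with grid points x_i):

```latex
% If the derivative of the discrete control u_h is a finite sum of Diracs,
u_h' = \sum_{i=1}^{K} c_i\, \delta_{x_i},
% then integrating from the left endpoint a shows that u_h is piecewise
% constant, jumping by c_i at each point x_i:
u_h(x) = u_h(a) + \sum_{i \,:\, x_i \le x} c_i .
```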
In summary, we confirm the usefulness of variational discretization for optimal control problems with measures that exhibit inherent sparsity. We are able to preserve the sparsity on the discrete level without discretizing the control variable.
The erosion of the closed innovation paradigm, in conjunction with increasing competitive pressure, has boosted the interest of both researchers and organizations in open innovation. Despite this rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for this reservation are the considerable complexity of transitioning toward open innovation and a lack of understanding of the procedures such endeavors require. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. From this structured depiction of the transition process, a guideline is derived. It comprises the actions commonly employed by successful practitioners of open innovation and may serve as a baseline for parties interested in the paradigm. With the derivation of the guideline and the concise depiction of the individual transitional phases, this thesis reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
The goal of this Bachelor thesis is to implement and evaluate the "Simulating of Collective Misbelief" model in the NetLogo programming language. To this end, the model requirements have to be specified and implemented in the NetLogo environment, and further tool-related requirements have to be specified to enable the model to work in NetLogo. After implementation, several simulations are conducted to answer the stated research question.