Doctoral Thesis
The aim of this dissertation was to examine physiological parameters (heart rate variability measures) and biomechanical parameters (step features) as possible anticipating indicators of psychological mood states. 420 participants (275 male, 145 female; age: M = 34.7 ± 9.7 years) engaged in a 60-minute slow endurance run while being asked questions via a mobile answering and recording device. We measured several mood states, physiological measures, and biomechanical parameters, and used latent growth curve analysis to examine the cross-lagged effects. Results demonstrated significant (p ≤ .05) relationships in which biomechanical shoe features anticipated psychological mood states, and psychological mood states anticipated physiological parameters.
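The cross-lagged logic behind such a result can be illustrated with a plain cross-lagged panel regression on simulated data. This is a much simpler stand-in for the latent growth curve models actually used in the thesis, and every number below (effect sizes, number of time points, noise level) is invented purely for illustration.

```python
import numpy as np

# Hypothetical illustration of a cross-lagged effect: does variable x at
# time t predict variable y at time t+1, controlling for y at time t?
# The data are simulated, not the study's data.
rng = np.random.default_rng(0)
n, t_steps = 420, 6                       # 420 runners, 6 measurement points
x = rng.normal(size=(n, t_steps))         # e.g., a step feature
y = np.zeros((n, t_steps))                # e.g., a mood score
for t in range(1, t_steps):
    # true model: autoregressive path 0.5, cross-lagged path 0.3
    y[:, t] = 0.5 * y[:, t - 1] + 0.3 * x[:, t - 1] + rng.normal(0, 0.5, n)

# Stack (y_t, x_t) -> y_{t+1} pairs and estimate both paths by least squares.
Y_next = y[:, 1:].ravel()
design = np.column_stack([y[:, :-1].ravel(), x[:, :-1].ravel(),
                          np.ones(n * (t_steps - 1))])
beta, *_ = np.linalg.lstsq(design, Y_next, rcond=None)
# beta[0] recovers the autoregressive path, beta[1] the cross-lagged path
```

With enough person-by-time observations, the least-squares estimates recover the simulated paths closely; a full latent growth curve model would additionally separate person-level trajectories from occasion-level fluctuations.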
How to begin? This short question addresses a problem that is anything but simple, especially when regarding something as sophisticated and multilayered as musical theatre. However, scholars of this vast research area have mostly neglected the question so far. This study analyses and compares the initial sections of late Victorian popular musical theatre and thereby contributes to several fields of research: the analysis of initial sections of musical theatre in general, the analysis of the music of popular musical theatre in particular, and thus operetta studies. The 1890s were an especially interesting time for popular musical theatre in London: the premiered works include the last collaborations of Gilbert and Sullivan as well as offshoots of Savoy opera; but the so-called ‘naughty nineties’ also saw the emergence of a new genre, musical comedy, which captured the late Victorian zeitgeist like no other. This new form of theatrical entertainment was carefully and consciously constructed and promoted as modern and fashionable, walking a fine line between respectability and mildly risqué excitement.
Because a deep understanding of the developments and new tendencies concerning popular musical theatre in the 1890s is crucial in order to interpret differences as well as similarities, the analyses of the opening numbers are preceded by a detailed discussion of the relevant genres: comic opera, musical comedy, musical play, and operetta. Since the producers of the analysed works wanted to distance themselves from former and supposedly old-fashioned traditions, this book considers influences not only from their British predecessors but also from Viennese operetta and French opéra bouffe.
Human action recognition from video has received growing attention in computer vision and has made significant progress in recent years. Action recognition is the task of deciding which human actions appear in a video. The difficulties involved in distinguishing human actions are due to the high complexity of human behaviors as well as appearance variation, motion pattern variation, occlusions, etc. Many applications rely on human action recognition from captured video, including video surveillance systems, health monitoring, human-computer interaction, and robotics. Action recognition based on RGB-D data has drawn increasing attention in recent years. RGB-D data contain color (Red, Green, and Blue (RGB)) and depth data that represent the distance from the sensor to every pixel of the object (object point). The main problem that this thesis deals with is how to automate the classification of specific human activities/actions from RGB-D data. The classification process exploits the spatial and temporal structure of actions. Therefore, the goal of this work is to develop algorithms that can distinguish these activities by recognizing low-level and high-level activities of interest from one another. These algorithms are developed by introducing new features and methods using RGB-D data to enhance the detection and recognition of human activities. In this thesis, the most popular state-of-the-art techniques are reviewed, presented, and evaluated. From the literature review, these techniques are categorized into hand-crafted-feature and deep-learning-based approaches. The proposed action recognition framework builds on these two categories by embedding novel methods for human action recognition. These methods are based on features extracted from RGB-D data that are
evaluated using machine learning techniques. The work presented in this thesis improves human action recognition in two distinct parts. The first part focuses on improving current successful hand-crafted approaches. It contributes to two significant areas of the state of the art: extending existing feature detectors, and classifying human actions in 3D spatio-temporal domains by testing new combinations of different feature representations. The contributions of this part are evaluated with machine learning techniques, both unsupervised and supervised, to assess their suitability for the task of human action recognition. K-means clustering represents the unsupervised learning technique, while supervised learning is represented by Support Vector Machine, Random Forest, K-Nearest Neighbor, Naive Bayes, and Artificial Neural Network classifiers. The second part focuses on studying current deep-learning-based approaches and how to use them with RGB-D data for the human action recognition task. As the first step of each contribution, an input video is analyzed as a sequence of frames. Then, pre-processing steps such as filtering and smoothing are applied to the video frames to remove noisy data from each frame. Afterward, different motion detection and feature representation methods are used to extract the features present in each frame. The extracted features
are represented by local features, global features, and feature combinations, besides deep learning methods, e.g., Convolutional Neural Networks. The feature combination achieves excellent accuracy that outperforms other methods on the same RGB-D datasets. All results from the proposed methods in this thesis are evaluated on publicly available datasets and illustrate that using spatio-temporal features can improve recognition accuracy. Competitive experimental results are achieved overall. In particular, the proposed methods generalize better to the test sets than the state-of-the-art methods on the RGB-D datasets.
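As an illustration of the unsupervised stage mentioned above, the following is a minimal k-means implementation of the kind commonly used to build bag-of-features codebooks from local descriptors. The toy descriptors and the deterministic farthest-point initialisation are my own simplifications for the sketch, not the thesis's actual setup.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic farthest-point initialisation.
    Returns (centroids, labels); a stand-in for the codebook-building
    step of a bag-of-features action recognition pipeline."""
    centroids = [X[0]]
    for _ in range(k - 1):
        # next seed: the point farthest from all seeds chosen so far
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign every descriptor to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip emptied clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy "local spatio-temporal descriptors" from two well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 8)), rng.normal(3.0, 0.3, (50, 8))])
centroids, labels = kmeans(X, k=2)
hist = np.bincount(labels, minlength=2)       # bag-of-words histogram
```

In a full pipeline, one such histogram per video clip would then be fed to the supervised classifiers named above (SVM, Random Forest, etc.).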
In the new epoch of the Anthropocene, global freshwater resources are experiencing extensive degradation from a multitude of stressors. Consequently, freshwater ecosystems are threatened by a considerable loss of biodiversity as well as a substantial decrease in an adequate and secure freshwater supply for human use, not only on local scales but also on regional to global scales. Large-scale assessments of the human and ecological impacts of freshwater degradation enable integrated freshwater management and complement small-scale approaches. Geographic information systems (GIS) and spatial statistics (SS) have shown considerable potential in ecological and ecotoxicological research to quantify stressor impacts on humans and ecological entities, and to disentangle the relationships between drivers and ecological entities on large scales through an integrated spatial-ecological approach. However, integrations of GIS and SS with ecological and ecotoxicological models are scarce, and hence the large-scale spatial picture of the extent and magnitude of freshwater stressors, as well as their human and ecological impacts, is still opaque. This Ph.D. thesis contributes novel GIS and SS tools, adapts and advances available spatial models, and integrates them with ecological models to enable large-scale identification of the human and ecological impacts of freshwater degradation. The main aim was to identify and quantify the effects of stressors, i.e., climate change and trace metals, on freshwater assemblage structure and trait composition, and on human health, respectively, on large scales, i.e., European and Asian freshwater networks. The thesis starts with an introduction to the conceptual framework and objectives (chapter 1). It proceeds with outlining two novel open-source algorithms for quantification of the magnitude and effects of catchment-scale stressors (chapter 2). The algorithms,
jointly called ATRIC, automatically select an accumulation threshold for stream network extraction from digital elevation models (DEM) by ensuring the highest concordance between DEM-derived and traditionally mapped stream networks. Moreover, they delineate catchments and upstream riparian corridors for given stream sampling points after snapping them to the DEM-derived stream network. ATRIC showed similar or better performance than the available comparable algorithms and is capable of processing large-scale datasets. It enables integrated and transboundary management of freshwater resources by quantifying the magnitude and effects of catchment-scale stressors. Spatially shifting temporal points (SSTP), outlined in chapter 3, estimates pooled within-time-series (PTS) variograms by spatializing temporal data points and shifting them. Data were pooled by ensuring consistency of spatial structure and temporal stationarity within a time series, while pooling a sufficient number of data points and increasing data density for reliable variogram estimation. SSTP-estimated PTS variograms showed higher precision than the available method. The method enables regional-scale stressor quantification by filling spatial data gaps with temporal information in data-scarce regions. In chapter 4, the responses of assumed climate-associated traits from six grouping features to 35 bioclimatic indices were compared for five insect orders, their potential for changing distribution patterns under future climate change was evaluated, and the most influential climatic aspects were identified. Traits of the temperature preference grouping feature and the insect order Ephemeroptera exhibited the strongest response to climate as well as the highest potential for changing distribution patterns, while seasonal radiation and moisture were the most influential climatic aspects that may drive a change in insect distribution patterns.
The results contribute to trait-based freshwater monitoring and change prediction. In chapter 5, the concentrations of 10 trace metals in drinking water sources were predicted and compared with guideline values. In more than 53% of the total area of Pakistan, inhabited by more than 74 million people, the drinking water was predicted to be at risk from multiple trace metal contamination. The results inform freshwater management by identifying potential hot spots. The last chapter (6) synthesizes the results and provides a comprehensive discussion of the four studies and their relevance for freshwater resource conservation and management.
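The pooled variograms of chapter 3 build on the classical empirical semivariogram. As a hedged sketch of that underlying estimator only (not of the SSTP pooling itself, and with invented toy data), it can be computed as follows:

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """Classical Matheron estimator: gamma(h) = mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation falls into each distance bin."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)                     # all unordered pairs
    h = np.linalg.norm(coords[i] - coords[j], axis=1)  # pair distances
    sq = 0.5 * (values[i] - values[j]) ** 2            # semivariances
    which = np.digitize(h, bins)
    return np.array([sq[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

# Toy field: the value grows with x, so semivariance grows with distance.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
values = coords[:, 0] + rng.normal(0, 0.1, 200)
gamma = empirical_semivariogram(coords, values, bins=np.linspace(0, 10, 6))
```

SSTP's contribution, per the abstract, is in deciding which temporal data points may be pooled into the pair set before this kind of estimator is applied.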
In the present study the flora and vegetation of Kakamega Forest, an East African rainforest in Western Kenya, were investigated. Kakamega Forest is highly degraded and fragmented and is an ideal model for studying anthropogenic influence on the forest inventory. The main focus was to analyse the influence of human impact on vascular plant species composition. During five field phases in the years 2001 to 2004, a total of 19 study sites scattered over the whole forest, including all fragments, were investigated regarding forest structure, species composition, and plant communities. The different forest sites were analysed with three different methods: phytosociological relevés, line transects, and the variable-area transect method. The forest survey revealed about 400 taxa of vascular plants, among them 112 trees, 62 shrubs, 58 climbers, and 114 herbs. Several species are restricted to this forest within Kenya, but only one endemic species, the herb Commelina albiflora, could be discovered. About 15 species were recorded as new for Kenya, and probably at least one species is new to science. Kakamega Forest is a unique mixture of Guineo-Congolian and Afromontane floral elements: about one half of the vascular plant species has its origin in the lowland forests of the Congo basin, and one third originates from Afromontane habitats. The present study represents the first description of the plant communities of Kakamega Forest. An analysis of the different forest sites and plantations resulted in 17 different vegetation units. For the mature forest sites, eleven plant communities were described; the young succession stage consists of two plant communities. Since the disturbance history and the age of the different plant communities could be estimated, their chronology was also described. An exception are the study sites within the plantations and afforested sites.
These four vegetation units were not described as plant communities, because they are highly affected by man and do not belong to the natural succession of Kakamega Forest. Nevertheless, the regeneration potential of such forests was investigated. Due to the different succession stages, the changing species composition along a disturbance gradient could be analysed. Most of Kakamega Forest consists of middle-aged secondary forest, often surrounded by very young secondary forest. A true primary rainforest could not be found due to the massive influence of over-exploitation; anthropogenic influence could be observed in all parts of the forest. The forest develops towards a climax stage, but a comparison with former surveys shows that the regeneration is much slower than expected. Human impact has to be avoided to allow the forest to develop into a primary-like rainforest. But several climax tree species might be missing anyway, because after the broad logging activities of the past not enough seed trees remain. Species richness was highest in disturbed forest sites, where a mixture of pioneer, climax, and bushland species could be recorded. Therefore, high species richness is not a suitable indicator of forest quality; the proportion of climax species typical for Kakamega Forest would be a better measure. Compared to the main forest block, the forest fragments do not lack diversity, as would be expected from fragmentation processes. Instead, the only near-primary forest could be recorded in Kisere, a northern fragment. The high proportion of climax species and the more or less undisturbed forest structure are a result of the strict protection by the Kenya Wildlife Service and of low logging activities. Differences in species composition between the studied forest sites are a result of different logging histories or management regimes rather than of different edaphic or climatic conditions.
Specifying behaviors of multi-agent systems (MASs) is a demanding task, especially in safety-critical systems, where the specification of behaviors has to be carried out carefully in order to avoid side effects that might cause unwanted or even disastrous behaviors. Thus, formal methods based on mathematical models of the system under design are helpful. They not only allow us to formally specify the system at different levels of abstraction, but also to verify the consistency of the specified systems before implementing them. Formal specification aims at a precise and unambiguous description of the behavior of MASs, whereas verification aims at proving the satisfaction of specified requirements. The behavior of an agent can be described as discrete changes of its states with respect to external or internal actions: whenever an action occurs, the agent moves from one state to another. Therefore, an efficient way to model this type of discrete behavior is to use a kind of state transition diagram such as a finite automaton. One remarkable advantage of such transition diagrams is that they lend themselves to formal analysis techniques based on model checking, an automatic verification technique which determines whether given properties are satisfied by a model of a particular system. In realistic physical environments, however, it is necessary to consider continuous behaviors in addition to the discrete behaviors of MASs. Examples of such behaviors include the movement of a soccer agent kicking off or going to the ball, the process of putting out a fire by a fire brigade agent in a rescue scenario, or any other behavior governed by a timed physical law. Traditional state transition diagrams are not sufficient to combine these types of behaviors; hybrid automata offer an elegant method to capture them.
Hybrid automata extend regular state transition diagrams with means to deal with such continuous actions: the state transition diagram models the discrete changes of behavior, while differential equations model the continuous changes. The semantics of hybrid automata make them accessible to formal verification by means of model checking. The main goal of this thesis is to employ hybrid automata for specifying and verifying behaviors of MASs. However, specifying and verifying behaviors of MASs by means of hybrid automata raises several issues that should be considered, including the complexity, modularity, and expressiveness of MAS models. This thesis addresses these issues and provides possible solutions to tackle them.
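To make the combination of discrete jumps and continuous flows concrete, here is a minimal simulation of a hybrid automaton: the classic two-mode thermostat, with guarded discrete transitions and an Euler-integrated differential equation per mode. This is a generic textbook example, not one of the MAS models developed in the thesis.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Mode:
    flow: Callable[[float], float]   # dT/dt while in this mode
    guard: Callable[[float], bool]   # condition triggering the jump
    target: str                      # mode entered when the guard holds

# Two discrete modes; the differential equations model the continuous part.
modes = {
    "off": Mode(flow=lambda T: -0.1 * T,      guard=lambda T: T <= 18.0, target="on"),
    "on":  Mode(flow=lambda T: 5.0 - 0.1 * T, guard=lambda T: T >= 22.0, target="off"),
}

def simulate(T=20.0, mode="off", dt=0.01, steps=5000):
    trace = []
    for _ in range(steps):
        if modes[mode].guard(T):          # discrete transition (jump)
            mode = modes[mode].target
        T += modes[mode].flow(T) * dt     # continuous evolution (explicit Euler)
        trace.append((mode, T))
    return trace

trace = simulate()
temps = [T for _, T in trace]             # oscillates between roughly 18 and 22
```

A model checker for hybrid automata would reason symbolically over all such trajectories (e.g., proving the temperature stays within bounds) rather than sampling one as this simulation does.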
The hydration of polysaccharides is nowadays the subject of intense research. The interaction of native and modified polysaccharides and polysaccharide-based bioconjugates with water has an important influence on their functional behaviour. Although the hydration phenomenon has been studied for decades, there is still a lack of awareness of the influence of hydration water on polysaccharide structure and its consequences for industrial or medicinal applications. The hydration of polysaccharides is often described by the existence of water layers differing in their physical properties depending on the distance from the polysaccharide. Using differential scanning calorimetry (DSC), such water layers were categorized according to their behaviour upon cooling in hyaluronan (HYA, the sodium salt of alternating β-1,4- and β-1,3-linked D-glucuronic acid and N-acetyl-D-glucosamine units), the model polysaccharide of the present work. The amount of non-freezing water, i.e. water in close proximity to the HYA chain which does not freeze at all, was determined to be around 0.74 g H2O/g HYA for HYA with molecular weights from 100 to 740 kDa and 0.84 g H2O/g HYA for a molecular weight of 1390 kDa. The amount of freezing-bound water, the water pool which is affected by the presence of HYA but freezes, was determined to lie in the range from 0.74 to 2 g H2O/g HYA. Above this value only non-freezing and bulk water are present, since the melting enthalpy measured above this concentration reached the same value as for pure water. Since this approach suffers from several experimental problems, a new approach based on the determination of evaporation enthalpies was suggested. The analysis of the evaporation enthalpies revealed an additional process, associated with an apparent energy release, taking place below a water content of 0.34 g H2O/g HYA. This phenomenon was observed also for the protonated form of HYA.
The existence of the energy-compensating process was confirmed with the Kissinger-Akahira-Sunose method, which allowed determination of the actual water evaporation/desorption enthalpies in all stages of the evaporation process. In fact, the apparent evaporation enthalpy increased up to a water content of 0.34 g H2O/g HYA and then dropped to lower values, which were still higher than the evaporation enthalpy of pure water. By the use of the time-domain nuclear magnetic resonance (TD-NMR) technique it was revealed that this phenomenon reflects the plasticisation of HYA.
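For reference, the Kissinger-Akahira-Sunose (KAS) method mentioned above rests on the standard isoconversional approximation: plotting ln(β/T²) against 1/T at a fixed conversion α gives the apparent activation energy from the slope,

```latex
\ln\!\left(\frac{\beta}{T^{2}}\right)
  = \ln\!\left(\frac{A\,R}{E_{a}\,g(\alpha)}\right) - \frac{E_{a}}{R\,T}
```

where β is the heating rate, T the temperature at conversion α, A the pre-exponential factor, R the gas constant, E\_a the apparent activation energy, and g(α) the integral form of the reaction model. This is the generic textbook form of the method, not a relation specific to the HYA system studied here.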
Further, it was revealed that the non-freezing water determined by DSC consists of two water fractions: about 15% structurally integrated water, interacting directly with polar sites, and about 85% structurally restricted water, embedded in between the HYA chains. The occurrence of the plasticisation concentration close to the equilibrium moisture content provided the possibility to influence the physical structure of HYA during drying. In this way, three samples of native HYA dried under various conditions were prepared and their physical properties analyzed. The samples differed in rehydration kinetics, plasticisation concentration, glass transitions, and morphology. The properties of the water pool were also studied in solutions of 10–25 mg HYA/mL. Fast field-cycling (FFC) NMR relaxometry showed the existence of three water fractions whose correlation times spanned from 10⁻⁶ to 10⁻¹⁰ seconds, progressively decreasing with distance from the HYA chain. The formation of a weak and transient intramolecular water bridge between HYA chains was observed. It was shown that, unlike the hydration of inorganic electrolytes, polyelectrolyte hydration is a dynamic process which reflects not only the technique used for the analysis and the experimental conditions but also the conformation of the polysaccharide and its thermal and hydration history.
It was demonstrated that some native polysaccharide structures can be easily modified by manipulating the preparation conditions, giving fractions with specific physicochemical properties without the need for any chemical modification.
Engineered nanoparticles are emerging pollutants. Their increasing use in commercial products suggests a similar increase in their environmental concentrations. Studying the fate of engineered colloids in the environment is highly challenging due to the complexity of their possible interactions with the main actors present in aquatic systems. Solution chemistry is one of the most central aspects; in particular, the interactions with dissolved organic matter (DOM) and with natural colloids are still poorly understood.
The aim of this work was to further develop the dedicated analytical methods required for investigating the fate of engineered colloids in environmental media as influenced by DOM. Reviewing the literature on DOM interactions with inorganic colloids revealed that a systematic characterization of both colloids and DOM, although essential, is lacking in most studies, and that further investigation of the fractionation of DOM on the surface of engineered colloids is needed. Another knowledge gap concerns the effects of DOM on the dynamic structure of colloid agglomerates. For this question, analytical techniques dedicated to the characterization of agglomerates in environmental media at low concentrations are required. Such techniques should remain accurate at low concentrations, be specific, be widely matrix-independent, and be free of artefacts due to sample preparation. Unfortunately, none of the currently available techniques (microscopy, light-scattering-based methods, separation techniques, etc.) fulfills all these requirements.
However, a compromise was found in hydrodynamic chromatography coupled to inductively coupled plasma mass spectrometry (HDC-ICP-MS). This method has the potential to size inorganic particles in complex media at concentrations below the ppb range and is element-specific; however, its limitations had not been systematically explored. In this work, the potential of the method was explored further. Its simple size separation mechanism ensures high flexibility of the elution parameters, and universal calibration can be accurately applied to particles of different compositions and surface chemistries. The most important limitations of the method are its low size resolution and the effect of particle shape on the retention factor. The implementation of HDC coupled to single-particle ICP-MS (HDC-SP-ICP-MS) offers new possibilities for the recognition of particle shape and hence the differentiation between primary particles and homoagglomerates. This coupling technique is therefore highly attractive for monitoring the effects of DOM on the stability of colloids in complex media. The versatility of HDC-ICP-MS is demonstrated by its successful application to diverse samples. In particular, it has been used to investigate the stability of citrate-stabilized silver colloids in reconstituted natural water in the presence of different types of natural organic matter. These particles were stable for at least one hour independently of the type of DOM used and the pH, in accordance with a coauthored publication addressing the stability of silver colloids in the River Rhine. Direct monitoring of DOM adsorption on colloids was not possible using UV and fluorescence detectors. Preliminary attempts to investigate the adsorption mechanism of humic acids on silver colloids using fluorescence spectroscopy suggest that fluorescent molecules are not adsorbed on the silver particles.
Several solutions for overcoming the encountered difficulties in the analysis of DOM interactions are proposed and the numerous perspectives offered by further developments and applications of HDC-(SP)-ICP-MS in environmental sciences are discussed in detail.
The aquatic environment is exposed to multiple environmental pressures and mixtures of chemical substances, among them petroleum and petrochemicals, metals, and pesticides. Aquatic invertebrate communities are used as bioindicators to reflect long-term and integral effects. Information on the presence of species can be supplemented with information on their traits. SPEAR-type bioindicators integrate such trait information on the community level.
This thesis aimed at enhancing the specificity of SPEAR-type bioindicators towards particular groups of chemicals, namely mixtures of oil sands-derived compounds, hydrocarbons, and metals.
For developing a bioindicator for discontinuous contamination with oil-derived organic toxicants, a field study was conducted in the Canadian oil sands development region in Northern Alberta. The traits ‘physiological sensitivity towards organic chemicals’ and ‘generation time’ were integrated to develop the bioindicator SPEARoil, reflecting the community sensitivity towards oil sands derived contamination in relation to fluctuating hydrological conditions.
Following the SPEARorganic approach, a physiological sensitivity ranking of taxa was developed for hydrocarbon contamination originating from crude oil or petroleum distillates. For this purpose, ecotoxicological information from acute laboratory tests was enriched with rapid-test and mesocosm results. The resulting Shydrocarbons sensitivity values can be used in SPEAR-type bioindicators.
To specifically reflect metal contamination in streams via bioindicators, Australian field studies were re-evaluated with focus on the traits ‘physiological metal sensitivity’ and ‘feeding type’. Metal sensitivity values, however, explained community effects in the field only weakly. Instead, the trait ‘feeding type’ was strongly related to metal exposure. The fraction of predators in a community can, thus, serve as an indicator for metal contamination in the field.
Furthermore, several metrics reflecting exposure to chemical mixtures in the environment were compared using existing pesticide datasets. Exposure metrics based on the 5% fraction of species sensitivity distributions were found to perform best, closely followed by Toxic Unit metrics based on the most sensitive species of a community or on Daphnia magna.
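The two metric families compared above can be sketched as follows. All concentrations and effect values below are invented placeholders; the real metrics additionally involve log-transformations and data-handling conventions specific to the datasets used.

```python
import math

def log_sum_toxic_units(conc, ec50):
    """log10 of the sum of toxic units TU_i = c_i / EC50_i,
    with EC50s for a reference species such as Daphnia magna."""
    return math.log10(sum(c / e for c, e in zip(conc, ec50)))

def log_max_tu_hc5(conc, hc5):
    """Analogous metric using the HC5 (the 5% hazard concentration from a
    species sensitivity distribution) in place of a single-species EC50."""
    return math.log10(max(c / h for c, h in zip(conc, hc5)))

conc = [0.1, 0.5]    # measured concentrations of two pesticides (placeholder)
ec50 = [1.0, 10.0]   # hypothetical Daphnia magna EC50s
hc5  = [0.2, 2.0]    # hypothetical SSD-derived HC5 values
tu  = log_sum_toxic_units(conc, ec50)   # log10(0.1/1 + 0.5/10) = log10(0.15)
ssd = log_max_tu_hc5(conc, hc5)         # log10(max(0.5, 0.25)) = log10(0.5)
```

Both metrics normalize measured concentrations by a toxicity benchmark; they differ in whether that benchmark reflects one sensitive species or the sensitive tail of a whole species sensitivity distribution.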
Leaf litter breakdown is a fundamental process in aquatic ecosystems, being mainly mediated by decomposer-detritivore systems that are composed of microbial decomposers and leaf-shredding, detritivorous invertebrates. The ecological integrity of these systems can, however, be disturbed, amongst others, by chemical stressors. Fungicides might pose a particular risk as they can have negative effects on the involved microbial decomposers but may also affect shredders via both waterborne toxicity and their diet; the latter by toxic effects due to dietary exposure as a result of fungicides’ accumulation on leaf material and by negatively affecting fungal leaf decomposers, on which shredders’ nutrition heavily relies. The primary aim of this thesis was therefore to provide an in-depth assessment of the ecotoxicological implications of fungicides in a model decomposer-detritivore system using a tiered experimental approach to investigate (1) waterborne toxicity in a model shredder, i.e., Gammarus fossarum, (2) structural and functional implications in leaf-associated microbial communities, and (3) the relative importance of waterborne and diet-related effects for the model shredder.
Additionally, knowledge gaps were tackled that were related to potential differences in the ecotoxicological impact of inorganic (also authorized for organic farming in large parts of the world) and organic fungicides, the mixture toxicity of these substances, the field-relevance of their effects, and the appropriateness of current environmental risk assessment (ERA).
In the course of this thesis, major differences in the effects of inorganic and organic fungicides on the model decomposer-detritivore system were uncovered; e.g., the palatability of leaves for G. fossarum was increased by inorganic fungicides but deteriorated by organic substances. Furthermore, non-additive action of fungicides was observed, rendering mixture effects of these substances hard to predict. While the relative importance of the waterborne and diet-related effect pathways for the model shredder seems to depend on the fungicide group and the exposure concentration, it was demonstrated that neither pathway may be ignored, due to their additive action. Finally, it was shown that effects can be expected at field-relevant fungicide levels and that current ERA may provide insufficient protection for decomposer-detritivore systems. To safeguard aquatic ecosystem functioning, this thesis thus recommends including leaf-associated microbial communities and long-term feeding studies with detritus feeders in ERA testing schemes, and identifies several knowledge gaps that must be filled to develop further reasonable refinements of fungicide ERA.
The establishment of aquatic alien species can strongly affect the community and food web structure of the invaded systems and thus represents a major threat to native biodiversity. One of the most important aquatic invasive species in European rivers is the Ponto-Caspian amphipod Dikerogammarus villosus. The species invaded most of the major European waterways within two decades, often associated with a decline of many other macroinvertebrate species, including other amphipods. Based on laboratory results, predation by the so-called ‘killer shrimp’ is often regarded as the key driver of the observed displacement effects, but recent studies indicated a minor relevance of predation by D. villosus in the field. To allow the determination of exact predator-prey interactions from field samples, I established 22 group-specific rDNA primers for freshwater taxa suitable for prey species identification in dietary samples (Chapter II) and an approach for the screening of D. villosus gut contents using 16 of these primers. Combining genetic gut content analyses based on one of these primers with stable isotope analyses, I examined the importance of intraguild predation (IGP) by D. villosus, often assumed to be the key driver of the displacement of native amphipod species, at an invasion front of the species in Switzerland (Chapter III). The results of this study revealed a low importance of IGP during this particular D. villosus invasion and indicated an overall sparsely predacious feeding behaviour of the species. As the feeding behaviour of D. villosus is supposed to differ between habitats and this study covered only a few sampling sites of one river, I also investigated the role of predation by D. villosus at multiple sites of the River Rhine system, covering a broad range of microhabitats (Chapter IV). In keeping with the results from the invasion front, the results of this study strongly indicated a sparsely predacious but flexible feeding behaviour of D. villosus, even within the same microhabitat.
However, established populations of D. villosus have changed aquatic food webs and can be expected to affect aquatic-terrestrial energy fluxes. In Chapter V of my thesis, I present a field study investigating the impact of D. villosus on the diet of two riparian spider taxa. The results of this study indicate an effect of D. villosus on the terrestrial food web via cross-ecosystem resource flow.
In conclusion, D. villosus influences terrestrial food webs by altering cross-ecosystem resource fluxes, but in the field it is an opportunistic omnivore rather than a predator.
Over the past few decades, Single-Particle Analysis (SPA), in combination with cryo-transmission electron microscopy, has evolved into one of the leading technologies for structural analysis of biological macromolecules. It allows the investigation of biological structures in a close to native state at the molecular level. Within the last five years, the achievable resolution of SPA surpassed 2 Å and is now approaching atomic resolution, which so far has only been possible with X-ray crystallography in a far from native environment. One remaining problem of Cryo-Electron Microscopy (cryo-EM) is the weak image contrast. Since the introduction of cryo-EM in the 1980s, phase plates have been investigated as a potential tool to overcome these contrast limitations. Until now, technical problems and instrumental deficiencies have made the use of phase plates difficult; an automated workflow, crucial for the acquisition of the thousands of micrographs needed for SPA, was not possible. In this thesis, a new Zernike-type Phase Plate (PP) was developed and investigated. Freestanding metal films were used as a PP material to overcome the ageing and contamination problems of standard carbon-based PPs. Several experiments evaluating and testing various metals identified iridium as the best-suited material. A thorough investigation of the properties of the iridium PP followed in the second part of this thesis. One key outcome is a new operation mode, the rocking PP. By using this rocking mode, fringing artifacts, another obstacle of Zernike PPs, could be eliminated. In the last part of this work, acquisition and reconstruction of SPA data of apoferritin was performed using the iridium PP in rocking mode. A special semi-automated workflow for the acquisition of PP data was developed and tested. The recorded PP data were compared to an additional reference dataset without a PP, acquired following a conventional workflow.
The Web is an essential component of moving our society to the digital age. We use it for communication, shopping, and doing our work. Most user interaction on the Web happens with Web page interfaces. Thus, the usability and accessibility of Web page interfaces are relevant areas of research to make the Web more useful. Eye tracking is a tool that can be helpful in both areas, performing usability testing and improving accessibility. It can be used to understand users' attention on Web pages and to support usability experts in their decision-making process. Moreover, eye tracking can be used as an input method to control an interface. This is especially useful for people with motor impairment, who cannot use traditional input devices like mouse and keyboard. However, interfaces on Web pages become more and more complex due to dynamics, i.e., changing contents like animated menus and photo carousels. We need general approaches to comprehend dynamics on Web pages, allowing for efficient usability analysis and enjoyable interaction with eye tracking. In the first part of this thesis, we report our work on improving gaze-based analysis of dynamic Web pages. Eye tracking can be used to collect the gaze signals of users who browse a Web site and its pages. The gaze signals show a usability expert which parts of the Web page interface have been read, glanced at, or skipped. The aggregation of gaze signals gives a usability expert insight into the users' attention at a high level, before looking into individual behavior. For this, all gaze signals must be aligned to the interface as experienced by the users. However, the user experience is heavily influenced by changing contents, as these may cover a substantial portion of the screen. We delineate unique states in Web page interfaces including changing contents, such that gaze signals from multiple users can be aggregated correctly.
In the second part of this thesis, we report our work on improving the gaze-based interaction with dynamic Web pages. Eye tracking can be used to retrieve gaze signals while a user operates a computer. The gaze signals may be interpreted as input controlling an interface. Nowadays, eye tracking as an input method is mostly used to emulate mouse and keyboard functionality, hindering an enjoyable user experience. There exist a few Web browser prototypes that directly interpret gaze signals for control, but they do not work on dynamic Web pages. We have developed a method to extract interaction elements like hyperlinks and text inputs efficiently on Web pages, including changing contents. We adapt the interaction with those elements for eye tracking as the input method, such that a user can conveniently browse the Web hands-free. Both parts of this thesis conclude with user-centered evaluations of our methods, assessing the improvements in the user experience for usability experts and people with motor impairment, respectively.
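The element-extraction step described above can be illustrated with a minimal, hypothetical sketch (not the thesis implementation, which additionally handles dynamically changing contents): given an HTML snapshot of a page, collect hyperlinks and text inputs as candidate interaction targets, using only Python's standard library.

```python
# Minimal sketch (hypothetical, not the thesis implementation): extract
# interaction elements such as hyperlinks and text inputs from a static
# HTML snapshot of a Web page.
from html.parser import HTMLParser

class InteractionElementParser(HTMLParser):
    """Collects hyperlinks (<a href=...>) and text inputs (<input>, <textarea>)."""
    def __init__(self):
        super().__init__()
        self.hyperlinks = []   # href targets of anchor elements
        self.text_inputs = []  # names of text-entry fields

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.hyperlinks.append(attrs["href"])
        elif tag == "input" and attrs.get("type", "text") == "text":
            # <input> without a type attribute defaults to a text field
            self.text_inputs.append(attrs.get("name", ""))
        elif tag == "textarea":
            self.text_inputs.append(attrs.get("name", ""))

def extract_interaction_elements(html: str):
    parser = InteractionElementParser()
    parser.feed(html)
    return parser.hyperlinks, parser.text_inputs
```

On a dynamic page, such an extraction would have to be repeated efficiently whenever the DOM changes, which is the harder problem the thesis addresses.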
Human population pressure increased with population growth around the NNP and Cyamudongo, with disturbance impacts on the forests isolating populations into fragments; today, the Cyamudongo natural forest, with a surface area estimated at 300 ha, lies at a horizontal distance of at least 8.5 km from the Nyungwe main block. Under the implementation of the Cyamudongo project, there was a need to understand how the flora diversity responded to human-imposed challenges and to forest restoration initiatives. Three physiognomic forest landscapes were identified and considered for three phases of vegetation survey in Cyamudongo and related to the closest area of the Nyungwe main block. In this study, 15 transects were laid in each physiognomic forest landscape, and 10 and 5 plots were set in the Cyamudongo and Gasumo study areas, respectively. In total, 315 phytosociological relevés were performed, and the Braun-Blanquet method was used for the three vegetation surveys. Species life forms and chorotypes were evaluated, and tree species dbh and height were measured. Data were subjected to different statistical analyses using software packages such as PAST, R 3.5.2, and SPSS. Mapping was done using ArcGIS, and multispectral remote sensing was used to derive the NDVI for the vegetation classification.
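As background, the NDVI mentioned above is a standard per-pixel index computed from the red and near-infrared (NIR) reflectance bands of a multispectral image; a minimal sketch follows (the band values in any example use would be hypothetical, not data from this study):

```python
# Normalised Difference Vegetation Index (NDVI), computed per pixel:
#     NDVI = (NIR - Red) / (NIR + Red)
# Values lie in [-1, 1]; dense green vegetation gives values close to 1.
def ndvi(nir: float, red: float) -> float:
    if nir + red == 0:
        return 0.0  # avoid division by zero, e.g. over water or shadow pixels
    return (nir - red) / (nir + red)
```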
NDVI trends showed that there have been fluctuations in the vegetation classification of the studied area. In this study, the study area harboured 494 vascular plant species from 106 families, distributed differently among forest landscapes and study phases. Of these, 43.54% were common to the Cyamudongo and Gasumo landscapes, while 48.54% were held only by Cyamudongo and 7.92% were confined to Gasumo; 12 species in total were found to be new records for Rwanda, while several other suspected species require detailed research for identification, showing that the flora diversity of Cyamudongo is of special interest and extremely important for discoveries.
The findings of the study on diversity indices, the PCA, CA and cluster analysis, all statistical analyses (MANOVA, ANOVA) and the life form spectra unanimously showed that anthropogenic disturbance shaped the vegetation cover, floristic composition, species diversity, forest landscape community structure, life form spectrum and phytoclimate of the Cyamudongo and Gasumo forest landscapes. However, the vegetation analysis could not clearly identify communities and sub-communities at the initial and final vegetation surveys; cluster groups were heterogeneous as well as overlapping, and species associations were not clearly defined, owing to the high level of similarity in species composition among forest landscapes and vegetation surveys. Species diversity was high in the secondary forest and the Gasumo forest landscape and low in the primary forest and the buffer zone of Cyamudongo, and disturbance through gap openings was found to be associated with species diversity, with a seasonal variation. The inverted-‘J’ patterns of dbh for the buffer zone and of the size classes of all landscapes indicated healthy regeneration in the forest landscapes and a good regeneration and recruitment capacity of tree species. Different shapes in the dbh patterns, respectively an inverted ‘J’, a ‘J’ and a ‘U’ for the buffer zone, the primary forest, and the secondary and Gasumo forest landscapes together, indicated differences in landscape health and in the degree of regeneration and recruitment capacity.
Findings from different measurements showed to what extent human activities have shaped the flora diversity and structure of the forest landscapes studied. For instance, disturbances due to human activities were observed daily, and trees were logged by neighbouring communities such as the Batwa populations at Cyamudongo and local populations at Gasumo. Some species were even observed to be targeted for their bark, such as Ocotea usambarensis and Parinari excelsa for medicines, and many others for their wood quality, firewood collection and agricultural purposes.
In the period of the Cyamudongo project implementation, important achievements included the increase of forest biomass and therefore of the photosynthetic capacity and evapotranspiration potential that influence the rainfall regime; the regulation of weather conditions and thereby species diversity; supporting local communities and limiting human activities; raising awareness of the conservation and protection of biodiversity; and improving the living conditions of neighbouring populations by providing paid employment, thus restoring ecosystem functions to the Cyamudongo forest. Nevertheless, the Cyamudongo forest remains vulnerable, as it is surrounded by local communities with a high population pressure relying on forest resources for their survival. Cyamudongo harbours a high level of endemism and is a small hotspot for biodiversity conservation. It is therefore recommended to strengthen conservation and protection measures and to continue supporting local communities.
To assess the effect of organic compounds on the aquatic environment, organisms are typically exposed to toxicant solutions and the adverse effects observed are linked to the concentration in the surrounding media. As compounds generally need to be taken up into the organism and distributed to the respective target sites for the induction of effects, the internal exposure is postulated to best represent the observed effects.
The aim of this work is to contribute to an improved effect assessment of organic compounds by describing experimental and modelling methods to obtain information on the internal exposure of contaminants in organisms.
Chapter 2 details a protocol for the determination of the bioconcentration parameters for uptake (k1) and elimination (k2) of organic compounds in zebrafish (Danio rerio) eggs. This enables the simulation of the internal exposure in zebrafish eggs from an ambient exposure concentration over time. The accumulated contaminant amount in zebrafish eggs was also determined using a biomimetic extraction method. Different bioconcentration estimation models for the determination of the internal steady-state concentration of pharmaceutical compounds in fish exposed to an environmental concentration are presented in Chapter 3. Bioconcentration factors were estimated from the compounds' octanol/water partition coefficient (KOW) to determine the internal exposure resulting from an ambient concentration.
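The uptake and elimination rate constants k1 and k2 are commonly interpreted within a one-compartment first-order kinetic model; as an illustrative sketch (the function names and any parameter values are hypothetical, not results of this work):

```python
import math

# One-compartment first-order bioconcentration model commonly associated
# with uptake (k1) and elimination (k2) rate constants.
def internal_concentration(c_water: float, k1: float, k2: float, t: float) -> float:
    """Internal concentration at time t under constant ambient exposure C_w:
    C_int(t) = C_w * (k1 / k2) * (1 - exp(-k2 * t))."""
    return c_water * (k1 / k2) * (1.0 - math.exp(-k2 * t))

def bioconcentration_factor(k1: float, k2: float) -> float:
    """At steady state, C_int / C_w approaches k1 / k2, the BCF."""
    return k1 / k2
```

For example, with hypothetical values k1 = 10 and k2 = 0.5 (per day), the steady-state BCF would be 20, and the internal concentration approaches 20 times the ambient concentration over time.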
To assess the integral bioavailable fraction of environmental contaminants in the water and sediment phases for rooted aquatic plants, the internal exposure in river-living Myriophyllum aquaticum plants was determined over time, as presented in Chapter 4. The plants were collected at different time points, and the accumulated organic contaminants were determined using a liquid extraction method.
In Chapter 5, a protocol was established to enable the non-invasive observation of effects in M. aquaticum plants exposed to contaminated sediments over time. Since the toxicant effects are the result of all uptake and distribution processes to the target site and of the toxicodynamic processes leading to an observed effect during static exposure, information on the internal exposure could be gained from the temporal effect expression.
The structure of soil organic matter (SOM) has been a topic of discussion in the scientific community for several decades. The most widely discussed models, among many, are the polymer model and the supramolecular model. While the former considers SOM as macromolecules consisting of amorphous and crystalline domains, the latter explains SOM as a physicochemical entity dominated at the secondary level by weak hydrophobic and H-bond interactions, which hold individual molecules of the primary structure together. The weak forces at the secondary level impart the characteristic mobility of SOM. A very important consequence of this multidimensional formulation is that, apart from the chemical composition, the physicochemical structure plays a crucial role in most biogeochemical functions of SOM. The recently introduced concept of cation- and water-molecule-mediated bridges between OM molecular segments (CaB and WaMB, respectively) evolved from this physicochemical understanding of SOM structure. Even though several pieces of indirect evidence for CaB and WaMB were produced during the last years, no clear-cut understanding of these processes has been achieved yet. Experimental difficulty due to overlapping effects of equally important CaB-governing parameters, such as pH and competing cations, poses a major challenge in investigating CaB-related influences. This thesis therefore aims to validate an experimental set-up for inducing CaB within OM structures and assessing it from various chemical and physicochemical aspects.
The method involved the removal of omnipresent cations and the adjustment of pH before cation addition. This helped to separate pH effects from cation effects. Based on results obtained on two different types of organic matter, it can be deduced that multivalent cations can cross-link SOM, provided that the functional group density of the SOM material is high enough for the functional groups to be arranged in sufficient spatial proximity to each other. Physicochemical structural reorganisation during aging causes the formation of more and/or stronger CaB and WaMB. As for inducing CaB directly after cation treatment, cation size and valency were also found to be determinant for the aging effect. A strongly cross-linked system in the beginning is less vulnerable to structural changes and undergoes aging with lower intensity than an initially weakly cross-linked system. Responsible for the structural changes is the inherent mobility of SOM within its physicochemical assemblage. Thus, information on the structural requirements of CaB and its consequences for OM matrix rigidity will help to obtain insight into the physicochemical SOM structure. Additionally, the organic matter quality (assessed by thermal analysis) and the pore structure of SOM formed in a set of artificial soils showed that mineral materials are important for the chemical nature of SOM molecules, but not for the physical structure of organo-mineral associations, at least after several months of SOM development.
Furthermore, nanothermal analysis using atomic force microscopy (AFM-nTA) was applied to soils for the first time to reveal nanoscale thermal properties and their spatial distribution at the nano- and micrometre scales. This helped to identify physicochemical processes, such as the disruption of WaMB, in low-organic soils, in which bulk methods fail due to their low sensitivity. In addition, various types of materials constituting soils were distinguished with high resolution by advanced application of the method in combination with other AFM parameters. Attempts were made to identify various materials with the use of defined test materials. Above all, the method is capable of revealing microspatial heterogeneity on sample surfaces, which could, for example, help in understanding process-relevant hotspots.
This thesis thus contributes to the scientific understanding of physicochemical structural dynamics via cross-linking by cations and via nanoscale thermal properties. The direct investigation of CaB demonstrated here will potentially help to make a major leap in knowledge about this interaction. The observed aging effects add well to the supramolecular understanding of SOM. By introducing nanothermal analysis to the field of soil science, it becomes possible to address the problem of heterogeneity and the spatial distribution of thermal characteristics. Another important achievement of AFM-nTA is that it can be used to detect physicochemical processes of low intensity.
In an attempt to put pronunciation training back in the limelight and to engage in a fruitful discussion on pronunciation's equal importance with other language skills, the current research project focuses on pronunciation. Specifically, it concentrates on one area that causes problems for L1-German speakers, namely the central approximant. German learners should not despair too much, as even many native-English children stumble with /r/. For some, acquiring this sound takes a long time and even some professional help (see Secord 2007: 7-8). One problem that these children have is the substitution of /r/ with /w/. "Children with developmental speech substitution," write Raphael et al., "often produce the more easily articulated (…) /w/ for /r/" (2011: 119). Most native-English children require a long time to acquire this sound. According to Bleile, /r/ belongs to the last stage (i.e. stage 4) of a child's language development (see Bleile 2004: 106). This final stage can range from five years of age to adolescence. One of the greatest difficulties, for instance, occurs with postvocalic /r/ (or "[r]-colored schwa") and consonant clusters with /r/ (see Bleile 2004: 106).
It is important at this juncture to direct attention to the organization of the current research project and how it addresses the /r/-problem. At the start of the current study, an appropriate and concise definition of the /r/-sound is proposed. While it can easily be stated that the letter <r> is the 18th letter of the English alphabet, it is much more arduous to define what is meant by the /r/-sound. In chapter 2, a number of definitions of /r/ are explored. Section 2.1 presents an articulatory description and section 2.2 offers an acoustic description. A theoretical definition is provided in section 2.3. By exploring its distinguishing features, the /r/-sound can be set apart from other sounds. From these three perspectives, a more encompassing view of the complicated English /r/-sound can be obtained.
In chapter 3, three hypotheses are presented. The first two are the main hypotheses for the current study. There is also a third (minor) hypothesis, which is an offshoot of some of the research findings from the first hypothesis. In section 3.1.1, hypothesis 1 (H1) investigates the frequency of the problem that L1-German speakers have when speaking the American English /r/-sound aloud when it is postvocalic and in a weakened syllable. Hypothesis 2 (H2) is put forward in section 3.1.2. It explores the frequency of /w/-substitution for /r/ when the latter sound is prevocalic in a stressed syllable. For the final and third hypothesis (H3), one specific occasion where L1 interference occurs (i.e. /ɐ/ for /ə/) is researched in section 3.1.3.
Some organizational details concerning the study are also included in the third chapter. The general set-up of the study is provided in section 3.2.1. Information pertaining both to the disqualified participants of the study and to the 50 test subjects who were selected for the study is available in section 3.2.1.2. Details on technical equipment and software programs are disclosed in sections 3.2.1.1 and 3.2.1.3. The prescribed reading texts that the test subjects needed to read aloud are presented in the subsections of section 3.2.2.
For a better understanding of the acoustic properties of the prevocalic /r/-sound, five expressions from the reading text are selected. In section 3.3, the expressions are also articulated by the author (a native speaker of a variety of English from North America) in four different scenarios. The scenarios try to illustrate a limited number of possible articulatory variations. The author utters the expressions in four different ways: 1.) with lip spreading, 2.) in isolation, 3.) in connected speech and 4.) with /w/-substitution for /r/. These articulations serve as a template for understanding how the acoustic features of /r/ change. Some of these changes are noticeable in the articulations of the test subjects (presented in chapter 4).
In chapter 4, the data collected for the three hypotheses are presented. For H1, section 4.1 and its subsections document the formant readings of a number of English expressions which contain the postvocalic /r/. In this analysis, it is important to detect where the central approximant is elided. In cases of elision, the samples have their schwa-like sounds plotted in a vowel quadrilateral, so that their locations can be detected. In section 4.2 and its subsections, data observations for H2 are discussed. From these data, the extent of /w/-substitution for /r/ when it is prevocalic in a stressed syllable can be observed. In section 4.3, an analysis of the /ɐ/-sound from three German expressions as they are spoken by the 50 test subjects is presented for H3. Together with pertinent data (i.e. formant readings) from H1, a comparison is made between the locations of /ə/ and /ɐ/. This section concludes with a plotting of all samples of these two vowels in a vowel chart.
In the final chapter of the current study, the main findings of the three hypotheses are summarized. Chapter 5 also provides some insights into a few of the gaps or limitations in the present research. Future research endeavors are proposed in order to suggest possible avenues to advance the findings reported on in the fourth chapter. In addition, several teaching suggestions and pronunciation exercises for implementing the /r/-sound in class are presented.
Lastly, there is an appendix section that consists of 14 smaller appendices (i.e. Volume 2). The spectrograms and formant readings for each expression spoken by each of the 50 test subjects are documented in Appendices A to M. In Appendix N, some charts, tables, diagrams, etc. that are discussed throughout the chapters are made available. Copies of the original questionnaire that were filled out by the test subjects are also in Appendix N.
Focusing on the triangulation of detective fiction, masculinity studies and disability studies, "Investigating the Disabled Detective – Disabled Masculinity and Masculine Disability in Contemporary Detective Fiction" shows that disability challenges common ideals of (hegemonic) masculinity as represented in detective fiction. After a theoretical introduction to the relevant focal points of the three research fields, the dissertation demonstrates that even the archetypal detectives Dupin and Holmes undermine certain nineteenth-century masculine ideals with their peculiarities. Shifting to contemporary detective fiction and adopting a literary disability studies perspective, the dissertation investigates how male detectives with a form of neurodiversity or a physical impairment negotiate their masculine identity in light of their disability in private and professional contexts. It argues that the occupation as a detective supports the disabled investigator to achieve ‘masculine disability’. Inversing the term ‘disabled masculinity’, predominantly used in research, ‘masculine disability’ introduces a decisively gendered reading of neurodiversity and (acquired) physical impairment in contemporary detective fiction. The term implies that the disabled detective (re)negotiates his masculine identity by implementing the disability in his professional investigations and accepting it as an important, yet not defining, characteristic of his (gender) identity. By applying this approach to five novels from contemporary British and American detective fiction, the dissertation demonstrates that masculinity and disability do not negate each other, as commonly assumed. Instead, it emphasises that disability allows the detective, as much as the reader, to rethink masculinity.
Investigating the environmental fate of iodinated X-ray contrast media in the urban water cycle
(2010)
Iodinated X-ray contrast media (ICM) are a group of emerging contaminants which have been detected at elevated concentrations in the aquatic environment. These compounds are excreted unmetabolized into hospital wastewater and eventually treated at wastewater treatment plants (WWTPs). The removal of ICM in WWTPs has not been very effective, and the ICM therefore enter the aquatic environment via WWTP effluent discharges. Research has investigated the removal of selected ICM via abiotic and biotic processes; however, limited work has attempted to determine the fate of these compounds once released into the environment. This thesis investigates the biotransformation of four selected ICM (diatrizoate, iohexol, iomeprol, and iopamidol) in aerobic soil-water and sediment-water systems as well as in different environmental matrices. Iohexol, iomeprol and iopamidol were biotransformed to several TPs in the aerobic batch systems, while no biotransformation was observed for the ionic ICM, diatrizoate. In total, 34 biotransformation products (TPs) of the three non-ionic ICM were identified. The combination of semi-preparative HPLC-UV and hybrid triple quadrupole-linear ion trap mass spectrometry (Qq-LIT-MS) was found to be an accurate approach for the structural elucidation of ICM TPs. The ICM TPs resulted from microbial transformation occurring at the side chains of the parent ICM, with the iodinated aromatic ring remaining unmodified.
The presence of anthropogenic chemicals in the natural environment may impact both habitats and the human use of natural resources. In particular, the contamination of aquatic resources by organic compounds used as pharmaceuticals or household chemicals has become evident. These newly identified environmental pollutants, also known as micropollutants, often have i) unknown ecotoxicological impacts, ii) unknown partitioning mechanisms, e.g. sorption to sediments, and iii) limited regulation to control their emission. Furthermore, like any compound, micropollutants can be transformed in the environmental matrix into unknown transformation products (TPs), which add to the number of unknown chemicals to consider and thus increase the complexity of risk management. Transformation is at the same time a natural mechanism for the removal of anthropogenic compounds, either by complete degradation (mineralisation) or by conversion to innocuous TPs. However, how transformation occurs under real-world conditions is still largely unknown. During the transport of micropollutants from household wastewater to surface water, a large amount of transformation can occur during wastewater treatment, specifically during biological nitrifying-denitrifying treatment processes. The thesis combines the systematic optimisation of laboratory investigative techniques, the application of sensitive mass-spectrometry-based analysis techniques and the monitoring of full-scale wastewater treatment plants (WWTPs) to elucidate the transformation processes of five known micropollutants.
The first of the five compounds investigated was the antibiotic trimethoprim. Incubation experiments were conducted at different analyte spike concentrations and different sludge-to-wastewater ratios. Using high-resolution mass spectrometry, a total of six TPs were identified from trimethoprim. The types of TPs formed were clearly influenced by the spike concentration. To the best of our knowledge, such impacts have not been previously described in the literature. At the lower spike concentration, a relatively stable final TP was formed (2,4-diaminopyrimidine-5-carboxylic acid, DAPC), which could account for almost the entire transformed trimethoprim quantity. The results were compared to the process in a reference reactor. Both the detection of TPs (e.g., DAPC) and the modelling of the removal kinetics showed that only experimental results at the low spike concentrations mirrored the real reactor. The limits of using elevated spike concentrations in incubation experiments could thus be demonstrated.
Three phenolic micropollutants, the antiseptic ortho-phenylphenol (OPP), the plastics additive bisphenol A (BPA) and the psychoactive drug dextrorphan, were investigated with regard to the formation of potentially toxic, nitrophenolic TPs. Nitrite is an intermediate in the nitrification-denitrification process occurring in activated sludge and was found to cause nitration of these phenols. To elucidate the processes, incubation experiments were conducted in purified water in the presence of nitrite with OPP as the test substance. Scavenger experiments indicated that the reactive species HNO2 and N2O3 and the radicals ·NO and ·NO2 were likely involved. Under conditions found at WWTPs, the wastewater is usually at neutral pH, and nitrite, being an intermediate, is usually present at a low concentration. Incubation experiments inoculated with sludge from a conventional WWTP showed that the three phenolic micropollutants OPP, BPA and dextrorphan were quickly transformed to biological TPs. Nitrophenolic TPs were only formed after an artificial increase of the nitrite concentration or a lowering of the pH. However, nitrophenolic TPs can be formed as sample preparation artefacts through acidification or freezing for preservation, which creates optimal conditions for the reaction to take place.
The final micropollutant to be studied was the pain-reliever diclofenac, a micropollutant on the EU-watch list due to ecotoxicological effects on rainbow trout. The transformation was compared in two different treatment systems, one employing a reactor with suspended carriers as a biofilm growth surface, while the other system employed conventional activated sludge. In the biofilm-based system, the pathway was found to produce many TPs each at relatively low concentration, many of which were intermediate TPs that were further degraded to unknown tertiary TPs. In the conventional activated sludge system some of the same reactions took place but all at much slower rates. The main difference between the two systems was due to different reaction rates rather than different transformation pathways. The municipal WWTPs were monitored to verify these results. In the biofilm system, a 10-day monitoring campaign confirmed an 88% removal of diclofenac and the formation of the same TPs as those observed in the laboratory experiments. The proposed environmental quality standard of 0.05 μg/L might thus be met without the need for additional treatment processes such as activated carbon filtration or ozonation.
Key mechanisms for the release of metal(loid)s from a construction material in hydraulic engineering
(2017)
Hydraulic engineering, and thus construction materials, are necessary to enable the navigability of waterways. Since a variety of natural as well as artificial materials are used, these materials are tested worldwide for a potential release of dangerous substances to prevent adverse effects on the environment. To determine the potential release, it is important to identify and to understand the key mechanisms that are decisive for the release of hazardous substances. There is only a limited correlation between the conditions used in regulatory tests and those found in environmental systems; hence, the significance of results from standardised tests on construction materials is often questioned, since such tests are not designed to mimic environmental conditions.
In Germany, industrial by-products are used as armour stones in hydraulic engineering. In particular, the by-product copper slag has been used during the last 40 years for the construction of embankments, groynes and coastal protection. On the one hand, this material has a high density, and its use protects natural resources (landscape). On the other hand, the material contains high quantities of metal(loid)s. Therefore, copper slag (product name: iron silicate stones) is very suitable as a test material. The metal(loid)s examined were As, Sb and Mo as representatives of (hydr)oxide-forming elements, while Cd, Co, Cu, Fe, Ni, Pb and Zn were studied as representatives of elements forming cations during the release.
The questions addressed in this thesis were: (i) can the results from batch experiments be transferred to construction scenarios under the prevalent environmental conditions, (ii) which long-term trends exist for the release of metal(loid)s from copper slags, and (iii) how do environmental conditions influence the leaching of metal(loid)s from water construction materials?
To answer the first question, the surface-dependent release of the metal(loid)s from the construction materials was examined. To this end, batch leaching experiments with different particle sizes and a constant liquid/solid ratio were performed. In a second step, different methods for determining the specific surface area of armour stones were compared, with a 3D laser scanning method as a reference. In a last step, it was shown that, via a roughness factor, the specific surface areas of small stones, measured with gas adsorption, can be connected with those of armour stones, determined with an aluminium foil method. Based on these calculations of the specific surface area, it was possible to significantly improve catchment-scale calculations of the release of metal(loid)s and to evaluate the potential impact of construction materials in hydraulic engineering on the water chemistry of rivers and streams.
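The roughness-factor link between the two surface-area methods lends itself to a back-of-the-envelope release calculation. The sketch below is purely illustrative: the function name and all numeric values are assumptions for demonstration, not data or results from the thesis.

```python
def release_estimate(geometric_area_m2, roughness_factor, flux_ug_per_m2_day, days):
    """Scale the geometric (foil-method) area of armour stones by a roughness
    factor to approximate the reactive surface area, then apply an
    area-specific release flux to estimate the total amount released."""
    reactive_area_m2 = geometric_area_m2 * roughness_factor
    return reactive_area_m2 * flux_ug_per_m2_day * days  # total release in µg

# Illustrative numbers only: 2.5 m² geometric area, roughness factor 3,
# 0.4 µg Cu per m² and day, over one year.
total_ug = release_estimate(2.5, 3.0, 0.4, 365)
```

Summing such estimates over all armour stones in a river reach is the kind of catchment-scale calculation the corrected surface areas improve.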
To answer the second question, long-term leaching experiments supported by diffusive gradients in thin films were performed for half a year. Diffusive gradients in thin films (DGT) is an in situ method for passively sampling metal(loid)s in water, sediments and soils. The DGT samplers were used as a sink for metal(loid)s in the eluate to prevent the establishment of solution equilibria. Thus the exchange of the eluent, which is normally performed in long-term experiments, was superfluous, and long-term effects could be studied under undisturbed conditions. The long-term leaching experiments with DGT proved capable of (i) differentiating between depletion of the material surface and solution equilibria and (ii) studying sorption processes with or without a further release of the analytes. For the practically relevant test material copper slag this means that: (i) the cations Cd, Co, Cu, Ni and Pb were confirmed to be released from the slag over the whole period of six months, (ii) a surface depletion of Zn was detected, and (iii) the (hydr)oxide-forming elements As, Mo and Sb were also released from the slag over the whole period of six months, but their release was masked by adsorption to Fe-oxide colloids formed during the leaching experiments. It was confirmed that sulphide minerals are the main source of the long-term release of Cd, Cu, Ni, Pb and Mo.
To answer the third question, short-term leaching experiments simulating environmental conditions in hydraulic engineering were performed. One such factor is salinity. The influence of this parameter was tested in batch experiments with sea salt solution (30 g/l), river Rhine water, ultra-pure water, and additionally with different NaCl concentrations (5, 10, 20 and 30 g/l). In general, the ionic strength is an important factor for metal(loid) release, but the composition of the water (e.g. the HCO3- content) may superimpose this effect. Accordingly, the metal(loid) concentrations in the experiments with ultra-pure water spiked with sea salt or native river water differed significantly from those with ultra-pure water spiked with NaCl. In a second experiment, the influence of the environmental parameters pH (4–10), sediment content (0 g–3.75 g), temperature (4 °C–36 °C) and ionic strength (0 g/l–30 g/l NaCl), and the interactions between these parameters, on the release of metal(loid)s from the test material was examined. Statistical Design of Experiments (DoE) was used to study the influence of these factors as well as their interactions. All studied factors may impact the release of metal(loid)s from the test material to the eluent, whereas the release and the partitioning of metal(loid)s between sediment and eluate were impacted by interactions between the studied factors. The main processes were sorption, complexation, solubility, buffering and ion exchange. In addition, by separating the sediment from the slag after the experiments by magnetic separation, the enrichment of metal(loid)s in the sediment became visible. Thus, the sediment, acting as a sink, was the most important factor for the release of the metal(loid)s, mediated via pH, temperature and ionic strength.
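A two-level full-factorial screening of the four factors named above can be sketched as follows. The factor levels are taken from the ranges quoted in the text, but the coding itself is a generic illustration of the DoE idea, not the actual design used in the thesis.

```python
from itertools import product

# Low/high levels per factor, from the ranges quoted in the text.
factors = {
    "pH": (4, 10),
    "sediment_g": (0.0, 3.75),
    "temperature_C": (4, 36),
    "NaCl_g_per_l": (0.0, 30.0),
}

def full_factorial(factors):
    """Enumerate all 2^k combinations of low/high factor levels."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

runs = full_factorial(factors)  # 2**4 = 16 experimental runs
```

Such a design lets main effects and factor interactions be estimated from far fewer runs than varying one factor at a time.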
The work presented in this thesis investigated interactions of selected biophysical processes that affect zooplankton ecology at smaller scales. In this endeavour, the extent of changes in swimming behaviour and fluid disturbances produced by swimming Daphnia in response to changing physical environments were quantified. In the first research question addressed within this context, size and energetics of hydrodynamic trails produced by Daphnia swimming in non-stratified still waters were characterized and quantified as a function of organisms’ size and their swimming patterns.
The results revealed that neither the size nor the swimming pattern of Daphnia affects the width of the induced trails or the dissipation rates. Nevertheless, as the size and swimming velocity of the organisms increased, trail volume increased in proportion to the cube of the Reynolds number, and the biggest trail volume was about 500 times the body volume of the largest daphnids. The larger spatial extent of fluid perturbation and the prolonged decay period caused by bigger trail volumes would play a significant role in zooplankton ecology, e.g. by increasing the risk of predation.
The study also found that increased trail volume brought about significantly enhanced total dissipated power at higher Reynolds number, and the magnitudes of total dissipated power observed varied in the range of (1.3–10)×10⁻⁹ W.
Furthermore, this study provided strong evidence that swimming speed of Daphnia and total dissipated power in Daphnia trails exceeded those of some other selected zooplankton species.
Recognizing turbulence as an intrinsic environmental perturbation in aquatic habitats, this thesis also examined the response of Daphnia to a range of turbulent flows corresponding to the turbulence levels that zooplankton generally encounter in their habitats. The results indicated that, within the range of turbulent intensities to which Daphnia are likely to be exposed in their natural habitats, increasing turbulence compelled the organisms to enhance their swimming activity and swimming speed. However, as the turbulence increased to extremely high values (10⁻⁴ m²s⁻³), Daphnia began to withdraw from their active swimming behaviour. The findings of this work also demonstrated that the threshold level of turbulence at which the animals start to reduce their largely active swimming is about 10⁻⁶ m²s⁻³. The study further illustrated that in the intermediate range of turbulence (10⁻⁷–10⁻⁶ m²s⁻³), the kinetic energy dissipation rate in the vicinity of the organisms is consistently one order of magnitude higher than that of the background turbulent flow.
Swarming, a common and conspicuous behavioural trait observed in many zooplankton species, is considered to play a significant role in the freshwater ecology of their habitats, from food exploitation and mate encounters to predator avoidance through the hydrodynamic flow structures swarms produce. This thesis therefore also investigated the implications of Daphnia swarms at varied abundance and swarm densities for their swimming kinematics and the induced flow field.
The results showed that Daphnia aggregated in swarms with swarm densities of (1.1–2.3)×10³ L⁻¹, which exceeded the abundance densities (1.7–6.7 L⁻¹) by two orders of magnitude. The estimated swarm volume decreased from 52 cm³ to 6.5 cm³, and the mean neighbouring distance dropped from 9.9 to 6.4 body lengths. The findings of this work also showed that the mean swimming trajectories were primarily horizontal concentric circles around the light source. Mean flow speeds were found to be one order of magnitude lower than the corresponding swimming speeds of Daphnia. Furthermore, this study provided evidence that the flow fields produced by swarming Daphnia differed considerably between unidirectional vortex swarming at low abundances and bidirectional swimming at high abundances.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting
existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions, which we later refine: What knowledge resources can be systematically reused for recovering structured knowledge, and how? What vocabulary already exists in the literature for expressing conceptual knowledge? How can we reuse the
online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying the literature, ii.) mining Wikipedia, and iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of vocabulary for describing software technology usage with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns, and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
Virtual reality is a growing field of interest as it provides a particularly intuitive way of user interaction. However, there are still open technical issues regarding latency — the delay between interaction and display reaction — and the trade-off between the visual quality and frame rate of real-time graphics, especially when taking visual effects like specular and semi-transparent surfaces and volumes into account. One solution, a distributed rendering setup, is presented in this thesis, in which the image synthesis is divided into an accurate but costly physically based rendering thread with a low refresh rate and a fast reprojection thread to maintain responsive interactivity with a high frame rate. Two novel reprojection techniques are proposed that cover reflections and refractions produced by surface ray-tracing as well as volumetric light transport generated by volume ray-marching. The introduced setup can enhance the VR experience within several domains. In this thesis, three innovative training applications have been realized to investigate the added value of virtual reality for the three learning stages of observation, interaction and collaboration. For each stage, an interdisciplinary curriculum, currently taught with traditional media, was transferred to a VR setting in order to investigate how virtual reality is capable of providing a natural, flexible and efficient learning environment.
Modern agriculture is a dominant land use in Europe, although it has been associated with negative effects on biodiversity in agricultural landscapes. One species-rich insect group in agro-ecosystems is the Lepidoptera (moths and butterflies); however, the populations of a number of Lepidoptera species are currently declining. The aims of this thesis were to assess the amount and structure of field margins in agricultural landscapes, study the effects of realistic field margin input rates of agrochemicals (fertilizer and pesticides) on Lepidoptera, and provide information on moth pollination services.
In general, field margins are common semi-natural habitat elements in agro-ecosystems; however, data on the structure, size, and width of field margins is limited. An assessment in two German agricultural landscapes (4,000 ha each) demonstrated that many of the evaluated field margins were less than 3 m wide (Rhineland‐Palatinate: 85% of margin length; Brandenburg: 45% margin length). In Germany, risk mitigation measures (such as buffer zones) to reduce pesticide inputs to terrestrial non-crop habitats do not have to be established by farmers next to narrow field margins. Thus, narrow field margins receive inputs of agrochemicals, especially via overspray and spray drift. These field margins were used as a development habitat for caterpillars, but the mean abundance of caterpillars was 35 – 60% lower compared with that in meadows. Caterpillars were sensitive to realistic field margin input rates of insecticide (pyrethroid, lambda-cyhalothrin) in a field experiment as well as in laboratory experiments. Moreover, 40% fewer Hadena bicruris eggs were observed on Silene latifolia plants treated with this insecticide compared with control plants, and the flowers of these insecticide-treated plants were less likely to be pollinated by moths. In addition, realistic field margin input rates of herbicides can also affect Lepidoptera. Ranunculus acris L. plants treated with sublethal rates of a sulfonylurea herbicide were used as host plants for Mamestra brassicae L. caterpillars, which resulted in significantly lower caterpillar weights, increased time to pupation, and increased overall development time compared with caterpillars feeding on control plants. These results might have been caused by lower nutritional value of the herbicide-treated plants or increased concentrations of secondary metabolites involved in plant defense. Fertilizer applications slightly increased the caterpillar abundance in the field experiment. 
However, fertilizers reduce plant diversity in the long term and thus, most likely, also reduce caterpillar diversity.
Moths such as Noctuidae and Sphingidae have been observed to act as pollinators for numerous plant species, including a number of Orchidaceae and Caryophyllaceae. Although in temperate agro-ecosystems moths are less likely to act as the main pollinators for crops, they can pollinate non-crop plants in semi-natural habitats. Currently, the role of moths as pollinators appears to be underestimated, and long-term research focusing on ecosystems is necessary to address temporal fluctuations in their abundance and community composition.
Lepidoptera represent a diverse organism group in agricultural landscapes and fulfill essential ecosystem services, such as pollination. To better protect moths and butterflies, agrochemical inputs to (narrow) field margin habitats should be reduced, for example, via risk mitigation measures and agro-environmental schemes.
This study had two main aims. The first was to investigate the quality of lesson plans. Two important features of lesson plans were used as a basis for determining their quality: adaptability to preconditions and cognitive activation of students. The former refers to how the planning teacher considers the diversity of students' pre-existing knowledge and skills; the latter refers to how the planning teacher sequences deep learning tasks and laboratory activities to promote the cognitive activation of students.
The second aim of the study was to explore teachers' thinking about, and explanation of, externally generated feedback data on their students' performance. The emphasis here was on understanding how the teachers anticipate planning differentiated lessons to accommodate the variations in students' learning outcomes revealed by the feedback data.
The study followed a qualitative approach with multiple sources of data. Concept maps, questionnaires, an online lesson planning tool, standardized tests, and semi-structured interviews were the main data collection instruments used in the study. The participants were four physics teachers teaching different grade levels. To generate feedback for the participant teachers, a test was administered to 215 students. The teachers were asked to plan five lessons for their ongoing practices. The analysis showed that the planned lessons were not adapted to the diversity in students' pre-existing knowledge and skills. The analysis also indicated that the planned lessons had limitations with regard to the cognitive activation of students. The interview data further revealed that the participant teachers do not normally consider differentiating lessons to accommodate differences in students' learning, and place little emphasis on the cognitive activation of students. The analysis of the planned lessons showed a variation in the teachers' approaches to integrating laboratory activities in the sequence of the lessons, ranging from complete absence through a demonstrative to an investigative approach. Moreover, the findings from the interviews indicated differences between the participant teachers' espoused theory (i.e. what they said during the interview) and their theory-in-use (i.e. what is evident from the planned lessons). The interview data also demonstrated that the teachers did not interpret the data, identify learning needs, or draw meaningful information from the data for adapting (or differentiating) instruction. They attributed their students' poor performance to task difficulty, students' ability, and students' motivation and interest. The teachers attempted to use the item-level and subscale data only to compare the relative position of their class with the reference group.
However, they did not read beyond the data, for example by identifying students' learning needs and planning differentiated instruction based on individual students' performance.
This study explored the question whether greenhouse gas mitigation projects in Namibia could be initiated through local economic development programmes. In particular, research was done on whether the Clean Development Mechanism (CDM) of the Kyoto Protocol could play an essential role in the promotion of such mitigation projects.
In a first step, supporting and inhibiting factors (potential for mitigation projects, business and investment climate, institutions, etc.), which have a positive or negative influence on mitigation projects, were discussed. In a second step, the mind-set of climate and energy experts as well as of local economic development experts and practitioners was analysed with regard to the research questions. To this end, 229 questionnaires, 28 interviews and the output of a focus group discussion with 20 participants were evaluated. Additionally, the author conducted a real-life case study to investigate the practicability of initiating greenhouse gas mitigation projects through local economic development efforts. Parallel to the development of an economic development strategy in the Namibian region of Otjozondjupa, the potential for greenhouse gas mitigation projects was explored. Based on the outcome of this investigation, project ideas were developed and their potential socio-economic impact was evaluated. Promising projects were then included in the development strategy.
Due to various factors such as the complexity of CDM, low greenhouse gas emissions in Namibia, the low price of emission rights and insufficient financial means it is unlikely that CDM projects can be initiated through local economic development initiatives in Namibia. However, many stakeholders consider the idea of interlinking mitigation projects and local economic development initiatives favourably as long as such projects support the broader objectives of those initiatives. This research has shown that locally initiated mitigation projects do not contribute much to employment or income generation at the local level in Namibia. Thus, national strategic objectives should be considered, such as improving access to electricity to all strata of society or becoming less dependent on electricity imports. This requires, however, that local economic development strategies also cover the energy sector, that local governments are willing and capacitated to initiate mitigation projects, that national and local public institutions work together more closely, that national and local economic framework conditions are improved so as to attract private investments, and that the experiences and interests of the relevant stakeholders are considered throughout the project development process.
The Web contains some extremely valuable information; however, poor-quality, inaccurate, irrelevant or fraudulent information can also be found. With the increasing amount of data available, it is becoming more and more difficult to distinguish truth from speculation on the Web. One of the most important criteria used to evaluate data credibility, if not the most important, is the information source, i.e., the data origin. Trust in the information source is a valuable currency users have for evaluating such data. Data popularity, recency (or the time of validity), reliability, or vagueness ascribed to the data may also help users to judge the validity and appropriateness of information sources. We call this knowledge derived from the data the provenance of the data. Provenance is an important aspect of the Web. It is essential in identifying the suitability, veracity, and reliability of information, and in deciding whether information is to be trusted, reused, or even integrated with other information sources. Therefore, models and frameworks for representing, managing, and using provenance in the realm of Semantic Web technologies and applications are critically required. This thesis highlights the benefits of the use of provenance in different Web applications and scenarios. In particular, it presents management frameworks for querying and reasoning in the Semantic Web with provenance, and presents a collection of Semantic Web tools that explore provenance information when ranking and updating caches of Web data. To begin, this thesis discusses a highly flexible and generic approach to the treatment of provenance when querying RDF datasets. The approach re-uses existing RDF modeling possibilities in order to represent provenance. It extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request provenance without modifying the query.
The use of provenance within SPARQL queries helps users to understand how RDF facts are derived, i.e., it describes the data and the operations used to produce the derived facts. Turning to more expressive Semantic Web data models, an optimized algorithm for reasoning over and debugging OWL ontologies with provenance is presented. Typical reasoning tasks over an expressive Description Logic (e.g., using tableau methods to perform consistency checking, instance checking, satisfiability checking, and so on) are in the worst case doubly exponential, and in practice are often likewise very expensive. With the algorithm described in this thesis, however, one can efficiently reason in OWL ontologies with provenance, i.e., provenance is efficiently combined and propagated within the reasoning process. Users can use the derived provenance information to judge the reliability of inferences and to find errors in the ontology. Next, this thesis tackles the problem of providing Web users with the right content at the right time. The challenge is to efficiently rank a stream of messages based on user preferences. Provenance is used to represent preferences, i.e., the user defines his preferences over the messages' popularity, recency, etc. This information is then aggregated to obtain a joint ranking. The aggregation problem is related to the problem of preference aggregation in Social Choice Theory. The traditional formulation of preference aggregation assumes a fixed set of preference orders and a fixed set of domain elements (e.g. messages). This work, however, investigates how an aggregated preference order has to be updated when the domain is dynamic, i.e., the aggregation approach ranks messages 'on the fly' as they pass through the system. Consequently, this thesis presents computational approaches for online preference aggregation that handle the dynamic setting more efficiently than standard ones.
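The idea of ranking a message stream "on the fly" by aggregating several provenance-based criteria can be sketched as below. This is an illustrative weighted-score aggregation with a bounded heap, not the thesis's algorithm; the criteria, weights, and function names are assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    score: float
    text: str = field(compare=False)

def aggregate_score(popularity, recency, weights=(0.6, 0.4)):
    """Combine two normalised criteria (0..1) into one joint preference score."""
    return weights[0] * popularity + weights[1] * recency

def rank_stream(messages, top_k=3):
    """Maintain only the current top-k messages as the stream passes through,
    so the aggregated ranking is updated incrementally per arriving message."""
    heap = []
    for text, pop, rec in messages:
        heapq.heappush(heap, Message(aggregate_score(pop, rec), text))
        if len(heap) > top_k:
            heapq.heappop(heap)  # discard the lowest-ranked message
    return [m.text for m in sorted(heap, reverse=True)]
```

The bounded heap means each arriving message costs O(log k) instead of re-aggregating the whole domain, which is the efficiency concern in the dynamic setting.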
Lastly, this thesis addresses the scenario of caching data from the Linked Open Data (LOD) cloud. Data on the LOD cloud changes frequently, and applications relying on that data - by pre-fetching data from the Web and storing local copies of it in a cache - need to continually update their caches. In order to make the best use of the resources available (e.g., network bandwidth for fetching data, and computation time), it is vital to choose a good strategy for knowing when to fetch data from which data source. One strategy for coping with data changes is to check the provenance. Provenance information delivered by LOD sources can denote when a resource on the Web was last changed. Linked Data applications can benefit from this piece of information, since simply checking it may help users decide which sources need to be updated. For this purpose, this work describes an investigation of the availability and reliability of provenance information in Linked Data sources. Another strategy for capturing data changes is to exploit provenance in a time-dependent function that measures the frequency of changes of LOD sources. This work therefore describes an approach to the analysis of data dynamics, i.e., the analysis of the change behavior of Linked Data sources over time, followed by an investigation of different scheduling strategies for keeping local LOD caches up to date. This thesis aims to prove the importance and benefits of the use of provenance in different Web applications and scenarios. The flexibility of the approaches presented, combined with their high scalability, makes this thesis a possible building block for the proof layer of the Semantic Web layer cake - the layer of provenance knowledge.
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible for the sensors. Algorithms are tested and validated on different autonomous robot platforms and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
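The combination of per-cell labels with an MRF smoothed by Gibbs sampling can be illustrated with a toy Ising-style model. This is a minimal sketch of the general technique, not the thesis implementation: the labels, weights, sweep count and sequential (non-parallel) sampling are all illustrative assumptions.

```python
import math
import random

def gibbs_smooth(observed, unary=1.0, pairwise=0.8, sweeps=20, seed=0):
    """Resample each grid cell's label (+1 obstacle / -1 passable) conditioned
    on its observed label (unary term) and its 4-neighbourhood (pairwise term).
    The pairwise coupling smooths out isolated sensor-noise cells."""
    rng = random.Random(seed)
    h, w = len(observed), len(observed[0])
    state = [row[:] for row in observed]
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                neigh = sum(state[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w)
                local_field = unary * observed[y][x] + pairwise * neigh
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * local_field))
                state[y][x] = 1 if rng.random() < p_plus else -1
    return state
```

Because each cell's update depends only on its neighbourhood, independent cells can be resampled concurrently, which is what makes CPU/GPU parallelization of the sampler natural.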
The Semantic Web and model-driven engineering are changing the enterprise computing paradigm. By introducing technologies like ontologies, metadata and logic, the Semantic Web drastically improves how companies manage knowledge. In turn, model-driven engineering relies on the principle of using models to provide abstraction, enabling developers to concentrate on system functionality rather than on technical platforms. The next enterprise computing era will rely on the synergy between both technologies. On the one side, ontology technologies organize system knowledge in conceptual domains according to its meaning, addressing enterprise computing needs by identifying, abstracting and rationalizing commonalities, and by checking for inconsistencies across system specifications. On the other side, model-driven engineering is closing the gap between business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics. In this scenario, the research question that arises is: What are the scientific and technical results around ontology technologies that can be used in model-driven engineering, and vice versa? The objective is to analyze approaches available in the literature that involve both ontologies and model-driven engineering. Therefore, we conducted a literature review that resulted in a feature model for classifying state-of-the-art approaches. The results show that using ontologies and model-driven engineering together serves multiple purposes: validation, visual notation, expressiveness and interoperability. While approaches involving both paradigms exist, an integrated approach for UML class-based modeling and ontology modeling has been lacking so far. Therefore, we investigate the techniques and languages for designing integrated models. The objective is to provide an approach to support the design of integrated solutions.
Thus, we develop a conceptual framework involving the structure and the notations of a solution to represent and query software artifacts using a combination of ontologies and class-based modeling. As proof of concept, we have implemented our approach as a set of open source plug-ins -- the TwoUse Toolkit. The hypothesis is that a combination of both paradigms yields improvements in both fields, ontology engineering and model-driven engineering. For MDE, we investigate the impact of using features of the Web Ontology Language in software modeling. The results are patterns and guidelines for designing ontology-based information systems and for supporting software engineers in modeling software. The results include alternative ways of describing classes and objects and querying software models and metamodels. Applications show improvements on changeability and extensibility. In the ontology engineering domain, we investigate the application of techniques used in model-driven engineering to fill the abstraction gap between ontology specification languages and programming languages. The objective is to provide a model-driven platform for supporting activities in the ontology engineering life cycle. Therefore, we study the development of core ontologies in our department, namely the core ontology for multimedia (COMM) and the multimedia metadata ontology. The results are domain-specific languages that allow ontology engineers to abstract from implementation issues and concentrate on the ontology engineering task. It results in increasing productivity by filling the gap between domain models and source code.
Mathematical Modelling of GIS Tailored GUI Design with the Application of Spatial Fuzzy Logic
(2014)
This PhD thesis is situated within the framework of the Research-Group Learning and Neurosciences (ReGLaN)-Health and Logistics project. The goal of this project is the optimisation of health service delivery in the rural areas of South Africa. Cooperation takes place between ReGLaN-Health and Logistics and the South African Council for Scientific and Industrial Research (CSIR) Meraka Institute, with Prof Dr Dr Marlien Herselman of Pretoria, South Africa, as the central contact person. This thesis deals with the mathematical modelling of Geographic Information System (GIS)-tailored Graphical User Interface (GUI) design with the application of spatial fuzzy logic. This thesis considers the mathematical visualisation of risk and resource maps for epidemiological issues using GIS and adaptive GUI design for an Open Source (OS) application for digital devices. The intention of this thesis is to provide spatial decision support tailored to different user groups. In order for the GUI elements to be evaluated and initialised, empirical teaching-learning-research on dealing with geomedia and GUI elements was conducted.
Mathematical models of species dispersal and the resilience of metapopulations against habitat loss
(2021)
Habitat loss and fragmentation due to climate and land-use change are among the biggest threats to biodiversity, as the survival of species relies on suitable habitat area and the possibility to disperse between different patches of habitat. To predict and mitigate the effects of habitat loss, a better understanding of species dispersal is needed. Graph theory provides powerful tools to model metapopulations in changing landscapes with the help of habitat networks, where nodes represent habitat patches and links indicate the possible dispersal pathways between patches.
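The graph construction described above can be sketched minimally: patches become nodes, and two patches are linked when they lie within a species' dispersal range. The coordinates and the dispersal range below are illustrative assumptions, not values from the thesis.

```python
import math

def build_habitat_network(patches, dispersal_range):
    """Connect habitat patches (id -> (x, y) coordinates) that lie
    within a species' dispersal range; returns an adjacency dict."""
    graph = {pid: set() for pid in patches}
    ids = list(patches)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (xa, ya), (xb, yb) = patches[a], patches[b]
            # Euclidean distance between patch centroids
            if math.hypot(xa - xb, ya - yb) <= dispersal_range:
                graph[a].add(b)
                graph[b].add(a)
    return graph

# Hypothetical landscape: three patches, only A and B are reachable
# from each other within a 5 km dispersal range.
patches = {"A": (0, 0), "B": (3, 0), "C": (10, 0)}
network = build_habitat_network(patches, dispersal_range=5.0)
```

Any real application would replace the Euclidean distance with an effective distance that accounts for the landscape matrix, but the node/link structure stays the same.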
This thesis adapts tools from graph theory and optimisation to study species dispersal on habitat networks as well as the structure of habitat networks and the effects of habitat loss. In chapter 1, I give an introduction to the thesis and the different topics it covers. Chapter 2 then gives a brief summary of the tools used in the thesis.
In chapter 3, I present our model on possible range shifts for a generic species. Based on a graph-based dispersal model for a generic aquatic invertebrate with a terrestrial life stage, we developed an optimisation model that models dispersal directed to predefined habitat patches and yields the minimum time until these patches are colonised, given the landscape structure and the species' dispersal capabilities. We created a time-expanded network based on the original habitat network and solved a mixed integer program to obtain the minimum colonisation time. The results provide maximum possible range shifts and can be used to estimate how fast newly formed habitat patches can be colonised. Although specific to this simulation model, the general idea of deriving a surrogate can in principle be adapted to other simulation models.
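The thesis solves a mixed integer program on a time-expanded network; as a much simplified stand-in, when a species advances at most one link per time step, the minimum colonisation time reduces to a breadth-first search over the habitat network. The chain-shaped example graph is hypothetical.

```python
from collections import deque

def minimum_colonisation_time(graph, sources, target):
    """Earliest time step at which `target` can be colonised when the
    species advances one link per time step from the source patches.
    This is a plain BFS; the time-expanded network in the thesis
    generalises the idea to step-dependent constraints."""
    queue = deque((s, 0) for s in sources)
    seen = set(sources)
    while queue:
        patch, t = queue.popleft()
        if patch == target:
            return t
        for nxt in graph[patch]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, t + 1))
    return None  # target unreachable from the sources

# Hypothetical chain of patches A - B - C - D
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
t = minimum_colonisation_time(graph, {"A"}, "D")
```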
Next, in chapter 4, I present our model to evaluate the robustness of metapopulations. Based on a variety of habitat networks and different generic species characterised by their dispersal traits and habitat demands, we modeled the permanent loss of habitat patches and subsequent metapopulation dynamics. The results show that species with short dispersal ranges and high local-extinction risks are particularly vulnerable to the loss of habitat across all types of networks. On this basis, we then investigated how well different graph-theoretic metrics of habitat networks can serve as indicators of metapopulation robustness against habitat loss. We identified the clustering coefficient of a network as the only good proxy for metapopulation robustness across all types of species, networks, and habitat loss scenarios.
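A toy version of the simulated metapopulation dynamics can illustrate the robustness idea: patches flip between occupied and empty via local extinction and colonisation from neighbours, and robustness is the share of simulation runs in which the metapopulation persists. The probabilities are hypothetical placeholders, not the thesis' parameterisation.

```python
import random

def metapopulation_survives(graph, colonise_p, extinct_p, steps, rng):
    """Toy stochastic patch-occupancy model: each step, every occupied
    patch goes extinct with probability extinct_p and colonises each
    neighbour with probability colonise_p. Returns True if any patch
    is still occupied after `steps` time steps."""
    occupied = set(graph)  # start fully occupied
    for _ in range(steps):
        new = set()
        for patch in occupied:
            if rng.random() >= extinct_p:   # local population persists
                new.add(patch)
            for nb in graph[patch]:         # dispersal to neighbours
                if rng.random() < colonise_p:
                    new.add(nb)
        occupied = new
        if not occupied:
            return False
    return True

def robustness(graph, trials=200, **kw):
    """Fraction of independent runs in which the metapopulation
    survives (fixed seed for reproducibility)."""
    rng = random.Random(42)
    return sum(metapopulation_survives(graph, rng=rng, **kw)
               for _ in range(trials)) / trials

# Hypothetical fully connected three-patch network
triangle = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
r = robustness(triangle, trials=50, colonise_p=0.3, extinct_p=0.2, steps=50)
```

Habitat loss could be emulated by deleting nodes from `graph` before the simulation and comparing the resulting robustness values.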
Finally, in chapter 5, I utilise the results obtained in chapter 4 to identify the areas in a network that should be improved through restoration to maximise metapopulation robustness under limited resources. More specifically, we exploit our finding that a network’s clustering coefficient is a good indicator of metapopulation robustness and develop two heuristics, a Greedy algorithm and a derived Lazy Greedy algorithm, that aim at maximising the clustering coefficient of a network. Both algorithms can be applied to any network and are not limited to habitat networks.
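A minimal sketch of the quantities involved, assuming the standard local clustering coefficient and a plain greedy edge-addition loop; the thesis' Lazy Greedy variant, which caches marginal gains, is omitted here.

```python
from itertools import combinations

def avg_clustering(graph):
    """Average local clustering coefficient: for each node, the share
    of its neighbour pairs that are themselves linked."""
    total = 0.0
    for node, nbrs in graph.items():
        if len(nbrs) < 2:
            continue  # coefficient defined as 0 for degree < 2
        links = sum(1 for a, b in combinations(nbrs, 2) if b in graph[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(graph)

def greedy_add_edges(graph, budget):
    """Repeatedly add the non-edge that gives the largest increase in
    average clustering, up to `budget` new edges (plain greedy)."""
    for _ in range(budget):
        candidates = [(a, b) for a, b in combinations(graph, 2)
                      if b not in graph[a]]
        if not candidates:
            break
        def gain(edge):
            a, b = edge
            graph[a].add(b); graph[b].add(a)      # tentatively add
            value = avg_clustering(graph)
            graph[a].discard(b); graph[b].discard(a)  # revert
            return value
        a, b = max(candidates, key=gain)
        graph[a].add(b); graph[b].add(a)
    return graph

# Hypothetical path A - B - C; one restoration link closes a triangle.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
```

Evaluating every candidate edge each round is O(n²) per step; the Lazy Greedy idea avoids most of these re-evaluations, which is what makes it practical for larger networks.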
In chapter 6, I will summarize the main findings of this thesis, discuss their limitations and give an outlook on future research topics.
Overall, this thesis develops frameworks to study the behaviour of habitat networks and introduces mathematical tools to ecology, thus narrowing the gap between mathematics and ecology. While all models in this thesis were developed with a focus on aquatic invertebrates, they can easily be adapted to other metapopulations.
Conversion of natural vegetation into cattle pastures and croplands results in altered emissions of greenhouse gases (GHG), such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). The increase of their atmospheric concentrations is considered the main driver of climate change. Despite successful private initiatives, e.g. the Soy Moratorium and the Cattle Agreement, Brazil was ranked the world's second largest emitter of GHG from land use change and forestry, and the third largest emitter from agriculture in 2012. N2O is the major GHG for the agricultural sector in particular, as its natural emissions are strongly enhanced by human activities (e.g. fertilization and land use changes). Since denitrification, the main process of N2O production, is sensitive to external changes (e.g. precipitation events), Brazil is particularly predisposed to high soil-derived N2O fluxes.
In this study, we followed a bottom-up approach based on a country-wide literature research, our own measurement campaigns, and modeling on the plot and regional scale, in order to quantify the scenario-specific development of GHG emissions from soils in the two Federal States Mato Grosso and Pará. In general, N2O fluxes from Brazilian soils were found to be low and not particularly dynamic. Moreover, the expected responses to precipitation events did not occur. These findings indicated that elaborate model simulations in daily time steps are too sophisticated for regional applications. Hence, an extrapolation approach was used to first estimate the influence of four different land use scenarios (alternative futures) on GHG emissions and then to derive mitigation strategies for Southern Amazonia. The results suggest that intensification of agricultural areas (mainly cattle pastures) and, consequently, avoided deforestation are essential for GHG mitigation.
The outcomes of this study provide a very good basis for (a) further research on the understanding of underlying processes causing low N2O fluxes from Brazilian soils and (b) political attempts to avoid new deforestation and keep GHG emissions low.
Audiences' movie evaluations have often been explored as effects of experiencing movies. However, little attention has been paid to the evaluative process itself and its determinants before, during, and after movie exposure. Moreover, until recently, research on the subjective assessment of specific film features (e.g., story, photography) has played a less important role. Adding to this research, this dissertation introduces the idea of subjective movie evaluation criteria (SMEC) and describes the scale construction for their measurement and its validation process. Drawing on social cognition theories, SMEC can be defined as standards that viewers use for assessing the features of films and conceptualized as mental representations of - or attitudes towards - specific movie features guiding cognitive and affective information processing of movies and corresponding evaluative responses. Studies were conducted in five phases to develop and validate scales for measuring and examining the structure of SMEC. In Phase I, open-ended data were categorized and content validated via a modified structure formation technique and items were developed. Subsequently, in Phase II, participants completed an online questionnaire including revised and pilot-tested items. Exploratory factor analyses were iteratively applied to explore the latent structure and to select items. The resulting 8-factor model was cross-validated with different samples in Phase III applying confirmatory factor analyses, which yielded good fit indices, thereby supporting structural validity. In Phase IV, latent state-trait analyses were carried out to examine the reliability, occasion specificity, common consistency, and method specificity of the eight dimensions.
All factors - Story Verisimilitude, Story Innovation, Cinematography, Special Effects, Recommendation, Innocuousness, Light-heartedness, and Cognitive Stimulation - are reliable and are largely determined by stable individual differences, although some of them also show substantial systematic, but unstable, effects due to the situation or interaction. These results provide evidence for the substantive validity of the SMEC scales. Finally, in Phase V, the nomological network of SMEC was explored (external validity, examined via correlations with related constructs such as film genre preferences and personality traits). Taken together, whereas the SMEC concept - compatible with contemporary social cognition theories - provides a framework to theorize and address research questions about the role of movie evaluation criteria and evaluative processes, the SMEC scales are the proper tool for investigating the role of these criteria and the processes they are involved in.
Lakes and reservoirs are important sources of methane, a potent greenhouse gas. Although freshwaters cover only a small fraction of the global surface, their contribution to global methane emission is significant and this is expected to increase, as a positive feedback to climate warming and exacerbated eutrophication. Yet, global estimates of methane emission from freshwaters are often based on point measurements that are spatio-temporally biased. To better constrain the uncertainties in quantifying methane fluxes from inland waters, a closer examination of the processes transporting methane from sediment to atmosphere is necessary. Among these processes, ebullition (bubbling) is an important transport pathway and is a primary source of uncertainty in quantifying methane emissions from freshwaters. This thesis aims to improve our understanding of ebullition in freshwaters by studying the processes of methane bubble formation, storage and release in aquatic sediments. The laboratory experiments demonstrate that aquatic sediments can store up to ~20% (volumetric content) gas and that the storage capacity varies with sediment properties. The methane produced is stored as gas bubbles in the sediment with minimal ebullition until the storage capacity is reached. Once the sediment void spaces are created by gas bubble formation, they are stable and available for future bubble storage and transport. Controlled water level drawdown experiments showed that the amounts of gas released from the sediment scaled with the total volume of sediment gas storage and correlated linearly to the drop in hydrostatic pressure. It was hypothesized that not only the timing of ebullition is controlled by sediment gas storage, but also the spatial distribution of ebullition. A newly developed freeze corer, capable of characterizing sediment gas content under in situ conditions, made it possible to test the hypothesis in a large subtropical lake (Lake Kinneret, Israel).
The results showed that gas content was variable both vertically and horizontally in the lake sediment. Sediment methane production rate and sediment characteristics could explain these variabilities. The spatial distribution of ebullition was generally in good agreement with the horizontal distribution of depth-averaged (surface 1 m) sediment gas content. While discrepancies were found between sediment depth-integrated methane production and the snapshot ebullition rate, they were consistent in the long term (multiyear average). These findings provide a solid basis for the future development of a process-based ebullition model. By coupling a sediment transport model with a sediment diagenetic model, general patterns of ebullition hotspots can be predicted at a system level and the uncertainties in ebullition flux measurements can be better constrained both on long-term (months to years) and short-term (minutes to hours) scales.
Method development for the quantification of pharmaceuticals in aqueous environmental matrices
(2021)
As a consequence of the increase of the world population and the resulting water scarcity, water quality is the object of growing attention. In that context, organic anthropogenic molecules, often defined as micropollutants, represent a threat for water resources. Among them, pharmaceuticals are the object of particular concern due to their permanent discharge, their increasing consumption and their effect-based structures. Pharmaceuticals are mainly introduced into the environment via wastewater treatment plants (WWTPs), along with their metabolites and the on-site formed transformation products (TPs). Once in the aquatic environment, they partition between the different environmental compartments, in particular the aqueous phase, suspended particulate matter (SPM) and biota. In the last decades, pharmaceuticals have been widely investigated in the water phase. However, extremely polar pharmaceuticals have rarely been monitored due to the lack of robust analytical methods. Moreover, metabolites and TPs have seldom been included in routine analysis methods, although their environmental relevance is proven. Furthermore, pharmaceuticals have been only sporadically investigated in SPM and biota, and adequate multi-residue methods are lacking to obtain comprehensive results about their occurrence in these matrices. This thesis endeavors to cover these knowledge gaps by developing generic multi-residue methods for the determination of pharmaceuticals in the water phase, SPM and biota and by evaluating the occurrence and partitioning of pharmaceuticals in these compartments. For a complete overview, a particular focus was laid on extremely polar pharmaceuticals, pharmaceutical metabolites and TPs. In total, three innovative multi-residue methods were developed; they include analytes covering a broad range of physico-chemical properties. First, a reliable multi-residue method was developed for the analysis of extremely polar pharmaceuticals, metabolites and TPs dissolved in water.
The selected analytes covered a significant range of elevated polarity and the method can easily be expanded to further analytes. This versatility could be achieved by the use of freeze-drying for sample preparation and zwitterionic hydrophilic interaction liquid chromatography (HILIC) in gradient elution mode. The suitability of HILIC chromatography to simultaneously quantify a large range of micropollutants in aqueous environmental samples was thoroughly studied. Several limitations were pointed out: a very complex and time-consuming method development, a very high sensitivity with regard to modifications of the acetonitrile-to-water ratio in the eluent or the diluent, and high positive matrix effects for certain analytes. However, these limitations can be overcome by the use of a precise protocol and appropriate labeled internal standards. They are outweighed by the benefits of HILIC, which permits the chromatographic separation of extremely polar micropollutants. Investigation of environmental samples showed elevated concentrations of the analytes in the water phase. In particular, gabapentin, metformin, guanylurea and oxypurinol were measured at concentrations in the µg/L range in surface water. Subsequently, a reliable multi-residue method was established for the determination of 57 pharmaceuticals and 47 metabolites and TPs sorbed to SPM down to the low ng/g range. This method was conceived to cover a large range of polarity, in particular through the inclusion of extremely polar pharmaceuticals. The extraction procedure was based on pressurized liquid extraction (PLE) followed by a clean-up via solvent exchange and detection via direct-injection reversed-phase LC-MS/MS and freeze-drying HILIC-MS/MS. Pharmaceutical sorption was examined using laboratory experiments. The derived distribution coefficients Kd varied by five orders of magnitude among the analytes and confirmed a high sorption potential for positively charged and nonpolar pharmaceuticals.
The occurrence of pharmaceuticals in the SPM of German rivers was evaluated by investigating annual composite SPM samples taken at four sites on the river Rhine and one site on the river Saar between 2005 and 2015. It revealed the ubiquitous presence of pharmaceuticals sorbed to SPM in these rivers. In particular, positively charged analytes, even very polar ones, and nonpolar pharmaceuticals showed appreciable concentrations. For many pharmaceuticals, a distinct correlation was observed between the annual quantities consumed in Germany and the concentrations measured in SPM. Studies of the spatial distribution of composite SPM provided indications of specific industrial discharges by comparing the pollution patterns along the river. For the first time, these results showed the potential of SPM for the monitoring of positively charged and nonpolar pharmaceuticals in surface water. Finally, a reliable and generic multi-residue method was developed to investigate 35 pharmaceuticals and 28 metabolites and TPs in fish plasma, fish liver and fish fillet. For this matrix, developing an adequate clean-up that sufficiently separates the matrix disturbances from the analytes was very challenging. In the final method, fish tissue extraction was performed by cell disruption followed by a non-discriminating clean-up based on silica gel solid-phase extraction (SPE) and restricted access media (RAM) chromatography. Application of the developed method to bream and carp tissues from German rivers revealed that even polar micropollutants such as pharmaceuticals are ubiquitously present in fish tissues. In total, 17 analytes were detected for the first time in fish tissues, including 10 metabolites/TPs. The importance of monitoring metabolites and TPs in fish tissues was confirmed by their detection at concentrations similar to those of their parent compounds.
Liver and fillet were shown to be appropriate for the monitoring of pharmaceuticals in fish, whereas plasma is less suitable due to very low concentrations and collection difficulties. Elevated concentrations of certain metabolites suggest a possible formation of human metabolites in fish. The measured concentrations indicate a low bioaccumulation potential for pharmaceuticals in fish tissues.
This thesis addresses the automated identification and localization of a time-varying number of objects in a stream of sensor data. The problem is challenging due to its combinatorial nature: if the number of objects is unknown, the number of possible object trajectories grows exponentially with the number of observations. Random finite sets are a relatively new theory that has been developed to arrive at principled and efficient approximations. It is based around set-valued random variables that contain an unknown number of elements which appear in arbitrary order and are themselves random. While extensively studied in theory, random finite sets have not yet become a leading paradigm in practical computer vision and robotics applications. This thesis explores random finite sets in visual tracking applications. The first method developed in this thesis combines set-valued recursive filtering with global optimization. The problem is approached in a min-cost flow network formulation, which has become a standard inference framework for multiple object tracking due to its efficiency and optimality. A main limitation of this formulation is the restriction to unary and pairwise cost terms, which makes the integration of higher-order motion models challenging. The method developed in this thesis addresses this limitation by applying a Probability Hypothesis Density filter. The Probability Hypothesis Density filter was the first practically implemented state estimator based on random finite sets. It circumvents the combinatorial nature of data association by propagating an object density measure that can be computed efficiently, without maintaining explicit trajectory hypotheses. In this work, the filter recursion is used to augment measurements with an additional hidden kinematic state to be used for the construction of more informed flow network cost terms, e.g., based on linear motion models.
The method is evaluated on public benchmarks where a considerable improvement is achieved compared to network flow formulations that are based on static features alone, such as distance between detections and appearance similarity. A second part of this thesis focuses on the related task of detecting and tracking a single robot operator in crowded environments. Unlike in the conventional multiple object tracking scenario, the tracked individual can leave the scene and later reappear after a longer period of absence. Therefore, a re-identification component is required that picks up the track on reentrance. Based on random finite sets, the Bernoulli filter is an optimal Bayes filter that provides a natural representation for this type of problem. In this work, it is shown how the Bernoulli filter can be combined with a Probability Hypothesis Density filter to track operator and non-operators simultaneously. The method is evaluated on a publicly available multiple object tracking dataset as well as on custom sequences that are specific to the targeted application. Experiments show reliable tracking in crowded scenes and robust re-identification after long-term occlusion. Finally, a third part of this thesis focuses on appearance modeling as an essential aspect of any method that is applied to visual object tracking scenarios. Therefore, a feature representation that is robust to pose variations and changing lighting conditions is learned offline, before the actual tracking application. This thesis proposes a joint classification and metric learning objective where a deep convolutional neural network is trained to identify the individuals in the training set. At test time, the final classification layer can be stripped from the network and appearance similarity can be queried using cosine distance in representation space.
This framework represents an alternative to direct metric learning objectives that have required sophisticated pair or triplet sampling strategies in the past. The method is evaluated on two large scale person re-identification datasets where competitive results are achieved overall. In particular, the proposed method better generalizes to the test set compared to a network trained with the well-established triplet loss.
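The query step described above, stripping the classification layer and comparing embeddings by cosine distance, amounts to a nearest-neighbour search. A minimal pure-Python sketch with hypothetical embedding vectors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors
    (assumes non-zero vectors)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def reidentify(query, gallery):
    """Return the gallery identity whose stored embedding is closest
    to the query under cosine distance (1 - similarity)."""
    return max(gallery, key=lambda name: cosine_similarity(query, gallery[name]))

# Hypothetical 3-D embeddings; real networks emit much longer vectors.
gallery = {"person_1": [1.0, 0.0, 0.0], "person_2": [0.0, 1.0, 0.0]}
match = reidentify([0.9, 0.1, 0.0], gallery)
```

When the network is trained to produce unit-length embeddings, the normalisation terms drop out and the query reduces to a dot product, which is why this scheme scales well to large galleries.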
Semantic Web technologies have been recognized to be key for the integration of distributed and heterogeneous data sources on the Web, as they provide means to define typed links between resources in a dynamic manner, following the principles of dataspaces. The widespread adoption of these technologies in recent years has led to a large volume and variety of data sets published as machine-readable RDF data that, once linked, constitute the so-called Web of Data. Given the large scale of the data, these links are typically generated by computational methods that, given a set of RDF data sets, analyze their content and identify the entities and schema elements that should be connected via links. Analogously to any other kind of data, in order to be truly useful and ready to be consumed, links need to comply with the criteria of high-quality data (e.g., syntactically and semantically accurate, consistent, up-to-date). Despite the progress in the field of machine learning, human intelligence is still essential in the quest for high-quality links: humans can train algorithms by labeling reference examples, validate the output of algorithms to verify their performance on a per-data-set basis, and augment the resulting set of links. Humans, especially expert humans, however, have limited availability. Hence, extending data quality management processes from data owners and publishers to a broader audience can significantly improve the data quality management life cycle.
Recent advances in human computation and peer-production technologies have opened new avenues for human-machine data management techniques, allowing non-experts to be involved in certain tasks and providing methods for cooperative approaches. The research work presented in this thesis takes advantage of such technologies and investigates human-machine methods that aim at facilitating link quality management in the Semantic Web. Firstly, and focusing on the dimension of link accuracy, a method for crowdsourcing ontology alignment is presented. This method, also applicable to entities, is implemented as a complement to automatic ontology alignment algorithms. Secondly, novel measures for the dimension of information gain facilitated by the links are introduced. These entropy-centric measures provide data managers with information about the extent to which the entities in the linked data set gain information in terms of entity description, connectivity and schema heterogeneity. Thirdly, taking Wikidata, the most successful case of a linked data set curated, linked and maintained by a community of humans and bots, as a case study, we apply descriptive and predictive data mining techniques to study participation inequality and user attrition. Our findings and methods can help community managers make decisions on when and how to intervene with user retention plans. Lastly, an ontology to model the history of crowd contributions across marketplaces is presented. While the field of human-machine data management poses complex social and technical challenges, the work in this thesis aims to contribute to the development of this still emerging field.
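The entropy-centric idea behind the information-gain measures can be illustrated with plain Shannon entropy over the properties describing an entity before and after linking. This is an illustrative stand-in, not the exact measures defined in the thesis, and the property names are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of items,
    e.g. the property types used to describe an entity."""
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def description_gain(before, after):
    """Illustrative gain in description heterogeneity once links add
    new properties to an entity."""
    return shannon_entropy(after) - shannon_entropy(before)

# Hypothetical entity: linking adds a property type it lacked before.
props_before = ["rdfs:label", "rdfs:label"]
props_after = ["rdfs:label", "owl:sameAs"]
gain = description_gain(props_before, props_after)
```

A positive gain means the linked description became more heterogeneous; zero or negative gain flags links that add redundancy rather than information.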
Web 2.0 provides technologies for the online collaboration of users as well as the creation, publication and sharing of user-generated content in an interactive way. Twitter, CNET, CiteSeerX, etc. are examples of Web 2.0 platforms which facilitate these activities and are viewed as rich sources of information. In the platforms mentioned as examples, users can participate in discussions, comment on others, provide feedback on various issues, publish articles and write blogs, thereby producing a high volume of unstructured data which at the same time leads to an information overload. Satisfying the various types of human information needs arising from the purpose and nature of these platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help in satisfying user information needs in three scenarios.
To this end, first we look at two of the main challenges of sparsity and content quality in Twitter and how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high quality information. Based on this analysis, we introduce the concept of "interestingness" as a static quality measure. We empirically show that our proposed measure helps in retrieving and filtering high quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track which results from the social collaboration of the people. For this purpose, we develop the FREuD approach, which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of relations between authors, latent topics and temporal information. We present results of applying the model to scientific publication datasets from CiteSeerX, showing improved semantically cohesive topic detection and the capturing of shifts in authors' interests in relation to topic evolution.
Recent EU frameworks enforce the implementation of risk mitigation measures for nonpoint-source pesticide pollution of surface waters. Vegetated surface flow treatment systems (VTS) can be a way to mitigate the risk of adverse effects in aquatic ecosystems following unavoidable pollution after rainfall-related runoff events. Studies in experimental wetland cells and vegetated ditch mesocosms with common fungicides, herbicides and insecticides were performed to assess the efficiency of VTS. Comprehensive monitoring of fungicide exposure after rainfall-related runoff events and of the reduction of pesticide concentrations within partially optimised VTS was performed from 2006 to 2009 at five vegetated detention ponds and two vegetated ditches in the wine-growing region of the Southern Palatinate (SW Germany).
The influence of plant density, size-related parameters and pesticide properties on the performance of the experimental devices and the monitored systems was the focus of the analysis. A spatial tool for predicting pesticide pollution of surface waters after rainfall-related runoff events was programmed in a geographic information system (GIS). A sophisticated, high-resolution database on the European scale was built for the simulation. With the results of the experiments, the monitoring campaign and further results of the EU Life project ArtWET, mitigation measures were implemented in a georeferenced spatial decision support system. The database for the GIS tools was built with open data. The REXTOX (ratio of exposure to toxicity) risk indicator proposed by the OECD (Organisation for Economic Co-operation and Development) was extended and used for modeling the risk of rainfall-related runoff exposure to pesticides for all agricultural waterbodies on the European scale. The results show a good performance of VTS. The vegetated ditches and wetland cells of the experimental systems showed a very high reduction, of more than 90%, of pesticide concentrations and potential adverse effects. Vegetated ditches and wetland cells performed significantly better than devices without vegetation. Plant density and the sorptivity of the pesticide were the variables with the highest explanatory power regarding the response variable, reduction of concentrations. In the experimental vegetated ditches, 65% of the reduction of peak concentrations was explained by plant density and KOC. The monitoring campaign showed that concentrations of the fungicides and potential adverse effects of the mixtures were reduced significantly within the vegetated ditch (median 56%) and detention pond (median 38%) systems. Regression analysis with data from the monitoring campaign identified plant density and size-related properties as explanatory variables for mitigation efficiency (DP: R² = 0.57, p < 0.001; VD:
R² = 0.19, p < 0.001). The results of the risk model runs are the input for the second tool, which simulates three risk mitigation measures. VTS as risk mitigation measures are implemented using the plant-density and size-related performance results of the experimental and monitoring studies, supported by additional data from the ArtWET project. Based on the risk tool, simulations can be performed for single crops, selected regions, different pesticide compounds and rainfall events. The costs of implementing the mitigation measures are estimated. The experiments and monitoring, covering the whole range of pesticides, provide novel information on VTS for pesticide pollution. The monitoring campaign also shows that fungicide pollution may affect surface waters. The tools developed for this study are easy to use and are not only a good basis for further spatial analysis but also useful as decision support for the non-scientific community. On a large scale, the tools can, on the one hand, help to compute the external costs of pesticide use by simulating mitigation costs on three levels; on the other hand, feasible measures for mitigating or remediating the effects of nonpoint-source pollution can be identified for implementation. Further study of the risk of adverse effects caused by fungicide pollution and of the long-term performance of optimised VTS is needed.
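At its core, the REXTOX indicator is a ratio of exposure to toxicity; the sketch below shows only that quotient and how a treatment system's concentration reduction lowers it. The full OECD indicator includes further factors (e.g. application rate, drift and runoff fractions) that are omitted here, and all numbers are hypothetical.

```python
def risk_ratio(exposure_ug_l, toxicity_ug_l):
    """Simplified exposure-to-toxicity ratio: predicted pesticide
    concentration in the water body divided by a toxicity endpoint
    (e.g. an LC50). Only the core quotient of the OECD REXTOX
    indicator; drift, runoff and application-rate factors omitted."""
    return exposure_ug_l / toxicity_ug_l

def mitigated_risk(exposure_ug_l, toxicity_ug_l, reduction_pct):
    """Risk after a vegetated treatment system removes
    `reduction_pct` percent of the incoming concentration."""
    return risk_ratio(exposure_ug_l * (1 - reduction_pct / 100),
                      toxicity_ug_l)

# Hypothetical runoff event: 10 µg/L exposure, 100 µg/L endpoint,
# a VTS with 90% concentration reduction.
baseline = risk_ratio(10.0, 100.0)
mitigated = mitigated_risk(10.0, 100.0, 90.0)
```

Applied per water body and per compound, such a ratio allows mitigation scenarios (e.g. the >90% reductions reported for the experimental systems) to be compared on a common scale.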
We are living in a world where environmental crises are coming to a head. To curb the aggravation of these problems, a socio-ecological transformation within society is needed, accompanied by human behavior change. How to encourage such behavior changes on an individual level is the core issue of this dissertation. It takes a closer look at the role of individuals as consumers whose purchase decisions have a more or less harmful impact on the environment. Using the example of plastic pollution, it takes up a current environmental problem and focuses on an understudied behavioral response to this problem, namely reduction behavior. More concretely, this dissertation examines which psychological factors can encourage the mitigation of plastic packaging consumption. Plastic packaging accounts for the largest share of current plastic production and is associated with products of daily relevance. Despite growing awareness of plastic pollution in society, behavioral responses do not follow accordingly and plastic consumption is still very high. As habits are often a pitfall when implementing more resource-saving behavior, this dissertation further examines whether periods of discontinuity can open a ‘window of opportunity’ to break old habits and facilitate behavior change. Four manuscripts approach this matter from the gross to the subtle. Starting with a literature review, a summary of 187 studies addresses the topic of plastic pollution and human behavior from a social-scientific perspective. Based on this, a cross-sectional study (N = 648) examines the determinants of plastic-free behavior intentions in the private and public sphere by structural equation modeling. Two experimental studies in a pre-post design build upon this by integrating the determinants into intervention studies.
In addition, it was evaluated whether an intervention presented during Lent (N = 140) or during the action month ‘Plastic Free July’ (N = 366) can create a ‘window of opportunity’ to mitigate plastic packaging consumption. The literature review emphasized the need for research on behavioral solutions to reduce plastic consumption. The empirical results revealed moral and control beliefs to be the main determinants of reduction behavior. Furthermore, the time point of an intervention influenced the likelihood of trying out the new behavior. The studies provide first evidence that a ‘window of opportunity’ can facilitate change towards pro-environmental behavior within the application field of plastic consumption. Theoretical and practical implications of creating the right opportunity for individuals to contribute to a socio-ecological transformation are finally discussed.
Nowadays, almost any IT system involves personal data processing. In
such systems, many privacy risks arise when privacy concerns are not
properly addressed from the early phases of the system design. The
General Data Protection Regulation (GDPR) prescribes the Privacy by
Design (PbD) principle. At its core, PbD obliges protecting personal
data from the onset of the system development, by effectively
integrating appropriate privacy controls into the design. To
operationalize the concept of PbD, a set of challenges emerges: First, we need a basis to define privacy concerns. Without such a basis, we are not able to verify whether personal data processing is authorized. Second, we need to identify where precisely in a system the controls have to be applied. This calls for a system analysis concerning privacy concerns. Third, with a view to selecting and integrating appropriate controls based on the results of the system analysis, a mechanism to identify privacy risks is required. Mitigating privacy risks is at the core of the PbD principle. Fourth, choosing and integrating appropriate controls into a system are complex tasks that, besides risks, have to consider potential interrelations among privacy controls and the costs of the controls.
This thesis introduces a model-based privacy by design methodology to handle the above challenges. Our methodology relies on a precise definition of privacy concerns and comprises three sub-methodologies: model-based privacy analysis, model-based privacy impact assessment and privacy-enhanced system design modeling. First, we introduce a definition of privacy preferences, which provides a basis to specify privacy concerns and to verify whether personal data processing is authorized. Second, we present a model-based methodology to analyze a system model. The results of this analysis denote a set of privacy design violations. Third, taking into account the results of the privacy analysis, we introduce a model-based privacy impact assessment methodology to identify concrete privacy risks in a system model. Fourth, based on the identified risks, and taking into account the interrelations and the costs of the controls, we propose a methodology to select appropriate controls and integrate them into a system design. Using various practical case studies, we evaluate our concepts, showing a promising outlook on the applicability of our methodology in real-world settings.
Enterprise collaboration platforms are increasingly gaining importance in organisations. Integrating groupware functionality and enterprise social software (ESS), they have been substantially transforming everyday work in organisations. While traditional collaboration systems have been studied in Computer Supported Cooperative Work (CSCW) for many years, the large-scale, infrastructural and heterogeneous nature of enterprise collaboration platforms remains uncharted. Enterprise collaboration platforms are embedded into organisations’ digital workplaces and come with a high degree of complexity, ambiguity, and generativity. When introduced, they are empty shells with no pre-determined purposes of use. They afford interpretive flexibility, and thus are shaping and being shaped by and in their social context. Outcomes and benefits emerge and evolve over time in an open-ended process, as the digital platform is designed through use. In order to make the most of the platform and the associated continuous digital transformation, organisations have to develop the necessary competencies and capabilities.
Extant literature on enterprise collaboration platforms has proliferated and provides valuable insights on diverse topics, such as implementation strategies, adoption hurdles, or collaboration use cases; however, it tends to disregard platform evolvability and the related multiple time frames and settings. Thus, this research aims to identify, investigate, and theorise the ways that enterprise collaboration platforms change over time and space and the ways that organisations build digital transformation capabilities. To address this research aim, two different case study types are conducted: i) an in-depth longitudinal qualitative case study, in which case narratives and visualisations capturing hard-to-summarise complexities in the evolution of the enterprise collaboration platform are developed, and ii) multiple-case studies to capture, investigate, and compare cross-case elements that contribute to the shaping of enterprise collaboration platforms in different medium-sized and large organisations from a range of industries. Empirical data are captured and investigated through a multi-method research design (incl. focus groups, surveys, in-depth interviews, literature reviews, qualitative content analysis, descriptive statistics) with shifting units of analysis. The findings reveal unique change routes with unanticipated outcomes and transformations, context-specific change strategies to deal with multiple challenges (e.g. GDPR, works councils, developments in the technological field, competing systems, integration of blue-collar workers), co-existing platform uses, and various interacting actors from the immediate setting and broader context. The interpretation draws on information infrastructure (II) as a theoretical lens and related sociotechnical concepts and perspectives (incl. inscriptions, social worlds, biography of artefacts).
Iteratively, a conceptual model of the building of digital transformation capabilities is developed, integrating the insights gained from the study of enterprise collaboration platform change and the change-monitoring tools developed along the way (e.g. the MoBeC framework). It assists researchers and practitioners in understanding the building of digital transformation capabilities from a theoretical and practical viewpoint and helps organisations apply this knowledge in their unique digital transformation processes.
The bio-insecticide Bacillus thuringiensis israelensis (Bti) has become the most commonly used agent worldwide in mosquito control programs that pursue two main objectives: the control of vector-borne diseases and the reduction of nuisance, mainly from mosquitoes that emerge in large quantities from seasonal wetlands. The Upper Rhine Valley, a biodiversity hotspot in Germany, has been treated with Bti for decades to reduce mosquito-borne nuisance and increase human well-being. Although Bti is presumed to be an environmentally safe agent, adverse effects on wetland ecosystems are still a matter of debate, especially when it comes to long-term and indirect effects on non-target organisms. In light of the above, this thesis aims at investigating direct and indirect effects of Bti-based mosquito control on non-target organisms within wetland food chains. Effects were examined in studies of increasing eco(toxico)logical complexity, ranging from laboratory over mesocosm to field approaches, with a focus on the non-biting Chironomidae and amphibian larvae (Rana temporaria, Lissotriton sp.). In addition, public acceptance of environmentally less invasive alternative mosquito control methods was evaluated in surveys among the local population.
Chironomids were the most severely affected non-target aquatic invertebrates. Bti substantially reduced larval and adult chironomid abundances and modified their species composition. Repeated exposures to commonly used Bti formulations induced sublethal alterations of enzymatic biomarker activity in frog tadpoles. Bti-induced reductions of chironomid prey availability indirectly decreased the body size of newts at metamorphosis and increased predation on newt larvae in mesocosm experiments. Indirect effects of severe reductions in midge biomass might equally be passed through aquatic but also terrestrial food chains, influencing predators at higher trophic levels. The majority of affected people in the Upper Rhine Valley expressed a high willingness to contribute financially to environmentally less harmful mosquito control. Alternative approaches could still include Bti applications while exempting ecologically valuable areas from treatment. Potentially rising mosquito levels could be counteracted with locally acting mosquito traps in domestic and urban areas, because mosquito presence was experienced as most annoying in the home environment.
As Bti-based mosquito control can adversely affect wetland ecosystems, its large-scale application, including in nature conservation areas, should be considered more carefully to avoid harmful consequences for the environment in the Upper Rhine Valley. This thesis emphasizes the importance of reconsidering the current practice of mosquito control and encourages research on alternative mosquito control concepts that are endorsed by the local population. In the context of the ongoing amphibian and insect declines, further human-induced effects on wetlands should be avoided to preserve biodiversity in functioning ecosystems.
Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea’s size and its characteristics. This information helps to select suitable implants for different patients. Registered and fused images help doctors by providing a more informative image that takes advantage of the different modalities. The cochlea’s small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a big challenge for the automated registration of the different image modalities. To obtain an automatic measurement of the cochlea length and the volume size, a segmentation method for cochlea medical images is needed. The goal of this dissertation is to introduce new practical and automatic algorithms for human cochlea multi-modal 3D image registration, fusion, segmentation and analysis. Two novel methods for automatic cochlea image registration (ACIR) and automatic cochlea analysis (ACA) are introduced. The proposed methods crop the input images to the cochlea part and then align the cropped images to obtain the optimal transformation. After that, this transformation is used to align the original images. ACIR and ACA use Mattes mutual information as the similarity metric and the adaptive stochastic gradient descent (ASGD) or the stochastic limited-memory Broyden–Fletcher–Goldfarb–Shanno (s-LBFGS) optimizer to estimate the parameters of a 3D rigid transform. The second stage of non-rigid registration estimates B-spline coefficients that are used in an atlas-model-based segmentation to extract the cochlea scalae and the relative measurements of the input image. The image which has a segmentation is aligned to the input image to obtain the non-rigid transformation. After that, the segmentation of the first image, in addition to point models, is transformed to the input image. The detailed transformed segmentation provides the scala volume size.
Using the transformed point models, the A-value, the central scala lengths, and the lateral and the organ-of-Corti scala tympani lengths are computed. The methods have been tested using clinical 3D images of a total of 67 patients: from Germany (41 patients) and Egypt (26 patients). The patients are of different ages and genders. The number of images used in the experiments is 217; they are multi-modal 3D clinical images from CT, CBCT, and MRI scanners. The proposed methods are compared to state-of-the-art optimizer-based medical image registration methods, e.g. fast adaptive stochastic gradient descent (FASGD) and efficient preconditioned stochastic gradient descent (EPSGD). The comparison uses the root mean squared error (RMSE) between ground-truth landmarks and the resulting landmarks. The landmarks are located manually by two experts to represent the round window and the top of the cochlea. After obtaining the transformation using ACIR, the landmarks of the moving image are transformed using the resulting transformation, and the RMSE between the transformed landmarks and the fixed-image landmarks is computed. I also used the active length of the cochlear implant electrodes to compute the error caused by image artifacts, and found an error ranging from 0.5 mm to 1.12 mm. The ACIR method’s RMSE average was 0.36 mm with a standard deviation (SD) of 0.17 mm. The average total time required for the registration of an image pair using ACIR was 4.62 seconds with an SD of 1.19 seconds. All experiments were repeated 3 times for verification. Comparing the RMSE of ACIR2017 and ACIR2020 using a paired t-test shows no significant difference (p-value = 0.17). The total RMSE average of the ACA method was 0.61 mm with an SD of 0.22 mm. The average total time required for analysing an image was 5.21 seconds with an SD of 0.93 seconds.
The statistical tests show that there is no difference between the results from the automatic A-value method and the manual A-value method (p-value = 0.42). There is also no difference between length measurements of the left and the right ear sides (p-value > 0.16). Comparing the results from the German and Egyptian datasets shows no difference when using the manual or automatic A-value methods (p-value > 0.20). However, there is a significant difference when using the ACA2020 method between the German and the Egyptian results (p-value < 0.001). The average time to obtain the segmentation and all measurements was 5.21 seconds per image. The cochlea scala tympani volume size ranged from 38.98 mm³ to 57.67 mm³. The combined scala media and scala vestibuli volume size ranged from 34.98 mm³ to 49.3 mm³. The overall volume size of the cochlea should range from 73.96 mm³ to 106.97 mm³. The lateral wall length of the scala tympani ranged from 42.93 mm to 47.19 mm. The organ-of-Corti length of the scala tympani ranged from 31.11 mm to 34.08 mm. Using the A-value method, the lateral length of the scala tympani ranged from 36.69 mm to 45.91 mm, and the organ-of-Corti length of the scala tympani ranged from 29.12 mm to 39.05 mm. The lengths from the ACA2020 method can be visualised and have well-defined endpoints. The ACA2020 method works on different modalities and different images regardless of the noise level or the resolution. On the other hand, the A-value method works neither on MRI nor on noisy images. Hence, the ACA2020 method may provide more reliable and accurate measurements than the A-value method. The source code and the datasets are made publicly available to help with the reproduction and validation of my results.
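The RMSE used above to compare the registration methods is the root mean squared distance between paired landmarks (e.g. the round window and the top of the cochlea in the fixed versus the transformed image). A minimal sketch; the landmark coordinates are invented for illustration:

```python
import math

def rmse(landmarks_a, landmarks_b):
    """Root mean squared distance between paired 3D landmarks,
    e.g. ground-truth landmarks vs. landmarks after registration."""
    assert len(landmarks_a) == len(landmarks_b)
    sq = [sum((p - q) ** 2 for p, q in zip(a, b))
          for a, b in zip(landmarks_a, landmarks_b)]
    return math.sqrt(sum(sq) / len(sq))

# two landmarks, displaced by 0.3 mm and 0.4 mm respectively
a = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
b = [(0.3, 0.0, 0.0), (1.0, 1.4, 1.0)]
print(round(rmse(a, b), 3))  # -> 0.354
```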
Nanotemplates for the combined structural and functional analysis of membrane-associated proteins
(2019)
Plasma membranes are essential for life because they give cells an identity. Plasma membranes are almost impermeable to fluids and substances. Still, transport between inside and outside needs to be possible. An important transport mechanism is endocytosis. It relies on membrane-associated proteins that sense and induce curvature in the plasma membrane. However, the physics and structural dynamics behind proteins acting on membranes are not well understood. There is a standard in vitro method to investigate membrane-associated proteins sensing spherical geometries: they are incubated on unilamellar vesicles. This procedure allows these proteins to be analyzed in their bound state. This approach is inappropriate for GRAF1 (GTPase Regulator Associated with Focal Adhesion Kinase-1), a key player in endocytosis, because it senses tubular geometries instead. However, GRAF1 extrudes lipid tubes from vesicles that can be analyzed. Still, this is a limited method because these tubes suffer from inhomogeneity and do not enable the observation of intermediate and lower-concentration binding states. To overcome this issue, the proteins can be incubated on pre-tubular structures called nanotemplates. There have been studies using carbon nanotubes and galactosylceramide lipid tubes as nanotemplates. These approaches require complex chemical modifications or expensive components, and they are not necessarily flexible. In this work we present a simple and easy new approach to preparing nanotemplates using the Folch lipid mixture. We show, on the basis of BPG, a truncate of GRAF1, that our nanotemplates are suitable for Cryo-EM and that it is possible to use IHRSR (Iterative Helical Real Space Reconstruction) to analyze the structure of BPG in its bound state. Moreover, the suitability for Cryo-EM allows the use of plunge freezing to interrupt the incubation on our nanotemplates abruptly. This enables the analysis of intermediate binding states to understand the binding process.
Grapevine growers have struggled to defend their crops against pests and diseases since the domestication of grapevine over 6000 years ago. Since then, new growing methods paired with a better understanding of the ecological processes in the vineyard ecosystem continue to improve the quality and quantity of grape harvests. In this thesis I describe the effects of two recent innovations in viticulture on pest and beneficial arthropods in vineyards: fungus-resistant grapevine cultivars (PIWIs) and the pruning system semi-minimal pruned hedge (SMPH). The SMPH pruning system allows for a drastic reduction of manual labor in the vineyard, and PIWIs are resistant to two of the most common fungal diseases of grapevine and therefore allow a drastic reduction of fungicide applications compared to conventional varieties. Heavy use of pesticides is linked to a number of problems, including pollution of waterways, negative effects on human health, and biodiversity loss. Here, I studied the effects of fungicide reduction and minimal pruning on arthropods that are beneficial for natural pest suppression in the vineyard ecosystem, such as predatory mites, spiders, ants, earwigs, and lacewings. All of these groups either benefitted from the reduction of fungicide sprayings or were not significantly affected. Structural changes in the canopy of SMPH grapevines altered the microclimate in the canopy, which in turn influenced some of the arthropods living in it. Overall, my findings suggest that PIWIs and SMPH, whether in combination or separately, improve conditions for natural pest control. This adds to other advantages of these innovative management practices, such as a reduction in production costs and a smaller impact on the environment.
Non-Consumptive Effects of Spiders and Ants: Does Fear Matter in Terrestrial Interaction Webs?
(2014)
Most animals suffer from predators. Besides killing prey, predators can affect prey physiology, morphology and behaviour. Spiders are among the most diverse and frequent predators in terrestrial ecosystems. Our behavioural arena experiments revealed that behavioural changes under spider predation risk are relatively scarce among arthropods. Wood crickets (Nemobius sylvestris), in particular, changed their behaviour in response to cues of various spider species. Notably, more common and relatively larger spider species induced stronger antipredator behaviour in crickets.
Behavioural changes under predation risk are expected to enhance predator avoidance, but they come at a cost. Crickets previously confronted with cues of the nursery web spider (Pisaura mirabilis) were indeed more successful in avoiding predation. Surprisingly, crickets slightly increased food uptake and lost less weight under predation risk, indicating that crickets are able to compensate for short-term cost under predation risk. In a following plant choice experiment, crickets strongly avoided plants bearing spider cues, which in turn reduced the herbivory on the respective plants.
Similar to spiders, ants are ubiquitous predators and can have a strong impact on herbivores, but also on other predators. Juvenile spiders increased their propensity for long-distance dispersal if exposed to ant cues. Thus, spiders use this passive dispersal through the air (ballooning) to avoid ants and colonise new habitats.
In a field experiment, we compared arthropod colonisation between plants bearing cues of the nursery web spider and cue-free plants. We followed herbivory during the experimental period and sampled the arthropod community on the plants. In accordance with the plant choice experiment, herbivory was reduced on plants bearing spider cues. In addition, spider cues led to changes in the arthropod community: smaller spiders and black garden ants (Lasius niger) avoided plants bearing spider cues. In contrast, common red ants (Myrmica rubra) increased the recruitment of workers, possibly to protect their aphids.
Although behavioural changes were relatively rare on filter papers bearing spider cues, more natural experimental setups revealed strong and far-reaching effects of predation risk. We further suggest that risk effects influence the spatial distribution of herbivory, rather than reduce overall herbivory that is expected if predators kill herbivores. Consequently, the relative importance of predation and risk effects is crucial for the way predators affect lower trophic levels.
The initial goal of the current dissertation was the determination of image-based biomarkers sensitive to neurodegenerative processes in the human brain. One such process is the demyelination of neural cells characteristic of multiple sclerosis (MS) - the most common neurological disease in young adults, for which there is no cure yet. Conventional MRI techniques are very effective in localizing areas of brain tissue damage and are thus a reliable tool for the initial MS diagnosis. However, a mismatch between the clinical findings and the visualized areas of damage is observed, which renders the use of standard MRI difficult for objective disease monitoring and therapy evaluation. To address this problem, a novel algorithm for the fast mapping of myelin water content using standard multi-echo gradient echo acquisitions of the human brain is developed in the current work. The method extends a previously published approach for the simultaneous measurement of brain T1, T2* and total water content. Employing the multiexponential T2* decay signal of myelinated tissue, myelin water content is measured based on the quantification of two water pools (myelin water and rest) with different relaxation times. Whole-brain in vivo myelin water content maps are acquired in 10 healthy controls and one subject with MS. The in vivo results obtained are consistent with previous reports. The acquired quantitative data have a high potential in the context of MS. However, the parameters estimated in a multiparametric acquisition are correlated and therefore constitute an ill-posed, nontrivial data analysis problem. Motivated by this specific problem, a new data clustering approach called Nuclear Potential Clustering (NPC) is developed. It is suitable for the explorative analysis of arbitrary-dimensional and possibly correlated data without a priori assumptions about its structure. The developed algorithm is based on a concept adapted from nuclear physics.
To partition the data, the dynamic behavior of equally charged nucleons interacting in a d-dimensional feature space is modeled. An adaptive nuclear potential, comprised of a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb potential), is assigned to each data point. Thus, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single-point clusters are repelled (noise). The algorithm is optimized and tested in an extensive study with a series of synthetic datasets as well as the Iris dataset. The results show that it can robustly identify clusters even when complex configurations and noise are present. Finally, to address the initial goal, quantitative MRI data of 42 patients are analyzed employing NPC. A series of experiments with different sets of image-based features shows a consistent grouping tendency: younger patients with a low disease grade are recognized as cohesive clusters, while those of higher age and impairment are recognized as outliers. This allows for the definition of a reference region in a feature space associated with phenotypic data. Tracking individual patients' positions therein can disclose patients at risk and be employed for therapy evaluation.
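The pairwise potential described above can be illustrated as follows. The exact functional form and parameter values used in the thesis may differ, so this is only a sketch in the same spirit: an exponential short-range attraction combined with a Coulomb-like long-range repulsion, so that nearby points attract (fuse into clusters) while distant points repel (are treated as noise):

```python
import math

def nuclear_potential(r, a=50.0, s=0.5, b=1.0):
    """Illustrative pairwise potential in the spirit of NPC:
    a short-range attractive term (strong-interaction-like,
    exponentially decaying) plus a long-range Coulomb-like
    repulsive term, as a function of inter-point distance r."""
    return -a * math.exp(-r / s) + b / r

# nearby points attract (negative potential), distant points repel
print(nuclear_potential(0.2) < 0)  # -> True
print(nuclear_potential(5.0) > 0)  # -> True
```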
More than 10,000 organic chemicals such as pharmaceuticals, ingredients of personal care products and biocides are ubiquitously used in everyday life. After their application, many of these chemicals enter the domestic sewer. Research has shown that conventional biological wastewater treatment in municipal wastewater treatment plants (WWTPs) is an insufficient barrier against the release of most of these anthropogenic chemicals into the receiving waters.
This bears unforeseen risks for aquatic wildlife and drinking water resources. Especially for recently introduced and/or detected compounds (so called emerging micropollutants), there is a growing need to investigate the occurrence and fate in WWTPs. In order to get a comprehensive picture on the behavior in municipal wastewater treatment, the following groups of emerging organic micropollutants, spanning a broad range of applications and physico-chemical properties, were selected as target compounds: pharmaceuticals (beta blockers, psycho-active drugs), UV-filters, vulcanization accelerators (benzothiazoles), biocides (anti-dandruffs, preservatives, disinfectants) and pesticides (phenylurea and triazine herbicides).
Stream ecosystems are among the most threatened ecosystems worldwide due to their exposure to diverse anthropogenic stressors. Pesticides appear to be the most relevant stressor for agricultural streams. Due to the current mismatch between modelled and measured pesticide concentrations, monitoring is necessary to inform risk assessment and improve future pesticide approvals. Knowing whether biotic stress responses are similar across large scales and long time frames could ultimately help in estimating protective stressor thresholds.
This thesis starts with an overview of the entry pathways of pesticides into streams as well as the framework of current pesticide monitoring, and gives an outline of the objectives of the thesis. In chapter 2, routine monitoring data based on grab sampling from several countries are analysed to identify the most frequently occurring pesticide mixtures. These mixtures comprise relatively low numbers of pesticides, among which herbicides dominate. The detected pesticide mixtures differ between regions and countries, due to differences in the spectrum of analysed compounds and limits of quantification. Current routine monitoring does not include sampling during pesticide peaks associated with heavy rainfall events, which likely influences the detected pesticide mixtures. In chapter 3, sampling rates of 42 organic pesticides for passive sampling are provided together with recommendations for the monitoring of field-relevant peaks. Using this information, in chapter 4 a pesticide gradient is established in an Eastern European region where the agricultural intensity adjacent to the sampled streams ranges from low to high. In contrast to current routine monitoring, rainfall events were sampled and a multitude of pesticides was analysed. This led to the simultaneous detection of numerous pesticides, of which one to three drove the pesticide toxicity. The toxicity, however, showed no relationship to the agricultural intensity. Using microcosms, the stress responses of fungal communities, the hyphomycetes, and the related ecosystem function of leaf decomposition are investigated in chapter 5. Effects of a field-relevant fungicide mixture are examined across three biogeographical regions for three consecutive cycles of microbial leaf colonisation and decomposition. Despite different initial communities, stress responses as well as recoveries were similar across biogeographical regions, indicating a general pattern.
Overall, this thesis contributes to an improved understanding of the occurrence and concentrations of pesticide mixtures in streams, their monitoring and their impact on an ecosystem function. We showed that estimated pesticide toxicities reach levels that affect non-target organisms and thereby potentially whole ecosystems. Routine monitoring, however, likely underestimates the threat posed by pesticides. Effects leading to a loss of biodiversity or functions in stream ecosystems can be reduced by reassessing approved pesticides with ongoing targeted monitoring and increased knowledge of the effects caused by these pesticides.
Through the increasing availability of access to the web, more and more interactions between people take place in online social networks, such as Twitter or Facebook, or sites where opinions can be exchanged. At the same time, knowledge is made openly available for many people, such as by the biggest collaborative encyclopedia Wikipedia and diverse information in Internet forums and on websites. These two kinds of networks - social networks and knowledge networks - are highly dynamic in the sense that the links that contain the important information about the relationships between people or the relations between knowledge items are frequently updated or changed. These changes follow particular structural patterns and characteristics that are far less random than expected.
The goal of this thesis is to predict three characteristic link patterns for the two network types of interest: the addition of new links, the removal of existing links and the presence of latent negative links. First, we show that the prediction of link removal is indeed a new and challenging problem. Even if the sociological literature suggests that reasons for the formation and resolution of ties are often complementary, we show that the two respective prediction problems are not. In particular, we show that the dynamics of new links and unlinks lead to the four link states of growth, decay, stability and instability. For knowledge networks we show that the prediction of link changes greatly benefits from the usage of temporal information; the timestamp of link creation and deletion events improves the prediction of future link changes. For that, we present and evaluate four temporal models that resemble different exploitation strategies. Focusing on directed social networks, we conceptualize and evaluate sociological constructs that explain the formation and dissolution of relationships between users. Measures based on information about past relationships are extremely valuable for predicting the dissolution of social ties. Hence, consistent for knowledge networks and social networks, temporal information in a network greatly improves the prediction quality. Turning again to social networks, we show that negative relationship information such as distrust or enmity can be predicted from positive known relationships in the network. This is particularly interesting in networks where users cannot label their relationships to other users as negative. For this scenario we show how latent negative relationships can be predicted.
While reading this sentence, you probably gave (more or less deliberately) instructions to approximately 100 to 200 muscles of your body. A sceptical face or a smile, your fingers scrolling through the text or holding a printed version of this work, holding your head, sitting, and much more.
All these processes take place almost automatically, so they hardly seem like an achievement. In the age of digitalization, it is a stated goal to transfer human (psychological and physiological) behavior to machines (robots). However, it turns out to be genuinely laborious to reproduce human facial expressions or walking in robots. To optimize this transfer, a deeper understanding of a muscle's operating principle is needed (and, of course, an understanding of the human brain, which will, however, not be part of this thesis).
A human skeletal muscle can be shortened voluntarily, but not lengthened; for that, it takes an antagonist. The muscle's change in length depends on the incoming stimulus from the central nervous system, the current length of the muscle itself, and certain muscle-specific quantities (parameters) such as the maximum force. Hence, a muscle can be described mathematically by a differential equation (or, more precisely, a coupled differential-algebraic system, DAE), whose structure will be revealed in the following chapters. The theory of differential equations is well developed, and a multitude of applicable methods exist that muscle modelers may not be aware of. The purpose of this work is to link these methods from applied mathematics to the actual application in biomechanics.
The first part of this thesis addresses stability theory. Recall the prominent example from middle school physics, in which the resting position of a ball is obviously less susceptible to shoves when the ball lies in a bowl than when it balances on the top of a hill. Similarly, a dynamical (musculo-skeletal) system can attain equilibrium states that react differently towards perturbations.
We are going to compute and classify these equilibria.
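The bowl-and-hill picture can be written down directly. The following is a minimal sketch with illustrative one-dimensional potentials, not the muscle DAE itself: equilibria of x' = f(x) are the roots of f, and the sign of f'(x*) classifies them.

```python
# Minimal sketch of the ball-in-a-bowl example (illustrative potentials):
# for x' = f(x) = -U'(x), an equilibrium x* with f(x*) = 0 is stable if
# f'(x*) < 0 and unstable if f'(x*) > 0.
def classify_equilibrium(f, x_star, h=1e-6):
    """Classify an equilibrium x* of x' = f(x) by the sign of f'(x*)."""
    slope = (f(x_star + h) - f(x_star - h)) / (2 * h)  # central difference
    return "stable" if slope < 0 else "unstable"

bowl = lambda x: -2 * x  # f = -U' for U(x) = x**2  (ball resting in a bowl)
hill = lambda x: 2 * x   # f = -U' for U(x) = -x**2 (ball balancing on a hill)

print(classify_equilibrium(bowl, 0.0))  # stable
print(classify_equilibrium(hill, 0.0))  # unstable
```

For the coupled muscle DAE the same idea applies, with the scalar slope replaced by the eigenvalues of the Jacobian at the equilibrium.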
In the second part, we investigate the influence of individual parameters on the model equations, or more precisely on their solutions. This method is known as sensitivity analysis.
Take, for example, the system "car" and the quantity "pressure on the brake pedal while approaching a traffic light". A minor deviation of this quantity upward or downward may lead to an uncomfortably abrupt stop or even to a collision, instead of a smooth stop with a sufficient gap.
The considered muscle model contains over 20 parameters that, if changed slightly, have varying effects on the solutions of the model equations at different instants of time. We will investigate the sensitivity of these parameters for the different sub-models as well as for the whole model under different dynamic boundary conditions.
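A hedged sketch of what such a sensitivity computation looks like, on a deliberately simple stand-in model rather than the muscle DAE: for x'(t) = -p x(t) with solution x(t) = x0 exp(-pt), the sensitivity dx/dp can be approximated by finite differences and checked against the analytic value.

```python
# Local sensitivity analysis by central finite differences, illustrated on
# the scalar model x'(t) = -p * x(t), x(0) = x0 (a stand-in for the DAE).
import math

def solution(p, t, x0=1.0):
    return x0 * math.exp(-p * t)

def sensitivity(p, t, eps=1e-6):
    """Central finite-difference approximation of dx/dp at time t."""
    return (solution(p + eps, t) - solution(p - eps, t)) / (2 * eps)

# The analytic sensitivity is dx/dp = -t * x0 * exp(-p t); compare at p=2, t=1.
approx = sensitivity(2.0, 1.0)
exact = -1.0 * math.exp(-2.0)
```

For the real model, `solution` would be replaced by a numerical DAE solve, and the same perturbation is repeated for each of the 20+ parameters and at several instants of time.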
The third and final part addresses the optimal control problem (OCP).
The muscle turns a nerve impulse (input or control) into a length change and therefore a force response (output). This forward process is computable by solving the respective DAE. The reverse direction is more difficult to manage. As an everyday example, the OCP arises in self-parking cars, where a given path is targeted and the controls are the position of the steering wheel and the gas pedal.
We present two methods of solving OCPs in muscle modeling: the first is a combination of variational calculus and optimization in function spaces; the second is a surrogate-based optimization.
The stands surveyed are among the last closed canopy forests in Rwanda. Their exploration began in the early twentieth century and is still ongoing. Previous studies were mainly concerned with plant sociological issues and presented references to environmental factors in anecdotal form, at best using indirect ordination methods. The present study undertakes a classification of the vegetation with numerical methods and establishes quantitative relationships of the species’ distributional structure to environmental parameters using spatially explicit procedures. For this purpose, 94 samples were taken in 100 m² hexagonal plots. Of these, 70 samples are from Nyungwe, 14 are from Gishwati, and 10 are from Cyamudongo. Given the homogeneity of the terrain and vegetation, all vegetation types encountered, all types of stands, and all vegetation strata were included. The beta diversity is expressed by an average Bray-Curtis dissimilarity of 0.92, and in JOST’S (2007) numbers equivalents, 37.90 equally likely samples would be needed to represent the diversity encountered. Within the survey, 1198 species in 127 families were collected. Among the specimens are 6 local endemics and 40 Albertine Rift endemics. Resulting from UPGMA and FCM-NC, 20 to 40 plant communities were established depending on the level of resolution. It can be inferred by means of a Mantel correlogram that the mean zone of influence of a single vegetation stand, as sampled by a 100 m² plot in Nyungwe Forest, ranges between 0.016 and 3.42 km. Of the communities compiled using FCM-NC and UPGMA, 50% consist of individual samples. Beyond undersampling, natural small-scale discontinuities are reflected by this result. Partial db-RDA resulted in an explained variation of 9.60% and 14.41% for environmental and soil factors, respectively. Utilising variation partitioning analyses based on CCA and tb-RDA, between 21.70% and 37.80% of the variation in vegetation data could be explained. 
The spatially structured fraction of these parameters accounts for between 30.50% and 49.80% of the explained variation (100%). The purely environmental parameters account for a share of 10.30% to 16.30%, whereby the lower limit originates from the unimodal approach and has lost its statistical significance. The soil variables, also after partial analysis, account for a share of 19.00% to 35.70%. While the residual impact of the climatic parameters is hardly significant, the effect of the soil properties is prevalent. In general, the spatially structured fraction of the parameters is predominant here. While on the broad scale climatic factors, altitude a.s.l. and geology are the determining factors, some soil parameters and matrix components also show their impact here. In the mid-range of the scale, it is the forest matrix, the soil types, and the geology that determine species distribution. While in the fine range of the scale some unrecorded parameters seem to have an effect, there are also neutral processes that determine species composition.
In Part I: "The flow-decomposition problem", we introduce and discuss the flow-decomposition problem. Given a flow F, this problem consists of decomposing the flow into a set of paths optimizing specific properties of those paths. We introduce different types of decompositions, such as integer decompositions and alpha-decompositions, and provide two formulations of the set of feasible decompositions.
We show that the problem of minimizing the longest path in a decomposition is NP-hard, even for fractional solutions. Then we develop an algorithm based on column generation which is able to solve the problem.
Tight upper bounds on the optimal objective value help to improve the performance.
To find upper bounds on the optimal solution for the shortest longest path problem, we develop several heuristics and analyze their quality. On pearl graphs we prove constant approximation ratios of 2 and 3, respectively, for all heuristics. A numerical study on random pearl graphs shows that the solutions generated by the heuristics are usually much better than this worst-case bound.
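The basic object here, a decomposition of a flow into paths, can be illustrated with the standard greedy construction (not the column-generation method of the thesis): repeatedly follow arcs that still carry flow from s to t and subtract the path's bottleneck value.

```python
# Illustrative greedy decomposition of an integer s-t flow into paths.
# Assumes flow conservation, so a greedy walk along positive-flow arcs
# always reaches t. The example network is a toy.
def decompose(flow, s, t):
    flow = dict(flow)  # arc (u, v) -> remaining flow value
    paths = []
    while True:
        path, node = [s], s
        while node != t:
            nxt = next((v for (u, v) in flow if u == node and flow[(u, v)] > 0), None)
            if nxt is None:
                return paths  # no arc with positive flow left at s: done
            path.append(nxt)
            node = nxt
        bottleneck = min(flow[(u, v)] for u, v in zip(path, path[1:]))
        for arc in zip(path, path[1:]):
            flow[arc] -= bottleneck
        paths.append((path, bottleneck))

# A flow of value 3 on a small diamond network.
arc_flow = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 2, ("b", "t"): 1}
print(decompose(arc_flow, "s", "t"))  # [(['s','a','t'], 2), (['s','b','t'], 1)]
```

The optimization problems of Part I ask for decompositions that are good with respect to path properties (e.g. minimizing the longest path), which the greedy construction above makes no attempt to do.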
In Part II: "Construction and analysis of evacuation models using flows over time", we consider two optimization models in the context of evacuation planning. The first model is a parameter-based quickest flow model with time-dependent supply values. We give a detailed description of the network construction and of how different scenarios are modeled by scenario parameters. In a second step we analyze the effect of the scenario parameters on the evacuation time. Understanding how the different parameters influence the evacuation time allows us to provide better advice for evacuation planning and allows us to predict evacuation times without solving additional optimization problems. To understand the effect of the time-dependent supply values, we consider the quickest path problem with time-dependent supply values and provide a solution algorithm. The results from this consideration are generalized to approximate the behavior of the evacuation times in the context of quickest flow problems.
The second model we consider is a path-based model for evacuation in the presence of a dynamic cost function. We discuss the challenges of this model and provide ideas for how to approach the problem from different angles. We relate the problem to the flow-decomposition problem and consider the computation of evacuation paths with dynamic costs for large capacities. For the latter method we provide heuristics to find paths and compare them to the optimal solutions by applying the methods to two evacuation scenarios. An analysis shows that the paths generated by the heuristic yield close to optimal solutions and in addition have several desirable properties for evacuation paths which are not given for the optimal solution.
On the recognition of human activities and the evaluation of its imitation by robotic systems
(2023)
This thesis addresses the problem of action recognition through the analysis of human motion and the benchmarking of its imitation by robotic systems.
For our action recognition related approaches, we focus on presenting approaches that generalize well across different sensor modalities. We transform multivariate signal streams from various sensors to a common image representation. The action recognition problem on sequential multivariate signal streams can then be reduced to an image classification task for which we utilize recent advances in machine learning. We demonstrate the broad applicability of our approaches formulated as a supervised classification task for action recognition, a semi-supervised classification task for one-shot action recognition, modality fusion and temporal action segmentation.
For action classification, we use an EfficientNet Convolutional Neural Network (CNN) model to classify the image representations of various data modalities. Further, we present approaches for filtering and the fusion of various modalities on a representation level. We extend the approach to be applicable for semi-supervised classification and train a metric-learning model that encodes action similarity. During training, the encoder optimizes the distances in embedding space for self-, positive- and negative-pair similarities. The resulting encoder allows estimating action similarity by calculating distances in embedding space. At training time, no action classes from the test set are used.
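The signal-to-image idea can be sketched in a few lines (a toy encoding with illustrative values, not the exact representation used in the thesis): each sensor channel becomes one image row, each time step one column, and readings are min-max normalized to 8-bit grey values.

```python
# Toy sketch of a motion-image representation: a multivariate signal
# stream becomes a grey-value grid that an image classifier can consume.
def signals_to_image(signals):
    """signals: list of equally long per-channel sample lists."""
    flat = [v for channel in signals for v in channel]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[round(255 * (v - lo) / span) for v in channel] for channel in signals]

# Two channels (e.g. accelerometer x/y) over four time steps.
image = signals_to_image([[0.0, 0.5, 1.0, 0.5],
                          [1.0, 1.0, 0.0, 0.0]])
# image is a 2x4 grid of grey values in [0, 255]
```

Once sequences from any modality are mapped into such a grid, the action recognition problem reduces to image classification, which is the reduction the paragraph above describes.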
Graph Convolutional Networks (GCNs) generalized the concept of CNNs to non-Euclidean data structures and showed great success for action recognition when operating directly on spatio-temporal sequences like skeleton sequences. GCNs have recently shown state-of-the-art performance for skeleton-based action recognition but are currently widely neglected as the foundation for the fusion of various sensor modalities. We propose incorporating additional modalities, like inertial measurements or RGB features, into a skeleton-graph by fusing on two different dimensionality levels. On the channel dimension, modalities are fused by introducing additional node attributes. On the spatial dimension, additional nodes are incorporated into the skeleton-graph.
Transformer models showed excellent performance in the analysis of sequential data. We formulate the temporal action segmentation task as an object detection task and use a detection transformer model on our proposed motion image representations. Experiments for our action recognition related approaches are executed on large-scale publicly available datasets. Our approaches for action recognition for various modalities, action recognition by fusion of various modalities, and one-shot action recognition demonstrate state-of-the-art results on some datasets.
Finally, we present a hybrid imitation learning benchmark. The benchmark consists of a dataset, metrics, and a simulator integration. The dataset contains RGB-D image sequences of humans performing movements and executing manipulation tasks, as well as the corresponding ground truth. The RGB-D camera is calibrated against a motion-capturing system, and the resulting sequences serve as input for imitation learning approaches. The resulting policy is then executed in the simulated environment on different robots. We propose two metrics to assess the quality of the imitation. The trajectory metric gives insights into how close the execution was to the demonstration. The effect metric describes how close the final state was reached according to the demonstration. The Simitate benchmark can improve the comparability of imitation learning approaches.
In this thesis, I study the spectral characteristics of large dynamic networks and formulate the spectral evolution model. The spectral evolution model applies to networks that evolve over time, and describes their spectral decompositions such as the eigenvalue and singular value decomposition. The spectral evolution model states that over time, the eigenvalues of a network change while its eigenvectors stay approximately constant.
I validate the spectral evolution model empirically on over a hundred network datasets, and theoretically by showing that it generalizes a number of known link prediction functions, including graph kernels, path counting methods, rank reduction and triangle closing. The collection of datasets I use contains 118 distinct network datasets. One dataset, the signed social network of the Slashdot Zoo, was specifically extracted during work on this thesis. I also show that the spectral evolution model can be understood as a generalization of the preferential attachment model, if we consider growth in the latent dimensions of a network individually. As applications of the spectral evolution model, I introduce two new link prediction algorithms that can be used for recommender systems, search engines, collaborative filtering, rating prediction, link sign prediction and more.
The first link prediction algorithm reduces to a one-dimensional curve fitting problem from which a spectral transformation is learned. The second method extrapolates past eigenvalues to predict future ones. As special cases, I show that the spectral evolution model applies to directed, undirected, weighted, unweighted, signed and bipartite networks. For signed graphs, I introduce new applications of the Laplacian matrix for graph drawing and spectral clustering, and describe new Laplacian graph kernels. I also define the algebraic conflict, a measure of the conflict present in a signed graph based on the signed graph Laplacian. I describe the problem of link sign prediction spectrally, and introduce the signed resistance distance. For bipartite and directed graphs, I introduce the hyperbolic sine and odd Neumann kernels, which generalize the exponential and Neumann kernels for undirected unipartite graphs. I show that the problems of directed and bipartite link prediction are related by the fact that both can be solved by considering spectral evolution in the singular value decomposition.
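One family of link prediction functions the model generalizes, path counting, can be illustrated with a tiny sketch (toy graph, arbitrary weights): a truncated polynomial f(A) = a1*A + a2*A^2 + a3*A^3 of the adjacency matrix acts on the eigenvalues only, leaving the eigenvectors unchanged, which is exactly a spectral transformation.

```python
# Sketch of link prediction by a polynomial spectral transformation
# f(A) = 1.0*A + 0.5*A^2 + 0.25*A^3 (a truncated path-counting kernel).
# Pure-Python matrix arithmetic; graph and weights are illustrative.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def score(A, alphas=(1.0, 0.5, 0.25)):
    n = len(A)
    S, P = [[0.0] * n for _ in range(n)], A  # S accumulates f(A); P holds A^k
    for a in alphas:
        S = [[S[i][j] + a * P[i][j] for j in range(n)] for i in range(n)]
        P = matmul(P, A)
    return S

# Path graph 0-1-2: the only non-edge (0, 2) gets a positive score from
# the length-2 path through node 1, i.e. triangle closing.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
S = score(A)
```

In practice such kernels are computed on the spectral decomposition rather than by explicit matrix powers, which is what makes learning the transformation a one-dimensional curve fitting problem over the eigenvalues.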
The Stereotype Content Model (SCM; Fiske et al., 2002) proposes two fundamental dimensions of social evaluation: warmth, i.e. the intentions of the target, and competence, i.e. the ability to enact these intentions. The practical applications of the SCM are very broad and have led to an assumption of universality of warmth and competence as fundamental dimensions of social evaluation.
This thesis has identified five mainly methodological shortcomings of the current SCM research and literature: (I) An insufficient initial scale development; (II) the usage of varying warmth and competence scales without sufficient scale property assessment in later research; (III) the dominant application of first-generation analytical approaches; (IV) the insufficient definition and empirical proof for the SCM’s assumption of universality; and (V) the limited application of the SCM for some social targets. These shortcomings were addressed in four article manuscripts strictly following open science recommendations.
Manuscript #1 re-analysed published research using English SCM measures to investigate the measurement properties of the warmth and competence scales used. It reported the scales' reliability, dimensionality and comparability across targets, as well as the indicator-based parameter performance, in a (multiple group) confirmatory factor analysis framework. The findings indicate that about two thirds of all re-analysed scales do not show the theoretically expected warmth and competence dimensionality. Moreover, only about eleven per cent allowed meaningful mean value comparisons between targets. Manuscript #2 presents a replication of Manuscript #1 in the German national and language context, generating virtually identical results. Manuscript #3 investigated the stereotype content of refugee subgroups in Germany. We showed that refugees were generally perceived unfavourably in terms of warmth and competence, but that the stereotype content varied with the refugees' geographic origin, religious affiliation, and flight motive. These results were generated using a reliability-corrected approach to comparing mean values named the alignment optimisation procedure. Manuscript #4 developed and tested a high-performing SCM scale assessing occupational stereotypes using a number of exploratory and confirmatory factor analyses.
The decline of biodiversity can be observed worldwide, and its consequences are alarming. It is therefore crucial that nature be protected and, where possible, restored. A wide variety of project options is possible, yet given the limited availability of resources, selecting the most efficient measures is increasingly important. For this purpose, there is still a lack of information; as outlined in the next paragraph, this pertains in particular to information at different scales of projects.
Firstly, there is a lack of information on the concrete added value of biodiversity protection projects. Secondly, there is a lack of information on the actual impacts of such projects and on the costs and benefits associated with a project. Finally, there is a lack of information on the links between the design of a project, the associated framework conditions, and the perception of specific impacts. This thesis addresses this knowledge gap by providing more information at these three scales by means of three empirical studies on three different biodiversity protection projects, in order to help optimize future projects.
The first study “Assessing the trade-offs in more nature-friendly mosquito control in the Upper Rhine region” examines the added value of a more nature-friendly mosquito control in the Upper Rhine Valley of Germany using a contingent valuation method. Recent studies show that the widely used biocide Bti, which is used as the main mosquito control agent in many parts of the world, has more negative effects on nature than previously expected. However, it is not yet clear whether the population supports a more nature-friendly mosquito control, as such an adaptation could potentially lead to higher nuisance. This study attempts to answer this question by assessing the willingness to pay for an adapted mosquito control strategy that reduces the use of Bti, while maintaining nuisance protection within settlements. The results show that the majority of the surveyed population attaches a high value to a more nature-friendly mosquito control and is willing to accept a higher nuisance outside of the villages.
The second study “Inner city river restoration projects: the role of project components for acceptance” examines the acceptance of a river restoration project in Rhineland-Palatinate, Germany. Despite much effort, many rivers worldwide are still in poor condition. Therefore, a rapid implementation of river restoration projects is of great importance. In this context, acceptance by society plays a fundamental role; however, the factors determining such acceptance are still poorly understood. In particular, the complex interplay between the acceptance or rejection of specific project components and the acceptance of the overall project requires further exploration. This study addresses this knowledge gap by assessing the acceptance of the project, its various ecological and social components, and the perception of real and fictitious costs as well as the benefits of the components. Our findings demonstrate that while acceptance of the overall project is generally rather high, many respondents reject one or more of the project's components. Complementary social project components, like a playground, find less support than purely ecological components. Overall, our research shows that complementary components may increase or decrease acceptance of the overall project. We furthermore found that differences in the acceptance of the individual components depend on individual concerns, such as perceived flood risk, construction costs, expected noise and littering, as well as on the quality of communication, attachment to the site, and the age of the respondents.
The third study “What determines preferences for semi-natural habitats in agrarian landscapes? A choice-modelling approach across two countries using attributes characterizing vegetation” investigates people's aesthetic preferences for semi-natural habitats in agricultural landscapes. The EU Common Agricultural Policy promotes the introduction of woody and grassy semi-natural habitats (SNH) in agricultural landscapes. While the benefits of these structures in terms of regulating ecosystem services are already well understood, the effects of SNH on visual landscape quality are still not clear. This study investigates the factors determining people's visual preferences in the context of grassy and woody SNH elements in Swiss and Hungarian landscapes using picture-based choice experiments. The results suggest that respondents' choices strongly depend on specific vegetation characteristics that appear and disappear over the year. In particular, flowers as a source of colour, green vegetation, ordered structure, and the proportion of uncovered soil in the picture play an important role in respondents' aesthetic perceptions of the pictures.
The three empirical studies can help to make future projects in the study areas of biodiversity protection more efficient. While this thesis highlights the importance of exploring biodiversity protection projects at different scales, further analyses of the different scales of biodiversity protection projects are needed to provide a sound basis to develop guidance on identifying the most efficient biodiversity protection projects.
Fresh water resources like rivers and reservoirs are exposed to a drastically changing world. To safeguard them, these ecosystems need stronger protection in times of global change and population growth. In recent years, the exploitation pressure on drinking water reservoirs has increased steadily worldwide. Besides securing the demand for a safe drinking water supply, international laws, especially in Europe (EU Water Framework Directive), require minimizing the impact of dams on downstream rivers. In this study we investigate the potential of a smart withdrawal strategy at Grosse Dhuenn Reservoir to improve the temperature and discharge regime downstream without jeopardizing drinking water production. Our aim is to improve the existing withdrawal strategy for operating the reservoir in a sustainable way in terms of water quality and quantity. First, we set up and calibrated a 1D numerical model for Grosse Dhuenn Reservoir with the open-source community model “General Lake Model” (GLM) together with its water quality module, the “Aquatic Ecodynamics” library (AED2). The reservoir model reproduced water temperatures and hypolimnetic dissolved oxygen concentrations accurately over a 5-year period. Second, we extended the model source code with a selective withdrawal functionality (adaptive offtake) and added operational rules for realistic reservoir management. The model is now able to autonomously determine the best withdrawal height according to the temperature and flow requirements of the downstream river and the raw water quality objectives. Criteria for the determination of the withdrawal regime are selective withdrawal, the development of stratification, and the oxygen content in the deep hypolimnion. This functionality is not available in current reservoir models, where withdrawal heights are generally provided a priori to the model and kept fixed during the simulation.
Third, we ran scenario simulations identifying an improved reservoir withdrawal strategy to balance the demands of the downstream river and the raw water supply. To this end, we aimed at finding an optimal ratio for the parallel withdrawal of cold hypolimnetic water and warm epilimnetic or metalimnetic water in order to provide a pre-defined temperature in the downstream river. The reservoir model and the proposed withdrawal strategy provide a simple and efficient tool to optimize reservoir management from a multi-objective perspective for mastering future reservoir management challenges.
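The parallel-withdrawal idea reduces, at its core, to a mixing equation. The following back-of-the-envelope sketch (all temperatures are illustrative, and real operation must also respect discharge and raw water constraints) solves it for the warm-water fraction:

```python
# Mixing sketch: blend warm epilimnetic water (t_epi) with cold
# hypolimnetic water (t_hypo) so the blend hits a target downstream
# temperature: t_target = r*t_epi + (1 - r)*t_hypo, solved for r.
def epilimnetic_fraction(t_epi, t_hypo, t_target):
    if not min(t_hypo, t_epi) <= t_target <= max(t_hypo, t_epi):
        raise ValueError("target temperature not reachable by mixing")
    return (t_target - t_hypo) / (t_epi - t_hypo)

# E.g. 20 degC surface water, 6 degC deep water, 13 degC desired downstream:
r = epilimnetic_fraction(20.0, 6.0, 13.0)  # 0.5 -> withdraw half from each level
```

The adaptive offtake extension described above effectively re-solves this balance at each time step against the simulated stratification, instead of using fixed a-priori withdrawal heights.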
Many pharmaceuticals (e.g. antibiotics, contrast media, beta blockers) are excreted unmetabolized and enter wastewater treatment plants (WWTPs) through the domestic sewage system. Research has shown that many of them are not effectively removed by conventional wastewater treatment and are therefore detected in surface waters. Reverse osmosis (RO) is one of the most effective means for removing a wide range of micropollutants in water recycling. However, one significant disadvantage is the need to dispose of the resultant RO concentrate. Because micropollutants are present at elevated concentrations in the concentrate, direct disposal to surface water could be hazardous to aquatic organisms. As a consequence, further treatment of the concentrate is necessary. In this study, ozonation was investigated as a possible treatment option for RO concentrates. Concentrate samples were obtained from a RO-membrane system which uses municipal WWTP effluents as feed water to produce infiltration water for artificial groundwater recharge. In this study it could be shown that ozonation is efficient in the attenuation of selected pharmaceuticals, even in samples with high TOC levels (46 mg C/L). Tests with chlorinated and non-chlorinated WWTP effluent showed an increase of ozone stability, but a decrease of hydroxyl radical exposure in the samples after chlorination. This may shift the oxidation processes towards direct ozone reactions and favor the degradation of compounds with high apparent second-order rate constants. Additionally, it might inhibit the oxidation of compounds predominantly reacting with OH radicals. Ozone reaction kinetics were investigated for beta blockers (acebutolol, atenolol, metoprolol and propranolol), which are permanently present in WWTP effluents. Beta blockers have two moieties in common which are reactive towards ozone: a secondary amine group and an activated aromatic ring.
The secondary amine is responsible for a pH dependence of the direct ozone reaction rate, since only the deprotonated amine reacts very quickly. At pH 7, acebutolol, atenolol and metoprolol reacted with ozone with an apparent second-order rate constant of about 2000 M^-1 s^-1, whereas propranolol reacted at ~1.0 x 10^5 M^-1 s^-1. The rate constants for the reaction of the selected compounds with OH radicals were determined to be 0.5-1.0 x 10^10 M^-1 s^-1. Oxidation products (OPs) formed during ozonation of metoprolol and propranolol were identified via liquid chromatography (LC) tandem mass spectrometry. Ozonation led to a high number of OPs being formed. Experiments were carried out in MilliQ water at pH 3 and pH 8, both with and without the radical scavenger tertiary butanol (t-BuOH). This revealed the influence of pH and OH radical exposure on OP formation. The OH radical exposure was determined by adding the probe compound para-chlorobenzoic acid (pCBA). Metoprolol: To define the impacts of the protonated and non-protonated metoprolol species on OH radical formation, the measured pCBA attenuation was compared to modeled values obtained with a simplified kinetic model (Acuchem). A better agreement with the measured results was obtained when the model was based on a stoichiometric formation of OH radical precursors (O2^-) during the primary ozone reaction of metoprolol. However, for the reaction of a deprotonated molecule (attack at the aromatic ring), the formation of O2^- could be confirmed, but an assumed stoichiometric O2^- formation over-estimated the formation of OH radicals in the system. Analysis of ozonated raw wastewater and municipal WWTP effluent spiked with 10 μM metoprolol exhibited an OP formation pattern similar to that detected in the reaction system at pH 8 without added radical scavenger. This indicates a significant impact of OH radical exposure on the formation of OPs in real wastewater matrices.
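The pH dependence of the amine reaction can be sketched quantitatively: if only the deprotonated amine reacts, the apparent second-order rate constant is the species-specific constant scaled by the deprotonated fraction from the Henderson-Hasselbalch relation. The pKa and species-specific rate constant below are illustrative assumptions, not values from the study.

```python
# Sketch of a pH-dependent apparent rate constant for an amine that
# reacts with ozone only in its deprotonated form. The pKa (9.7) and
# species-specific constant (1.0e6 M^-1 s^-1) are assumed example values.
def apparent_rate_constant(k_deprotonated, pka, ph):
    fraction_deprotonated = 1.0 / (1.0 + 10 ** (pka - ph))  # Henderson-Hasselbalch
    return k_deprotonated * fraction_deprotonated

# At pH 7, ~0.2% of the molecules are deprotonated, so the apparent
# constant is orders of magnitude below the species-specific one.
k_app = apparent_rate_constant(k_deprotonated=1.0e6, pka=9.7, ph=7.0)
```

This scaling is why the apparent constants reported at pH 7 sit far below the intrinsic reactivity of the free amine, and why propranolol, whose naphthalene ring reacts regardless of protonation state, stands out.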
Propranolol: The primary ozonation product of propranolol (OP-291) was formed by an ozone attack of the naphthalene ring, which resulted in a ring opening and two aldehyde moieties being formed. OP-291 was further oxidized to OP-307, presumably by an OH radical attack, which was then further oxidized to OP-281. Reaction pathways via ozone as well as OH radicals were proposed and confirmed by the chemical structures identified with MS2 and MS3 data. It can be concluded that ozonation of WWTP effluent results in the formation of a high number of OPs with an elevated toxic potential (i.e. formation of aldehydes).
In a world where language defines the boundaries of one's understanding, the words of the Austrian philosopher Ludwig Wittgenstein resonate profoundly. Wittgenstein's assertion that "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt" ("the limits of my language mean the limits of my world"; Wittgenstein 2016: v. 5.6) underscores the vital role of language in shaping our perceptions. Today, in a globalized and interconnected society, fluency in foreign languages is indispensable for individual success. Education must break down these linguistic barriers, and one promising approach is the integration of foreign languages into content subjects.
Teaching content subjects in a foreign language, a practice known as Content and Language Integrated Learning (CLIL), not only enhances language skills but also cultivates cognitive abilities and intercultural competence. This approach expands horizons and aligns with the core principles of European education (Leaton Gray, Scott & Mehisto 2018: 50). The Kultusministerkonferenz (KMK) recognizes the benefits of CLIL and encourages its implementation in German schools (cf. KMK 2013a).
With the rising popularity of CLIL, textbooks in foreign languages have become widely available, simplifying teaching. However, the appropriateness of the language used in these materials remains an unanswered question. If textbooks impose excessive linguistic demands, they may inadvertently limit students' development and contradict the goal of CLIL.
This thesis focuses on addressing this issue by systematically analyzing language requirements in CLIL teaching materials, emphasizing receptive and productive skills in various subjects based on the Common European Framework of Reference. The aim is to identify a sequence of subjects that facilitates students' language skill development throughout their school years. Such a sequence would enable teachers to harness the full potential of CLIL, fostering a bidirectional approach where content subjects facilitate language learning.
While research on CLIL is extensive, studies on language requirements for bilingual students are limited. This thesis seeks to bridge this gap by presenting findings for History, Geography, Biology, and Mathematics, allowing for a comprehensive understanding of language demands. This research endeavors to enrich the field of bilingual education and CLIL, ultimately benefiting the academic success of students in an interconnected world.
Small headwater streams comprise most of the total channel length and catchment area in fluvial networks. They are tightly connected to their catchments and thus are highly vulnerable to changes in catchment hydrologic budgets and land use. Although these small, often fishless streams are of little economic interest, they are vital for the ecological and chemical state of larger water bodies. Although numerous studies investigate the impact of various anthropogenic stressors or altered catchment conditions, we lack an in-depth understanding of the natural conditions and processes in headwater streams. This natural state, however, largely determines how a headwater stream responds to anthropogenic or climatic changes. One of the major threats to aquatic ecosystems is the excessive anthropogenic input of nutrients leading to eutrophication. Nutrients exert a bottom-up effect in the food web, foremost affecting primary producers and their consumers, i.e. periphyton and benthic grazers in headwater streams. The periphyton-grazer link is the main path of autochthonous (in-stream) production into the stream food web, and the strength of this link largely determines the effectiveness of this pathway. Therefore, this thesis aims to elucidate important biological processes, with an explicit focus on periphyton-grazer interactions. I assessed different aspects of periphyton-grazer interactions using laboratory experiments to solve methodological problems and a field study to compare the benthic communities of three morphologically similar, phosphorus-limited, near-natural headwater streams. With the results of the laboratory experiments, I was able to show that periphyton RNA/DNA ratios can be used as a proxy for periphyton growth rates in controlled experiments and that the fatty acid composition of grazing mayfly nymphs responds to changes in dietary fatty acids after only two weeks.
The use of the RNA/DNA ratio as a proxy for periphyton growth rate allows a comparison of these growth rates even in simple experimental set-ups and thereby permits the inclusion of this important process in ecotoxicological or ecological experiments. The observed fast turnover rates of fatty acids in consumer tissues show that even short-term changes in available primary producers can alter the fatty acid composition of primary consumers with important implications for the supply of higher trophic levels with physiologically important polyunsaturated fatty acids. With the results of the field study, I revealed gaps in the understanding of the linkages between catchment and in-stream phosphorus availability under near-natural conditions and demonstrated that seemingly comparable headwater streams had significantly different benthic communities. These differences most likely affect stream responses to environmental changes.
Vertebrate biodiversity is rapidly decreasing worldwide, with amphibians being the most endangered vertebrate group. In the EU, 21 of 89 amphibian species are recognized as endangered. The intensively used European agricultural landscape is one of the major causes of these declines. As agricultural land represents an essential habitat for amphibians, exposure to pesticides can have adverse effects on amphibian populations. Currently, the European risk assessment of pesticides for vertebrates relies on fish for aquatic vertebrate toxicity and on birds and mammals for terrestrial vertebrate toxicity, but does not address the unique characteristics of amphibians. Therefore, the overall goal of this thesis was to investigate the ecotoxicological effects of pesticides on Central European anuran amphibians. For this, effects on aquatic and terrestrial amphibian life stages as well as on reproduction were investigated. Finally, in anticipation of a risk assessment of pesticides for amphibians, this thesis discusses potential regulatory risk assessment approaches.
For the investigated pesticides and amphibian species, it was observed that the acute aquatic toxicity of pesticides can be addressed using the existing aquatic risk assessment approach based on fish toxicity data. However, lethal as well as sublethal effects were observed in terrestrial juveniles after dermal exposure to environmentally realistic pesticide concentrations, which cannot be covered by any existing risk assessment approach. Therefore, pesticides should also be evaluated for potential terrestrial toxicity using risk assessment tools before approval. Additionally, effects of co-formulants and adjuvants of pesticides need specific consideration in a future risk assessment, as they can increase the toxicity of pesticides to aquatic and terrestrial amphibian stages. Chronic combined aquatic and terrestrial exposure was shown to affect amphibian reproduction. Currently, such effects cannot be captured by the existing risk assessment, as data from field scenarios analysing effects of multiple pesticides on amphibian reproduction are too rare to allow comparison with data on other terrestrial vertebrates such as birds and mammals. In light of these findings, future research should address not only acute and lethal effects, but also chronic and sublethal effects at the population level. As pesticide exposure can adversely affect amphibian populations, pesticide applications should be considered even more carefully to avoid further amphibian declines. Overall, this thesis emphasizes the urgent need for a protective pesticide risk assessment for amphibians to preserve and promote stable amphibian populations in agricultural landscapes.
The adoption of the EU Water Framework Directive (WFD) in 2000 marked the beginning of a new era of European water policy. However, more than a decade later, the majority of European rivers are still failing to meet one of the main objectives of the WFD: the good ecological status. Pesticides are a major stressor for stream ecosystems. This PhD thesis emphasises the need for WFD managers to consider all main agricultural pesticide sources and influencing landscape parameters when setting up River Basin Management Plans and Programmes of Measures. The findings and recommendations of this thesis can help to successfully tackle the risk of pesticide contamination to achieve the WFD objectives.
A total of 663 sites situated in the German federal states of Saxony, Saxony-Anhalt, Thuringia and Hesse were studied (Chapters 3 and 4). In addition to an analysis of the macroinvertebrate data of the governmental WFD monitoring network, a detailed GIS analysis of the main agricultural pesticide sources (arable land and garden allotments as well as wastewater treatment plants (WWTPs)) and landscape elements (riparian buffer strips and forested upstream reaches) was conducted. Based on the results, a screening approach was developed that allows an initial rapid and cost-effective identification of those sites that are potentially affected by pesticide contamination. Using the trait-based bioindicator SPEARpesticides, long-term insecticidal effects of the WWTP effluents on the structure of the macroinvertebrate community were identified at least 1.5 km (in some cases even 3 km) downstream of the WWTPs. The results of the German Saprobic Index revealed that WWTPs can still be important sources of oxygen-depleting substances. Furthermore, the results indicate that forested upstream reaches and riparian buffer strips at least 5 m in width can be appropriate measures for mitigating the effects and exposure of pesticides.
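The trait-based bioindicator SPEARpesticides is, in essence, a log-abundance-weighted fraction of taxa classified as "at risk" from pesticide exposure. The following is a rough illustrative sketch; the at-risk classification scheme and the exact formula are assumed here (after Liess & von der Ohe 2005) and are not taken from this thesis:

```python
import math

def spear_index(abundances, at_risk):
    """Sketch of a SPEAR-type index: the log(x+1)-abundance-weighted
    percentage of taxa classified as 'at risk' from pesticides.
    'abundances' are taxon abundances; 'at_risk' are booleans from an
    (assumed) trait-based classification of each taxon."""
    num = sum(math.log(x + 1) for x, r in zip(abundances, at_risk) if r)
    den = sum(math.log(x + 1) for x in abundances)
    return 100.0 * num / den
```

A community dominated by pesticide-sensitive ("at risk") taxa scores near 100; one where such taxa have been lost scores near 0, which is the signal used to detect insecticidal long-term effects downstream of WWTPs.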
There are concerns that the future expansion of energy crop cultivation will lead to an increased pesticide contamination of ecosystems in agricultural landscapes. Therefore, the potential of energy crops for pesticide contamination was examined based on an analysis of the development of energy crop cultivation in Germany and a literature search on perennial energy crops (Chapter 5). The results indicate that the future large-scale expansion of energy crop cultivation will not necessarily cause an increase or decrease in the amounts of pesticides that are released into the environment. The potential effects will depend on the future design of the agricultural systems. Instead of creating energy monocultures, annual energy crops should be integrated into the existing food production systems. Financial incentives and further education are needed to encourage the use of sustainable crop rotations, innovative cropping systems and perennial energy crops, which may contribute to crop diversity and generate lower pesticide demands than do intensive farming systems.
This thesis examined two specific cases of point and diffuse pollution, pesticides and salinisation, which are two of the most concerning stressors of Germany’s freshwater bodies. The findings of this thesis were organized into three major components, the first of which presents the contribution of WWTPs to pesticide toxicity (Chapter 2). The second component focuses on current and future background salt ion concentrations under climate change in the absence of anthropogenic activities (Chapter 3). Finally, the third major component shows the response of invertebrate communities, in terms of species turnover, to levels of salinity change, considered a proxy for human-driven salinisation (Chapter 4).
The physical-biological interactions that affect the temporal variability of benthic oxygen fluxes were investigated to gain an improved understanding of the factors that control these processes. For the first time, this study resolves benthic diffusive boundary layer (DBL) dynamics using the newly developed lifetime-based laser-induced fluorescence (τLIF) oxygen imaging system, which enables study of the role of small-scale fluid mechanics generated by benthic organism activity and hence a more detailed analysis of oxygen transport mechanisms across the sediment-water interface (SWI).
The net benthic oxygen flux across the sediment-water interface is controlled by sediment oxygen uptake and oxygen transport. While oxygen transport is largely influenced by turbulence driven by large-scale flows, sediment oxygen uptake is mainly affected by oxygen production and the biological and chemical oxygen-consuming degradation of organic matter. Both processes can be enhanced by the presence of fauna and are intimately coupled. Fauna can influence the benthic oxygen flux in two ways: by modulating the availability of oxygen, which enhances sediment oxygen uptake, and by enhancing the transport of oxygen.
In-situ measurements and a series of laboratory measurements were conducted to estimate the short-term and seasonal variability of benthic fluxes, including the effects of burrow ventilation by tube-dwelling animals, using eddy correlation (EC) and τLIF oxygen imaging techniques, respectively.
The in-situ benthic oxygen fluxes showed high variability at hourly and seasonal timescales; statistical analysis indicated that current velocity and water depth were the most significant water-side predictors of benthic oxygen flux, co-varying with discharge, temperature and oxygen concentration. The range of variability of the seasonal fluxes corresponded to the friction velocities driven by large-scale flows. Application of a simplified analytical model that couples the effect of hydrodynamic forcing on the diffusive boundary layer with a temperature-dependent oxygen consumption rate within the sediment showed that friction velocity and temperature cause similar variability of the steady-state benthic oxygen flux.
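A model of this kind can be sketched as follows. All constants, the Batchelor-type scaling of the diffusive boundary layer thickness with friction velocity, and the Q10 temperature dependence of sediment consumption are illustrative assumptions, not the parameterization actually used in the study:

```python
import math

# Illustrative constants (assumed, order-of-magnitude values for O2 in water)
NU = 1.0e-6      # kinematic viscosity [m^2 s^-1]
D_O2 = 2.0e-9    # molecular diffusivity of O2 [m^2 s^-1]
C_BULK = 0.28    # bulk O2 concentration [mol m^-3] (~9 mg/L)
R_REF = 2.0e-5   # reference zero-order O2 consumption in sediment [mol m^-3 s^-1]
Q10 = 2.0        # assumed temperature sensitivity of consumption

def dbl_thickness(u_star, a=10.0):
    """Batchelor-type scaling: delta ~ a * (nu/u*) * Sc^(-1/3)."""
    sc = NU / D_O2                       # Schmidt number
    return a * (NU / u_star) * sc ** (-1.0 / 3.0)

def benthic_o2_flux(u_star, temp_c):
    """Steady-state flux from matching diffusive supply k*(Cw - C0)
    across the DBL to sediment uptake sqrt(2*D*R*C0) for zero-order
    consumption R; C0 is the O2 concentration at the interface."""
    k = D_O2 / dbl_thickness(u_star)     # mass-transfer coefficient [m s^-1]
    r = R_REF * Q10 ** ((temp_c - 20.0) / 10.0)
    # Matching condition squared gives a quadratic in x = C0:
    #   k^2 (Cw - x)^2 = 2 D r x
    b = 2.0 * k**2 * C_BULK + 2.0 * D_O2 * r
    disc = b**2 - 4.0 * k**4 * C_BULK**2
    c0 = (b - math.sqrt(disc)) / (2.0 * k**2)  # root with c0 <= Cw
    return k * (C_BULK - c0)             # flux [mol m^-2 s^-1]
```

The two knobs reproduce the qualitative finding: a higher friction velocity thins the boundary layer and raises the flux, and a higher temperature raises sediment consumption and likewise raises the steady-state flux.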
The application of the τLIF oxygen imaging system in bioturbation experiments yielded new insights into oxygen transport mechanisms across the sediment-water interface. Distinct oxygen structures above burrow openings were revealed, which were associated with burrow ventilation. The DBL was degraded in the presence of burrow ventilation. Advective transport generated by the energetic plumes released at burrow outlets was the dominant transport mechanism. The contribution of diffusive flux to the total estimated flux decreased with increasing larval density. For a range of larval densities commonly observed in ponds and lakes, sediment oxygen uptake rates increased up to 2.5-fold in the presence of tube-dwelling animals, and the oxygen transport rate exceeded chironomid respiration by up to a factor of 4.
The coupled physical-biological factors affecting net benthic oxygen flux can be represented by temperature, a prominent factor that accounts for both oxygen transport and sediment oxygen uptake. Low oxygen transport by flow coincided with high summer temperatures and was amplified by a reduction of benthic population density and by pupation; it can, however, be offset by increased ventilation activity. In contrast, low temperatures coincided with high oxygen concentrations, an abundance of larvae and higher flow, which was offset by less burrow ventilation activity. Investigation of the effect of hydrodynamics on oxygen transport alone suggested that the expected increase of benthic oxygen flux under global warming can be offset by a reduction in flow velocity, which could ultimately lead to increasing carbon burial rates and a growing importance of anaerobic mineralization pathways, with increasing emission rates of methane.
This study suggests that biologically induced benthic oxygen flux contributes significantly to the total flux, in addition to the physical transport driven by the large-scale flow fields that generate bottom-boundary-layer turbulence.
The implementation of physiological indicators reflecting the response of organisms to changes in their environment is assumed to provide potential benefits for ecological studies. By analysing the physiological condition of organisms in freshwater ecological studies rather than their ultimate effects, physiological indicators can contribute to a faster assessment of effects than traditional ecological indicators, such as the evaluation of the benthic community structure or the determination of the reproductive success of organisms. This can increase the effectiveness of environmental health assessment and experimental ecology. In this respect, the thesis focuses on physiological measures characterizing the energetic condition and energy consumption (the concentration of energy storage compounds, the adenylate energy charge, the energy consumption in vivo) as well as the individual growth (RNA:DNA ratio) of organisms. Although these sub-individual indicators are commonly applied in marine ecology and more recently in ecotoxicology, they have rarely been applied in freshwater ecology to date. With respect to an increased use of physiological indicators in freshwater ecological studies, the objectives of the present thesis are twofold. First, it highlights the potential of assessing individual fitness by means of physiological indicators in freshwater ecological studies. For that reason, Chapter 2 provides the basic assumptions as well as the theoretical and methodological fundamentals necessary for the application of physiological indicators within freshwater ecology and, furthermore, points out their applicability through several case studies. As a second objective, the thesis addresses selected ecophysiological aspects of native and non-native freshwater amphipods, which are considered suitable candidates for the determination of physiological indicators in ecological studies due to their function as keystone species within aquatic habitats.
The studies presented in Chapters 3−5 of the thesis provide information on (i) species- and sex-specific seasonal variations in the energetic condition of natural Gammarus populations (G. fossarum, G. pulex), (ii) differences in metabolic activity and behaviour between different amphipod species (G. fossarum, G. roeselii and D. villosus), as well as (iii) the direct effects of ambient ammonia on the physiology and behaviour of D. villosus. The fundamental conclusions drawn from the conducted field and laboratory studies, as well as their relevance and general implications for the application of physiological indicators in freshwater ecological research, are discussed in Chapter 6.
The Nandi forests (South and North Nandi forests) are situated in the Rift Valley Province of Kenya, very close to Kakamega forest. Historical records show that the Kakamega and Nandi forests were connected, forming one big U-shaped forest block, until the early 1900s. Due to human pressure, the former single forest block has since been split into three separate forests. Although they were once one forest, information on the Nandi forests is very scarce compared to that on Kakamega forest. The species composition and diversity as well as the plant communities and population structure of the Nandi forests have not been studied. No information is available on the floristic similarity of the South and North Nandi forests. Furthermore, the natural regeneration potential (seedling bank) of these forests is not well studied and documented. Hence this study aims to fill these gaps.
In this study, a total of 76 quadrats (49 from South Nandi and 27 from North Nandi) were used to collect data. In the South Nandi forest, 27 of the quadrats were laid out in the better-preserved part of the forest (at Kobujoi) and the remaining 22 in the heavily disturbed part (Bonjoge). The quadrats were arranged on transects of 1 to 1.5 km running parallel to the slope. The distance between quadrats was 100 m and transects were 500 m apart. The main quadrats measured 400 m2 (20 x 20 m) and each contained five small plots (3 x 3 m) distributed over the four corners and the centre. All woody plants (climbers, shrubs and trees) taller than one metre and with a diameter at breast height (dbh) greater than two centimetres were measured and recorded. Seedlings and herbaceous plants were sampled in the smaller plots. Individual plants were identified to species level; when field identification was not possible, voucher specimens were prepared and later identified at the East African Herbarium, National Museums of Kenya, Nairobi. Clustering and ordination were performed using the ecological software packages PC-ORD and CANOCO, respectively, based on species abundance data. The Shannon diversity index and evenness were computed using PC-ORD, while similarity indices, Fisher's alpha, rarefaction and species richness estimation (nonparametric species richness estimators) were conducted using EstimateS. Indicator species analysis was undertaken using PC-ORD. Basal area and height class distributions at forest or site level (Bonjoge and Kobujoi) and diameter (dbh) class distributions for selected tree species were analysed to evaluate population structure.
Furthermore, the importance value (IV) of woody plant species was calculated. SPSS version 16 was used to undertake both parametric (for normally distributed data) and nonparametric (otherwise) comparisons of means, as well as correlation and regression analyses.
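The diversity and similarity measures named above are standard indices that reduce to a few lines each. As a minimal illustrative sketch (the study itself used PC-ORD and EstimateS, not custom code):

```python
import math

def shannon_diversity(abundances):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero abundances."""
    total = sum(abundances)
    ps = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in ps)

def evenness(abundances):
    """Pielou's evenness J' = H' / ln(S), S = number of species present."""
    s = sum(1 for n in abundances if n > 0)
    return shannon_diversity(abundances) / math.log(s)

def jaccard(site_a, site_b):
    """Classic Jaccard similarity of two species lists (incidence-based)."""
    a, b = set(site_a), set(site_b)
    return len(a & b) / len(a | b)

def sorensen(site_a, site_b):
    """Classic Sörensen similarity: 2*shared / (S_a + S_b)."""
    a, b = set(site_a), set(site_b)
    return 2 * len(a & b) / (len(a) + len(b))
```

A perfectly even community of S species gives H' = ln(S) and J' = 1; the low Jaccard and Sörensen values reported here reflect the small shared species set of the two forests relative to their combined species pool.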
In this study, a total of 321 vascular plant species comprising 92 families and 243 genera were identified in the Nandi forests (both South and North Nandi). In South Nandi forest, 253 plant species from 82 families and 201 genera were recorded, while in North Nandi 181 species comprising 67 families and 155 genera were recorded. The second-order jackknife estimator gave the highest species richness estimates for South and North Nandi forests, i.e. 284 and 209, respectively. In the highly disturbed and less disturbed parts of South Nandi forest, 138 and 172 vascular plant species were recorded, respectively. Asteraceae, Rubiaceae and Euphorbiaceae are the three most species-rich families of the Nandi forests. In terms of the different diversity measures (i.e. alpha and beta diversity, Fisher's alpha, Shannon diversity and evenness indices), South Nandi is more diverse than North Nandi forest. The Sörensen and Jaccard (classic) indices as well as their respective abundance-based counterparts showed a low species similarity between the South and North Nandi forests. The cluster analysis resulted in three different plant communities, a result supported by the ordination.
Both South and North Nandi forest show an inverted-J height class distribution, indicating that the larger proportion of woody plant individuals is found in the lower height classes. A similar pattern was observed when the diameters of all woody plants were considered together. However, different diameter class distributions (seven types) were identified when selected tree species were analysed separately. The basal area of South Nandi forest is significantly lower than that of North Nandi forest (Mann-Whitney U = 358, p < 0.001). Similarly, Bonjoge has a significantly lower basal area (t-value = 3.77, p < 0.01) than Kobujoi. The number of woody plant seedlings in South Nandi forest is significantly higher than in North Nandi (Mann-Whitney U = 362.5, p < 0.001). Likewise, Bonjoge has a significantly smaller number of seedlings than Kobujoi (t-value = 4.24, p < 0.001). Most species in both forests are able to resprout from stumps after physical damage, which aids the regeneration of the forests in addition to the seedling banks. This study helps to fill some of the information gaps about the Nandi forests, especially regarding floristic composition, population structure, natural regeneration and human impacts on this ecosystem.
In the context of this thesis, the three biodegradable polymers polylactic acid (PLA), polyhydroxybutyrate (PHB) and polybutylene adipate terephthalate (PBAT) were coated with hydrogenated amorphous carbon layers (a-C:H). A direct alignment of the sample surface to the source was chosen, resulting in the deposition of a robust, r-type a-C:H. At the same time, a partly covered silicon wafer was placed in the coating chamber together with the polymers and coated as well. Silicon is a hard material and serves as a reference for the applied layers: due to its hardness, no mixed phase occurs between the substrate and the applied layer (no interlayer formation). In addition, the thickness of the applied layer can be estimated with the help of the silicon sample.
The layers were deposited by radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD). Before coating, the samples were pre-treated with an oxygen plasma. Acetylene was used as the precursor gas for the plasma coating. Coatings with thicknesses increasing in 50 nm steps from 0 to 500 nm were realised.
The surface analysis was performed using several techniques: the morphology and layer stability were analyzed with scanning electron microscopy (SEM). The wettability was determined by the contact angle technique; in addition, the contact angles provide macroscopic information about the bond types of the carbon atoms present on the surface. For microscopic analysis of the chemical composition of the sample and layer surfaces, diffuse reflectance Fourier transform infrared spectroscopy (DRIFT) as well as synchrotron-based X-ray photoelectron spectroscopy (XPS) and near-edge X-ray absorption fine structure spectroscopy (NEXAFS) were used.
All coated polymers showed several cases of layer failure due to internal stress in the layers. However, these occurred at different layer thicknesses, indicating a substrate effect. In addition, the SEM images show that on PLA and PHB the applied layer can form wave-like buckles, so-called cord buckling. This does not occur on PBAT, which indicates a possibly better bonding of the layer to this polymer. The chemical analyses of the layer surfaces show, for each material, a layer-thickness-dependent ratio of sp² to sp³ carbon bonds, which alternately dominate the layer. In all polymers, the sp³ bond initially dominates, but the sp² to sp³ ratio changes at different intervals. Although the polymers were coated in the same plasma, i.e. the respective layer thicknesses (50 nm, 100 nm, ...) were applied in the same plasma process, the respective systems differed considerably from each other. A substrate effect is therefore demonstrably present. In addition, it was found that a change in the dominant bond from sp³ to sp² is an indication of an upcoming failure of the a-C:H layer deposited on the polymer. In the case of PLA, layer failure occurs immediately with the change to sp² as the dominant bond; in the cases of PHB and PBAT, it occurs with different delays at increased layer thicknesses (for PHB at 100 nm, for PBAT at approx. 200 nm).
Overall, this thesis shows that there is a substrate effect in the coating of the biodegradable polymers PLA, PHB and PBAT, since, despite identical coating processes, the chemical composition of the surface differs between substrates at the respective layer thicknesses. In addition, layer failure can be predicted by analyzing the dominant bond type.
Agriculture requires a sustainable intensification to feed the growing world population without exacerbating soil degradation and threatening soil quality. Globally, plastic mulching (PM) is increasingly used to improve crop growth and yields and consequently agronomic productivity. However, recent literature has also reported critical aspects of PM for soil quality and has shown contradictory outcomes. This might result from the numerous applications of PM in different climates across various crops, soils and agricultural techniques. Thus, a closer look is necessary at how PM influences soil processes under certain climate and cultivation conditions to obtain a comprehensive understanding of its effects, which is important for evaluating PM in terms of sustainable agriculture.
The aim of this PhD thesis was to understand how multiannual PM influences soil properties and processes under the temperate, humid Central European cultivation conditions and to evaluate the resulting consequences for soil quality. I designed a three-year field study to investigate the influence of PM (black polyethylene, 50 μm) on microclimate, structural stability, soil organic matter (SOM) and the concentrations of selected fungicides and mycotoxins in three soil layers (0–10, 10–30 and 30–60 cm) compared to straw mulching (SM). Both mulching types were applied in a drip-irrigated ridge-furrow system in strawberry cultivation.
PM shifted the soil microclimate to higher soil temperatures and lower soil moisture. The higher soil temperature thus seems to be the key factor for the increased crop growth and yields under the present humid climate. The reduced soil moisture under PM indicated that the impeded rainfall infiltration had a stronger effect on the water balance than the reduced evaporation. This indicates an inefficient rainwater use, in contrast to arid climates. PM changed the water cycling in the ridges from downward-directed water flows to lateral water flows from furrows to ridges. This reduced nitrogen leaching in the topsoil (0–10 cm) in the strawberry establishment period. The plastic mulches prevented aggregate breakdown due to rapid soil wetting and excess water during rainfall and thus maintained a loose and stable soil structure in the surface soil, which prevented soil compaction and made the soil less prone to erosion. PM changed carbon fluxes and transformation so that a larger total and more stable SOM pool was observed. Thus, the higher belowground biomass productivity under PM compensated for the impeded aboveground biomass input and the temperature-induced SOM decomposition. However, SM increased the labile and total SOM in the topsoil after the first experimental year and promoted microbial growth due to the aboveground biomass incorporation. PM reduced fungicide entry into the soil compared to SM and consequently reduced both the loss of fungal biomass and the biosynthesis of the mycotoxin deoxynivalenol. The modified microclimate under PM did not increase mycotoxin occurrence. In this context, PM poses no risk of increased soil contamination impairing soil quality. This PhD thesis demonstrated that the PM effects on soil can vary with time, season and soil depth, which emphasizes the importance of including soil depth and time in future studies.
Compared to semiarid and arid regions, the PM effects found in this PhD thesis were small, absent or differed in direction. I attribute this to the fact that PM under a humid climate reduced instead of increased soil moisture and that SM, owing to the straw and the strawberry canopy, had a 'covering effect' similar to PM. Thus, generalizing the PM effects on soil across different climates seems hardly possible, as they differ in type and extent depending on climate. A differentiated consideration is hence necessary to evaluate the PM effects on soil quality. I conclude that PM under a temperate, humid climate might contribute to reducing soil degradation (e.g., SOM depletion, erosion, nutrient leaching, soil compaction and soil contamination), which sustains soil quality and helps to enable a sustainable agricultural intensification. However, further research is necessary (1) to support my findings on a larger scale, over longer time periods and across various soil and crop types, (2) to address remaining open questions and (3) to develop optimizations to overcome the critical aspects of PM (e.g. macro- and microplastic waste in soil, mulch disposal).
Amphibian populations are declining worldwide for multiple reasons, such as habitat destruction and climate change. An example of an endangered European amphibian is the yellow-bellied toad Bombina variegata. Populations have been declining for decades, particularly at the northern and western range margins. One of the extant northern range centres is the Westerwald region in Rhineland-Palatinate, Germany. To implement informed conservation activities for this threatened species, knowledge of its life-history strategy is crucial. This study therefore focused on different developmental stages to test predictions of life-history theory. It addressed (1) developmental, (2) demographic and (3) genetic issues of Bombina variegata as a model organism: (1) Carry-over effects from the larval environment to terrestrial stages and the associated vulnerability to predators were investigated using mesocosm approaches, fitness tests and predation trials. (2) The dynamics and demography of B. variegata populations were studied applying capture-mark-recapture analysis and skeletochronology. The study was complemented by (3) an analysis of genetic diversity and structuring of B. variegata populations using 10 microsatellite loci. In order to reveal general patterns and characteristics among B. variegata populations, the study focused on three geographical scales: local (i.e. a former military training area), regional (i.e. the Westerwald region) and continental (i.e. the geographical range of B. variegata). The study revealed carry-over effects of the larval environment on metamorph phenotype and behaviour, causing variation in fitness in the early terrestrial stage of B. variegata. Metamorph size and condition are crucial factors for survival, as small-sized individuals were particularly prone to predator attacks. Yellow-bellied toads show a remarkable fast-slow continuum of the life-history trait longevity.
A population's position within this continuum may be determined by local environmental stochasticity, i.e. an extrinsic source of variation, and by the efficiency of chemical antipredator protection, i.e. an intrinsic source of variation. Extreme longevity seems to be an exception in B. variegata; senescence was absent in this study. Weather variability affected reproductive success and thus population dynamics. The dispersal potential was low, and short-term fragmentation of populations caused significant genetic differentiation at the local scale. Long-term isolation resulted in increased genetic distance at the regional scale. At the continental scale, populations inhabiting the marginal regions were deeply structured, with reduced allelic richness. As a consequence of environmental changes, short-lived and isolated B. variegata populations at the range margin may face an increased risk of extinction. Conservation measures should thus improve the connectivity among local populations and reinforce annual reproductive success. Further research on the intraspecific variation in B. variegata skin toxins is required to reveal potential effects on palatability and thus longevity.
Population genetic structure in European Hyalodaphnia species: Monopolization versus gene flow
(2012)
Cyclic parthenogens display an alternation of asexual and sexual reproduction, which has consequences for the genetic structure of these organisms. The clonal diversity of cyclic parthenogenetic zooplankton populations is influenced by the size of the dormant egg bank, i.e., the amount of sexually produced dormant eggs that have accumulated in the sediment, as these dormant eggs contribute new genetic variants to the populations. Further, clonal diversity is impacted by clonal erosion over time, which reduces the number of different clones through stochastic and selective processes. Although freshwater invertebrates are good dispersers through their dormant stages, the influence of gene flow is assumed to be negligible, as the local population successfully monopolizes the available resources. As these populations reach carrying capacity quickly due to asexual reproduction, the first colonizing individuals are able to establish successfully in the habitat, resulting in a priority effect which hinders the invasion of new genotypes. Due to clonal selection and sexual reproduction, a population will locally adapt over time and will establish a dormant egg bank which facilitates fast re-colonization after a hostile period. This thesis evaluates the processes altering the population genetic structure of cyclic parthenogenetic zooplankton, with a special focus on the concept of monopolization and the counteracting effects of gene flow, using large-lake Daphnia species. Thirty-two variable microsatellite DNA markers were developed, and a subset of twelve markers was evaluated regarding their suitability for species assignment and hybrid class detection. With this marker set and an additional mitochondrial DNA marker, forty-four natural European populations of the species D. cucullata, D. galeata and D. longispina were studied. In D. galeata, most populations were characterized by low clonal diversities, which suggests a strong influence of clonal erosion over the growing season and a low contribution from the dormant egg bank. Further, recent expansions as well as gene flow were detected, probably caused by the anthropogenic alteration of freshwater habitats, in particular the eutrophication of many European lakes. D. longispina and D. cucullata revealed a different genetic structure compared to D. galeata, with high genetic differentiation among populations. This indicates low levels of effective gene flow, which is in line with the predictions of monopolization. Further, high clonal diversities were found in populations of the two taxa, suggesting a high contribution from the dormant egg bank, while clonal erosion was often not detectable. In D. longispina, mitochondrial data revealed an ancient expansion which was probably initiated by the formation of glacial lakes after the last ice age.
In addition, in D. longispina not only clonal diversity but also genetic diversity was high, indicating that during the build-up of the studied populations the influence of gene flow was probably high. To better understand the processes acting on early populations, the population build-up was studied experimentally with regard to the temporal advantage of clones during invasion succession; the experiments revealed that priority effects shape the population structure of Daphnia species. However, in certain cases highly superior clones drove inferior clones to extinction, independent of the temporal advantage the single clones had.
This clearly shows that not only the timing of succession is important but also competitive strength. In conclusion, the results obtained show that the population genetic structure of cyclic parthenogenetic zooplankton species is impacted by various processes. In addition to earlier studies, which mainly focus on local adaptation, clonal erosion and the size of the dormant egg bank to understand population genetic structure, this thesis shows that gene flow may be effective as well. During population build-up, the advantage of early arriving individuals does not necessarily predict the outcome of population assembly, as additional genotypes may contribute to the population. Finally, the genetic structure of established populations may be severely impacted by effective gene flow if severe environmental changes alter the habitat of the locally adapted population.
Natural pest control and pollination are important ecosystem services for agriculture. They can be supported by organic farming and by seminatural habitats at the local and landscape scale.
The potential of seminatural habitats to support predatory flies (chapters 2 and 3) and bees (chapter 7) was investigated at the local and landscape scale. Predatory flies were more abundant in woody habitats and positively related to landscape complexity. The diversity and abundance of honey bees and wild bees were positively related to the supply of flowers offered in the seminatural habitats.
The influence of organic farming, adjacent seminatural habitats and landscape complexity on pest control (chapter 4) and pollination (chapter 6) was investigated in 18 pumpkin fields. Organic farming had no strong effect on either the pest control or the pollination of pumpkin.
Pest control is best supported at the local scale by the flower abundance in the adjacent habitat. The flower supply positively affected the density of natural enemies and tended to reduce aphid densities in pumpkin fields.
Pumpkin provides a striking example of the dominant role of wild pollinators for pollination success, because bumble bees are the key pollinators of pumpkin in Germany despite the higher visitation frequency of honey bees. Pollination is best supported by landscape complexity: bumble bee visits, and as a result pollen delivery in pumpkin, were negatively related to the dominance of agricultural land in the surrounding landscape.
The influence of aphid density (chapter 8) and pollination (chapter 5) on pumpkin yield was evaluated. Pumpkin yields were not affected by aphid densities observed in the pumpkin fields and not limited by pollination at the current levels of bee visitation.
In conclusion, seminatural habitats that provide diverse, continuous floral resources are especially important for natural enemies and pollinators. A sufficient proportion of different seminatural habitat types in agricultural landscapes should be maintained and restored. In this way, natural enemies such as predatory flies and wild pollinators such as bumble bees, together with the pest control and pollination they provide, can be supported.
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and significant driver of employment. Yet, the literature suggests that creative workers are one of the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success is believed to be the byproduct of individual ability and commitment, rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods.
By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequality among cohorts of scientists that started their careers at different points in time. Furthermore, structural inequality in the music industry is assessed by analyzing the social and cultural relations that emerge from live performances and music releases. The findings hint toward the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality in creative careers, such as the Matthew Effect and the Hipster Paradox. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
Studies on the toxicity of chemical mixtures find that components at levels below no-observed-effect concentrations (NOECs) may cause toxicity resulting from the combined effects of the mixed chemicals. However, chemical risk assessment frequently focuses on individual substances, although most living organisms are substantially exposed to chemical mixtures rather than single substances. Under the concept of additive toxicity, the concentration addition (CA) and independent action (IA) models are often applied to predict the mixture toxicity of similarly and dissimilarly acting chemicals, respectively. However, living organisms and the environment may be exposed to both types of chemicals at the same time and location. In addition, experimental acquisition of toxicity data for every conceivable mixture is unfeasible, since the number of chemical combinations is extremely large. Therefore, an integrated model to predict mixture toxicity on the basis of single mixture components having various modes of toxic action (MoAs) needs to be developed. The objectives of the present study were to analyze the challenges in predicting mixture toxicity in the environment and to develop integrated computational models that overcome the limitations of the existing models for estimating the toxicity of non-interactive mixtures. To these ends, four sub-topics were addressed in this study. Firstly, the applicable domains and limitations of existing integrated models were analyzed and grouped into three categories. Current approaches to assessing mixture toxicity exist; however, a new research concept is needed to overcome the challenges associated with these approaches, which recent studies have begun to address. These approaches are discussed with particular emphasis on studies involving computational methods to predict the toxicity of chemical mixtures based on the toxicological data of individual chemicals.
Secondly, through a case study and a computational simulation, it was found that the Key Critical Component (KCC) and Composite Reciprocal (CR) methods (as described in the European Union (EU) draft technical guidance notes for calculating the Predicted No Effect Concentration (PNEC) and Derived No Effect Level (DNEL) of mixtures) can produce significantly different results. As the third and fourth sub-topics of this study, the following two integrated addition models were developed and successfully applied to overcome the inherent limitations of the CA and IA models, which in theory apply only to similarly or dissimilarly acting chemicals, respectively: i) a Partial Least Squares-Based Integrated Addition Model (PLS-IAM), and ii) a Quantitative Structure-Activity Relationship-Based Two-Stage Prediction (QSAR-TSP) model. It was shown that the PLS-IAM may be useful for estimating mixture toxicity when toxicity data of similar mixtures with the same compositions are available. The QSAR-TSP model showed the potential to overcome the critical limitation of the conventional TSP model, which requires knowledge of the MoAs of all chemicals. This study thus demonstrated the potential of advanced integrated models (e.g., PLS-IAM and QSAR-TSP) that consider various non-interactive constituents with different MoAs, in order to increase the reliability of conventional models and simplify the procedure for risk assessment of mixtures.
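The CA and IA models named above are standard reference models with well-known closed forms; as a minimal sketch (with hypothetical EC50 and effect values, not data from this study), they can be computed as follows:

```python
# Classical mixture-toxicity reference models (illustrative values only).
# CA (concentration addition, similarly acting chemicals):
#   EC50_mix = 1 / sum(p_i / EC50_i)  for mixture fractions p_i
# IA (independent action, dissimilarly acting chemicals):
#   E_mix = 1 - prod(1 - E_i)         for single-substance effects E_i

def ca_ec50(fractions, ec50s):
    """EC50 of a mixture under concentration addition."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """Combined effect under independent action; effects in [0, 1]."""
    remaining = 1.0
    for e in effects:
        remaining *= 1.0 - e
    return 1.0 - remaining

# Hypothetical two-component mixture in equal fractions:
print(ca_ec50([0.5, 0.5], [2.0, 8.0]))  # 3.2 (same units as the inputs)
print(ia_effect([0.2, 0.3]))            # 0.44 = 1 - 0.8 * 0.7
```

The integrated models of the thesis (PLS-IAM, QSAR-TSP) go beyond these two baselines by handling mixtures whose components have different modes of action.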
Connected vehicles will have a tremendous impact on tomorrow's mobility solutions. Such systems will heavily rely on timely information delivery to ensure functional reliability, security and safety. However, the host-centric communication model of today's networks makes efficient data dissemination at scale questionable, especially in networks characterized by a high degree of mobility. The Information-Centric Networking (ICN) paradigm has evolved as a promising candidate for the next generation of network architectures. Based on a loosely coupled communication model, the in-network processing and caching capabilities of ICNs promise to solve the challenges posed by connected vehicular systems. In such networks, a special class of caching strategies, which actively place a consumer's anticipated content at the right network nodes in time, promises to reduce the data delivery time. This thesis contributes to research on active placement strategies in information-centric and computation-centric vehicle networks for providing dynamic access to content and computation results. By analyzing different vehicular applications and their requirements, novel caching strategies are developed in order to reduce the time of content retrieval. The caching strategies are compared and evaluated against the state of the art in both extensive simulations and real-world deployments. The results show performance improvements in content retrieval (the availability of specific data increased by up to 35% compared to state-of-the-art caching strategies) and reduced delivery times (roughly doubling the number of data retrievals from neighboring nodes). However, storing content actively in connected vehicle networks raises questions regarding security and privacy. In the second part of the thesis, an access control framework for information-centric connected vehicles is presented.
Finally, open security issues and research directions in executing computations at the edge of connected vehicle networks are presented.
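The difference between conventional reactive caching and the active placement strategies studied in this thesis can be illustrated generically (a toy sketch with hypothetical content names, not the thesis's actual algorithms):

```python
# Toy contrast: a reactive cache stores content only after the first
# request (a miss), while active placement pushes anticipated content
# to a node before any request arrives. Purely illustrative.

class Node:
    def __init__(self):
        self.cache = set()

    def request(self, content):
        """Reactive behaviour: miss on first request, cache, then hit."""
        hit = content in self.cache
        self.cache.add(content)
        return hit

    def place(self, content):
        """Active placement: store anticipated content ahead of time."""
        self.cache.add(content)

reactive, proactive = Node(), Node()
proactive.place("map-tile-42")           # anticipated along the route
print(reactive.request("map-tile-42"))   # False: first request misses
print(proactive.request("map-tile-42"))  # True: content already in place
```

In a vehicular setting, the hard part, which the thesis addresses, is deciding *which* content to place at *which* node in time, based on application requirements and mobility.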
This thesis presents novel approaches for integrating context information into probabilistic models. Data from social media is typically associated with metadata, which includes context information such as timestamps, geographical coordinates or links to user profiles. Previous studies showed the benefits of using such context information in probabilistic models, e.g. improved predictive performance. In practice, however, probabilistic models which account for context information still play a minor role in data analysis. There are multiple reasons for this: existing probabilistic models are often complex, their implementation is difficult, implementations are not publicly available, or the parameter estimation is computationally too expensive for large datasets. Additionally, existing models are typically created for a specific type of content and context and lack the flexibility to be applied to other data.
This thesis addresses these problems by introducing a general approach for modelling multiple, arbitrary context variables in probabilistic models and by providing efficient inference schemes and implementations.
In the first half of this thesis, the importance of context and the potential of context information for probabilistic modelling are shown theoretically and in practical examples. In the second half, topic models serve as the example for introducing a novel approach to context modelling based on document clusters and adjacency relations in the context space. These models allow for the first time the efficient, explicit modelling of arbitrary context variables, including cyclic and spherical context (such as temporal cycles or geographical coordinates), and can cope with areas of sparse observations. Using the novel three-level hierarchical multi-Dirichlet process presented in this thesis, the adjacency of context clusters can be exploited, and multiple contexts can be modelled and weighted at the same time. Efficient inference schemes are derived which yield interpretable model parameters that allow analysing the relation between observations and context.
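As a generic illustration of why cyclic context needs special treatment (this is not the hierarchical multi-Dirichlet process itself), a cyclic variable such as hour of day can be embedded on the unit circle so that adjacency across the wrap-around point is preserved:

```python
import math

def cyclic_embedding(hour, period=24.0):
    """Map a cyclic context value onto the unit circle so that values
    near the wrap-around point (e.g. 23:00 and 01:00) stay adjacent."""
    angle = 2.0 * math.pi * hour / period
    return (math.cos(angle), math.sin(angle))

def dist(a, b):
    """Euclidean distance between two embedded context points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# On the raw 0-23 scale, 23:00 and 01:00 look far apart; on the
# circle they are close, while 23:00 and 12:00 remain distant.
print(dist(cyclic_embedding(23), cyclic_embedding(1)))   # small
print(dist(cyclic_embedding(23), cyclic_embedding(12)))  # large
```

Clustering in such an embedded context space respects the cycle, which a naive treatment of the raw values would break.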
Probability propagation nets
(2008)
This work introduces a Petri net representation for the propagation of probabilities and likelihoods, which can be applied to probabilistic Horn abduction, fault trees, and Bayesian networks. These so-called "probability propagation nets" increase the transparency of propagation processes by integrating structural and dynamical aspects into one homogeneous representation. It is shown by means of popular examples that probability propagation nets improve the understanding of propagation processes - especially with respect to the Bayesian propagation algorithms - and thus are well suited for the analysis and diagnosis of probabilistic models. Representing fault trees with probability propagation nets transfers these possibilities to the modeling of technical systems.
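The Bayesian probability propagation that these nets make explicit can be illustrated with the textbook two-node case (hypothetical probabilities; the Petri-net token-flow representation itself is beyond this sketch):

```python
# Minimal Bayesian propagation in a two-node network A -> B: the kind
# of likelihood message that probability propagation nets represent as
# structured token flow. Numbers are hypothetical.

def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A | B observed) by Bayes' rule."""
    evidence = prior_a * p_b_given_a + (1 - prior_a) * p_b_given_not_a
    return prior_a * p_b_given_a / evidence

# Fault-tree style reading: A = component fault, B = observed alarm.
print(posterior(0.01, 0.95, 0.05))  # roughly 0.161
```

The Petri-net formulation adds value precisely where such calculations chain through larger networks, by making each propagation step visible as a transition firing.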
This dissertation is dedicated to a new concept for capturing renunciation-oriented attitudes and beliefs: sufficiency orientation. Sufficiency originates in the interdisciplinary sustainability debate. In contrast to efficiency and consistency, sufficiency considers human behaviour as the cause of socio-ecological crises and strives for a reduction in consumption that respects the planetary boundaries. The present work places sufficiency in a psychological research context and explores it qualitatively and quantitatively. On the basis of five manuscripts, the overarching question pursued is to what extent sufficiency orientation contributes to socio-ecological transformation. Based on one qualitative study and five further quantitative studies, sufficiency orientation is investigated in different behavioural contexts that are of particular importance with regard to CO2 emissions. In addition, sufficiency orientation is linked to a wider range of psychologically relevant theories that help gain an overview of correlates and possible causes for the development of a sufficiency orientation.
Manuscript 1 uses expert interviews (N = 21) to develop a heuristic framework for a transformation towards societal sufficiency orientation, including barriers and enablers as well as ambiguities about such a change. The derived elements are interpreted in the light of the leverage points approach. This framework can serve as a heuristic for future research and for developing measures concerning sufficiency orientation.
As part of an online study (N = 648), Manuscript 2 examines the extent to which sufficiency orientation can be embedded in classic models for explaining pro-environmental intentions and behaviour (Theory of Planned Behaviour, Norm Activation Model), and shows that it makes a significant contribution to the explanation of intentions and behaviour in the field of plastic consumption.
Manuscript 3 reports two framing experiments (Study 1, N = 123; Study 2, N = 330) investigating how pro-social justice sensitivity contributes to making sufficiency orientation more salient and to promoting it. While sufficiency orientation and pro-social facets of justice sensitivity were positively related to each other, there was no effect of the framing intervention in the hypothesised direction. The results indicate that justice-related information, at least in the presented manner, is more likely to generate reactance.
Manuscript 4 presents an online study (N = 317) and targets the importance of sufficiency orientation for predicting actual greenhouse gas emissions from flight behaviour and policy support for the decarbonisation of mobility. In addition, the connection between sufficiency orientation and global identity is examined. It turns out that sufficiency orientation is superior to global identity in predicting actual emissions and support for decarbonisation policies. Contrary to expectations, sufficiency orientation and the form of global identity operationalised in the presented study show a positive correlation and are compatible.
Manuscript 5 reports a reflective diary intervention (N = 252) designed to produce a short- and long-term increase in sufficiency orientation by satisfying basic psychological needs through induced self-reflection. In both groups, with and without the intervention, sufficiency orientation increased slightly but significantly. Although no specific effect of the manipulation was found, basic psychological need satisfaction turned out to be the largest predictor of sufficiency orientation. Subjective well-being is positively associated with sufficiency orientation, while time affluence shows no clear associations in the study.
Overall, the results highlight the relevance of sufficiency orientation for socio-ecological transformation and actual behavioural change. Sufficiency orientation is related to low-emission behaviour and to support for political measures to decarbonise infrastructures. These results contribute to the discussion on the intention-behaviour gap with regard to impact-relevant behaviour, i.e. behaviour producing high emissions. The present findings suggest that sufficiency orientation could be related to a strong intention-behaviour consistency. However, further research is needed to validate these results and improve the measurement of sufficiency orientation. Furthermore, the studies provided insights into correlates of sufficiency orientation: justice sensitivity, global identity, subjective well-being and left-wing liberal political ideologies were all found to be positively related to sufficiency orientation. Moreover, basic psychological need satisfaction was identified as a potential mechanism that can support the emergence of sufficiency orientation; however, causality remains unclear. From these findings, the work derives practical implications for how sufficiency orientation might be strengthened at the micro, meso and macro levels of society.
Taken together, the dissertation provides important insights into a new and still developing concept and shows its connectivity to psychological theories. However, future research is required in order to grasp the complexity of sufficiency orientation more precisely and to understand its origins and predictors. This work contributes to the interdisciplinary debate on socio-ecological transformation and points out that sufficiency orientation, through its relation to reduced consumption, can contribute to a future worth living.
The history of humankind is characterized by social conflict. Every conflict can be the starting point of social change or can escalate into more destructive forms. The social conflict over rising numbers of refugees and their acceptance, which arose in most host countries in 2015, has already taken on destructive forms: in Germany, right-wing extremists attacked refugee shelters and even killed multiple people, including political leaders who openly supported refugees. Thus, incompatible expectancies and values of different parts of society led to violent action tendencies, which tremendously threaten intergroup relations. Psychological research has developed several interventions in past decades to improve intergroup relations, but they fall short, for example, when it comes to including people with extreme attitudes and to precisely differentiating potential prosocial outcomes of the interventions. Thus, this dissertation aimed to a) develop psychological interventions that can also be applied to people with more extreme attitudes, thereby putting a special emphasis on collecting a diverse sample; b) gain knowledge about target- and outcome-specific effects: who benefits from which intervention, and how can specific prosocial actions be predicted in order to develop interventions that guide needs-based actions; and c) shed light on potential underlying mechanisms of the interventions.
The dissertation is introduced by the socio-political background that motivated the line of research pursued, before providing an overview of the conceptualization of social conflicts and potential psychological inhibitors and catalyzers of conflict transformation. Based on past research on socio-psychological interventions and their limitations, the aims of the dissertation are presented in more detail, followed by a short summary of each manuscript. Overall, the present thesis comprises four manuscripts that are summarized in the general discussion into a road map for social-psychological interventions, putting them into a broader perspective. The road map aspires to provide recommendations for how socio-psychological interventions can increase either approach-oriented or support-oriented actions for a variety of host society groups, depending on their pre-existing attitudes towards refugees.
A Paradoxical Intervention targeting central beliefs of people with negative attitudes towards refugees influenced inhibitory and catalyzing factors of conflict transformation over the course of three experiments, thereby providing an effective tool to establish approach-oriented action tendencies such as the willingness to get in contact with refugees. Further, the dissertation presents a novel mechanism, namely Cognitive Flexibility, which could explain the effects of Paradoxical Interventions reported in past research: by positively affecting a context-free mindset, the Paradoxical Intervention could foster more flexible thought processes in general, irrespective of the topic tackled in the intervention itself. For people with rather positive attitudes, addressing emotions may increase specific support-oriented action tendencies. The dissertation provides evidence of a positive relation between moral outrage and hierarchy-challenging actions, such as solidarity-based collective action, and between sympathy and prosocial hierarchy-maintaining support-oriented actions, such as dependency-oriented helping. These exclusive relations between specific emotions and action intentions provide important implications for the theorizing of emotion-behavior relations as well as for practical considerations. In addition, a diversity workshop conducted with future diplomats showed indirect effects on solidarity-based collective action via diversity perception and superordinate group identification, thereby extending past research by including action intentions and going beyond the focus on grassroots initiatives by presenting an implementable intervention for future leaders in a real-world context.
Taken together, this dissertation provides important insights for the development of socio-psychological interventions. By integrating a diverse sample, including members of institutions on meso- and macro-levels (non-governmental organizations and future politicians) of our society, this dissertation presents a unique multi-perspective of host society members on the social conflict of refugee acceptance and support. Thereby, this work contributes to theoretical and practical advancement of how social psychology can contribute not only to negative peace – by for example (indirectly) reducing support of violence against refugees – but also to positive peace – by for example investigating precursors of hierarchy-challenging actions that enable equal rights.
Climate change is an existential threat to human survival, the social organization of society, and the stability of ecosystems. It is thereby profoundly frightening. In the face of threat, people often want to protect themselves instead of engaging in mitigating behaviors. When psychological resources are insufficient to cope, people often respond with different forms of denial. In this dissertation, I contribute original knowledge to the understanding of the multifaceted phenomenon of climate denial from a psychological perspective.
There are four major gaps in the literature on climate denial: First, the spectrum of climate denial as a self-protective response to the climate crisis has not received attention within psychology. Second, basic psychological need satisfaction, a fundamental indicator of human functioning and the ability to cope with threat, has not been investigated as a predictor of climate denial. Third, relations of the spectrum of climate denial to climate-relevant emotions, specifically climate anxiety, have not been examined empirically. Fourth, it has not been investigated how the spectrum of climate denial relates to established predictors of climate denial, namely right-wing ideological convictions and male gender. To address those gaps, I investigate what the spectrum of climate denial looks like in the German context and how it relates to basic psychological need satisfaction and frustration, pro-environmental behavior, climate anxiety, ideological conviction, and gender.
Five manuscripts reveal that climate denial exists on a spectrum in the German context, ranging from the distortion of facts (interpretive climate denial, specifically denial of personal and global outcome severity) to the denial of the implications of climate change (implicatory climate denial, specifically avoidance, denial of guilt, and rationalization of one's own involvement). Across analyses, low basic psychological need satisfaction predicted the spectrum of climate denial, which was negatively related to pro-environmental behavior. Climate denial was generally negatively related to climate anxiety, except for a positive association of avoidance and climate anxiety. Right-wing ideological conviction was the strongest predictor of climate denial across the spectrum. However, low need satisfaction and male gender were additional weaker predictors of implicatory climate denial.
These findings suggest that the spectrum of climate denial serves many psychological functions. Climate denial is possibly both a self-protective strategy to downregulate emotions and a means to protect oneself from loss of privilege. In short, it represents a barrier to climate action that may only be resolved once people have sufficient psychological resources to face the threat of climate change and cope with their underlying self-protective, emotional responses.
Placing questions before or after the material constitutes different reading situations, and readers may apply appropriate reading strategies to adapt to them. Reading strategies induced by the location of questions have been intensively explored in the context of text comprehension. (1) However, there is still not enough knowledge about whether text plays the same role as pictures when readers apply different reading strategies. To answer this research question, three reading strategies were experimentally manipulated by displaying the question before or after the blended text and picture materials: (a) unguided processing of text and pictures without the question; (b) information gathering to answer the questions after prior experience with text and pictures; (c) comprehending text and pictures to solve the questions with prior knowledge of the questions. (2) In addition, it is arguable whether readers prefer text or pictures when the instructed questions differ in difficulty. (3) Furthermore, it is still uncertain whether students from the higher school tier (Gymnasium) place more emphasis on text or on pictures than students from the lower school tier (Realschule). (4) Finally, it is rarely examined whether students in higher grades are better able to apply reading strategies in text processing and picture processing than students in lower grades.
Two experiments investigated the use of text and pictures from the perspectives of task orientation, question difficulty, school tier, and grade. In a 2x2(x2x2x2) mixed design using eye tracking, participants were recruited from grade 5 (N = 72) and grade 8 (N = 72). In Experiment 1, thirty-six 5th graders came from the higher tier (Gymnasium) and thirty-six from the lower tier (Realschule); in Experiment 2, thirty-six 8th graders came from the higher tier and thirty-six from the lower tier. Participants were asked to comprehend materials combining text and pictures and to answer questions. A Tobii XL60 eye tracker recorded their eye movements and their answers to the questions. Eye tracking indicators such as accumulated fixation duration, time to first fixation, and transitions between different Areas of Interest were analyzed and reported. The results reveal that students process text differently from pictures when they follow different reading strategies. (1) Consistent with Hypothesis 1, students mainly use text to construct their mental model in unguided, spontaneous processing of text and pictures. They seem to rely mainly on the pictures as external representations when trying to answer questions after prior experience with the material, and they attend to both text and pictures when questions are presented before the material. (2) Inconsistent with Hypothesis 2, students attend more to both text and pictures as question difficulty increases; however, the increase in focus on pictures exceeds that on text when the presented question is difficult. (3) Contrary to Hypothesis 3, the current study found that higher tier students did not differ from lower tier students in text processing; instead, students from the higher tier attended more to pictures than students from the lower tier. (4) Contrary to Hypothesis 4, 8th graders outperform 5th graders mainly in text processing.
Only a subtle difference is found between 5th graders and 8th graders in picture processing.
To sum up, text processing differs from picture processing when different reading strategies are applied. In line with the Integrative Model of Text and Picture Comprehension by Schnotz (2014), text is likely to play the major part in guiding the processing of meaning in general reading, whereas pictures serve as external representations for information retrieval in selective reading. When the question is difficult, pictures are emphasized because of their advantage in visualizing the internal structure of information. Compared to lower tier students (poorer problem solvers), higher tier students (better problem solvers) are more capable of comprehending pictures than text. Eighth graders are more efficient than 5th graders in text processing rather than in picture processing. These findings also suggest that, in designing school curricula, more attention should be paid to students' competence in picture comprehension and text-picture integration.
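The indicators reported above are simple aggregations over a fixation log. A minimal sketch of how they can be computed (the data structure and AOI labels are hypothetical illustrations, not the Tobii export format used in the study):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: int     # fixation start relative to trial onset
    duration_ms: int  # fixation duration
    aoi: str          # Area of Interest label, e.g. "text" or "picture"

def accumulated_fixation_duration(fixations, aoi):
    """Total dwell time on one AOI, in milliseconds."""
    return sum(f.duration_ms for f in fixations if f.aoi == aoi)

def time_to_first_fixation(fixations, aoi):
    """Onset of the first fixation landing on the AOI (None if never fixated)."""
    hits = [f.onset_ms for f in fixations if f.aoi == aoi]
    return min(hits) if hits else None

def aoi_transitions(fixations, src, dst):
    """Count direct gaze switches from one AOI to another."""
    return sum(1 for a, b in zip(fixations, fixations[1:])
               if a.aoi == src and b.aoi == dst)

trial = [Fixation(0, 200, "text"), Fixation(220, 180, "picture"),
         Fixation(420, 250, "text"), Fixation(700, 300, "picture")]
print(accumulated_fixation_duration(trial, "text"))  # 450
print(time_to_first_fixation(trial, "picture"))      # 220
print(aoi_transitions(trial, "text", "picture"))     # 2
```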
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It poses a crucial contribution to in vivo examinations of neuronal connections: The local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool that takes neuronal pathways into consideration was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
Software systems have an increasing impact on our daily lives. Many systems process sensitive data or control critical infrastructure, so providing secure software is indispensable. Such systems are rarely renewed regularly due to the high costs and effort. Oftentimes, systems that were planned and implemented to be secure become insecure because their context evolves. These systems are connected to the Internet and are therefore constantly subject to new types of attacks. The security requirements of such systems remain unchanged while, for example, the discovery of a vulnerability in an encryption algorithm previously assumed to be secure requires a change of the system design. Some security requirements cannot be checked in the system's design but only at run time. Furthermore, the sudden discovery of a security violation requires an immediate reaction to prevent a system shutdown. Knowledge regarding security best practices, attacks, and mitigations is generally available, yet it is rarely an integrated part of software development and seldom covers evolution.
This thesis examines how the security of long-living software systems can be preserved taking into account the influence of context evolutions. The goal of the proposed approach, S²EC²O, is to recover the security of model-based software systems using co-evolution.
An ontology-based knowledge base is introduced, capable of managing common, as well as system-specific knowledge relevant to security. A transformation achieves the connection of the knowledge base to the UML system model. By using semantic differences, knowledge inference, and the detection of inconsistencies in the knowledge base, context knowledge evolutions are detected.
A catalog containing rules to manage and recover security requirements uses detected context evolutions to propose potential co-evolutions to the system model which reestablish the compliance with security requirements.
S²EC²O uses security annotations to link models and executable code and provides support for run-time monitoring. The adaptation of running systems is considered, as is round-trip engineering, which integrates insights from run time into the system model.
S²EC²O is complemented by prototypical tool support. This tool is used to show S²EC²O's applicability in a case study targeting the medical information system iTrust.
This thesis at hand contributes to the development and maintenance of long-living software systems, regarding their security. The proposed approach will aid security experts: It detects security-relevant changes to the system context, determines the impact on the system’s security and facilitates co-evolutions to recover the compliance with the security requirements.
This thesis addresses reduced basis methods for parametrized quasilinear elliptic and parabolic partial differential equations with strongly monotone differential operators. It presents all of the ingredients of the reduced basis method: basis generation for the reduced basis approximation, certification of the approximation error by suitable a-posteriori error control, and an Offline-Online decomposition. The methodology is further applied to the magnetostatic and magnetoquasistatic approximations of Maxwell's equations, and its validity is confirmed by numerical examples.
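The certification mentioned above is typically based on a residual-type a-posteriori bound. A generic sketch in the standard form for strongly monotone operators (the thesis's exact formulation may differ):

```latex
% Parametrized weak problem: find u(\mu) \in V such that
\langle A(u(\mu);\mu), v \rangle = f(v;\mu) \qquad \forall v \in V.
% Reduced problem on the snapshot space V_N \subset V:
\langle A(u_N(\mu);\mu), v_N \rangle = f(v_N;\mu) \qquad \forall v_N \in V_N.
% Strong monotonicity with constant \alpha(\mu) > 0,
\langle A(w;\mu) - A(v;\mu),\, w - v \rangle \;\ge\; \alpha(\mu)\,\|w - v\|_V^2,
% together with the residual
% r(v;\mu) := f(v;\mu) - \langle A(u_N(\mu);\mu), v \rangle,
% yields the certified a-posteriori error bound
\| u(\mu) - u_N(\mu) \|_V \;\le\; \frac{\| r(\,\cdot\,;\mu) \|_{V'}}{\alpha(\mu)}.
```

The Offline-Online decomposition then ensures that evaluating $u_N(\mu)$ and the residual norm for a new parameter $\mu$ has a cost independent of the dimension of the full space $V$.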
Introduction:
In March 2012 a secessionist-Islamist insurgency gained momentum in Mali and quickly took control of two-thirds of the state territory. Within weeks, radical Islamists, drug smugglers and rebels suddenly ruled over a territory bigger than Germany. News of the abuse of the population and the introduction of harsh Sharia law soon spread, and word got out that the Malian Army had simply abandoned the land. The general echo of the international community (IC) was surprise, a reaction that was, as this research will show, as unfounded as it was unconstructive*. When Malian state structures collapsed, the world watched in shock, even though the developments could have been anticipated – and prevented. Ultimately, the situation had to be resolved by international forces (most notably French troops), who are still in Mali at the time of writing (Arieff 2013a: 5; Lohmann 2012: 3; Walther and Christopoulos 2015: 514f.; Shaw 2013: 204; Qantara, Interview, 2012; L'Express, Mali, 2015; Deutscher Bundestag, MINUSMA und EUTM Mali, 2016; UN, MINUSMA, 2016; Boeke and Schuurmann 2015: 801; Chivvis 2016: 93f.).
This research will show that the developments in Mali in 2012 had been building for a long time and could have been avoided. In doing so, it will also show why state security can never be analyzed or consolidated in an isolated manner. Instead, regional dynamics and developments must be taken into account in order to find a comprehensive approach to security in individual states. Once state failure occurs, not only has the state itself failed, but the surrounding region has equally failed to prevent it.
Weak states are a growing concern in many world regions, particularly in Africa. As international intervention often proves unsustainable for various reasons*, the author believes that states which cannot stabilize themselves need a regional agent to support them. This regional agent should be a Regional Security Complex (RSC) as defined by Barry Buzan and Ole Waever (Buzan and Waever 2003). As the following analysis will show, Mali is a case in point. The hope is that this study will help avoid similar failures in the future by making a strong case for the establishment of RSCs.
…
Microbial pollution of surface waters poses substantial risks for public health, amongst others during recreational use. Microbial pollution was studied at selected sampling sites in the rivers Rhine, Moselle and Lahn (Germany) on the basis of commonly used fecal indicator organisms (FIO) indicating bacterial (Escherichia coli, intestinal enterococci) and viral (somatic coliphages) fecal contamination. In addition, blaCTX-M antibiotic resistance genes (ARG) were quantified at two sites in the river Lahn and were used as markers for tracking the spread of antibiotic resistance in the aquatic environment. The impact of changes in climate-related parameters on FIO was examined by studying monitoring results of contrasting flow conditions in the rivers Rhine and Moselle. Analyses at all studied river sites clearly indicate that high discharge and precipitation enhance the influx of FIO, ARG and thus potentially (antibiotic resistant) pathogens into rivers. In contrast, a decrease in hygienic microbial pollution was observed under high solar irradiation and increasing water temperatures. Based on the identified key factors, multiple linear regression (MLR) models were established for five sites along a stretch of the river Lahn that allow a timely assessment of fecal indicator abundances. An interaction between abiotic and biotic factors (i.e. enhanced grazing pressure) considerably contributed to the formation of seasonal patterns among FIO abundances. This was enhanced during extraordinarily low flow conditions in rivers with pronounced trophic interactions, clearly hampering a transfer of model approaches between rivers of different biological and hydrological characteristics. Bacterial indicators were more strongly influenced by grazing pressure than phages. Hence, bacterial indicators alone do not sufficiently describe viral pollution in rivers. blaCTX-M genes were omnipresent in Lahn River water and corresponded to distribution patterns of FIO, indicating fecal sources.
Agriculture and wastewater treatment plant effluents contributed to ARG loads, and participants in non-bathing water sports were found to be at risk of ingesting antibiotic resistant bacteria (ARB) including ARG, bearing the risk of infection or colonization. Results of the present study highlight the need to be aware of such risks not only in designated bathing waters. ARG abundance at both riverine sampling sites could largely be explained by E. coli abundance and may thus also be incorporated into multiple regression models using E. coli-specific environmental predictors. It can be expected that the frequency of short-term microbial pollution events will increase over the next decades due to climate change. Several challenges were identified with regard to the implementation of early warning systems to protect the public from exposure to pathogens in rivers. Most importantly, the concept of the Bathing Water Directive (Directive 2006/7/EC) itself as well as the lack of harmonization in the regulatory framework at European Union (EU) level are major drawbacks and require future adjustments to reliably manage health risks related to microbial water pollution in waters used in multifunctional ways.
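The MLR models described above regress fecal indicator abundance on hydro-meteorological predictors. A minimal pure-Python sketch with entirely hypothetical numbers (discharge and precipitation predicting log10 E. coli concentration; the thesis's actual predictors and coefficients differ):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X must already include a leading 1 for the intercept."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    coef = [0.0] * p                           # back substitution
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, p))) / A[r][r]
    return coef

# Hypothetical observations: [1, discharge (m3/s), precipitation (mm)] -> log10 E. coli
X = [[1, 10, 0], [1, 30, 5], [1, 60, 12], [1, 90, 20], [1, 120, 25]]
y = [2.1, 2.6, 3.2, 3.7, 4.1]
b0, b_dis, b_rain = ols(X, y)
pred = b0 + b_dis * 75 + b_rain * 15  # assess a hypothetical high-flow event
print(round(pred, 2))
```

In practice such a fitted model supports timely assessment: predictor values are available in near real time, whereas culture-based E. coli counts take one to two days.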
The first group revised within my study is Ochralea Clark, 1865 (Hazmi & Wagner 2010a). I have checked the type specimens of most species that were originally described in Ochralea, and there is no doubt that this genus is clearly distinct from Monolepta. Weise (1924) synonymised Galeruca nigripes (Olivier, 1808) with O. nigricornis Clark, 1865, and the valid name of the species is O. nigripes (Olivier, 1808). Of the ten species originally described in this genus, only this species remains valid, and O. pectoralis is a new synonym of O. nigripes. Additionally, Monolepta wangkliana Mohamedsaid, 2000 is very closely related to O. nigripes and needs to be transferred to Ochralea. The second genus whose revision has already been published is Arcastes Baly, 1865 (Hazmi & Wagner 2010b). I have checked the genitalic characters of A. biplagiata and most of the type species of the other Arcastes. Arcastes biplagiata possesses a peculiar shape of the median lobe and asymmetrically arranged endophallic structures. These peculiar characters are very useful to delimit this genus from the others. Therefore, only three valid species remain in Arcastes, while two new synonyms were found and four other species need to be transferred to other genera. When the genitalic characteristics of the type species Arcastes sanguinea were checked, the median lobe as well as the spermatheca of this species showed strong differences to A. biplagiata. The species was redescribed and transferred to a monotypic new genus, Rubrarcastes Hazmi & Wagner, 2010c. The fourth genus that was already revised is Neolepta Jacoby, 1884. It was originally described on the basis of only two species, N. biplagiata and N. fulvipennis. Jacoby did not designate a type species of the genus; Maulik (1936) later designated N. biplagiata. In his original description, Jacoby only commented that Neolepta is very close and similar to Monolepta Chevrolat, 1837 and Candezea Chapuis, 1879.
Subsequent authors described eight further species and transferred one species from Luperodes to the genus, bringing the total to eleven described species in Neolepta. I have checked the genitalic characters of the type, N. biplagiata, and found that the median lobe is not incised apically and bears strongly sclerotised ventral carinae with an apical hook close to the apex. Of all described species, only two are closely related to the generotype, N. sumatrensis (Jacoby, 1884) new combination and N. quadriplagiata Jacoby, 1886, and these will remain in this group after the revision. All other species need to be transferred to other genera, including the newly described Paraneolepta and Orthoneolepta. The last paper of this thesis presents the results on Monolepta Chevrolat, 1837. The massive number of Monolepta species from the entire Oriental Region, with about 260 described species names, is a long-term project and not practicable within a PhD study. I have therefore focused on the species of Monolepta known from the Sundaland area. A comprehensive revision, including the study of the primary types of the described species, has never been done for Monolepta from this sub-region, while new species have also been described in the last decade (e.g. Mohamedsaid 1993, 1997, 1998, 1999, 2000a,b, 2001, 2002, 2005).
On the basis of the most current species lists of Mohamedsaid (2001, 2004, 2005) and Kimoto (1990), the number of valid species described from this region is about 72. After my revision, only thirteen valid species remain in Monolepta in the sense of the generotype M. bioculata (Wagner 2007), while seven species were found to be new synonyms, three have already been transferred to other genera, and a further 49 species need to be transferred to other genera.
The increasing, anthropogenic demand for chemicals has created large environmental problems with repercussions for the health of the environment, especially aquatic ecosystems. As a result, the awareness of the public and decision makers on the risks from chemical pollution has increased over the past half-century, prompting a large number of studies in the field of ecological toxicology (ecotoxicology). However, the majority of ecotoxicological studies are laboratory based, and the few studies extrapolating toxicological effects in the field are limited to local and regional levels. Chemical risk assessment on large spatial scales remains largely unexplored, and therefore, the potential large-scale effects of chemicals may be overlooked.
To answer ecotoxicological questions, multidisciplinary approaches that transcend classical chemical and toxicological concepts are required. For instance, the current models for toxicity predictions - which are mainly based on the prediction of toxicity for a single compound and species - can be expanded to simultaneously predict the toxicity for different species and compounds. This can be done by integrating chemical concepts such as the physicochemical properties of the compounds with evolutionary concepts such as the similarity of species. This thesis introduces new, multidisciplinary tools for chemical risk assessments, and presents for the first time a chemical risk assessment on the continental scale.
After a brief introduction of the main concepts and objectives of the studies, this thesis starts by presenting a new method for assessing the physiological sensitivity of macroinvertebrate species to heavy metals (Chapter 2). To compare the sensitivity of species to different heavy metals, toxicity data were standardized to account for the different laboratory conditions, and species were ranked by their sensitivity. These rankings did not differ significantly between heavy metals, allowing the aggregation of physiological sensitivity into a single ranking.
Furthermore, the toxicological data for macroinvertebrates were used as input data to develop and validate prediction models for heavy metal toxicity, which are currently lacking for a wide array of species (Chapter 3). Apart from the toxicity data, the phylogenetic information of species (evolutionary relationships among species) and the physicochemical parameters for heavy metals were used. The constructed models had a good explanatory power for the acute sensitivity of species to heavy metals with the majority of the explained variance attributed to phylogeny. Therefore, the integration of evolutionary concepts (relatedness and similarity of species) with the chemical parameters used in ecotoxicology improved prediction models for species lacking experimental toxicity data. The ultimate goal of the prediction models developed in this thesis is to provide accurate predictions of toxicity for a wide range of species and chemicals, which is a crucial prerequisite for conducting chemical risk assessment.
The latter was conducted for the first time on the continental scale (Chapter 4), making use of a dataset of 4,000 sites distributed throughout 27 European countries and 91 respective river basins. Organic chemicals were likely to exert acute risks at one in seven of the sites analyzed, while chronic risk was prominent for almost half of the sites. The calculated risks are potentially underestimated due to the limited number of chemicals that are routinely analyzed in monitoring programmes, and a series of other uncertainties related to the limit of quantification, the presence of mixtures, or the potential for sublethal effects not covered by direct toxicity.
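Large-scale risk screening of this kind commonly relies on risk quotients that compare measured environmental concentrations (MEC) with acute toxicity data. A minimal sketch of the idea, with invented site and toxicity values (not data from the thesis, and without the assessment factors and mixture rules a real assessment would use):

```python
def risk_quotient(mec_ug_l, lc50_ug_l, assessment_factor=1.0):
    """RQ = MEC / (LC50 / AF); an RQ at or above 1 flags potential acute risk."""
    return mec_ug_l / (lc50_ug_l / assessment_factor)

# Hypothetical monitoring data: site -> {chemical: (MEC in ug/L,
# LC50 in ug/L for the most sensitive tested taxon)}
sites = {
    "site_A": {"chlorpyrifos": (0.8, 0.5), "atrazine": (1.2, 240.0)},
    "site_B": {"atrazine": (0.3, 240.0)},
}

def sites_at_acute_risk(sites, threshold=1.0):
    """A site is flagged if any single compound's RQ exceeds the threshold."""
    return sorted(s for s, chems in sites.items()
                  if any(risk_quotient(mec, lc50) >= threshold
                         for mec, lc50 in chems.values()))

print(sites_at_acute_risk(sites))  # ['site_A']
```

Because only routinely monitored chemicals enter such a screening, the resulting risk counts are a lower bound, which is the underestimation noted above.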
Furthermore, chemical risk was related to agricultural and urban areas in the upstream catchments. The analysis of ecological data indicated chemical impacts on the ecological status of the river systems; however, it is difficult to discriminate the effects of chemical pollution from those of the other stressors that river systems are exposed to. To test the hypothesis of multiple stressors and investigate the relative importance of organic toxicants, a dataset for German streams is used in Chapter 5. In that study, the risk from abiotic (habitat degradation, organic chemicals, and nutrient enrichment) and biotic stressors (invasive species) was investigated. The results indicated that almost all sites were influenced by more than one stressor. Stream size and ecoregion influenced the distribution of risks: for example, the risks from habitat degradation, organic chemicals and invasive species increased with stream size, whereas organic chemicals and nutrients were more likely to influence lowland streams. In order to successfully mitigate the effects of pollutants in river systems, the co-occurrence of stressors has to be considered. Overall, to successfully apply integrated water management strategies, a framework involving multiple environmental stressors on large spatial scales is necessary. Furthermore, to properly address the current research needs in ecotoxicology, a multidisciplinary approach is necessary which integrates fields such as toxicology, ecology, chemistry and evolutionary biology.
Inland waters play an active role in the global carbon cycle. They collect carbon from upstream landmasses and transport it downstream until it finally reaches the ocean. Along this path, manifold processing steps are evident, resulting in (permanent) retention of carbon by sediment burial as well as loss by evasion to the atmosphere. Constraining these carbon fluxes and their anthropogenic perturbation is an urgent need. In this context, attention needs to be set on a widespread feature of inland waters: their partial desiccation. This results in the emergence of formerly inundated sediments to the atmosphere, referred to as dry inland waters. One observed feature of dry inland waters is disproportionately high carbon dioxide (CO2) emissions. However, this observation has so far been based on local case studies, and knowledge of the global prevalence and fundamental mechanisms of these emissions is lacking. Against this background, this thesis aims to provide a better understanding of the magnitude and mechanisms of carbon emissions from dry inland waters on the global and local scale and to assess the impact of dry inland waters on the global carbon cycle. The specific research questions of this thesis were: (1) How do gaseous carbon emissions from dry inland waters integrate into the global carbon cycle and into global greenhouse gas (GHG) budgets? (2) What effect do seasonal and long-term drying have on the carbon cycling of inland waters? The thesis revealed that dry inland waters emit disproportionately large amounts of CO2 on a global scale and that these emissions share common drivers across ecosystems. Quantifying global reservoir drawdown and upscaling carbon fluxes to the global scale suggests that reservoirs emit more carbon than they bury, challenging the current understanding of reservoirs as net carbon sinks.
On the local scale, this thesis revealed that both heterogeneous emission patterns between different habitats and the seasonal variability of carbon emissions from the drawdown area need to be considered. Further, it showed that re-mobilization of buried carbon upon permanent desiccation of water bodies can explain the observed emission rates, supporting the hypothesis of a positive feedback loop between climate change and the desiccation of inland waters. Overall, the present thesis highlights the importance of adding emissions from dry inland waters as a pathway in the global carbon cycle of inland waters.
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. These characteristics also apply to the Semantic Web, which aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore equally important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism implementing these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each requirement and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext data without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
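The core difficulty in signing a graph is that the same set of statements can be serialized in many orders, so a canonical form must be fixed before hashing. A much-simplified illustration of the canonicalize-then-sign idea (not the thesis's framework: sorting stands in for real canonicalization, which must also handle blank node relabeling, and an HMAC stands in for a public-key signature):

```python
import hashlib
import hmac

def canonicalize(triples):
    """Deterministic serialization: sort N-Triples-like statements so that
    graphs containing the same statements always yield identical bytes."""
    return "\n".join(sorted(f"{s} {p} {o} ." for s, p, o in triples)).encode()

def sign_graph(triples, key):
    """MAC over the canonical form (a real framework would use a digital signature)."""
    return hmac.new(key, canonicalize(triples), hashlib.sha256).hexdigest()

def verify_graph(triples, key, signature):
    return hmac.compare_digest(sign_graph(triples, key), signature)

g = [("ex:alice", "foaf:knows", "ex:bob"),
     ("ex:bob", "foaf:name", '"Bob"')]
key = b"shared-secret"
sig = sign_graph(g, key)
print(verify_graph(list(reversed(g)), key, sig))          # True: order is irrelevant
print(verify_graph(g + [("ex:eve", "a", "ex:X")], key, sig))  # False: tampering detected
```

Configuring such a pipeline with different canonicalization, hash, and signature algorithms is exactly the kind of variability a formal signing framework has to capture.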
Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
This thesis contributes to reactive algorithm research in two ways, on an abstract level, as well as by the introduction of novel algorithms:
For the very first time, reactive algorithms are considered as a whole and as an individual research area. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based, topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
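The stretch properties above can be made concrete with a small sketch: build a unit disk graph and compute the Euclidean stretch of a node pair as the ratio of the shortest weighted path length to the straight-line distance. This is an illustrative global computation only; the point of the thesis's reactive algorithms is to obtain local views of such spanners without global knowledge or beaconing:

```python
import heapq
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def unit_disk_graph(points, radius=1.0):
    """Edge between every pair of nodes within communication radius."""
    n = len(points)
    return {(i, j) for i in range(n) for j in range(n)
            if i < j and dist(points[i], points[j]) <= radius}

def euclidean_stretch(points, edges, u, v):
    """Shortest Euclidean path length between u and v in the (sub)graph,
    divided by their straight-line distance (Dijkstra's algorithm)."""
    adj = {i: [] for i in range(len(points))}
    for i, j in edges:
        w = dist(points[i], points[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    best = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == v:
            return d / dist(points[u], points[v])
        if d > best.get(x, math.inf):
            continue
        for y, w in adj[x]:
            if d + w < best.get(y, math.inf):
                best[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return math.inf  # disconnected pair

pts = [(0, 0), (0.9, 0.4), (1.8, 0)]  # endpoints out of range of each other
g = unit_disk_graph(pts)
print(euclidean_stretch(pts, g, 0, 2))  # slightly above 1: detour via the middle node
```

A constant stretch spanner guarantees that this ratio is bounded by a constant for every node pair, which is what makes routing on the sparse planar subgraph competitive with routing on the full graph.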
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.
Social Business Documents: An Investigation of their Nature, Structure and Long-term Management
(2018)
Business documents contain valuable information. In order to comply with legal requirements, to serve as organisational knowledge and to prevent risks, they need to be managed. However, changes in the technology with which documents are produced have introduced new kinds of documents and new ways of interacting with them. In particular, the Web 2.0 led to the development of Enterprise Collaboration Systems (ECS), which enable employees to use wiki, blog or forum applications for conducting their business. Part of the content produced in ECS can be called Social Business Documents (SBD). Compared to traditional digital documents, SBD are different in their nature and structure as they are, for example, less well-structured and do not follow a strict lifecycle. These characteristics bring along new management challenges. However, current research literature lacks investigations of the characteristics of SBD, their peculiarities and their management.
This dissertation uses document theory and documentary practice as theoretical lenses to investigate the new challenges of the long-term management of SBD in ECS. Using an interpretative, exploratory, mixed methods approach, the study comprises two major research parts. First, the nature and structure of Social Business Documents are addressed by analysing them within four different systems, using four different modelling techniques for each. The findings are used to develop general SBD information models, outlining the basic underlying components, structure, functions and included metadata, as well as a broad range of SBD characteristics. The second phase comprises a focus group, a case study including in-depth interviews, and a questionnaire, all conducted with industry representatives. The focus group identified that the kind of SBD used for specific content and the actual place of storage differ between organisations, and that there are currently almost no management practices for SBD at hand. The case study provided deep insights into general document management activities and investigated requirements, challenges and actions for managing SBD. Finally, the questionnaire consolidated and deepened the previous findings. It provides insights into the value of SBD, their current management practices as well as management challenges and needs. Although all participating organisations store information worth managing in SBD, most are not addressing them with management activities, and many challenges remain.
Together, the investigations contribute to both practice and theory. The contribution to practice is summarised in a framework addressing the long-term management of Social Business Documents. The framework identifies and outlines the requirements, challenges and actions for SBD management, and indicates the dependencies between these aspects. Furthermore, the findings advance theory within documentary practice by discussing the extension of document types to include SBD. Existing problems are outlined along the definitions of records, and the new characteristics of documents emerging through Social Business Documents are taken into account.
Enterprise Collaboration Systems (ECS) have become essential for computer-mediated communication and collaboration among employees in organisations. As ECS combine features from social media and traditional groupware, a growing number of organisations implement ECS to facilitate collaboration among employees. Consequently, ECS form the core of the digital workplace. Their activity logs are particularly valuable, since they provide a unique opportunity for observing and analysing collaboration in the digital workplace.
Evidence from academia and practice demonstrates that there is no standardised approach for the analysis of ECS logs and that practitioners struggle with various barriers. Because current ECS analytics tools only provide basic features, academics and practitioners cannot leverage the full potential of the activity logs. As ECS activity logs are a valuable source for understanding collaboration in the digital workplace, new methods and metrics for their analysis are required. This dissertation develops Social Collaboration Analytics (SCA) as a method for measuring and analysing collaboration activities in ECS. To address the existing limitations in academia and practice and to contribute a method and structures for applying SCA in practice, this dissertation aims to answer two main research questions:
1. What are the current practices for measuring collaboration activities in Enterprise Collaboration Systems?
2. How can Social Collaboration Analytics be implemented in practice?
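As a minimal illustration of the kind of activity-log metric that Social Collaboration Analytics builds on (a sketch only, not taken from the dissertation; the log schema with user, action and workspace columns is hypothetical):

```python
import pandas as pd

# Hypothetical ECS activity log: which user performed which action where.
log = pd.DataFrame({
    "user": ["anna", "ben", "anna", "cara", "ben", "anna"],
    "action": ["post", "comment", "comment", "post", "like", "post"],
    "workspace": ["wiki", "wiki", "blog", "forum", "wiki", "wiki"],
})

# Basic SCA-style metrics: activity counts per user and per action type.
per_user = log.groupby("user").size().sort_values(ascending=False)
per_action = log["action"].value_counts()
```

Even such simple counts already reveal the most active contributors and the dominant content types; the SCA framework developed in the dissertation structures how such metrics are selected, computed and interpreted.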
By answering the research questions, this dissertation seeks to (1) establish a broad thematic understanding of the research field of SCA and (2) develop SCA as a structured method for analysing activity logs of ECS. As part of the first research question, this dissertation documents the status quo of SCA in the academic literature and in practice. By answering the second research question, it contributes the SCA framework (SCAF), which guides the practical application of SCA and is the main contribution of this dissertation. The framework was developed based on findings from an analysis of 86 SCA studies, results from 6 focus groups and results from a survey among 27 ECS user companies. The phases of SCAF were derived from a comparison of established process models for data mining and business intelligence. The eight phases of the framework contain detailed descriptions, working steps and guiding questions, which provide a step-by-step guide for applying SCA in practice. Thus, both academics and practitioners can benefit from using the framework.
The constant evaluation of the research outcomes in focus groups ensures both rigour and relevance. This dissertation employs a qualitative-dominant mixed-methods approach. As part of the university-industry collaboration initiative IndustryConnect, this research had access to more than 30 leading ECS user companies. Built on a key case study and a series of advanced focus groups with representatives of user companies, this dissertation draws on unique insights from practice as well as rich data with a longitudinal perspective.
Social entrepreneurship is a form of entrepreneurship that marries a social mission to a competitive value proposition. Notably, social entrepreneurship fosters a more equitable society by addressing social issues and by pursuing an ongoing, sustainable impact through a social mission rather than pure profit maximization. The topic has attracted considerable interest from many different streams of research, and understanding how and why social entrepreneurs think and act remains a major motivation for future work. Nevertheless, the theoretical examination of this phenomenon is in its infancy: social entrepreneurship research is still largely phenomenon-driven, and research on Social Entrepreneurial Intention in particular is at an early stage and lacks quantitative studies. This thesis therefore addresses this need. Its objectives are twofold: (1) to develop a formation model for Social Entrepreneurial Intention and (2) to test the model in an empirical study. Based on these objectives, two research questions guide the thesis: (1) what factors influence the intention of a person to become a social entrepreneur, and (2) what relationships exist among these factors?
In order to answer these two research questions, this thesis uses a purposeful research design that combines a literature review with an empirical study. The literature review is based on a comprehensive range of books, articles and research papers published in leading academic journals and conference proceedings across disciplines such as entrepreneurship, social entrepreneurship, entrepreneurship education, management, social psychology and social economics. The empirical study is a survey of 600 final-year students from four universities in three regions of Vietnam: Hanoi, Da Nang and Ho Chi Minh City. The data are analyzed with SPSS-AMOS version 24, using data screening, scale development, exploratory factor analysis and confirmatory factor analysis. The thesis finds that Entrepreneurship Experience/Extra-curricular Activity, Role Model, Social Entrepreneurial Self-Efficacy and Social Entrepreneurial Outcome Expectation directly and positively affect the intention of the Vietnamese students to become social entrepreneurs. Entrepreneurship Education also influences Social Entrepreneurial Intention, though not directly but indirectly via Social Entrepreneurial Self-Efficacy and Social Entrepreneurial Outcome Expectation. Similarly, Perceived Support has no direct relationship to Social Entrepreneurial Intention but shows an indirect link via the mediator Social Entrepreneurial Outcome Expectation. Furthermore, the dissertation brings new insights to the social entrepreneurship literature and provides important implications for practice. Limitations and future research directions are also discussed.
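The factor-analytic step of such a survey analysis can be sketched outside SPSS-AMOS as well. The following is an illustrative sketch only (synthetic data; the two assumed latent constructs, loosely standing in for self-efficacy and outcome expectation, and all item loadings are hypothetical):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_students, n_items = 600, 8

# Synthetic questionnaire data generated from two latent constructs.
latent = rng.normal(size=(n_students, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.85, 0.0], [0.1, 0.9],
                     [0.0, 0.8], [0.05, 0.85], [0.7, 0.2], [0.2, 0.75]])
items = latent @ loadings.T + rng.normal(0, 0.3, (n_students, n_items))

# Exploratory factor analysis with varimax rotation, as a stand-in for
# the EFA stage of the thesis' analysis pipeline.
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(items)   # factor scores per student
```

With clean data like this, the estimated loading pattern recovers the two-construct structure; in the thesis the analogous EFA/CFA steps validate the measurement model before structural relationships are tested.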
Homonegative discrimination, such as the denial of leadership qualities or of higher salaries, concerns not only lesbians and gay men but also individuals who are merely perceived as lesbian or gay (Fasoli et al., 2017). Hence, it is assumed that straight people in particular become victims of homonegative discrimination (Plöderl, 2014). The perception of sexual orientation is indeed stereotype-driven (e.g., Cox et al., 2015), but there is a lack of knowledge on how accurate these stereotypes are, particularly those referring to speech. Despite a variety of sociophonetic and social psychological research related to sexual orientation and gender, an encompassing understanding of how sexual orientation is expressed and perceived is missing.
The present thesis aims to fill these gaps. Its two major aims are a) to examine the accuracy of speech stereotypes in the context of sexual orientation and b) to develop a model of how sexual orientation is interpersonally construed. Overall, the thesis comprises five manuscripts with the following aspects in common: they integrate social psychological and sociophonetic perspectives, share a social identity approach, and primarily center on speech instead of facial appearance. Moreover, mostly German, i.e. native German-speaking, participants were investigated.
Manuscript 1 establishes the Traditional Masculinity/Femininity-Scale as a reliable and valid instrument for assessing gender-role self-concept. This development was necessary because existing scales insufficiently represented self-ascribed masculinity/femininity (e.g., Abele, 2003; Evers & Sieverding, 2014). Manuscripts 2, 3, and 4 address the (in)accuracy of speech stereotypes regarding stereotype content and the suggested within-group homogeneity, using different methodological approaches. On the one hand, relevant acoustic parameters of lesbian/gay and straight women and men were averaged for each group. On the other hand, voice morphing was applied to create prototypical and natural-sounding voice averages (Kawahara et al., 2008). Lesbian and straight women differed in none of the analyzed acoustic parameters, gay and straight men in only one. In contrast, a fine-grained psychological analysis yielded various evidence of acoustic within-group heterogeneity. In particular, the exclusivity of sexual orientation and gender-role self-concept was acoustically indexicalized, which suggests that speech stereotypes are inaccurate. However, voice averages do carry perceivable sexual orientation information; speech stereotypes can therefore be considered exaggerations of tiny kernels of truth. In Manuscript 5, the previous literature on the interpersonal construction of sexual orientation is integrated into a model: the Expression and Perception of Sexual Orientation Model (EPSOM). This model postulates an indirect route and describes how sexual orientation information is transmitted from producer to perceiver by proposing three mediating components. Thereby, the model offers an explanation of why sexual orientation can be perceived with above-chance but far-from-perfect accuracy.
Overall, the present thesis provides meaningful impulses for research on social markers of sexual orientation and gender. It offers a model of how sexual orientation is expressed and perceived, shows the benefits of combining sociophonetic and social psychological approaches, and points out the value of applying novel methods and technologies. Beyond that, it offers useful implications for practice: speech stereotypes in the context of sexual orientation can be rejected as inaccurate; for example, native German straight men do not nasalize more or less than gay men. Thereby, the present thesis contributes to an erosion of stereotypes and a potential reduction of homonegative discrimination.
The term “software chrestomathy” is defined as a collection of software systems meant to be useful in learning about or gaining insight into software languages, software technologies, software concepts, programming, and software engineering. The 101companies software chrestomathy is a community project with the attributes of a Research 2.0 infrastructure for various stakeholders in software language and technology communities. The core of 101companies combines a semantic wiki with confederated open-source repositories. We designed and developed an integrated ontology-based knowledge base about software languages and technologies. The knowledge is created by a community of contributors and supported with a running example and structured documentation. The complete ecosystem is exposed using Linked Data principles and equipped with additional metadata about individual artifacts. Within the context of the software chrestomathy we explored a new type of software architecture, the linguistic architecture, which targets the language and technology relationships within a software product and is based on megamodels. Our approach to documenting software systems is highly structured and makes use of the concepts of the newly developed megamodeling language MegaL. We “connect” an emerging ontology with the megamodeling artifacts to raise the cognitive value of the linguistic architecture.
Soil organic matter (SOM) is a key component responsible for the sequestration of organic molecules in soil and for the regulation of their mobility in the environment. The basic structure of SOM is a supramolecular assembly that responds dynamically to environmental factors and to the presence of interacting molecules. Despite advances in the understanding of sorption processes, knowledge of the relation between sorbate molecules, the SOM supramolecular structure and its dynamics is limited. An example of the dynamic nature of SOM is physicochemical matrix aging, which is responsible for the structural arrangement of SOM. The underlying process of physicochemical aging is the formation of water molecule bridges (WaMB) between functional groups of molecular segments. Since WaMB influence the stiffness of the SOM structure, it was hypothesized that the formation of WaMB contributes to the sequestration of organic molecules. However, this hypothesis had not been tested experimentally until now. Furthermore, knowledge about the influence of organic molecules on WaMB is based solely on computer modeling studies. In addition, the influence of organic molecules on some of the physical phases forming SOM is not well understood; in particular, the interactions between organic molecules and crystalline phases, represented by aliphatic crystallites, are only presumed. Thus, the investigation of these interactions in unfractionated SOM is of high importance.
In order to evaluate the involvement of WaMB in the sequestration of organic molecules and to increase our understanding of the interactions of organic chemicals with WaMB and aliphatic crystallites, the following hypotheses were tested experimentally. 1) Similarly to crystalline phases in synthetic polymers, aliphatic crystallites, as a part of SOM, cannot be penetrated by organic molecules. 2) The stability of WaMB is determined by the ability of the surrounding molecules to interact with the water forming the WaMB. 3) WaMB prevent organic molecules from leaving the SOM matrix and thus contribute to their immobilization. To test hypotheses 1 and 2, a set of experiments was prepared in which soils were treated with chosen chemicals. The interaction abilities of these chemicals were characterized using interaction parameters from Linear Solvation Energy Relationship theory. WaMB characteristics were monitored using Differential Scanning Calorimetry (DSC), which allows measuring the thermal stability of WaMB and the rigidity of the SOM matrix, the latter determined from the change in heat capacity. In addition, DSC and 13C NMR spectroscopy were used to assess the thermal properties and structure of the aliphatic crystallites. Spiking the samples with a model compound, phenol, and measuring its desorption allowed linking the parameters of the desorption kinetics with the WaMB characteristics.
The investigation showed that WaMB stability is significantly reduced by the presence of molecules with H-donor/acceptor interaction abilities. The matrix rigidity associated with WaMB was mainly influenced by the McGowan volume of the surrounding molecules, suggesting the importance of dispersion forces. The desorption kinetics of phenol followed a first-order model with two time constants, both of which were related to WaMB stability; this supports the hypothesis that WaMB contribute to the physical immobilization of organic molecules. The experiments targeting the crystallites revealed a structural change from the ordered to the disordered state when in contact with organic chemicals, manifested in a melting-point depression and a decrease in overall crystallinity. The described structural changes were caused by molecules interacting through specific as well as non-specific forces, which suggests that aliphatic crystallites can be penetrated and modified by molecules with a broad range of interaction abilities.
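A first-order desorption model with two time constants is a bi-exponential decay of the remaining sorbed fraction. The following sketch (synthetic data; the amplitudes and time constants are illustrative, not the thesis' measured values) shows how such a model can be fitted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-time-constant first-order desorption:
# remaining fraction S(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
def desorption(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic "measured" phenol desorption data with small noise.
t = np.linspace(0, 100, 50)
rng = np.random.default_rng(0)
obs = desorption(t, 0.6, 5.0, 0.4, 40.0) + rng.normal(0, 0.005, t.size)

# Fit the bi-exponential model; p0 gives rough starting guesses.
popt, _ = curve_fit(desorption, t, obs, p0=(0.5, 2.0, 0.5, 20.0))
a1, tau1, a2, tau2 = popt   # fast (tau1) and slow (tau2) desorption pools
```

The fast pool is typically interpreted as readily desorbing molecules and the slow pool as physically immobilized ones; in the thesis it is the time constants of such fits that correlate with WaMB stability.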
This work shows that the chosen organic molecules interact with constituents of SOM, as exemplified by WaMB and aliphatic crystallites, and cause measurable changes in their structure and properties. These findings indicate that the relevance of aliphatic crystallites for sorption in soil may have been underestimated. The results support the hypothesis that physicochemical matrix aging contributes significantly to the immobilization of organic chemicals in SOM.
We consider the variational discretization of three different optimal control problems.
The first is a parabolic optimal control problem governed by space-time measure controls. This problem has a nice sparsity structure, which motivates our aim to achieve maximal sparsity on the discrete level. Due to the measures on the right-hand side of the partial differential equation, we consider a very weak solution theory for the state equation and need an embedding into the continuous functions for the pairings to make sense. Furthermore, we employ Fenchel duality to formulate the predual problem and give results on the solution theory of both the predual and the primal problem. The duality is later also helpful for the derivation of algorithms, since the predual problem can be differentiated twice, so that we can apply a semismooth Newton method. We then retrieve the optimal control via the duality relations.
For the state discretization we use a Petrov-Galerkin method employing piecewise constant states and piecewise linear, continuous test functions in time. For the space discretization we choose piecewise linear, continuous functions. As a result, the controls are composed of Dirac measures in space-time, centered at points of the discrete space-time grid. We prove that the optimal discrete states and controls converge strongly in L^q and weakly-* in the space of measures M, respectively, to their smooth counterparts, where q ∈ (1, min{2, 1+2/d}] and d is the spatial dimension. The variational discrete version of the state equation with the above choice of spaces yields a Crank-Nicolson time stepping scheme with half a Rannacher smoothing step.
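For readers unfamiliar with the resulting time stepping, a minimal sketch of a plain Crank-Nicolson scheme for the 1D heat equation is given below (illustrative only; the thesis' scheme additionally prepends half a Rannacher smoothing step and treats measure data):

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0,1), homogeneous Dirichlet conditions.
n, dt, steps = 49, 1e-3, 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Standard second-difference approximation of the Laplacian.
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)

u = np.sin(np.pi * x)          # initial condition
M_impl = I - 0.5 * dt * A      # implicit half of the trapezoidal rule
M_expl = I + 0.5 * dt * A      # explicit half
for _ in range(steps):
    u = np.linalg.solve(M_impl, M_expl @ u)

# Compare with the exact solution exp(-pi^2 t) sin(pi x) at t = steps*dt.
err = np.max(np.abs(u - np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x)))
```

The scheme is second-order in time, which is one reason the variational discretization inherits good convergence behaviour; the Rannacher smoothing step restores stability for the rough (measure-valued) data present in the thesis' problem.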
Furthermore, we compare our approach to a full discretization of the corresponding control problem, precisely a discontinuous Galerkin method for the state discretization, where the discrete controls are piecewise constant in time and Dirac measures in space. Numerical experiments highlight the sparsity features of our discrete approach and verify the convergence results.
The second problem we analyze is a parabolic optimal control problem governed by bounded initial measure controls. Here, the cost functional consists of a tracking term corresponding to the observation of the state at final time. Instead of a regularization term for the control in the cost functional, we consider a bound on the measure norm of the initial control. As in the first problem we observe a sparsity structure, but here the control resides only in space at initial time, so we focus on the space discretization to achieve maximal sparsity of the control. Again, due to the initial measure in the partial differential equation, we rely on a very weak solution theory of the state equation.
We employ a dG(0) approximation of the state equation, i.e. we choose functions that are piecewise linear and continuous in space and piecewise constant in time for our ansatz and test spaces. Then, the variational discretization of the problem together with the optimality conditions induces maximal discrete sparsity of the initial control, i.e. Dirac measures in space. We present numerical experiments to illustrate our approach and investigate the sparsity structure.
As the third problem we choose an elliptic optimal control problem governed by functions of bounded variation (BV) in one space dimension. The cost functional consists of a tracking term for the state and a BV seminorm in terms of the derivative of the control. We derive a sparsity structure for the derivative of the BV control. Additionally, we utilize a mixed formulation for the state equation.
A variational discretization approach with a piecewise constant discretization of the state and a piecewise linear, continuous discretization of the adjoint state yields that the derivative of the control is a sum of Dirac measures. Consequently, the control is a piecewise constant function. Under a structural assumption we even obtain that the number of jumps of the control is finite. We prove error estimates for the variational discretization approach in combination with the mixed formulation of the state equation and confirm our findings in numerical experiments that display the convergence rate.
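The link between the sparsity of the derivative and the piecewise constant structure of the control can be written out explicitly (a sketch with hypothetical jump points x_i and jump heights c_i, not notation from the thesis):

```latex
u'(x) \;=\; \sum_{i=1}^{n} c_i \,\delta_{x_i}
\quad\Longrightarrow\quad
u(x) \;=\; u(0^{+}) \;+\; \sum_{i\,:\,x_i < x} c_i .
```

Integrating each Dirac measure produces a Heaviside jump of height c_i at x_i, so u is constant between consecutive jump points, which is exactly the piecewise constant structure obtained on the discrete level.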
In summary, we confirm the suitability of variational discretization for optimal control problems with measures that exhibit a sparsity structure. We are able to preserve the sparsity on the discrete level without discretizing the control variable.