Year of publication
- 2013 (23)
Document Type
- Doctoral Thesis (10)
- Bachelor Thesis (4)
- Part of Periodical (4)
- Master's Thesis (2)
- Conference Proceedings (1)
- Diploma Thesis (1)
- Habilitation (1)
Language
- English (23)
Keywords
- Pflanzenschutzmittel (3)
- ABox (1)
- Abduktion <Logik> (1)
- Agrochemikalien (1)
- Bach (1)
- Boden (1)
- Bodenchemie (1)
- Bodenökologie (1)
- C++ (1)
- Calculus (1)
- Cations (1)
- Chironomus riparius (1)
- Computational Toxicology (1)
- Compute Shader (1)
- Computergraphik (1)
- Computervisualistik (1)
- Crayfish (1)
- Crayfish plague (1)
- Deduktion (1)
- Defi-Now! (1)
- Defibrillator (1)
- Differential Scanning Calorimetry (1)
- Differential scanning calorimetry (1)
- E-KRHyper (1)
- E-KRHyper theorem prover (1)
- Edelkrebs (1)
- Emergenz (1)
- Endokrine Regulation (1)
- Englisch (1)
- Environmental Risk Assessment (1)
- Erste Hilfe (1)
- Fabric Simulation (1)
- First aid (1)
- Fledermäuse (1)
- Fluss (1)
- Fragebeantwortung (1)
- Gefrierpunktserniedrigung (1)
- Genetic diversity (1)
- Genetische Variabilität (1)
- Gewässer (1)
- Glasumwandlung (1)
- Glasübergang (1)
- Graphik (1)
- Hyaluronan (1)
- Hyaluronsäure (1)
- Hydratation (1)
- Hydration (1)
- Informatik (1)
- Integrated Model (1)
- Kation-Brücken (1)
- Kationen (1)
- Kognitive Linguistik (1)
- Konturfindung (1)
- Konzept (1)
- Krebspest (1)
- Landwirtschaft (1)
- Limnology (1)
- Line Space (1)
- Linespace (1)
- Linked Data Modeling (1)
- Logischer Schluss (1)
- Magnetis (1)
- Methode (1)
- Mikroorganismus (1)
- Mixture Toxicity (1)
- N-Body Simulation (1)
- N-Körper Simulation (1)
- NMR-Spektroskopie (1)
- Non-freezing water (1)
- Nuclear Magnetic Resonance (1)
- OWL (1)
- OpenGL (1)
- OpenGL Shading Language (1)
- Organische Bodensubstanz (1)
- Pestizid (1)
- Pestizide (1)
- Phylogeographie (1)
- Plasticization; Glass transition (1)
- Plastifizieren (1)
- Plastifizierung (1)
- Politik (1)
- Polysaccharide (1)
- Polysaccharides (1)
- Predictive Model (1)
- Programmierung (1)
- Prädikatenlogik (1)
- Präposition (1)
- RDF (1)
- Risikoabschätzung (1)
- Risikoanalyse (1)
- Risikomanagement (1)
- Schlussfolgern (1)
- Semantic Web (1)
- Simulation (1)
- Smartphone Applikation (1)
- Softwareergonomie (1)
- Steuerung (1)
- Stoffsimulation (1)
- Streams (1)
- Support System (1)
- Säugetiere (1)
- Süßwasserhaushalt (1)
- TBox (1)
- Text Analysis (1)
- Text Mining (1)
- Theorem prover (1)
- Theorembeweiser (1)
- Torf (1)
- Umweltchemikalie (1)
- Umwelttoxikologie (1)
- Umweltwissenschaften (1)
- Usability (1)
- Vocabulary Mapping (1)
- Wachstumsregler (1)
- Wirbellose (1)
- Wissensbasis (1)
- Zuckmücken (1)
- agent-based simulation (1)
- agriculture (1)
- aquatic ecotoxicology (1)
- automated theorem prover (1)
- bats (1)
- by-stander effect (1)
- cation bridges (1)
- cognitive linguistic approach (1)
- concept (1)
- ecological risk management (1)
- ecosystem functions (1)
- emergence (1)
- endocrine disrupting chemicals (1)
- endokrine Regulation (1)
- english prepositions (1)
- freshwater ecosystem (1)
- governance (1)
- invertebrates (1)
- knowledge base (1)
- landscape (1)
- life cycle test (1)
- mammals (1)
- microorganisms (1)
- minimum self-contained graphs (1)
- modelling (1)
- nicht gefrierbares Wasser (1)
- norm (1)
- peat (1)
- pesticide (1)
- pesticides (1)
- plant protection products (1)
- policy modelling (1)
- probabilistic (1)
- question answering (1)
- risk assessment (1)
- smartphone app (1)
- soil (1)
- soil organic matter (1)
- teaching (1)
- usability study (1)
- Ökosystem (1)
- Ökotoxikologie (1)
Chemical plant protection is an essential element of integrated pest management and hence of current crop production. The use of Plant Protection Products (PPPs) potentially involves ecological risk, which has to be characterised, assessed and managed.
For the coming years, an increasing demand for agricultural products is expected. At the same time, preserving our natural resources and biodiversity is of equally fundamental importance. The Ecosystem Services concept makes plain how closely our economic success and cultural progress are tied to protecting the environment. These distinct 'services' provide the foundation for defining ecological protection goals (Specific Protection Goals, SPGs), which in turn can guide the development of methods for ecological risk characterisation, assessment and management.
Ecological risk management (RM) of PPPs is a comprehensive process that spans different aspects and decision levels. RM is an implicit part of tiered risk assessment (RA) schemes and scenarios, but it also appears explicitly in the form of risk mitigation measures. At higher decision levels, RM takes further risks besides ecological risk into account (e.g., economic risk). Ecological risk characterisation can therefore include RM (mitigation measures) and can itself be part of higher-level RM decision-making in a broader Ecosystem Services context.
The aim of this thesis is to contribute to improved quantification of ecological risk as a basis for RA and RM. The initial general objective, formulated as "… to estimate the spatial and temporal extent of exposure and effects…", proved to be closely related to the forthcoming SPGs with their defined 'Risk Dimension'.
An initial exploration of the regulatory framework for ecological RA and RM of PPPs and their use, carried out in the present thesis, emphasised the value of risk characterisation at the landscape scale. The landscape scale provides the necessary and sufficient context, including abiotic and biotic processes, their interactions at different scales, and human activities. In particular, spatially (and temporally) explicit landscape-scale risk characterisation and RA can provide a direct basis for PPP-specific or generic RM. From the general need for a tiered landscape-scale context in risk characterisation, specific requirements for a landscape-scale model were developed in the present thesis, guided by the key objective of improved ecological risk quantification. In principle, for an adverse effect (Impact) to occur, a sensitive species and life stage must co-occur in space and time with a significant extent of exposure. The quantification of the Probability of an Impact occurring is therefore the basic requirement of the model. In a landscape-scale context, this means assessing the spatiotemporal distribution of species sensitivity and of potential exposure to the chemical.
The core functionality of the model should reflect the main problem structures in ecological risk characterisation, RA and RM, with a particular relationship to SPGs, while being adaptable to specific RA problems. This resulted in the development of a modelling framework (Xplicit-Framework), realised in the present thesis. The Xplicit-Framework provides the core functionality for spatiotemporally explicit and probabilistic risk characterisation, together with interfaces to external models and services, which are linked to the framework using specific adaptors (Associated-Models, e.g., exposure, eFate and effect models, or geodata services). From the Xplicit-Framework, and using Associated-Models, specific models adapted to given RA problems (Xplicit-Models) are derived.
Xplicit-Models are capable of propagating the variability (and uncertainty) of real-world agricultural and environmental conditions to exposure and effects using Monte Carlo methods and, hence, of introducing landscape-scale context into risk characterisation. Scale dependencies play a key role in landscape-scale processes and were taken into account, e.g., in defining and sampling Probability Density Functions (PDFs). Likewise, model outcomes are evaluated for risk characterisation at ecologically meaningful scales.
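The basic Monte Carlo logic can be illustrated with a minimal Python sketch (invented distributions and parameters, not the Xplicit-Framework itself): variability in exposure and sensitivity is sampled from PDFs, and the Probability of an Impact is estimated as the frequency with which presence of a sensitive life stage co-occurs with over-threshold exposure.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000  # Monte Carlo draws over, e.g., fields x days

    # Illustrative PDFs (assumptions): exposure from spray drift (lognormal)
    # and species sensitivity around an assumed EC50 (lognormal), both in ug/L.
    exposure = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=N)
    sensitivity = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=N)

    # A sensitive life stage is present only part of the year (assumption).
    present = rng.random(N) < 0.3

    # Impact requires co-occurrence of presence and exposure above sensitivity.
    impact = present & (exposure > sensitivity)
    print(f"P(Impact) = {impact.mean():.4f}")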
Xplicit-Models can be designed to explicitly address the risk dimensions of SPGs, whose definition depends on the RA problem and tier. Thus, the Xplicit approach allows for the stepwise introduction of landscape-scale context (factors and processes), e.g., starting from the definitions of current standard (lower-tier) RA by centring on a specific PPP use, while introducing the real-world landscape factors driving risk. With its generic and modular design, the Xplicit-Framework can also be employed from an ecological entity-centric perspective. As the predictive power of landscape-scale risk characterisation increases, Xplicit-Models may become part of an explicitly Ecosystem Services-oriented RM (e.g., at the cost/benefit level).
This thesis analyses the functionality and programming capabilities of compute shaders. For this purpose, chapter 2 gives an introduction to compute shaders by showing how they work and how they can be programmed. In addition, the interaction of compute shaders and OpenGL 4.3 is shown through two introductory examples. Chapter 3 describes an N-body simulation that has been implemented in order to show the computational power of compute shaders and the use of shared memory. Chapter 4 then shows how compute shaders can be used for physical simulations and where problems may arise. In chapter 5, a specially conceived and implemented algorithm for detecting lines in images is described and then compared with the Hough transform. Lastly, a final conclusion is drawn in chapter 6.
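As a hedged illustration of the dispatch model such an introduction covers (not code from the thesis), the following Python sketch compiles a trivial GLSL 4.30 compute shader and runs it over a storage buffer; it assumes the PyOpenGL and glfw packages and a GL 4.3-capable driver.

    import glfw
    import numpy as np
    from OpenGL.GL import (
        GL_COMPUTE_SHADER, GL_DYNAMIC_COPY, GL_SHADER_STORAGE_BARRIER_BIT,
        GL_SHADER_STORAGE_BUFFER, glBindBuffer, glBindBufferBase, glBufferData,
        glDispatchCompute, glGenBuffers, glMemoryBarrier, glUseProgram)
    from OpenGL.GL.shaders import compileProgram, compileShader

    SRC = """
    #version 430
    layout(local_size_x = 64) in;              // 64 invocations per work group
    layout(std430, binding = 0) buffer Data { float v[]; };
    void main() {
        uint i = gl_GlobalInvocationID.x;      // global thread index
        v[i] = 2.0 * v[i];                     // toy kernel: double each element
    }
    """

    glfw.init()
    glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 4)
    glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, context only
    win = glfw.create_window(64, 64, "compute", None, None)
    glfw.make_context_current(win)

    prog = compileProgram(compileShader(SRC, GL_COMPUTE_SHADER))
    data = np.arange(256, dtype=np.float32)
    ssbo = glGenBuffers(1)
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo)
    glBufferData(GL_SHADER_STORAGE_BUFFER, data.nbytes, data, GL_DYNAMIC_COPY)
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo)

    glUseProgram(prog)
    glDispatchCompute(len(data) // 64, 1, 1)     # 4 work groups of 64 threads
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT)  # writes visible to readback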
Tagging systems are intriguing dynamic systems in which users collaboratively index resources with so-called tags. In order to leverage the full potential of tagging systems, it is important to understand the relationship between the micro-level behavior of individual users and the macro-level properties of the whole tagging system. In this thesis, we present the Epistemic Dynamic Model, which bridges this gap by developing a theory of tagging systems. The model is based on the assumption that the combined influence of the users' shared background knowledge and the imitation of tag recommendations is sufficient to explain the emergence of the tag frequency distribution and the vocabulary growth in tagging systems. Both macro-level properties of tagging systems are closely related to the emergence of the shared community vocabulary.

With the help of the Epistemic Dynamic Model, we show that the general shape of the tag frequency distribution and of the vocabulary growth has its origin in the shared background knowledge of the users. Tag recommendations can then be used to selectively influence this general shape. In this thesis, we concentrate in particular on studying the influence of recommending a set of popular tags. Recommending popular tags adds a feedback mechanism between the vocabularies of individual users that increases the inter-indexer consistency of tag assignments. How does this influence the indexing quality in a tagging system? To answer this question, we investigate a methodology for measuring the inter-resource consistency of tag assignments. The inter-resource consistency is an indicator of indexing quality which correlates positively with the precision and recall of query results. It measures the degree to which the tag vectors of indexed resources reflect how users perceive the similarity between resources. We argue with our model, and show with a user experiment, that recommending popular tags decreases the inter-resource consistency in a tagging system. Furthermore, we show that recommending users their own previously used tags helps to increase it. Our measure of inter-resource consistency complements existing measures for the evaluation and comparison of tag recommendation algorithms, moving the focus to evaluating their influence on indexing quality.
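The tag-vector similarity that such a consistency measure builds on can be sketched as follows (a toy illustration with an invented vocabulary and resources; the thesis's actual metric is defined over user experiments):

    import numpy as np

    def tag_vector(tags, vocab):
        """Tag-frequency vector of one resource over a fixed vocabulary."""
        return np.array([tags.count(t) for t in vocab], dtype=float)

    def cosine(u, v):
        # Assumes non-zero vectors, i.e., each resource carries some tags.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical resources indexed by several users' tag assignments.
    vocab = ["python", "tutorial", "web", "semantic", "rdf"]
    r1 = ["python", "tutorial", "python"]
    r2 = ["python", "tutorial"]
    print(cosine(tag_vector(r1, vocab), tag_vector(r2, vocab)))  # near 1.0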
This paper originates from the FP6 project "Emergence in the Loop (EMIL)" which explores the emergence of norms in artificial societies. Part of work package 3 of this project is a simulator that allows for simulation experiments in different scenarios, one of which is collaborative writing. The agents in this still prototypical implementation are able to perform certain actions, such as writing short texts, submitting them to a central collection of texts (the "encyclopaedia") or adding their texts to texts formerly prepared by other agents. At the same time they are able to comment upon others' texts, for instance checking for correct spelling, for double entries in the encyclopaedia or for plagiarisms. Findings of this kind lead to reproaching the original authors of blamable texts. Under certain conditions blamable activities are no longer performed after some time.
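A minimal agent-based sketch of the sanction dynamics just described (parameters and learning rule are invented for illustration, not taken from the project's simulator): agents act, blamable actions are sometimes reproached, and each reproach lowers the propensity for blamable behaviour until it effectively disappears.

    import random

    random.seed(1)

    class Agent:
        def __init__(self):
            self.p_blamable = 0.5  # initial probability of a blamable action

        def act(self):
            return "blamable" if random.random() < self.p_blamable else "ok"

        def sanction(self):
            self.p_blamable *= 0.8  # each reproach lowers the propensity

    agents = [Agent() for _ in range(50)]
    for step in range(100):
        for a in agents:
            if a.act() == "blamable" and random.random() < 0.6:  # 60% detected
                a.sanction()

    # Mean propensity collapses toward zero: the norm has stabilised.
    print(sum(a.p_blamable for a in agents) / len(agents))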
This paper presents a method for the evolution of SHI ABoxes which is based on a compilation technique for the knowledge base. For this, the ABox is regarded as an interpretation of the TBox which is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, the insertion of assertions as well as the repair of inconsistent ABoxes is addressed. For the computation of the actions necessary for deletion, insertion and repair, the E-KRHyper theorem prover is used.
Studies on the toxicity of chemical mixtures find that components at levels below their no-observed-effect concentrations (NOECs) may cause toxicity through the combined effects of the mixed chemicals. However, chemical risk assessment frequently focuses on individual substances, although most living organisms are substantially exposed to chemical mixtures rather than to single substances. The additive-toxicity concepts of concentration addition (CA) and independent action (IA) are often applied to predict the mixture toxicity of similarly and dissimilarly acting chemicals, respectively. However, living organisms and the environment may be exposed to both types of chemicals at the same time and location. In addition, experimental acquisition of toxicity data for every conceivable mixture is infeasible, since the number of chemical combinations is extremely large. Therefore, an integrated model needs to be developed that predicts mixture toxicity on the basis of single mixture components with various modes of toxic action (MoAs). The objectives of the present study were to analyze the challenges in predicting mixture toxicity in the environment and to develop integrated models that overcome the limitations of the existing prediction models for estimating the toxicity of non-interactive mixtures through computational models. For these goals, four sub-topics were addressed. Firstly, the applicable domains and limitations of existing integrated models were analyzed and grouped into three categories. Current approaches for assessing mixture toxicity exist, but a new research concept is needed to overcome the challenges associated with them, which recent studies have begun to address. These approaches are discussed with particular emphasis on studies involving computational approaches to predict the toxicity of chemical mixtures based on the toxicological data of the individual chemicals. Secondly, through a case study and a computational simulation, it was found that the Key Critical Component (KCC) and Composite Reciprocal (CR) methods (as described in the European Union (EU) draft technical guidance notes for calculating the Predicted No Effect Concentration (PNEC) and the Derived No Effect Level (DNEL) of mixtures) can yield significantly different results. As the third and fourth sub-topics of this study, the following two integrated addition models were developed and successfully applied to overcome the inherent limitations of the CA and IA models, each of which theoretically applies only to either similarly or dissimilarly acting chemicals: i) a Partial Least Squares-Based Integrated Addition Model (PLS-IAM), and ii) a Quantitative Structure-Activity Relationship-Based Two-Stage Prediction (QSAR-TSP) model. It was shown that PLS-IAM may be useful for estimating mixture toxicity when toxicity data of similar mixtures with the same compositions are available. The QSAR-TSP model showed the potential to overcome a critical limitation of the conventional TSP model, which requires knowledge of the MoAs of all chemicals. Overall, this study demonstrated the good potential of the advanced integrated models (e.g., PLS-IAM and QSAR-TSP) for handling non-interactive mixtures of constituents with different MoAs, increasing the reliability of conventional models and simplifying the procedure for the risk assessment of mixtures.
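For reference, the two standard models that the integrated approaches build on can be sketched as follows (the concentrations and effect levels are hypothetical; the formulas themselves are the textbook CA and IA definitions):

    import numpy as np

    def concentration_addition(fractions, ecx):
        """CA: ECx of a mixture of similarly acting chemicals.
        fractions: mixture fractions p_i (summing to 1);
        ecx: single-substance ECx_i values."""
        return 1.0 / np.sum(np.asarray(fractions) / np.asarray(ecx))

    def independent_action(effects):
        """IA: combined effect of dissimilarly acting chemicals.
        effects: single-substance effect fractions E_i(c_i) in [0, 1]."""
        return 1.0 - np.prod(1.0 - np.asarray(effects))

    # Hypothetical three-component mixture.
    print(concentration_addition([0.5, 0.3, 0.2], [10.0, 5.0, 2.0]))  # ~4.76
    print(independent_action([0.10, 0.05, 0.20]))                     # ~0.316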
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect of these guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features that help an ontology engineer model a Linked Data dataset, discuss existing tools and services with respect to these features, and propose LOVER, a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporates all types of support features and illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. The recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g., based on a best string match or on schema information from other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it with a real-life example from the social sciences domain.
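The string-match component of such a recommendation step might look like the following sketch (the mini-index of terms and the scoring are invented for illustration; LOVER's actual multimodal search over the Linked Open Data cloud is far richer):

    from difflib import SequenceMatcher

    # Toy index of (vocabulary, term) pairs; a real system would query
    # vocabularies actively used in the Linked Open Data cloud.
    TERMS = [
        ("foaf", "Person"), ("foaf", "Organization"),
        ("dcterms", "creator"), ("schema", "Person"), ("skos", "Concept"),
    ]

    def recommend(query, k=3):
        """Rank known vocabulary terms by string similarity to the query."""
        scored = [(SequenceMatcher(None, query.lower(), t.lower()).ratio(), v, t)
                  for v, t in TERMS]
        return sorted(scored, reverse=True)[:k]

    print(recommend("person"))  # foaf:Person and schema:Person rank first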
This thesis describes the implementation of a path-planning algorithm for multi-axle vehicles using machine learning algorithms. For that purpose, a general overview of genetic algorithms is given and alternative machine learning algorithms are briefly explained. The software developed for this purpose is based on the EZSystem simulation software developed by the AG Echtzeitsysteme at the University of Koblenz-Landau and on a path correction algorithm developed by Christian Schwarz, which is also detailed in this thesis. This includes a description of the vehicle used in the simulations. Genetic algorithms as a solution for path planning in complex scenarios are then evaluated based on the results of the developed simulation software and compared to alternative, non-machine-learning solutions, which are also briefly presented.
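A minimal genetic-algorithm skeleton of the kind such a planner builds on is sketched below (the bit-string encoding, placeholder fitness function and parameters are assumptions for illustration; the thesis's planner instead scores simulated multi-axle vehicle paths):

    import random

    random.seed(0)
    GOAL = [1, 0, 1, 1, 0, 1, 0, 0]  # stand-in for a "good" steering sequence

    def fitness(genome):
        # Placeholder: count matching decisions; a path planner would instead
        # simulate the vehicle and score the resulting path.
        return sum(g == t for g, t in zip(genome, GOAL))

    def evolve(pop_size=30, generations=40, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in GOAL] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]               # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(GOAL))     # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ (random.random() < p_mut) for g in child]  # mutate
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())  # converges toward GOAL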
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSGs), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
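The hash-then-sign idea behind such a framework can be illustrated with a stdlib-only Python sketch (the sort-based canonicalization and the HMAC stand in for the framework's configurable graph canonicalization and signature functions, and the triples are invented):

    import hashlib, hmac

    KEY = b"demo-key"  # illustrative shared secret; the framework is
                       # parameterized over real signature schemes

    def canonicalize(triples):
        # Toy canonicalization: sorted N-Triples lines. MSG-level signing
        # would first partition the graph into minimum self-contained graphs.
        return "\n".join(sorted(triples)).encode()

    def sign(triples):
        return hmac.new(KEY, canonicalize(triples), hashlib.sha256).hexdigest()

    graph = [
        '<http://example.org/a> <http://example.org/p> "x" .',
        '<http://example.org/a> <http://example.org/q> "y" .',
    ]
    sig1 = sign(graph)

    # Iterative signing: the signature is itself added as graph data and the
    # enlarged graph is signed again by the next party, at constant overhead.
    graph.append(f'<http://example.org/g> <http://example.org/sig> "{sig1}" .')
    sig2 = sign(graph)
    print(sig1, sig2, sep="\n")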
The structure of soil organic matter (SOM) has been a topic of intense discussion in the scientific community for several decades. The most widely discussed models, among many, are the polymer model and the supramolecular model. While the former considers SOM as macromolecules consisting of amorphous and crystalline domains, the latter explains SOM as a physicochemical entity dominated, at the secondary level, by weak hydrophobic and H-bond interactions which hold the individual molecules of the primary structure together. The weak forces at the secondary level impart the characteristic mobility of SOM. A very important consequence of this multidimensional formulation is that, apart from the chemical composition, the physicochemical structure plays a crucial role in most biogeochemical functions of SOM. The recently introduced concept of cation- and water-molecule-mediated bridges between OM molecular segments (CaB and WaMB, respectively) evolved from this physicochemical understanding of SOM structure. Even though several lines of indirect evidence for CaB and WaMB have been produced in recent years, no clear-cut understanding of these processes has been achieved yet. Experimental difficulty due to the overlapping effects of equally important CaB-governing parameters, such as pH and competing cations, poses a huge challenge for investigating CaB-related influences. This thesis therefore aims to validate an experimental set-up for inducing CaB within OM structures and to assess it from various chemical and physicochemical aspects.
The method involved the removal of omnipresent cations and the adjustment of pH before cation addition. This helped to separate pH effects from cation effects. Based on results obtained on two different types of organic matter, it can be deduced that multivalent cations can cross-link SOM, provided that the functional group density of the SOM material is high enough for the functional groups to be arranged in sufficient spatial proximity to one another. Physicochemical structural reorganisation during aging causes the formation of more and/or stronger CaB and WaMB. As with inducing CaB directly after cation treatment, cation size and valency were found to be determinant for the aging effect as well. A strongly cross-linked system at the outset is less vulnerable to structural changes and undergoes aging with lower intensity than an initially weakly cross-linked system. Responsible for the structural changes is the inherent mobility of SOM within its physicochemical assemblage. Thus, information on the structural requirements of CaB and its consequences for OM matrix rigidity helps to gain insight into the physicochemical SOM structure. Additionally, the organic matter quality (assessed by thermal analysis) and the pore structure of SOM formed in a set of artificial soils showed that mineral materials are important for the chemical nature of SOM molecules, but not for the physical structure of organo-mineral associations, at least after several months of SOM development.
Furthermore, nanothermal analysis using atomic force microscopy (AFM-nTA) was applied to soils for the first time to reveal nanoscale thermal properties and their spatial distribution at nano- and micrometer scales. This helped to identify physicochemical processes, such as the disruption of WaMB, in low-organic soils, where bulk methods fail due to their low sensitivity. Further, various types of materials constituting soils were distinguished at high resolution by advanced application of the method in combination with other AFM parameters. Attempts were made to identify various materials using defined test materials. Above all, the method is capable of revealing microspatial heterogeneity on sample surfaces, which could, for example, help in understanding process-relevant hotspots.
This thesis thus contributes to the scientific understanding of physicochemical structural dynamics via cross-linking by cations and via nanoscale thermal properties. The direct investigation of CaB demonstrated here can potentially yield a big leap in knowledge about this interaction. The observed aging effects complement the supramolecular understanding of SOM. By introducing nanothermal analysis to the field of soil science, it becomes possible to address the problem of heterogeneity and the spatial distribution of thermal characteristics. Another important achievement of AFM-nTA is that it can be used to detect low-intensity physicochemical processes.