In the last decades, it became evident that the world is facing an unprecedented, human-induced global biodiversity crisis, with amphibians being one of the most threatened species groups. About 41% of amphibian species are classified as endangered by the IUCN, but even in amphibian species listed as "least concern", population declines can be observed at the local level. With land-use change and agrochemicals (i.e. pesticides), two of the main drivers of this amphibian decline are directly linked to intensive agriculture, which is the dominant landscape type in large parts of Europe. Thus, understanding the situation of amphibians in the agricultural landscape is crucial for conservation measures. In the present thesis, I investigated the effects of viticulture on amphibian populations around Landau in der Pfalz (Germany) in terms of habitat use, pesticide exposure, biometric traits, and genetic and age structure. From the perspective of amphibians, land-use change in agricultural landscapes usually means the destruction of habitats, which often leads to landscape fragmentation. I therefore asked whether vineyards also fragment the landscape and whether the pesticides frequently used in viticulture must be considered an additional factor, i.e. whether there is a chemical landscape fragmentation. Using telemetry, I could show that common toads (Bufo bufo) can be found directly in vineyards, but that they tend to avoid them as habitat. Analysing the genetic structure of common frogs (Rana temporaria) revealed that vineyards must be considered a barrier for amphibians. To identify whether pesticides contribute to the resulting landscape fragmentation, I conducted an arena choice experiment in the laboratory, in which I found evidence for an avoidance of pesticide-contaminated soil. Such an avoidance could be one of the underlying reasons for a potential chemical landscape fragmentation.
By combining telemetry data with information about pesticide applications from local wine growers, I could show that a large proportion of common toads is likely to come into contact with pesticides. Further, I demonstrated that the agricultural landscape, probably due to the application of pesticides, can have negative effects on the reproductive capacity of common toads. By studying palmate newts (Lissotriton helveticus), I found that adult newts from agricultural ponds are smaller than those from forest ponds. As I did not find differences in age structure and growth, these differences might be carry-over effects from earlier life stages. While agricultural ponds might be suitable habitats for adult palmate newts, the potential carry-over effect indicates suboptimal conditions for larvae and/or juveniles. I conclude that the best management measure for sustaining amphibians in the agricultural landscape would be a heterogeneous cultural landscape with a mosaic of different habitat patches managed without pesticides, or at least with a reduced amount of them. Green corridors between populations and different habitats would allow migrating individuals to avoid agricultural and thus pesticide-contaminated areas. This would reduce the pesticide exposure risk of amphibians while preventing the fragmentation of the landscape and thus the isolation of populations.
Graphs are known to be a good representation of structured data. TGraphs, which are typed, attributed, ordered, and directed graphs, are a very general kind of graph that can be used in many domains. The Java Graph Laboratory (JGraLab) provides an efficient implementation of TGraphs with all their properties. JGraLab ships with many features, including a query language (GReQL2) for extracting data from a graph. However, it lacks a generic library of important common graph algorithms. This study thesis extends JGraLab with a generic algorithm library called Algolib, which provides a generic and extensible implementation of several important common graph algorithms. The major aspects of this work are the generic nature of Algolib, its extensibility, and the software engineering methods used to achieve both. Algolib is designed to be extensible in two ways: existing algorithms can be extended to solve specialized problems, and further algorithms can easily be added to the library.
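The extensibility idea described above can be sketched with a visitor-based traversal: the generic algorithm stays fixed while subclasses override hooks to specialize it. This is a minimal Python illustration of the design pattern, not JGraLab's actual Java API; all class names here are invented.

```python
# Sketch of visitor-based extensibility for graph algorithms, in the spirit
# of Algolib. Names (DFSVisitor, DepthFirstSearch) are illustrative only.

class DFSVisitor:
    """Base visitor: subclasses override hooks to specialize the traversal."""
    def visit_vertex(self, v): pass
    def visit_tree_edge(self, u, v): pass

class DepthFirstSearch:
    def __init__(self, graph, visitor):
        self.graph = graph          # adjacency dict: vertex -> list of successors
        self.visitor = visitor
        self.visited = set()

    def run(self, start):
        self._dfs(start)

    def _dfs(self, u):
        self.visited.add(u)
        self.visitor.visit_vertex(u)
        for v in self.graph.get(u, []):
            if v not in self.visited:
                self.visitor.visit_tree_edge(u, v)
                self._dfs(v)

# Specializing the generic algorithm: collect vertices in preorder.
class PreorderCollector(DFSVisitor):
    def __init__(self):
        self.order = []
    def visit_vertex(self, v):
        self.order.append(v)

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
collector = PreorderCollector()
DepthFirstSearch(g, collector).run("a")
print(collector.order)  # ['a', 'b', 'd', 'c']
```

A new algorithm is added by writing another traversal class; an existing one is specialized by subclassing its visitor, which matches the two extension axes named in the abstract.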
The use of agricultural plastic covers has become common practice for its agronomic benefits such as improving yields and crop quality, managing harvest times better, and increasing pesticide and water use efficiency. However, plastic covers are suspected of partially breaking down into smaller debris and thereby contributing to soil pollution with microplastics. A better understanding of the sources and fate of plastic debris in terrestrial systems has so far been hindered by the lack of adequate analytical techniques for the mass-based and polymer-selective quantification of plastic debris in soil. The aim of this dissertation was thus to assess, develop, and validate thermoanalytical methods for the mass-based quantification of relevant polymers in and around agricultural fields previously covered with fleeces, perforated foils, and plastic mulches. Thermogravimetry/mass spectrometry (TGA/MS) enabled direct plastic analyses of 50 mg of soil without any sample preparation. With polyethylene terephthalate (PET) as a preliminary model, the method limit of detection (LOD) was 0.7 g kg−1. However, the lack of chromatographic separation complicated the quantification of polymer mixtures. Therefore, a pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) method was developed that additionally exploited the selective solubility of polymers in specific solvents prior to analysis. By dissolving polyethylene (PE), polypropylene (PP), and polystyrene (PS) in a mixture of 1,2,4-trichlorobenzene and p-xylene after density separation, up to 50 g of soil became amenable to routine plastic analysis. Method LODs were 0.7–3.3 mg kg−1, and the recovery of 20 mg kg−1 PE, PP, and PS from a reference loamy sand was 86–105%. In the reference silty clay, however, poor PS recoveries, potentially induced by the additional separation step, suggested a qualitative evaluation of PS. Yet, the new solvent-based Py-GC/MS method enabled a first exploratory screening of plastic-covered soil.
It revealed PE, PP, and PS contents above LOD in six of eight fields (6% of all samples). In three fields, PE levels of 3–35 mg kg−1 were associated with the use of 40 μm thin perforated foils. By contrast, 50 μm PE films were not shown to induce plastic levels above LOD. PP and PS contents of 5–19 mg kg−1 were restricted to single observations in four fields and potentially originated from littering. The results suggest that the short-term use of thicker and more durable plastic covers should be preferred to limit plastic emissions and accumulation in soil. By providing mass-based information on the distribution of the three most common plastics in agricultural soil, this work may facilitate comparisons with modeling and effect data and thus contribute to a better risk assessment and regulation of plastics. However, the fate of plastic debris in the terrestrial environment remains incompletely understood and needs to be scrutinized in future, more systematic research. This should include the study of aging processes, the interaction of plastics with other organic and inorganic compounds, and the environmental impact of biodegradable plastics and nanoplastics.
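The screening logic used above, flagging only those polymer contents that exceed the polymer-specific method LOD, can be sketched as follows. The per-polymer LOD values are assumed example values within the reported 0.7–3.3 mg kg−1 range, and the sample contents are invented for illustration.

```python
# Illustrative LOD screening for the three polymers quantified in the study.
# LODs (mg/kg) are assumed values within the reported 0.7-3.3 mg/kg range.

LOD = {"PE": 0.7, "PP": 1.5, "PS": 3.3}  # mg/kg, hypothetical per-polymer LODs

def above_lod(sample):
    """Return the polymers whose measured content exceeds the method LOD."""
    return {p: c for p, c in sample.items() if c > LOD[p]}

field_sample = {"PE": 35.0, "PP": 0.9, "PS": 5.2}  # mg/kg, invented contents
print(above_lod(field_sample))  # {'PE': 35.0, 'PS': 5.2}
```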
The loss of biodiversity is recognised on a global scale, and also in the anthropogenic landscapes used for agriculture, which now cover almost 50% of the global terrestrial land surface. In agriculture, pesticides, i.e. biologically active chemicals, are deliberately applied to control pests, diseases and weeds in the cropped areas. The quantification of remaining semi-natural structures such as field margins and hedges is a prerequisite to understanding the impact of pesticides on biodiversity, since these structures represent habitats for many organisms in agricultural landscapes. The presence of organisms in these habitats and crops is required to obtain an estimate of their potential pesticide exposure. In this thesis I present studies on animal groups so far not addressed in risk assessment procedures for the regulation of pesticides, such as amphibians, moths and bats. For all groups it becomes apparent that they are present in agricultural landscapes and potentially coincide with pesticide applications, indicating a risk. Risk quantification also requires data on the sensitivity of organisms, and here data for plants, amphibians and bees are presented. Effects translating to the community level were studied for herbicide, insecticide and fertiliser treatments in a natural system. After three years the treatments resulted in simplified plant communities with lower species numbers and a reduction in flowering plants. This reduction of flowers is used as an example of an indirect effect and was especially obvious for the effect of a herbicide on the common buttercup. Sublethal herbicide effects on a plant translated into an impact on feeding caterpillars, indicating a reduction in food quality. Insecticide inputs realistic for field margins also reduced moth pollination of white campion flowers by 30%.
These indirect effects via distortions of food-web characteristics play a critical role in understanding declines in organism groups; however, they are so far not accounted for in pesticide risk assessment schemes. The current intense use of pesticides in agriculture and their inherent toxicity may lead to a chemical landscape fragmentation, in which populations may no longer be connected. Source-sink dynamics are important ecological processes, and as a final result not only population size but also genetic population structure might be affected. Including potential pesticide impacts as costs in a model for amphibians migrating to breeding ponds in vineyards in Rhineland-Palatinate indicated the isolation of the investigated populations. A first validation by analyzing the population structure of the European common frog confirmed the model prediction for some sites. For the regulation of pesticides in Europe, a risk assessment is required, and for the organisms of the terrestrial habitat a multitude of guidance documents is in place or has recently been developed or improved. The results of the presented research indicate that wild plants, and especially their reproductive flower stage, are highly sensitive and that risks are underestimated. Population recovery of arthropods needs a re-evaluation at the landscape scale, and the addition of amphibian risk assessment to regulation procedures is suggested. However, developing or adapting risk assessment procedures and test systems is a time-consuming task, and therefore the establishment of risk management options is a pragmatic alternative with immediate effects. Artificial wetlands in the agricultural landscape proved to be important foraging sites for bats, and their creation could mitigate negative pesticide effects. The integration of direct and indirect effects in a risk assessment scheme for all organism groups, addressing also the landscape scale and pesticide mixtures, requires a long development time.
The establishment of model landscapes where management options and integrated pest management are applied on a larger scale would allow us to study pesticide effects in a realistic scenario and to develop an approach for the agriculture of the future.
Despite the inception of new technologies at a breakneck pace, many analytics projects fail, mainly due to the use of incompatible development methodologies. As big data analytics projects differ from software development projects, the methodologies used in software development cannot be applied in the same fashion to analytics projects. Traditional agile project management approaches do not consider the complexities involved in analytics. In this thesis, the challenges involved in generalizing the application of agile methodologies are evaluated, and suitable agile frameworks that are more compatible with analytics projects are explored and recommended. The standard practices and approaches currently applied in the industry for analytics projects are discussed with respect to enablers and success factors for agile adoption. Finally, after a comprehensive discussion and analysis of the problem and its complexities, a framework is recommended that copes best with the discussed challenges and is generally well suited for most data-intensive analytics projects.
The formulation of the decoding problem for linear block codes as an integer program (IP) with a rather tight linear programming (LP) relaxation has made a central part of channel coding accessible for the theory and methods of mathematical optimization, especially integer programming, polyhedral combinatorics and also algorithmic graph theory, since the important class of turbo codes exhibits an inherent graphical structure. We present several novel models, algorithms and theoretical results for error-correction decoding based on mathematical optimization. Our contribution includes a partly combinatorial LP decoder for turbo codes, a fast branch-and-cut algorithm for maximum-likelihood (ML) decoding of arbitrary binary linear codes, a theoretical analysis of the LP decoder's performance for 3-dimensional turbo codes, compact IP models for various heuristic algorithms as well as ML decoding in combination with higher-order modulation, and, finally, first steps towards an implementation of the LP decoder in specialized hardware. The scientific contributions are presented in the form of seven revised reprints of papers that appeared in peer-reviewed international journals or conference proceedings. They are accompanied by an extensive introductory part that reviews the basics of mathematical optimization, coding theory, and the previous results on LP decoding that we rely on afterwards.
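The decoding-as-optimization view can be made concrete. The following formulation is the standard Feldman-style LP decoding setup common in this literature, given here as background rather than quoted from the thesis itself:

```latex
% ML decoding of a binary linear code C = \{x \in \{0,1\}^n : Hx \equiv 0 \pmod 2\}
% with channel log-likelihood ratios \lambda \in \mathbb{R}^n:
\min_{x} \; \lambda^{\top} x \quad \text{s.t.} \quad x \in C .
% The LP decoder relaxes the feasible set to the fundamental polytope
% P = \bigcap_{j} \operatorname{conv}(C_j) \supseteq \operatorname{conv}(C),
% where C_j is the set of binary vectors satisfying the j-th parity check alone:
\min_{x} \; \lambda^{\top} x \quad \text{s.t.} \quad x \in P .
% If the LP optimum is integral, it is guaranteed to be the ML codeword
% (the "ML certificate" property of LP decoding).
```

The branch-and-cut ML decoder mentioned above tightens exactly this kind of relaxation with cutting planes and branching until an integral optimum is reached.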
Advanced Auditing of Inconsistencies in Declarative Process Models using Clustering Algorithms
(2021)
To have a compliant business process in an organization, it is essential to ensure a consistent process. Whether a process is consistent depends on its business rules: if the process adheres to these rules, it is compliant and efficient. For large processes, checking this is quite a challenge, and an inconsistency can quickly render a process non-functional, which is a severe problem for organizations. This thesis presents a novel auditing approach for handling inconsistencies from a post-execution perspective. The tool identifies run-time inconsistencies and visualizes them in heatmaps. These plots help modelers observe the most problematic constraints and make the right remodeling decisions. Many variables can be set in the tool to obtain different heatmap representations that cover all perspectives of the problem. The heatmaps sort and show the run-time inconsistency patterns so that modelers can decide which constraints are highly problematic and should be addressed in a remodeling. The tool can be applied to real-life data sets with reasonable run-time.
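The aggregation step behind such a heatmap can be sketched as counting run-time violations per constraint and ranking constraints from most to least problematic. The event-log structure and the DECLARE-style constraint names below are invented for illustration, and the actual plotting (e.g. with matplotlib) is omitted.

```python
# Count run-time constraint violations observed per trace and rank the
# constraints, mirroring the sorting the heatmaps perform. Data is invented.

from collections import Counter

violations = [  # (trace_id, violated_constraint) observations
    ("t1", "Response(A,B)"), ("t1", "Response(A,B)"),
    ("t2", "Precedence(C,D)"), ("t2", "Response(A,B)"),
]

counts = Counter(c for _, c in violations)
# Constraints ordered from most to least problematic:
ranking = [c for c, _ in counts.most_common()]
print(ranking)  # ['Response(A,B)', 'Precedence(C,D)']
```

Each row of the resulting ranking would become one row of the heatmap, with cell colors encoding the per-trace violation counts.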
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including: compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. We then present the developed ontology and discuss it with respect to our requirements.
Research has shown that people recognize personality, gender, inner states and many other items of information by simply observing human motion. Expressive human motion therefore seems to be a valuable non-verbal communication channel. In the quest for more believable characters in virtual three-dimensional simulations, a great amount of visual realism has been achieved during the last decades. However, while interacting with synthetic characters in real-time simulations, human users often still sense an unnatural stiffness. This disturbance in believability is generally caused by a lack of human behavior simulation. Expressive motions, which convey personality and emotional states, can be of great help in creating more plausible and life-like characters. This thesis explores the feasibility of automatically generating emotionally expressive animations from given neutral character motions. Such research is required because common animation methods, such as manual modeling or motion capturing, are too costly to create all variations of motion needed for interactive character behavior. To investigate how emotions influence human motion, relevant literature from various research fields was reviewed and certain motion rules and features were extracted. These movement domains were validated in a motion analysis and implemented, in an exemplary manner, in a system capable of automating the expression of angry, sad and happy states in a virtual character through its body language. Finally, the results were evaluated in a user test.
Successful export sectors in manufacturing and agribusiness are important drivers of structural transformation in Sub-Sahara African countries. Backed by industrial policies and active state involvement, a small number of successful productive export sectors has emerged in Sub-Saharan Africa. This thesis asks the question: How do politics shape the promotion of export-driven industrialisation and firm-level upgrading in Sub-Saharan Africa? It exemplifies this question with an in-depth, qualitative study of the cashew processing industry in Mozambique in the period from 1991 until 2019. Mozambique used to be one of the world’s largest producers and processors of cashew nuts in the 1960s and 1970s. At the end of the 20th century, the cashew processing industry broke down completely but has re-emerged as one of the country’s few successful agro-processing exports.
The thesis draws on theoretical approaches from the fields of political science, notably the political settlements framework, global value chain analysis and the research on technological capabilities to explore why the Mozambican Government supported the cashew processing industry and how Mozambican cashew processors acquired the technological capabilities needed to access the global cashew value chain and to upgrade. It makes an important theoretical contribution by linking the political settlements framework and the literature on upgrading in global value chains to study how politics shaped productive sector promotion and upgrading in the Mozambican cashew processing industry. The findings of the thesis are based on extensive primary data, including 58 expert interviews and 10 firm surveys, that was collected in Mozambique in 2018 as well as a broad base of secondary literature.
The thesis argues that the Mozambican Government supported the cashew processing industry because it became important for the Government’s political survival. Promoting the cashew sector formed part of an electoral strategy for the ruling FRELIMO coalition and a means to keep FRELIMO factions united by offering economic opportunities to key constituencies. In 1999, it adopted a protectionist cashew law that created strong incentives for cashew processing in Mozambique. This not only facilitated the re-emergence of the cashew processing industry after its breakdown. The law and the active involvement of the National Cashew Institute (INCAJU) also affected the governance of the local cashew value chain, the creation of backward linkages, and the upgrading paths of cashew processors. The findings of the thesis suggest that the cashew law reduced the pressure on the cashew processing industry to upgrade. The law further created opportunities for formal and informal rent creation for members of the political elite and lower level FRELIMO officials that prevented a far-reaching reform of the law. The thesis shows that international buyers do not promote upgrading among Sub-Sahara African firms in global value chains with market-based or modular governance. Moreover, firms that operate in countries where industrial policies are not enforced effectively cannot draw on the support of government institutions to enhance their capabilities and to upgrade. Firms therefore mainly depended on costly learning channels at firm level, e.g. learning by doing or hiring skilled labour, and/or on technical assistance from donors to build the technological capabilities needed to access global value chains and to remain competitive.
The findings of the thesis suggest that researchers, governments, development practitioners and consultants need to rethink their understanding of upgrading in GVCs in four ways. First, they need to move away from understanding upgrading in terms of moving towards more complex, higher value-added activities in GVCs (functional upgrading). Instead, it is important to consider the potential of other, more realistic types of upgrading for firms in low-income countries, such as reducing risks by diversifying suppliers and buyers or increasing rewards by making production processes more efficient. Second, they need to replace an overly positive view of upgrading that neglects possible side-effects at sector and/or country level. Third, GVC participation on its own does not promote upgrading among local supplier firms in Sub-Saharan Africa. The interests of lead firms and Sub-Saharan African supplier firms may be unaligned or even conflicting. Targeted industrial policies and the creation of institutions that effectively promote capability building among firms therefore become even more important. Finally, upgrading needs to be understood as a process that is shaped not only by interactions between firms, but also by local domestic politics.
The findings of the thesis are highly relevant for scholars from the fields of political science, development studies, and economics. Its practical implications and tools, e.g. a technological capabilities matrix for the cashew industry, are of interest for development practitioners, members of public institutions in Sub-Sahara African countries, local entrepreneurs, and representatives of local business associations that are involved in promoting export sectors and upgrading among local firms.
With the emergence of current-generation head-mounted displays (HMDs), virtual reality (VR) is regaining much interest in the field of medical imaging and diagnosis. Room-scale exploration of CT or MRI data in virtual reality feels like an intuitive application. However, in VR, retaining a high frame rate is more critical than for conventional user interaction seated in front of a screen. There is strong scientific evidence suggesting that low frame rates and high latency have a strong influence on the appearance of cybersickness. This thesis explores two practical approaches to overcome the high computational cost of volume rendering for virtual reality. One lies in exploiting coherence properties of the especially costly stereoscopic rendering setup. The main contribution is the development and evaluation of a novel acceleration technique for stereoscopic GPU ray casting. Additionally, an asynchronous rendering approach is pursued to minimize the amount of latency in the system. A selection of image warping techniques has been implemented and evaluated methodically, assessing their applicability to VR volume rendering.
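The frame-rate pressure described above can be quantified with a simple budget calculation: at a given HMD refresh rate, a stereo frame (both eyes, including volume ray casting) must finish within the refresh interval or frames are dropped. The refresh rates below are typical HMD values, not figures from the thesis.

```python
# Per-frame time budget at typical HMD refresh rates. Everything (both eyes'
# ray casting, compositing, warping) must fit inside this budget to avoid
# dropped frames and the associated cybersickness risk.

def frame_budget_ms(refresh_hz):
    """Milliseconds available per frame at the given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
```

This also motivates the asynchronous approach: if the renderer misses the budget, a cheap image warp of the previous frame can still be presented on time.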
Bio-medical data comes in various shapes and with different representations. Domain experts use such data for analysis or diagnosis, during research or clinical applications. As the opportunities to obtain or to simulate bio-medical data become more complex and productive, the experts face the problem of data overflow. Providing a reduced, uncluttered representation of data that maintains the data's features of interest falls into the area of Data Abstraction. Via abstraction, undesired features are filtered out to give space (concerning the cognitive and visual load of the viewer) to more interesting features, which are therefore accentuated. To address this challenge, the dissertation at hand will investigate methods that deal with Data Abstraction in the fields of liver vasculature, molecular and cardiac visualization. Advanced visualization techniques will be applied for this purpose. This usually requires some pre-processing of the data, which will also be covered by this work. Data Abstraction itself can be implemented in various ways. The morphology of a surface may be maintained while abstracting its visual cues. Alternatively, the morphology may be changed to a more comprehensive and tangible representation. Further, spatial or temporal dimensions of a complex data set may be projected to a lower space in order to facilitate processing of the data. This thesis will tackle these challenges and thereby provide an overview of Data Abstraction in the bio-medical field, and the associated challenges, opportunities and solutions.
The development of a pan-European public E-Procurement system is an important target of the European Union to enhance the efficiency, transparency and competitiveness of public procurement procedures conducted within the European single market. A great obstacle for cross-border electronic procurement is the heterogeneity of national procurement systems in terms of technical, organizational and legal differences. To overcome this obstacle the European Commission funds several initiatives that contribute to the aim of achieving interoperability for pan-European public procurement. Pan European Public Procurement OnLine (PEPPOL) is one of these initiatives that aims at piloting an interoperable pan-European E-Procurement solution to support businesses and public purchasing entities from different member states to conduct their procurement processes electronically.

As interoperability and inter-connection of distributed heterogeneous information systems are the major requirements in the European procurement domain, and the VCD sub-domain in particular, service-oriented architecture (SOA) seems to provide a promising approach to realize such an architecture, as it promotes loose coupling and interoperability. This master thesis therefore discusses the SOA approach and how its concepts, methodologies and technologies can be used for the development of interoperable IT systems for electronic public procurement. This discussion is enhanced through a practical application of the discussed SOA methodologies by conceptualizing and prototyping a sub-system derived from the overall system domain of the Virtual Company Dossier. For that purpose, important aspects of interoperability and related standards and technologies will be examined and put into the context of public electronic procurement. Furthermore, the paradigm behind SOA will be discussed, including the derivation of a top-down development methodology for service-oriented systems.
Conventional security infrastructures in the Internet cannot be directly adopted to ambient systems, especially if based on short-range communication channels: Personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a Trust Center, certified data has to be verified locally. Security techniques have to be adjusted to the special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiability, and user or device authentication. It supports three certificate levels with a different balance between authenticity and anonymity. This PKI is currently under implementation as part of the iCity project.
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, modeling of event causality and correlation, and allows for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can be easily extended by domain-specific ontologies.
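The aspects the F model covers can be sketched as a small data structure: an event carries temporal and spatial information, participants, and mereological and causal links to other events. The field names below are illustrative placeholders, not F's actual ontology terms, and the example instances are invented.

```python
# Minimal sketch of the event aspects named above: time, space, participants,
# part-of (mereological) and caused-by (causal) relations. Names are
# illustrative, not the F ontology's vocabulary.

from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    time: str = ""                                     # temporal aspect
    place: str = ""                                    # spatial aspect
    participants: list = field(default_factory=list)   # objects and persons
    parts: list = field(default_factory=list)          # mereological relation
    caused_by: list = field(default_factory=list)      # causal relation

goal = Event("goal", time="2006-07-09T20:12", place="Berlin",
             participants=["player_9"])
match = Event("match", parts=[goal])   # the goal is part of the match event
print(match.parts[0].name)  # goal
```

In the actual model these relationships are expressed as ontology axioms rather than object fields, which is what enables reasoning over different interpretations of the same event.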
This dissertation introduces a methodology for the formal specification and verification of user interfaces under security aspects. The methodology allows formal methods to be used pervasively in the specification and verification of human-computer interaction. This work consists of three parts. In the first part, a formal methodology for the description of human-computer interaction is developed. In the second part, existing definitions of computer security are adapted for human-computer interaction and formalized, and a generic formal model of human-computer interaction is developed. In the third part, the methodology is applied to the specification and verification of a secure email client.
Global crop production increased substantially in recent decades due to agricultural intensification and expansion, and today agricultural areas occupy about 38% of Earth's terrestrial surface, the largest use of land on the planet. However, the current high-intensity agricultural practices fostered in the context of the Green Revolution have led to serious consequences for the global environment. Pesticides, in particular, are highly biologically active substances that can threaten the ecological integrity of aquatic and terrestrial ecosystems. Although global pesticide use increases steadily, our field-data-based knowledge regarding the exposure of non-target ecosystems such as surface waters is very restricted. Available studies have so far been limited to spatially restricted geographical areas or had rather specific objectives, rendering extrapolation to larger spatial scales questionable.
Consequently, this thesis evaluated, based on four scientific publications, the exposure, effects, and regulatory implications of particularly toxic insecticide concentrations detected in global agricultural surface waters. FOCUS exposure modelling was used to characterise the highly specific insecticide exposure patterns and to analyse the resulting implications for both monitoring and risk assessment (publication I). Based on more than 200,000 scientific database entries, 838 peer-reviewed studies finally included, and more than 2,500 sites in 73 countries, the risks of agricultural insecticides to global surface waters were analysed by means of a comprehensive meta-analysis (publication II). This meta-analysis evaluated whether insecticide field concentrations exceed legally accepted regulatory threshold levels (RTLs) derived from official EU and US pesticide registration documents and, amongst others, how risks depend on insecticide development over time and the stringency of environmental regulation. In addition, an in-depth analysis of the current EU pesticide regulations provided insights into the level of protection and field relevance of highly elaborated environmental regulatory risk assessment schemes (publications III and IV).
The results of this thesis show that insecticide surface water exposure is characterized by infrequent and highly transient concentration peaks of high ecotoxicological relevance. We thus argue in publication I that sampling at regular intervals is inadequate for the detection of insecticide surface water concentrations and that traditional risk assessment concepts based on all insecticide concentrations, including non-detects, lead to severely biased results and critical underestimations of risk. Based on these considerations, publication II demonstrates that out of 11,300 measured insecticide concentrations (MICs; i.e., those actually detected and quantified), 52.4% (5,915 cases) exceeded the RTL for either water (RTLSW) or sediments. This indicates a substantial risk to the biological integrity of global water resources, as additional analyses of pesticide effects in the field clearly show that regional aquatic biodiversity is reduced by approximately 30% at pesticide concentrations equalling the RTLs. In addition, publication II shows that there is a complete lack of scientific monitoring data for ~90% of global cropland and that both the actual insecticide contamination of surface waters and the resulting ecological risks are most likely even greater due to, for example, inadequate sampling methods employed in the studies and the common occurrence of pesticide mixtures. A linear model analysis identified that RTLSW exceedances depend on catchment size, sampling regime, sampling date, insecticide substance class, and the stringency of countries' environmental regulations, as well as on the interactions of these factors. Importantly, the risks are significantly higher for newer-generation insecticides (i.e., pyrethroids) and are high even in countries with stringent environmental regulations.
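The exceedance analysis at the core of this meta-analysis reduces to a simple computation: the fraction of measured concentrations above the regulatory threshold. A minimal sketch (all concentration values and the threshold below are hypothetical illustrations, not data from the thesis):

```python
def exceedance_rate(concentrations, rtl):
    """Fraction of measured insecticide concentrations (MICs) above the RTL."""
    exceeding = [c for c in concentrations if c > rtl]
    return len(exceeding) / len(concentrations)

# Illustrative concentrations in ug/L for a hypothetical substance/site set.
mics = [0.02, 0.15, 0.30, 0.08, 0.55, 0.01, 0.22]
rtl_sw = 0.10  # hypothetical regulatory threshold level for water

rate = exceedance_rate(mics, rtl_sw)  # 4 of 7 samples exceed the threshold
```

Note that, as the abstract argues, the rate is computed over detected and quantified concentrations only; folding non-detects into the denominator would dilute the result and underestimate risk.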
Regarding the latter, an analysis of the EU pesticide regulations revealed critical deficiencies and a lack of protectiveness and field relevance in the presumedly highly elaborate current FOCUS exposure assessment (publication IV) and overall risk assessment schemes (publication III). Based on these findings, essential risk assessment amendments are proposed.
In essence, this thesis analyses the agriculture–environment linkages for pesticides at the global scale and thereby contributes to a new research frontier in global ecotoxicology. The overall findings substantiate that agricultural insecticides are potential key drivers of the global freshwater biodiversity crisis and that current regulatory risk assessment approaches for highly toxic anthropogenic chemicals fail to protect the global environment. This thesis provides an integrated view of the environmental side effects of global high-intensity agriculture and warns that, besides worldwide improvements to current pesticide regulations and agricultural pesticide application practices, a fundamental reformation of conventional agricultural systems is urgently needed to meet the twin challenge of providing sufficient food for a growing human population without destroying the ecological integrity of the global ecosystems essential to human existence.
This thesis presents an analysis of API usage in a large corpus of Java software retrieved from the open source repositories hosted at SourceForge. Most larger software projects use software libraries, which offer the programmer a public "application programming interface" (API). In order to facilitate the transition between different APIs, there are emerging research projects in the field of automated API migration. However, there is a lack of basic statistical background information about in-the-wild usage of APIs, as such measurements have, until now, only been done on rather small corpora. We thus present an analysis method suitable for measurements on large corpora. First, we create a corpus of open source projects hosted on SourceForge, as well as a corpus of software libraries. Then, all projects in the corpus are compiled with an instrumented compiler. We use a compiler plugin for javac that gives detailed information about every method created by the compiler. This information is stored in a database and analyzed.
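Once the instrumented compiler has filled the database, the analysis itself is an aggregation over (project, API) records. A minimal sketch of this step - the record shape and all project/API names here are hypothetical illustrations, not the thesis' actual schema:

```python
from collections import Counter

# Hypothetical shape of the records the instrumented compiler stores:
# (project, fully qualified name of the referenced API method).
records = [
    ("projA", "java.util.List.add"),
    ("projA", "java.util.List.add"),
    ("projA", "org.apache.commons.io.FileUtils.readLines"),
    ("projB", "java.util.List.add"),
]

# Call-site counts per API method across the whole corpus.
usage = Counter(api for _, api in records)

# In how many distinct projects does each API method occur?
# Deduplicating (project, api) pairs first counts each project once.
projects_per_api = Counter(api for _, api in set(records))
```

Corpus-wide statistics of this kind (usage frequency, spread across projects) are exactly the background information that automated API migration research currently lacks.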
Robotics research today is primarily about enabling autonomous, mobile robots to seamlessly interact with arbitrary, previously unknown environments. One of the most basic problems to be solved in this context is the question of where the robot is and what the world around it, and in previously visited places, looks like - the so-called simultaneous localization and mapping (SLAM) problem. We present a GraphSLAM system, a graph-based approach to this problem. The system consists of a frontend and a backend: the frontend's task is to incrementally construct a graph from the sensor data that models the spatial relationships between measurements. These measurements may be contradictory, and therefore the graph is in general inconsistent. The backend is responsible for optimizing this graph, i.e., finding a configuration of the nodes that is least contradictory. The nodes represent poses, which do not form a regular vector space due to the rotations they contain. We respect this fact by treating them as what they really are mathematically: manifolds. This leads to a very efficient and elegant optimization algorithm.
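The manifold treatment of poses can be illustrated for the 2D case: instead of subtracting pose vectors componentwise (which breaks at the ±π angle wrap), increments are applied and residuals computed with "boxplus"/"boxminus" operators that normalize the rotation. A minimal sketch under that standard formulation (function names are illustrative, not from the thesis):

```python
import math

def norm_angle(a):
    """Normalize an angle to (-pi, pi] so rotation residuals stay small."""
    return math.atan2(math.sin(a), math.cos(a))

def oplus(pose, delta):
    """Boxplus: apply a local increment (dx, dy, dtheta), expressed in the
    pose's own frame, to a global pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            norm_angle(th + dth))

def ominus(p2, p1):
    """Boxminus: relative pose of p2 as seen from p1 - the quantity a
    GraphSLAM backend compares against a relative-pose measurement."""
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    dx, dy = x2 - x1, y2 - y1
    return (math.cos(t1) * dx + math.sin(t1) * dy,
            -math.sin(t1) * dx + math.cos(t1) * dy,
            norm_angle(t2 - t1))
```

Naive vector subtraction of two headings near ±π yields a residual of almost 2π; the normalization keeps residuals small, which is what makes Gauss-Newton-style optimization on the pose graph behave well.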
The term "Augmented Reality (AR)" denotes the superposition of additional virtual objects and supplementary information over real images. The joint project Enhanced Reality (ER) aims at a generic AR-system. The ER-project is a cooperation of six different research groups of the Department of Computer Science at the University of Koblenz-Landau. According to Ronald Azuma, an AR-system combines real and virtual environments, registers the real and virtual objects in 3-D, and provides interactivity in real time [Azu97]. Enhanced Reality extends Augmented Reality by requiring the virtual objects to be seamlessly embedded into the real world as photo-realistic objects according to the exact lighting conditions. Furthermore, additional information supplying value-added services may be displayed, and interaction of the user may even be immersive. The short-term goal of the ER-project is the exploration of ER-fundamentals using some specific research scenarios; the long-term goal is the development of a component-based ER-framework for the creation of ER-applications for arbitrary application areas. ER-applications are developed as single-user applications for users who are moving in a real environment and are wearing some kind of visual output device, such as see-through glasses, and some mobile end device. Through these devices the user is able to see reality as it is, but he can also see the virtual objects and the additional information about some value-added service. Furthermore, he might have additional devices with which he can interact with the available virtual objects. The development of a generic framework for ER-applications requires the definition of generic components which are customizable and composable to build concrete applications, and it requires a homogeneous data model which supports all components equally well. The workgroup "Software Technology" is responsible for this subproject.
This report gives some preliminary results concerning the derivation of a component-based view of ER. There are several augmented reality frameworks, such as ARVIKA, AMIRE, DWARF, MORGAN, Studierstube and others, which offer some support for the development of AR-applications. All of them ease the use of existing subsystems like AR-Toolkit, OpenGL and others and speed up the development of realistic systems by making efficient use of those subsystems. Consequently, they rely heavily on them.
Web application testing is an active research area. Garousi et al. conducted a systematic mapping study and classified 79 papers published between 2000 and 2011. However, there seems to be a lack of information exchange between the scientific community and tool developers.
This thesis systematically analyzes the field of functional, system-level web application testing tools. 194 candidate tools were collected in the tool search and screened, with 23 tools being selected as the foundation of this thesis. These 23 tools were systematically used to generate a feature model of the domain. The methodology supporting this is an additional contribution of this thesis: it processes the end-user documentation of tools belonging to an examined domain and creates a feature model. The feature model gives an overview of the existing features, their alternatives and their distribution. It can be used to identify trends and problems, highlight extraordinary features, support tool purchase decisions, or guide scientists in focusing their research.
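The feature-distribution step described above can be sketched as a simple aggregation over a tool-to-feature mapping. All tool and feature names below are illustrative placeholders, not the thesis' actual data:

```python
# Hypothetical tool-to-feature mapping extracted from end-user documentation.
tools = {
    "ToolA": {"record_replay", "xpath_locators", "screenshots"},
    "ToolB": {"record_replay", "css_locators", "screenshots"},
    "ToolC": {"xpath_locators", "css_locators", "screenshots"},
}

def feature_distribution(tools):
    """Count in how many of the examined tools each feature occurs."""
    dist = {}
    for features in tools.values():
        for f in features:
            dist[f] = dist.get(f, 0) + 1
    return dist

dist = feature_distribution(tools)

# Features present in every tool are candidates for mandatory features of
# the feature model; rarely supported ones may be "extraordinary" features.
core = sorted(f for f, n in dist.items() if n == len(tools))
```

The resulting distribution is exactly the kind of overview the abstract mentions: which features are common, which have competing alternatives, and which are outliers.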
Water is used as if it were available in infinite supply. Droughts, increased rainfall and flooding already lead to water shortages and thus deprive entire population groups of the basis of their livelihoods. There is a growing fear that conflicts over water will increase, especially in arid climate zones, because life without water - whether for humans, animals or plants - is not possible.
More than 60% of the African population depends on land and water resources for its livelihood through pastoralism, fishing and farming. The water levels of rivers and lakes are decreasing. Hence, the rural population, which is dependent on land and water, moves towards water-rich and humid areas. This internal migration increases the pressure on the available water resources. Driven by the desire to strengthen economic development, African governments align their political agendas with the promotion of large-scale international and national economic projects.
This doctoral thesis examines the complex interrelationships between water shortages, governance, vulnerability, adaptive capacity, and violent and non-violent conflicts at Lake Naivasha in Kenya and Lake Wamala in Uganda. In order to do justice to this complexity, the thesis combines various theoretical and empirical aspects, applying a variety of methods to different geographical regions across disciplinary, cultural and political boundaries.
The investigation reveals that Lake Naivasha is more affected by violent conflicts than Lake Wamala. Reasons for this include population growth, historically grown ethnic conflicts, corruption and the preferential treatment of national and international economic actors. The most common conflict response tools are raiding and the blockage of water access. However, deadly encounters, destruction of property and cattle slaughtering are increasingly used to gain access to water and land.
The insufficient implementation of the political system and the governments' prioritization of economic development result, on the one hand, in the commercialization of water resources and increase, on the other hand, non-violent conflict between national and sub-national political actors. While corruption, economic favours and patronage defuse this conflict, resource access becomes more difficult for the local population. From this, a final hypothesis is developed which states that the localization of the political conflict aggravates the water situation for the local population and thereby favours violent conflicts over water access and water use in water-rich areas.
English prepositions make up only a small proportion of the language but play a substantial role. Although prepositions are, of course, frequently used in English textbooks for secondary school, students fail to acquire them incidentally and often show low achievement in using prepositions correctly. The strategy commonly employed by language instructors is teaching the multiple senses of prepositions by rote, which fails to help students draw links between the different meanings in usage. New findings in Cognitive Linguistics (CL) suggest a different approach to teaching prepositions and thus might have a strong impact on the methodologies of foreign language teaching and learning with respect to meaningful learning. Based on the Theory of Domains (Langacker, 1987), the notion of image schemas (Johnson, 1987) as well as the Conceptual Metaphor Theory (Lakoff & Johnson, 1980), the present study developed a CL-inspired approach to teaching prepositions, which was compared to the traditional teaching method in an empirical study conducted in a German school setting. For the participants from the higher track and the medium track, who are at different proficiency levels, the results indicate that the CL-inspired teaching approach improved students' performance significantly more than the traditional approach in all cases for the higher track and in some cases for the medium track. These findings thus open up a new perspective on the CL-inspired meaningful learning approach to language teaching. In addition, the CL-inspired approach demonstrates its unification with the integrated model of text and picture comprehension (the ITPC model) in integrating new knowledge with related prior knowledge in the cognitive structure.
According to the learning procedure of the ITPC model, the image schema, as a visual image, is first perceived through the sensory register, then processed in working memory by conceptual metaphor, and finally integrated with cognitive schemata in long-term memory. Moreover, deep-seated factors, such as transfer from the mother tongue, the difficulty of teaching materials, and the influence of prior knowledge, have strong effects on the acquisition of English prepositions.
The mitral valve is one of four human heart valves. It is located in the left heart and acts as a unidirectional passageway for blood between the left atrium and the left ventricle. A correctly functioning mitral valve prevents a backflow of blood into the pulmonary circulation (lungs) and thus constitutes a vital part of the cardiac cycle. Pathologies of the mitral valve can manifest in a variety of symptoms with severity ranging from chest pain and fatigue to pulmonary edema (fluid accumulation in the tissue and air space of lungs), which may ultimately cause respiratory failure.
Malfunctioning mitral valves can be restored through complex surgical interventions, which greatly benefit from intensive planning and pre-operative analysis. Visualization techniques provide a possibility to enhance such preparation processes and can also facilitate post-operative evaluation. The work at hand extends current research in this field, building upon patient-specific mitral valve segmentations developed at the German Cancer Research Center, which result in triangulated 3D models of the valve surface. The core of this work will be the construction of a 2D-view of these models through global parameterization, a method that can be used to establish a bijective mapping between a planar parameter domain and a surface embedded in higher dimensions.
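One classical way to establish such a bijective planar mapping is Tutte's barycentric embedding: the surface boundary is pinned to a convex polygon and every interior vertex is placed at a weighted average of its neighbors. This is a generic sketch of that technique, not the specific parameterization used in this work; the tiny mesh below is purely illustrative:

```python
import math

# Tiny illustrative mesh: four boundary vertices around one interior vertex.
# (A real valve model has thousands of vertices; this only shows the idea.)
neighbors = {0: [1, 2, 3, 4]}     # interior vertex 0 and its one-ring
boundary = [1, 2, 3, 4]           # boundary loop, in order
interior = [0]

# Step 1: pin the boundary loop to the unit circle (a convex polygon),
# as required by Tutte's theorem for a bijective embedding.
uv = {}
for k, v in enumerate(boundary):
    angle = 2.0 * math.pi * k / len(boundary)
    uv[v] = (math.cos(angle), math.sin(angle))

# Step 2: place every interior vertex at the barycenter of its neighbors
# (uniform weights) and iterate until the positions settle; on large meshes
# one would solve the equivalent sparse linear system instead.
for v in interior:
    uv[v] = (0.0, 0.0)  # arbitrary start value
for _ in range(100):
    for v in interior:
        nb = neighbors[v]
        uv[v] = (sum(uv[n][0] for n in nb) / len(nb),
                 sum(uv[n][1] for n in nb) / len(nb))
```

The resulting `uv` coordinates are the "map" view: each 3D surface vertex has exactly one planar position, so surface features can be transferred between the two views without ambiguity.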
A flat representation of the mitral valve provides physicians with a view of the whole surface at once, similar to a map. This allows assessment of the valve's area and shape without the need for different viewing angles. Parts of the valve that are occluded by geometry in 3D become visible in 2D.
An additional contribution of this work will be the exploration of different visualizations of the 3D and 2D mitral valve representations. Features of the valve can be highlighted by associating them with specified colors, which can for instance directly convey pathology indicators.
Quality and effectiveness of the proposed methods were evaluated through a survey conducted at the Heidelberg University Hospital.
In this work, the Navier-Stokes equations for non-stationary incompressible flow of a Newtonian fluid in a time-dependent domain are studied. The geometry of the flow domain changes in time according to fluid properties such as the stress tensor. The motivation for our study comes from medicine: the simulation of blood flow in arteries and veins.
After choosing an appropriate mathematical model of the flow in a domain with viscoelastic compliant walls, we deal with its theoretical analysis. We prove the existence of a weak solution using the weak compressible approximation in a moving domain with a given deformation function. In our approach, the fluid-structure interface condition is treated using a permeable-wall approach, decoupling the fluid and the deformable structure.
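For orientation, the governing system has the standard form of the incompressible Navier-Stokes equations posed on a moving domain (a generic strong form with the Newtonian stress law; the thesis' precise weak formulation and interface conditions may differ):

```latex
% Incompressible Navier--Stokes equations on a time-dependent domain
% (generic form; boundary and interface conditions as described in the text).
\begin{aligned}
\rho\,\bigl(\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}\bigr)
  - \operatorname{div}\boldsymbol{\sigma}(\mathbf{u},p) &= \mathbf{f}
  && \text{in } \Omega(t),\\
\operatorname{div}\mathbf{u} &= 0 && \text{in } \Omega(t),\\
\boldsymbol{\sigma}(\mathbf{u},p) &= -p\,\mathbf{I} + 2\mu\,\mathbf{D}(\mathbf{u}),
\qquad
\mathbf{D}(\mathbf{u}) = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\bigr),
\end{aligned}
```

where $\mathbf{u}$ is the fluid velocity, $p$ the pressure, $\rho$ the density, $\mu$ the dynamic viscosity, and $\Omega(t)$ the domain deformed by the given deformation function.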
Finally we present some numerical experiments illustrating the convergence of the iteration with respect to the domain deformation function as well as the behavior of the moving wall for decreased permeability.
Business rules have become an important tool for ensuring compliance in business processes. However, a collection of business rules can contain conflicting elements, which can lead to a violation of the compliance to be achieved. These conflicting elements are therefore a kind of inconsistency, or quasi-inconsistency, in the business rule base. The goal of this thesis is to investigate how such quasi-inconsistencies in business rules can be detected and analyzed. To this end, we develop a comprehensive library which allows results from the scientific field of inconsistency measurement to be applied to business rule formalisms that are actually used in practice.
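A standard notion from inconsistency measurement is to count the minimal inconsistent subsets (MIS) of a rule base: the smallest groups of rules that cannot all hold at once. This brute-force sketch illustrates the idea on a toy propositional rule base (rule names, variables, and the enumeration approach are illustrative, not the thesis' library):

```python
from itertools import combinations, product

# Toy rule base: each rule is a predicate over a truth assignment.
rules = {
    "r1": lambda a: a["approved"],                        # order is approved
    "r2": lambda a: not a["approved"],                    # order is not approved
    "r3": lambda a: (not a["approved"]) or a["shipped"],  # approved -> shipped
}
VARS = ["approved", "shipped"]

def satisfiable(subset):
    """Brute-force check: does some assignment satisfy all rules in subset?"""
    for values in product([True, False], repeat=len(VARS)):
        assignment = dict(zip(VARS, values))
        if all(rules[name](assignment) for name in subset):
            return True
    return False

def minimal_inconsistent_subsets(names):
    """All inconsistent subsets none of whose proper subsets are inconsistent."""
    mis = []
    for k in range(1, len(names) + 1):
        for sub in combinations(names, k):
            if not satisfiable(sub) and not any(set(m) < set(sub) for m in mis):
                mis.append(sub)
    return mis

mis = minimal_inconsistent_subsets(list(rules))
# The MI inconsistency measure of the rule base is simply len(mis).
```

Here the conflict between r1 and r2 is detected as a single minimal inconsistent subset, while r3 is not implicated: exactly the kind of diagnosis that helps pinpoint which rules jeopardize compliance.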
Over the last three decades, researchers in Cognitive Metaphor Theory have shown conclusively that metaphor is motivated rather than arbitrary and is often used to systematically map out conceptual territory. This cognitive semantic proposal holds potential for alternative L2 teaching strategies. As an abstract domain, business discourse is naturally rich in metaphors and is additionally filled with consciously used metaphorical language to strategically manipulate clients and business partners. Business English courses especially stand to profit from metaphor-oriented language teaching, as (future) managers aim to quickly improve their language performance to be prepared for international business communication. In using metaphors, speakers as well as hearers conceptualize and thus experience one thing in terms of another. Having been made aware of the conceptual linkage, students are immediately equipped with a whole set of vocabulary they may already have learned for a concrete domain and are then able to elaborate in the more abstract area of business discourse. Enhanced metaphor awareness may thus prove to be a valuable vehicle for vocabulary acquisition as well as for vocabulary retention. This thesis is subdivided into ten chapters. With each successive chapter, the focus sharpens on the main hypothesis that metaphor awareness raising and explicit teaching in the business English classroom assist students to 'dip into their savings' and transfer already acquired vocabulary to abstract business discourse, and thus to become more proficient business communicators. After an introduction to the main objectives, chapter two critically looks at the different strands of Cognitive Linguistic contributions to metaphor theory made within the last three decades and discusses the structure, function and processing of figurative language in order to single out aspects relevant to language classroom applications.
Chapter three narrows the perspective to socio-economic discourse as the target domain in focus and surveys the conceptual metaphors that have been identified for this target domain, namely the source domains most productive for the target and therefore most valuable for the language classroom. In chapter four, Cognitive Linguistic findings are brought into contact with language didactics; i.e., the Cognitive Linguistic basis is discussed in the context of language teaching and learning theories, and a first classification of metaphor teaching within the theoretical framework of language didactics is proposed. Ten cornerstones summarize the theoretical output of the previous chapters, and the respective didactic consequences are considered. Theories of cognitive psychology pertaining to noticing, processing, and storing metaphors are systematically revisited and expanded to formulate further didactic implications for metaphor teaching. The consequences drawn from both linguistic and didactic theory are translated into a list of ten short guidelines identifying essentials for the explicit integration of metaphors into the language classroom. In chapter five, the experimental studies that have already been conducted in the field of Cognitive Linguistic-inspired figurative language teaching are systematically summarized, and their possible contributions to a didactic framework for metaphor teaching are investigated. Chapters six to nine then present a piece of original research. Starting out from five research questions tackling receptive and productive vocabulary acquisition and retention as well as the influence of and on the learners' level of language proficiency, a three-fold study was designed and conducted in a regular business English classroom, and the results are discussed in detail. The last chapter deals again with specific implications for teaching.
Earlier statements about and claims for the language classroom are revisited and refined on the basis of the theoretical linguistic, didactic and empirical findings, and an agenda for further empirical investigations is sketched out.