The availability of digital cameras and the possibility of taking photos at no cost lead to an increasing number of digital photos online and on private computers. The sheer amount of data makes approaches necessary that support users in managing their photos. As the automatic understanding of photo content is still an unsolved task, metadata is needed to support administrative tasks such as search, or photo work such as the generation of photo books. Meta-information textually describes the depicted scene or consists of information on how good or interesting a photo is.
In this thesis, an approach for creating meta-information without additional effort for the user is investigated. Eye tracking data is used to measure human visual attention. This attention is analyzed with the objective of creating information in the form of metadata. The gaze paths of users working with photos are recorded, for example, while they are searching for photos or while they are simply viewing photo collections.
Eye tracking hardware has developed fast in recent years. Because of cheaper sensor hardware such as cameras and growing competition on the eye tracker market, prices are falling and usability is increasing. It can be assumed that eye tracking technology will soon be usable in everyday devices such as laptops or mobile phones. The exploitation of data recorded in the background while the user performs daily tasks with photos has great potential to generate information without additional effort for the users.
The first part of this work deals with the labeling of image regions by means of gaze data for describing the depicted scenes in detail. Labeling takes place by assigning object names to specific photo regions. In total, three experiments were conducted to investigate the quality of these assignments in different contexts. In the first experiment, users decided by pressing a button whether a given object could be seen in a photo. In the second study, participants searched for specific photos in an image search application. In the third experiment, gaze data was collected from users playing a game in which they classified photos with respect to given categories. The results of the experiments showed that gaze-based region labeling outperforms baseline approaches in various contexts. In the second part, the most important photos in a collection are identified by means of visual attention for the creation of individual photo selections. Users freely viewed photos of a collection without any specific instruction on what to fixate, while their gaze paths were recorded. By comparing gaze-based and baseline photo selections to manually created selections, the worth of eye tracking data in the identification of important photos is shown. In the analysis of the data, the characteristics of gaze data, for example its inaccuracy and ambiguity, have to be considered. The aggregation of gaze data collected from several users is one suggested approach for dealing with this kind of data.
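The aggregation step lends itself to a simple illustration. The following is a minimal sketch, not the thesis' actual implementation, of how fixations from several users could be pooled over a coarse grid of photo regions so that the most-fixated region receives the object label; the grid size, the fixation format and the example label are assumptions.

```python
# Minimal sketch of gaze-based region labeling: pool fixations from several users
# over a coarse grid of photo regions and assign the object label to the region
# that attracted the most attention. Grid size and fixation format are assumptions.
from collections import Counter

def label_region(fixations, image_size, label, grid=(4, 4)):
    """fixations: iterable of (x, y) gaze points from all users, in pixels."""
    w, h = image_size
    counts = Counter()
    for x, y in fixations:
        col = min(int(x / w * grid[0]), grid[0] - 1)
        row = min(int(y / h * grid[1]), grid[1] - 1)
        counts[(row, col)] += 1          # aggregate attention per region
    if not counts:
        return {}
    best_region, _ = counts.most_common(1)[0]
    return {best_region: label}

# Usage: fixations recorded while users searched for photos showing a dog.
print(label_region([(120, 80), (130, 85), (400, 300)], (640, 480), "dog"))
```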
The results of the performed experiments show the value of gaze data as a source of information. It allows us to benefit from human abilities where algorithms still struggle to perform satisfactorily.
Model-Driven Engineering (MDE) aims to raise the level of abstraction in software system specifications and increase automation in software development. Modelware technological spaces contain the languages and tools for MDE that software developers take into consideration to model systems and domains. Ontoware technological spaces contain ontology languages and technologies to design, query, and reason on knowledge. With the advent of the Semantic Web, ontologies are now being used within the field of software development as well. In this thesis, bridging technologies are developed to combine two technological spaces in general. Transformation bridges translate models between spaces, mapping bridges relate different models between two spaces, and integration bridges merge spaces into new all-embracing technological spaces. API bridges establish interoperability between the tools used in the spaces. In particular, this thesis focuses on the combination of modelware and ontoware technological spaces. Subsequent to a sound comparison of languages and tools in both spaces, the integration bridge is used to build a common technological space, which allows for the hybrid use of languages and the interoperable use of tools. The new space allows for language and domain engineering. Ontology-based software languages may be designed in the new space, where syntax and formal semantics are defined with the support of ontology languages, and the correctness of language models is ensured by the use of ontology reasoning technologies. These languages represent a core means for exploiting expressive ontology reasoning in the software modeling domain, while remaining flexible enough to accommodate varying needs of software modelers. Application domains are conceptually described by languages that allow for defining domain instances and types within one domain model. Integrated ontology languages may provide formal semantics for domain-specific languages, and ontology technologies allow for reasoning over types and instances in domain models. A scenario in which configurations for network device families are modeled illustrates the approaches discussed in this thesis. Furthermore, the implementation of all bridging technologies for the combination of technological spaces and of all tools for ontology-based language engineering and use is illustrated.
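As a minimal illustration of what such a transformation bridge does in practice, the sketch below maps a toy modelware class with a few attributes onto OWL axioms. The class name, the attribute list and the use of rdflib are illustrative assumptions on our part, not the bridging technology implemented in the thesis.

```python
# Hedged sketch of a "transformation bridge": translate a toy modelware class
# (name plus attributes) into OWL class and property axioms using rdflib.
# The DeviceConfiguration example is invented for illustration only.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/model#")

def class_to_owl(graph, class_name, attributes):
    """Map a modelware class and its attributes onto OWL axioms."""
    cls = EX[class_name]
    graph.add((cls, RDF.type, OWL.Class))
    for attr in attributes:
        prop = EX[attr]
        graph.add((prop, RDF.type, OWL.DatatypeProperty))
        graph.add((prop, RDFS.domain, cls))
    return cls

g = Graph()
g.bind("ex", EX)
class_to_owl(g, "DeviceConfiguration", ["numberOfSlots", "firmwareVersion"])
print(g.serialize(format="turtle"))
```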
Eine zutreffende Diagnose über den aktuellen Kenntnisstand der jeweiligen Schülerinnen und Schüler ist notwendig, um adäquat in Gruppenarbeitsprozesse intervenieren zu können. Von diesem Zusammenhang wird in der Literatur weit-gehend ausgegangen, jedoch gibt es bisher kaum empirische Studien, die diesen belegen. Die vorliegende Arbeit widmet sich schwerpunktmäßig dem Interventi-onsverhalten von Studierenden. Dabei wird die prozessdiagnostische Fähigkeit „Deuten“ zugrundegelegt, um unterschiedliches Interventionsverhalten auf diese Fähigkeit zurückführen zu können. Sowohl beim Aufbau diagnostischer Fähig-keiten als auch bei der (Weiter-)Entwicklung des eigenen Lehrerhandelns gilt Reflexion als hilfreich. Entsprechend wird auch das Zusammenspiel von Pro-zessdiagnose und Reflexionsverhalten sowie von Interventionsverhalten und Reflexionsverhalten untersucht.
Für die Erhebung der prozessdiagnostischen Fähigkeit „Deuten“ wurden drei Videovignetten erstellt und in das Videodiagnosetool ViviAn eingebunden. Die Videovignetten zeigen jeweils vier Schülerinnen, die sich mit dem Thema „Ter-me“ beschäftigen. Im Rahmen eines Lehr-Lern-Labores wurden über vier Se-mester hinweg alle teilnehmenden Studierenden dazu angehalten, die Videovig-netten zu bearbeiten. Ebenso konzipierten sie jeweils zu dritt eine Laborstation im Mathematik-Labor „Mathe ist mehr“ und erprobten diese mit einer Schul-klasse. Dabei wurden die Interventionen der Studierenden in die Gruppenarbeits-prozesse der Schülerinnen und Schüler videographiert. Anschließend reflektierten die Studierenden in Kleingruppen über die Erprobungen und über die getätigten Interventionen. Die Reflexionsgespräche wurden ebenfalls videographiert.
Es zeigt sich, dass die Studierenden, die sich zum Zeitpunkt der Erhebung im Masterstudium befanden, noch Entwicklungsspielraum in Bezug auf ihre pro-zessdiagnostische Fähigkeit „Deuten“ besitzen. Im Hinblick auf die Interventio-nen waren responsive Interventionen häufiger angemessen als invasive Interven-tionen, wobei responsive Internvetionen auch vergleichsweise häufiger dazu führten, dass mehr Schülerinnen und Schüler nach der Intervention aktiv waren. Studierende mit höherer prozessdiagnostischer Fähigkeit „Deuten“ intervenierten jedoch häufiger invasiv und tätigten dabei trotzdem angemessenere und aktivie-rendere Interventionen als ihre Kommilitoninnen und Kommilitonen. Entspre-chend scheint sich die prozessdiagnostische Fähigkeit „Deuten“ positiv auf die Interventionen der Studierenden auszuwirken und sollte daher bereits im Rah-men des (Lehramts-)Studiums verstärkt geschult werden.
Retrospective analysis of the spread of web tracking and its dynamic detection through sandboxing (2018)
Current quantitative analyses of web tracking do not provide a comprehensive overview of its origin, spread, and development. By evaluating archived web pages, this thesis enables a retrospective reconstruction of the history of web tracking between the years 2000 and 2015. For this purpose, a suitable tool was designed, implemented, evaluated, and used to analyse 10,000 websites. While in 2005 an average of 1.17 third-party resources were embedded per page, this figure rose to 6.61 over the following ten years. Network diagrams visualize the trend towards a monopolized network structure in which a single company can already monitor 80% of Internet usage.
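As a rough illustration of the kind of measurement behind these averages, the following sketch counts embedded third-party resources on a page by comparing resource hostnames with the hostname of the embedding page. The resource list, the example URLs and the naive registrable-domain heuristic are assumptions for illustration, not the tool built in the thesis.

```python
# Rough sketch of the third-party metric: count embedded resources whose
# registrable domain differs from that of the embedding page. The "last two
# labels" heuristic is naive; a public-suffix list would be more exact.
from urllib.parse import urlparse

def base_domain(url):
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def count_third_party(page_url, resource_urls):
    page = base_domain(page_url)
    return sum(1 for r in resource_urls if base_domain(r) != page)

resources = [
    "https://www.example.org/style.css",
    "https://cdn.tracker-network.com/pixel.gif",
    "https://stats.tracker-network.com/collect.js",
]
print(count_third_party("https://www.example.org/index.html", resources))  # -> 2
```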
Despite numerous attempts to counter this development with technical measures, only a few self-protection and system-protection measures prove effective. They are frequently accompanied by a loss of website functionality or by restrictions on the usability of the browser. The presented study shows that legal regulations likewise do not provide sufficient protection. Deficiencies in the fulfilment of data protection obligations are identified on the websites of educational institutions. They manifest themselves in missing, incorrect, or incomplete privacy statements, whose provision is among the information duties of a service provider.
Considering classical trackers alone is not sufficient, as a further study demonstrates. By openly providing functional website components, a tracking company can increase its coverage from 38% to 61%. This situation is substantiated by measurements of websites from the health care sector and evaluated from a technical as well as a legal perspective.
Existing systemic tools for measuring web tracking use the browser's interfaces for their measurements. This thesis presents DisTrack, a framework for web tracking analysis that follows a sandbox-based measurement methodology. This approach is successfully used in dynamic malware analysis and specializes in detecting side effects on the surrounding system. The resulting behavioural analysis, which operates independently of the browser's interfaces, enables a holistic examination of the browser. In this way, systemic weaknesses in the browser that can be exploited for storage-based web tracking techniques can be revealed.
German politicians have identified a need for greater citizen involvement in decision-making than in the past, as confirmed by a recent German parliamentarians study ("DEUPAS"). As in other forms of social interaction, the Internet provides significant potential to serve as the digital interface between citizens and decision-makers: in the recent past, dedicated electronic participation ("e-participation") platforms (e.g. dedicated websites) have been provided by politicians and governments in an attempt to gather citizens' feedback and comments on a particular issue or subject. Some of these have been successful, but a large proportion of them are grossly under-used; often only small numbers of citizens use them. Over the same time period, society's enthusiasm for social networks has increased and their use is now commonplace. Many citizens use social networks such as Facebook and Twitter for all kinds of purposes, and in some cases to discuss political issues.
Social networks are therefore obviously attractive to politicians: from local government to federal agencies, politicians have integrated social media into their daily work. However, there is a significant challenge regarding the usefulness of social networks. The problem is the continuous increase in digital information: social networks contain vast amounts of information, and it is impossible for a human to manually filter the relevant information from the irrelevant (so-called "information overload"). Even using the search tools provided by social networks, it is still a huge task for a human to determine meanings and themes from the multitude of search results. New technologies and concepts have been proposed to provide summaries of masses of information through lexical analysis of social media messages, and therefore they promise an easy and quick overview of the information.
This thesis examines the relevance of these analyses' results for use in everyday political life, with the emphasis on the social networks Facebook and Twitter as data sources. Here we make use of the WeGov Toolbox and its analysis components that were developed during the EU project WeGov. The assessment has been performed in consultation with actual policy-makers from different levels of German government: policy-makers from the German Federal Parliament, the State Parliament North Rhine-Westphalia, the State Chancellery of the Saarland and the cities of Cologne and Kempten all took part in the study. Our method was to execute the analyses on data collected from Facebook and Twitter, and present the results to the policy-makers, who would then evaluate them using a mixture of qualitative methods.
The responses of the participants have provided us with some useful conclusions:
1) None of the participants believe that e-participation is possible in this way. But participants confirm that "citizen-friendliness" can be supported by this approach.
2) The most likely users of the summarisation tools are those who have experience with social networks but are not "power users". The reason is that "power users" already know the relevant information provided by the analysis tools, whereas without any experience of social networks it is hard to interpret the analysis results correctly.
3) The evaluation considered geographical aspects and related them to, for example, a politician's constituency as a local area of social networks. Comparing rural to urban areas shows that the amount of relevant political information in rural areas is low: while the proportion of publicly available information in urban areas is relatively high, the proportion in rural areas is much lower.
The findings that result from the engagement with policy-makers will be systematically surveyed and validated within this thesis.
Previous research concerned with early science education revealed that guided play can support young children’s knowledge acquisition. However, the questions whether guided play maintains other important prerequisites such as children’s science self-concept and how guided play should be implemented remain unanswered. The present dissertation encompasses three research articles that investigated 5- to 6-year-old children’s science knowledge, science theories, and science self-concept in the stability domain and their relation to interindividual prerequisites. Moreover, the articles examined whether children’s science knowledge, science theories, and science self-concept can be supported by different play forms, i.e., guided play with material and verbal scaffolds, guided play with material scaffolds, and free play. The general introduction of the present dissertation first highlights children’s cognitive development, their science self-concept, and interindividual prerequisites, i.e., fluid and crystallised intelligence, mental rotation ability, and interest in block play. These prerequisites are applied to possible ways of supporting children during play. The first article focused on the measurement of 5-to-6-year-old children’s stability knowledge and its relation to interindividual prerequisites. Results suggested that children’s stability knowledge could be measured reliably and validly, and was related to their fluid and crystallised intelligence. The second article was concerned with the development of children’s intuitive stability theories over three points of measurement and the effects of guided and free play, children’s prior theories as well as their intelligence on these intuitive theories. Results implied that guided play with material and verbal scaffolds supported children’s stability theories more than the other two play forms, i.e., guided play with material scaffolds and free play. Moreover, consistency of children’s prior theories, their fluid and crystallised intelligence were related to children’s theory adaptation after the intervention. The third article focused on the effect of the playful interventions on children’s stability knowledge and science self-concept over three points of measurement. Furthermore, the reciprocal effects between knowledge acquisition and science self-concept were investigated. Results implied that guided play supported knowledge acquisition and maintained children’s science self-concept. Free play did not support children’s stability knowledge and decreased children’s science self-concept. No evidence for reciprocal effects between children’s stability knowledge and their science self-concept was found. Last, in a general discussion, the findings of the three articles are combined and reflected amidst children’s cognitive development. Summarising, the present dissertation shows that children’s science knowledge, science theories, and science self-concept can be supported through guided play that considers children’s cognitive development.
Fresh water resources like rivers and reservoirs are exposed to a drastically changing world. In order to safeguard these lentic ecosystems, they need stronger protection in times of global change and population growth. In recent years, the exploitation pressure on drinking water reservoirs has increased steadily worldwide. Besides securing the demands of safe drinking water supply, international laws, especially in Europe (EU Water Framework Directive), stipulate that the impact of dams on downstream rivers be minimized. In this study we investigate the potential of a smart withdrawal strategy at Grosse Dhuenn Reservoir to improve the temperature and discharge regime downstream without jeopardizing drinking water production. Our aim is to improve the existing withdrawal strategy for operating the reservoir in a sustainable way in terms of water quality and quantity. First, we set up and calibrated a 1D numerical model for Grosse Dhuenn Reservoir with the open-source community model "General Lake Model" (GLM) together with its water quality module, the "Aquatic Ecodynamics" library (AED2). The reservoir model reproduced water temperatures and hypolimnetic dissolved oxygen concentrations accurately over a 5-year period. Second, we extended the model source code with a selective withdrawal functionality (adaptive offtake) and added operational rules for a realistic reservoir management. The model is now able to autonomously determine the best withdrawal height according to the temperature and flow requirements of the downstream river and the raw water quality objectives. Criteria for the determination of the withdrawal regime are selective withdrawal, development of stratification and oxygen content in the deep hypolimnion. This functionality is not available in current reservoir models, where withdrawal heights are generally provided a priori to the model and kept fixed during the simulation. Third, we ran scenario simulations identifying an improved reservoir withdrawal strategy to balance the demands of the downstream river and of raw water supply. To this end, we aimed at finding an optimal parallel withdrawal ratio between cold hypolimnetic water and warm epilimnetic or metalimnetic water in order to provide a pre-defined temperature in the downstream river. The reservoir model and the proposed withdrawal strategy provide a simple and efficient tool to optimize reservoir management in a multi-objective view for mastering future reservoir management challenges.
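The core of the parallel-withdrawal idea can be written down as a simple mixing balance. The sketch below, under the simplifying assumption of complete mixing and no heat exchange, solves for the fraction of warm epilimnetic water needed to meet a target downstream temperature; the variable names and numbers are illustrative, not values from the GLM/AED2 model.

```python
# Hedged sketch of the parallel-withdrawal idea: a two-layer mixing balance
#   T_target = f * T_epi + (1 - f) * T_hypo
# solved for the epilimnetic fraction f, clipped to the feasible range [0, 1].
# Temperatures and the no-heat-loss assumption are illustrative only.
def epilimnetic_fraction(t_target, t_epi, t_hypo):
    if t_epi == t_hypo:
        return 0.0                       # any split yields the same temperature
    f = (t_target - t_hypo) / (t_epi - t_hypo)
    return min(max(f, 0.0), 1.0)         # the share cannot leave [0, 1]

# Example: 8 degC hypolimnion, 20 degC epilimnion, 14 degC required downstream.
print(epilimnetic_fraction(14.0, 20.0, 8.0))   # -> 0.5
```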
Pelagic oxyclines, the transition zone between oxygen rich surface waters and oxygen depleted deep waters, are a common characteristic of eutrophic lakes during summer stratification. They can have tremendous effects on the biodiversity and the ecosystem functioning of lakes and, to add insult to injury, are expected to become more frequent and more pronounced as climate warming progresses. On these grounds, this thesis endeavors to advance the understanding of formation, persistence, and consequences of pelagic oxyclines: We test whether the formation of metalimnetic oxygen minima is intrinsically tied to a locally enhanced oxygen consuming process, investigate the relative importance of vertical physical oxygen transport and biochemical oxygen consumption for the persistence of pelagic oxyclines, and finally assess their potential consequences for whole lake cycling. To pursue these objectives, the present thesis nearly exclusively resorts to in situ measurements. Field campaigns were conducted at three lakes in Germany featuring different types of oxyclines and resolved either a short (hours to days) or a long (weeks to months) time scale. Measurements comprised temperature, current velocity, and concentrations of oxygen and reduced substances in high temporal and vertical resolution. Additionally, vertical transport was estimated by applying the eddy correlation technique within the pelagic region for the first time. The thesis revealed that the formation of metalimnetic oxygen minima does not necessarily depend on locally enhanced oxygen depletion, but can solely result from gradients and curvatures of oxygen concentration and depletion and their relative position to each other. Physical oxygen transport was found to be relevant for oxycline persistence when it considerably postponed anoxia on a long time scale. However, its influence on oxygen dynamics was minor on short time scales, although mixing and transport were highly variable. Biochemical consumption always dominated the fate of oxygen in pelagic oxyclines. It was primarily determined by the oxidative breakdown of organic matter originating from the epilimnion, whereas in meromictic lakes, the oxidation of reduced substances dominated. Beyond that, the results of the thesis emphasize that pelagic oxyclines can be a hotspot of mineralization and, hence, short-circuit carbon and nutrient cycling in the upper part of the water column. Overall, the present thesis highlights the importance of considering physical transport as well as biochemical cycling in future studies.
Groundwater is essential for the provision of drinking water in many areas around the world. The ecosystem services provided by groundwater-related organisms are crucial for the quality of groundwater-bearing aquifers. Therefore, if remediation of contaminated groundwater is necessary, the remediation method has to be carefully selected to avoid risk-risk trade-offs that might impact these valuable ecosystems. In the present thesis, the ecotoxicity of the in situ remediation agent Carbo-Iron (a composite of zero valent nano-iron and active carbon) was investigated, an estimation of its environmental risk was performed, and the risk and benefit of a groundwater remediation with Carbo-Iron were comprehensively analysed.
At the beginning of the work on the present thesis, a sound assessment of the environmental risks of nanomaterials was impeded by a lack of guidance documents, resulting in many uncertainties on selection of suitable test methods and a low comparability of test results from different studies with similar nanomaterials. The reasons for the low comparability were based on methodological aspects of the testing procedures before and during the toxicity testing. Therefore, decision trees were developed as a tool to systematically decide on ecotoxicity test procedures for nanomaterials. Potential effects of Carbo-Iron on embryonic, juvenile and adult life stages of zebrafish (Danio rerio) and the amphipod Hyalella azteca were investigated in acute and chronic tests. These tests were based on existing OECD and EPA test guidelines (OECD, 1992a, 2013a, 2013b; US EPA, 2000) to facilitate the use of the obtained effect data in the risk assessment. Additionally, the uptake of particles into the test organisms was investigated using microscopic methods. In zebrafish embryos, effects of Carbo-Iron on gene expression were investigated. The obtained ecotoxicity data were complemented by studies with the waterflea Daphnia magna, the algae Scenedesmus vacuolatus, larvae of the insect species Chironomus riparius and nitrifying soil microorganisms.
In the fish embryo test, no passage of Carbo-Iron particles into the perivitelline space or the embryo was observed. In D. rerio and H. azteca, Carbo-Iron was detected in the gut at the end of exposure, but no passage into the surrounding tissue was detected. Carbo-Iron had no significant effect on soil microorganisms and on survival and growth of fish. However, it had significant effects on the growth, feeding rate and reproduction of H. azteca and on survival and reproduction in D. magna. Additionally, the development rate of C. riparius and the cell volume of S. vacuolatus were negatively influenced.
A predicted no effect concentration of 0.1 mg/L was derived from the ecotoxicity studies based on the no-effect level determined in the reproduction test with D. magna and an assessment factor of 10. It was compared to measured and modelled environmental concentrations for Carbo-Iron after application to an aquifer contaminated with chlorohydrocarbons in a field study. Based on these concentrations, risk quotients were derived. Additionally, the overall environmental risk before and after Carbo-Iron application was assessed to verify whether the chances for a risk-risk trade-off by the remediation of the contaminated site could be minimized. With the data used in the present study, a reduced environmental risk was identified after the application of Carbo-Iron. Thus, the benefit of remediation with Carbo-Iron outweighs potential negative effects on the environment.
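The threshold and risk quotient used above follow the standard ERA arithmetic; written out below, with the 1 mg/L no-effect level implied by the stated PNEC and assessment factor rather than quoted directly from the thesis:

```latex
\mathrm{PNEC} = \frac{\mathrm{NOEC}}{\mathrm{AF}} = \frac{1\,\mathrm{mg/L}}{10} = 0.1\,\mathrm{mg/L},
\qquad
\mathrm{RQ} = \frac{\mathrm{PEC}}{\mathrm{PNEC}}, \quad \mathrm{RQ} < 1 \;\Rightarrow\; \text{acceptable risk.}
```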
Today’s agriculture heavily relies on pesticides to manage diverse pests and maximise crop yields. Despite elaborate regulation of pesticide use based on a complex environmental risk assessment (ERA) scheme, the widespread use of these biologically active compounds has been shown to be a threat to the environment. For surface waters, pesticide exposure has been observed to exceed safe concentration levels and negatively impact stream ecology leading to the question whether current ERA schemes ensure a sustainable use of pesticides. To answer this, the large-scale “Kleingewässer-Monitoring” (KgM) assessed the occurrence of pesticides and related effects in 124 streams throughout Germany, Central Europe, in 2018 and 2019.
Based on five scientific publications originating from the KgM, this thesis evaluated pesticide exposure in streams, ecological effects and the regulatory implications. More than 1,000 water samples were analysed for over 100 pesticide analytes to characterise occurrence patterns (publication 1). Measured concentrations and effects were used to validate the exposure and effect concentrations predicted in the ERA (publication 2). By jointly analysing real-world pesticide application data and measured pesticide mixtures in streams, the disregard of environmental pesticide mixtures in the ERA was evaluated (publication 3). The toxic potential of mixtures in stream water was additionally investigated using suspect screening for 395 chemicals and a battery of in-vitro bioassays (publication 4). Finally, the results from the KgM stream monitoring were used to assess the capability to identify pesticide risks in governmental monitoring programmes (publication 5).
The results of this thesis reveal the widespread occurrence of pesticides in non-target stream ecosystems. The water samples contained a variety of pesticides occurring in complex mixtures, predominantly in short-term peaks after rainfall events (publications 1 & 4). The respective pesticide concentration maxima were linked to declines in vulnerable invertebrate species and exceeded regulatory acceptable concentrations in about 80% of agricultural streams, while these thresholds were still estimated to be partly insufficient to protect the invertebrate community (publication 2). The co-occurrence of pesticides in streams led to a risk underestimated in the single-substance-oriented ERA by a factor of about 3.2 in realistic worst-case scenarios, which is further exacerbated by the high frequency at which non-target organisms are exposed to pesticides (publication 3). Stream water samples taken after rainfall caused distinct effects in bioassays which were only explainable to a minor extent by the many analytes, indicating the relevance of unknown chemical or biological mixture components (publication 4). Finally, the regulatory monitoring of surface waters under the Water Framework Directive (WFD) was found to significantly underestimate pesticide risks, as about three quarters of critical pesticides and more than half of the streams at risk were overlooked (publication 5).
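The mixture risk referred to above is commonly quantified by summing single-substance risk quotients (concentration addition); the generic form below is a standard formulation and not necessarily the exact formula used in publication 3:

```latex
\mathrm{RQ}_{\mathrm{mix}} = \sum_{i=1}^{n} \frac{c_i}{\mathrm{RAC}_i} \;\ge\; \max_i \frac{c_i}{\mathrm{RAC}_i}
```

Because the sum is never smaller than its largest term, a single-substance assessment can only understate the mixture risk; in the realistic worst-case scenarios of this thesis the underestimation amounted to a factor of about 3.2.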
Essentially, this thesis involves a new level of validation of the ERA of pesticides in aquatic ecosystems by assessing pesticide occurrence and environmental impacts at a scale so far unique. The overall results demonstrate that the current agricultural use of pesticides leads to significant impacts on stream ecology that go beyond the level tolerated under the ERA. This thesis identified the underestimation of pesticide exposure, the potential insufficiency of regulatory thresholds and the general inertia of the authorisation process as the main causes why the ERA fails to meet its objectives. To achieve a sustainable use of pesticides, the thesis proposes substantial refinements of the ERA. Adequate monitoring programmes such as the KgM, which go beyond current government monitoring efforts, will continue to be needed to keep pesticide regulators constantly informed of the validity of their prospective ERA, which will always be subject to uncertainty.
Instructor feedback on written assignments is one of the most important elements in the writing process, especially for students writing in English as a foreign language. However, students are often critical of both the amount and quality of the feedback they receive. In order to better understand what makes feedback effective, this study explored the nature of students’ assessments of the educational alliance, and how their receptivity to, perceptions of, and decisions about using their instructors’ feedback differed depending on how strong they believed the educational alliance to be. This exploratory case study found that students not only assessed the quality of the educational alliance based on goal compatibility, task relevance, and teacher effectiveness, but that there was also a reciprocal relationship between these elements. Furthermore, students’ perceptions of the educational alliance directly influenced how they perceived the feedback, which made the instructor’s choice of feedback method largely irrelevant. Stronger educational alliances resulted in higher instances of critical engagement, intrinsic motivation, and feelings of self-efficacy. The multidirectional influence of goal, task, and bond means that instructors who want to maximize their feedback efforts need to attend to all three.
Thousands of chemicals from daily use are being discharged from civilization into the water cycle via different pathways. Ingredients of personal care products, detergents, pharmaceuticals, pesticides, and industrial chemicals thus find their way into aquatic ecosystems and may cause adverse impacts on the ecology. Pharmaceuticals, for instance, represent a central group of anthropogenic chemicals because of their designed potency to interfere with physiological functions in organisms. Ecotoxicological effects from pharmaceutical burden have been verified in the past. Therapeutic groups with pronounced endocrine disrupting potentials, such as steroid hormones, gain increasing focus in environmental research, as they have been reported to cause endocrine disruption in aquatic organisms even at environmentally relevant concentrations. This thesis comprises the comprehensive investigation of the occurrence of corticosteroids and progestogens in wastewater treatment plant (WWTP) effluents and surface waters as well as the elucidation of the fate and biodegradability of these steroid families during activated sludge treatment. For the first goal of the thesis, a robust and highly sensitive analytical method based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed in order to simultaneously determine the occurrence of around 60 mineralocorticoids, glucocorticoids and progestogens in the aquatic environment. A special focus was placed on the compound selection due to the diversity of marketed synthetic steroids. Several analytical challenges were addressed by individual approaches regarding sensitivity enhancement and compound stability. These results may be important for further research in the environmental analysis of steroid hormones. Reliable and low quantification limits are the prerequisite for the determination of corticosteroids and progestogens at relevant concentrations, due to low consumption volumes and simultaneously low effect-based trigger values. Achieved quantification limits for all target analytes ranged between 0.02 ng/L and 0.5 ng/L in surface water and 0.05 ng/L to 5 ng/L in WWTP effluents. This sensitivity enabled the detection of three mineralocorticoids, 23 glucocorticoids and 10 progestogens within the sampling campaign around Germany. Many of them were detected for the first time in the environment, particularly in Germany and the EU. To the best of our knowledge, this in-depth steroid screening provided a good overview of single steroid burden and allowed, for the first time, the identification of the predominant steroids of each steroid type analyzed.
The frequent detection of highly potent synthetic steroids (e.g. triamcinolone acetonide, clobetasol propionate, betamethasone valerate, dienogest, cyproterone acetate) highlighted insufficient removal during conventional wastewater treatment and indicated the need for regulation to control their emission, since the steroid concentrations were found to be above the reported effect-based trigger values for biota. Overall, the study provided reliable environmental data on poorly analyzed or previously unanalyzed steroids. The results complement the existing knowledge in this field but also provided new information which can be used particularly for compound prioritization in ecotoxicological research and environmental analysis. Based on the data obtained from the monitoring campaign, incubation experiments were conducted to enable the comparison of the biodegradability and transformation processes in activated sludge treatment for structurally related steroids under aerobic and standardized experimental conditions. The compounds were carefully selected to cover manifold structural moieties of commonly used glucocorticoids, including non-halogenated and halogenated steroids, their mono- and diesters, and several acetonide-type steroids. This approach allowed for a structure-based interpretation of the results. The obtained biodegradation rate constants suggested large variations in biodegradability (half-lives ranged from < 0.5 h to > 14 d). Increasing stability was identified in the order from non-halogenated steroids (e.g. hydrocortisone), over 9α-halogenated steroids (e.g. betamethasone), to C17-monoesters (e.g. betamethasone 17-valerate, clobetasol propionate), and finally to acetonides (e.g. triamcinolone acetonide), thus suggesting a strong relationship between biodegradability and glucocorticoid structure. Some explanations for this behavior were obtained by identifying the transformation products (TPs) and elucidating individual transformation pathways. For the first time, the results showed how the likelihood of transformation reactions depends on the chemical steroid structure. Among the identified TPs, the carboxylates (e.g. TPs of fluticasone propionate and triamcinolone acetonide) showed persistence in the subsequent incubation experiments. The newly identified TPs were furthermore frequently detected in the effluents of full-scale wastewater treatment plants. These findings emphasized i) the transferability of the lab-scale degradation experiments to the real world and ii) that insufficient removal may cause adverse effects in the aquatic environment due to the ability of the precursor steroids and TPs to interact with the endocrine system in biota. For the last goal, the conceptual study for glucocorticoids was applied to progestogens.
Here, two sub-types of the steroid family frequently used for hormonal contraception were selected (the 17α-hydroxyprogesterone and 19-nortestosterone types). The progestogens showed fast and complete degradation within six hours, thus indicating pronounced biodegradability. However, cyproterone acetate and dienogest were found to be more recalcitrant in activated sludge treatment. This was consistent with their ubiquitous occurrence during the previous monitoring campaign. The elucidation of TPs again revealed crucial information regarding the observed behavior and furthermore highlighted the formation of hazardous TPs. It was shown that 19-nortestosterone type steroids are able to undergo aromatization at ring A in contact with activated sludge, leading to the formation of estrogen-like TPs with a phenolic moiety at ring A. In the case of norethisterone, the formation of 17α-ethinylestradiol was confirmed, which is a well-known potent synthetic estrogen with elevated ecotoxicological potency. The results thus indicated for the very first time an unknown source of estrogenic compounds, particularly of 17α-ethinylestradiol. In conclusion, some steroids were found to be very stable in activated sludge treatment, others degrade well, and others degrade but predominantly to active TPs, depending on their chemical structure. Fluorinated acetonide steroids such as triamcinolone acetonide and fluocinolone acetonide are poorly biodegradable, which is reflected in the high concentrations detected ubiquitously in WWTP effluents. Endogenous steroids and their most closely related synthetic analogues, such as hydrocortisone, prednisolone or 17α-hydroxyprogesterone, are readily biodegradable. Regardless of their high influent concentrations, they are almost completely removed in conventional WWTPs. Steroids between these extremes were found to form elevated quantities of TPs which are partially still active, which is particularly the case for betamethasone, fluticasone propionate, cyproterone acetate and dienogest. The thesis illustrates the need for an extensive evaluation of the environmental risks and shows that corticosteroids and progestogens merit more attention in environmental regulation and research than is currently the case.
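The degradation rate constants and half-lives discussed for these steroids are linked by standard first-order kinetics; the relation below is generic, and the example numbers are illustrative rather than thesis data:

```latex
c(t) = c_0 \, e^{-k t}, \qquad t_{1/2} = \frac{\ln 2}{k}
```

For instance, a rate constant of about 1.4 h⁻¹ corresponds to a half-life of roughly 0.5 h, while about 0.002 h⁻¹ corresponds to roughly 14 d, spanning the range of half-lives reported above.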
The decline of biodiversity can be observed worldwide and its consequences are alarming. It is therefore crucial that nature be protected and, where possible, restored. A wide variety of project options are possible. Yet in the context of limited resources, the selection of the most efficient measures is increasingly important. For this purpose, there is still a lack of information. As outlined in the next paragraph, this pertains in particular to information at different scales of projects.
Firstly, there is a lack of information on the concrete added value of biodiversity protection projects. Secondly, there is a lack of information on the actual impacts of such projects and on the costs and benefits associated with a project. Finally, there is a lack of information on the links between the design of a project, the associated framework conditions and the perception of specific impacts. This paper addresses this knowledge gap by providing more information on the three scales by means of three empirical studies on three different biodiversity protection projects in order to help optimize future projects.
The first study “Assessing the trade-offs in more nature-friendly mosquito control in the Upper Rhine region” examines the added value of a more nature-friendly mosquito control in the Upper Rhine Valley of Germany using a contingent valuation method. Recent studies show that the widely used biocide Bti, which is used as the main mosquito control agent in many parts of the world, has more negative effects on nature than previously expected. However, it is not yet clear whether the population supports a more nature-friendly mosquito control, as such an adaptation could potentially lead to higher nuisance. This study attempts to answer this question by assessing the willingness to pay for an adapted mosquito control strategy that reduces the use of Bti, while maintaining nuisance protection within settlements. The results show that the majority of the surveyed population attaches a high value to a more nature-friendly mosquito control and is willing to accept a higher nuisance outside of the villages.
The second study “Inner city river restoration projects: the role of project components for acceptance” examines the acceptance of a river restoration project in Rhineland-Palatinate, Germany. Despite much effort, many rivers worldwide are still in poor condition. Therefore, a rapid implementation of river restoration projects is of great importance. In this context, acceptance by society plays a fundamental role, however, the factors determining such acceptance are still poorly understood. In particular, the complex interplay between the acceptance or rejection of specific project components and the acceptance of the overall project require further exploration. This study addresses this knowledge gap by assessing the acceptance of the project, its various ecological and social components, and the perception of real and fictitious costs as well as the benefits of the components. Our findings demonstrate that while acceptance of the overall project is generally rather high, many respondents reject one or more of the project's components. Complementary social project components, like a playground, find less support than purely ecological components. Overall, our research shows that complementary components may increase or decrease acceptance of the overall project. We, furthermore, found that differences in the acceptance of the individual components depend on individual concerns, such as perceived flood risk, construction costs, expected noise and littering as well as the quality of communication, attachment to the site, and the age of the respondents.
The third study “What determines preferences for semi-natural habitats in agrarian landscapes? A choice-modelling approach across two countries using attributes characterizing vegetation” investigates people's aesthetic preferences for semi-natural habitats in agricultural landscapes. The EU-Common Agricultural Policy promotes the introduction of woody and grassy semi-natural habitats (SNH) in agricultural landscapes. While the benefits of these structures in terms of regulating ecosystem services are already well understood, the effects of SNH on visual landscape quality is still not clear. This study investigates the factors determining people’s visual preferences in the context of grassy and woody SNH elements in Swiss and Hungarian landscapes using picture-based choice experiments. The results suggest that respondents’ choices strongly depend on specific vegetation characteristics that appear and disappear over the year. In particular, flowers as a source of colours and green vegetation as well as ordered structure and the proportion of uncovered soil in the picture play an important role regarding respondents’ aesthetic perceptions of the pictures.
The three empirical studies can help to make future projects in the study areas of biodiversity protection more efficient. While this thesis highlights the importance of exploring biodiversity protection projects at different scales, further analyses of the different scales of biodiversity protection projects are needed to provide a sound basis to develop guidance on identifying the most efficient biodiversity protection projects.
Organic Quality Management (OQM) has its roots in "Natural Church Development" (Natürliche Gemeindeentwicklung, NGE). Within an international research project, the Protestant theologian Christian A. Schwarz and the psychologist Christoph Schalk investigated whether there are universally valid principles for the qualitative and quantitative growth of church congregations. This study became one of the largest research projects ever conducted on congregational growth. By February 2011, 71,512 profiles had been compiled in more than 70 countries for congregations of very different traditions and denominations.
The result of this ongoing study offers a scientifically verifiable answer to the question: what are the growth principles that hold irrespective of culture, theological tradition, and style of piety, and can these principles also be applied beneficially to non-profit organisations and secular businesses? Chapter 1 first describes the origin and development of OQM, i.e. the process of transferring the insights gained from congregational development to the reality of church organisations and later to the requirements of a business enterprise. Chapter 2 describes the eight quality characteristics in detail. The focus here is less on the nouns (leadership, staff, structures, relationships, etc.) than on the adjectives (empowering, gift-oriented, functional, trusting, etc.). They describe what really matters in change processes.
At the start of the project it was not foreseeable whether, and in what way, the quality characteristics and the Christian social principles would prove to be success factors in other types of organisations as well. Chapter 3 presents practical examples showing that a dialogically working OQM can help not only church congregations but also charitable organisations and even businesses to grow and thus to be successful.
Research on male choral singing in nineteenth-century Germany demonstrates its social and political relevance. The so-called Sängerwesen (male choral movement) made a substantial contribution to nation-building in Germany, as the singers, through their singing and through their activities in their societies and in public, contributed to the inner unification of the population and thus helped to form a unified nation. In contrast, little has been known so far about the social and political background of the male choral movement in the Palatinate during the same period. To close this gap, the present thesis investigates the history of the male choral movement in the Palatinate, in particular with regard to its significance for German nation-building. The period under investigation extends from 1816, the year in which the Palatinate became part of the Bavarian state, to the founding of the German Empire in 1871.
First, the development of the Palatine choral movement is presented in overview, with regard to the number of societies founded in individual years and places and to the celebration of local and regional singers' festivals, and in comparison with the Palatine music society movement of the same period. This consideration of the Palatine male choral movement as a whole is followed by an examination of its individual parts, namely the people and events within individual choral societies and within particular periods, against the background of the respective political and social situation. Of particular importance in this context are the effects of the major political and social events, the Hambach Festival of 1832 and the Revolution of 1848/49, on the amateur musical societies.
Finally, the individual phenomena and the overall development are related to one another. In addition to excerpts from the minute books of the "Cäcilienverein-Liedertafel Dürkheim" and posters of Palatine music and singers' festivals of the 1840s, the appendix of the thesis contains overview tables with information on the Palatine music festivals of the nineteenth century and on the choral societies founded during the period under investigation, as well as maps of the geographical distribution of the societies and the score of the Masonic "Weihelied" by the Kaiserslautern seminary teacher Philipp Walter.
More than 10,000 organic chemicals such as pharmaceuticals, ingredients of personal care products and biocides are ubiquitously used in every day life. After their application, many of these chemicals enter the domestic sewer. Research has shown that conventional biological wastewater treatment in municipal wastewater treatment plants (WWTPs) is an insufficient barrier for the release of most of these anthropogenic chemicals into the receiving waters.
This bears unforeseen risks for aquatic wildlife and drinking water resources. Especially for recently introduced and/or detected compounds (so called emerging micropollutants), there is a growing need to investigate the occurrence and fate in WWTPs. In order to get a comprehensive picture on the behavior in municipal wastewater treatment, the following groups of emerging organic micropollutants, spanning a broad range of applications and physico-chemical properties, were selected as target compounds: pharmaceuticals (beta blockers, psycho-active drugs), UV-filters, vulcanization accelerators (benzothiazoles), biocides (anti-dandruffs, preservatives, disinfectants) and pesticides (phenylurea and triazine herbicides).
The work presented in this thesis investigated interactions of selected biophysical processes that affect zooplankton ecology at smaller scales. In this endeavour, the extent of changes in swimming behaviour and fluid disturbances produced by swimming Daphnia in response to changing physical environments were quantified. In the first research question addressed within this context, size and energetics of hydrodynamic trails produced by Daphnia swimming in non-stratified still waters were characterized and quantified as a function of organisms’ size and their swimming patterns.
The results revealed that neither the size nor the swimming pattern of Daphnia affects the width of induced trails or the dissipation rates. Nevertheless, as the size and swimming velocity of the organisms increased, trail volume increased in proportion to the cubic power of the Reynolds number, and the biggest trail volume was about 500 times the body volume of the largest daphnids. The larger spatial extent of the fluid perturbation and the prolonged decay period caused by bigger trail volumes would play a significant role in zooplankton ecology, e.g. by increasing the risk of predation.
The study also found that increased trail volume brought about significantly enhanced total dissipated power at higher Reynolds number, and the magnitudes of total dissipated power observed varied in the range of (1.3–10) × 10⁻⁹ W.
Furthermore, this study provided strong evidence that swimming speed of Daphnia and total dissipated power in Daphnia trails exceeded those of some other selected zooplankton species.
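The Reynolds number referred to above is the usual ratio of inertial to viscous forces; for an organism of body length L swimming at speed U in water of kinematic viscosity ν it reads (a generic definition; the example values below are assumptions, not thesis data):

```latex
\mathrm{Re} = \frac{U\,L}{\nu}
```

With ν ≈ 10⁻⁶ m² s⁻¹, a 2 mm daphnid swimming at 10 mm s⁻¹ operates at Re ≈ 20, an intermediate regime in which neither viscous nor inertial forces can be neglected.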
Recognizing turbulence as an intrinsic environmental perturbation in aquatic habitats, this thesis also examined the response of Daphnia to a range of turbulent flows corresponding to turbulence levels that zooplankton generally encounter in their habitats. Results indicated that, within the range of turbulent intensities to which Daphnia are likely to be exposed in their natural habitats, increasing turbulence compelled the organisms to enhance their swimming activity and swimming speed. However, as the turbulence increased to extremely high values (10⁻⁴ m² s⁻³), Daphnia began to withdraw from their active swimming behaviour. The findings also demonstrated that the threshold level of turbulence at which the animals start to scale back their largely active swimming is about 10⁻⁶ m² s⁻³. The study further showed that in the intermediate range of turbulence (10⁻⁷–10⁻⁶ m² s⁻³), kinetic energy dissipation rates in the vicinity of the organisms are consistently one order of magnitude higher than in the background turbulent flow.
Swarming, a common and conspicuous behavioural trait observed in many zooplankton species, is considered to play a significant role in the freshwater ecology of their habitats, from food exploitation and mate encounters to predator avoidance, through the hydrodynamic flow structures it produces. This thesis therefore also investigated the implications of Daphnia swarms at varied abundance and swarm densities for their swimming kinematics and the induced flow field.
The results showed that Daphnia aggregated in swarms with swarm densities of (1.1–2.3) × 10³ L⁻¹, which exceeded the abundance densities (1.7–6.7 L⁻¹) by two orders of magnitude. The estimated swarm volume decreased from 52 cm³ to 6.5 cm³, and the mean neighbouring distance dropped from 9.9 to 6.4 body lengths. The findings also showed that the mean swimming trajectories were primarily horizontal concentric circles around the light source. Mean flow speeds were found to be one order of magnitude lower than the corresponding swimming speeds of Daphnia. Furthermore, this study provided evidence that the flow fields produced by swarming Daphnia differed considerably between unidirectional vortex swarming at low abundances and bidirectional swimming at high abundances.
Agricultural land-use may lead to brief pulse exposures of pesticides in edge-of-field streams, potentially resulting in adverse effects on aquatic macrophytes, invertebrates and ecosystem functions. The higher tier risk assessment is mainly based on pond mesocosms which are not designed to mimic stream-typical conditions. Relatively little is known on exposure and effect assessment using stream mesocosms.
Thus the present thesis evaluates the applicability of stream mesocosms to mimic stream-typical pulse exposures, to assess the resulting effects on flora and fauna, and to evaluate aquatic-terrestrial food web coupling. The first objective was to mimic stream-typical pulse exposure scenarios with different durations (≤ 1 to ≥ 24 hours). These exposure scenarios, established using a fluorescence tracer, were the methodological basis for the effect assessment of an herbicide and an insecticide. In order to evaluate the applicability of stream mesocosms for regulatory purposes, the second objective was to assess effects on two aquatic macrophytes following a 24-h pulse exposure with the herbicide iofensulfuron-sodium (1, 3, 10 and 30 µg/L; n = 3). Growth inhibition of up to 66 and 45% was observed for the total shoot length of Myriophyllum spicatum and Elodea canadensis, respectively. Recovery of this endpoint could be demonstrated within 42 days for both macrophytes. The third objective was to assess effects on structural and functional endpoints following a 6-h pulse exposure of the pyrethroid ether etofenprox (0.05, 0.5 and 5 µg/L; n = 4). The most sensitive structural (abundance of Cloeon simile) and functional (feeding rates of Asellus aquaticus) endpoints revealed significant effects at 0.05 µg/L etofenprox. This concentration was below field-measured etofenprox concentrations and thus suggests that pulse exposures adversely affect invertebrate populations and ecosystem functions in streams. Such pollution of streams may also result in decreased emergence of aquatic insects and potentially lead to an insect-mediated transfer of pollutants to adjacent food webs. Test systems capable of assessing aquatic-terrestrial effects are not yet integrated in mesocosm approaches but might be of interest for substances with bioaccumulation potential. Here, the fourth part provides an aquatic-terrestrial model ecosystem capable of assessing cross-ecosystem effects. Information on the riparian food web, such as the contribution of aquatic (up to 71%) and terrestrial (up to 29%) insect prey to the diet of the riparian spider Tetragnatha extensa, was assessed via stable isotope ratios (δ13C and δ15N). Thus, the present thesis provides the methodological basis to assess aquatic-terrestrial pollutant transfer and effects on the riparian food web.
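The aquatic versus terrestrial diet contributions mentioned above follow the logic of a two-source stable isotope mixing model; its simplest one-isotope form is shown below as a generic relation, not necessarily the exact mixing model applied in the thesis, and trophic fractionation corrections are omitted:

```latex
f_{\mathrm{aq}} = \frac{\delta^{13}\mathrm{C}_{\mathrm{spider}} - \delta^{13}\mathrm{C}_{\mathrm{terr}}}
                       {\delta^{13}\mathrm{C}_{\mathrm{aq}} - \delta^{13}\mathrm{C}_{\mathrm{terr}}},
\qquad f_{\mathrm{terr}} = 1 - f_{\mathrm{aq}}
```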
Overall, the results of this thesis indicate that stream mesocosms can be used to mimic stream-typical pulse exposures of pesticides, to assess the resulting effects on macrophytes and invertebrates within prospective environmental risk assessment (ERA), and to evaluate changes in riparian food webs.
This work examines how the existing model for the simulation of cables and hoses can be advanced. For this purpose, the main influences on the shape simulation were investigated, and the effects of constraints and boundary conditions were analyzed. To validate the accuracy, the simulation has to be compared with the behavior of real specimens. To obtain a very precise digitization of the shape, a laser scanner was used that converts the point cloud into a .vrml file, which can be imported into the simulation environment. The assumption was that the simulation method itself has the largest impact on the simulated shape. For this reason, the capabilities of the most sophisticated methods were analyzed. Contrary to expectation, the main criterion for the success of a simulation approach proved not to be accuracy: process integration and usability turned out to be more important for efficient use. Other factors such as pricing, functionality, and real-time capability were assessed as well. The analyzed methods are based on solving the equations of elasticity with different kinds of discretization, namely finite elements and a spring-impulse system. Since the finite-element system takes several minutes to compute the shape and the spring-impulse system reacts to user manipulation with a delay, the competitiveness of these approaches is low. The other methods differ more in real-time performance, data interfaces, and functionality than in accuracy. For the accuracy of a system, the consideration of other factors proved to be very important. The accurate assignment of material properties was identified as one of these main factors. Before this work, only the finite-element approach dealt with this factor, but without documentation or validation. In the publications on the other methods, the material properties are estimated so as to obtain a plausible simulated shape. Therefore, the specific material values of real specimens were measured and assigned to the simulation. Comparison with the real shape showed that the accuracy is very high when the measured properties are used. Since these measurements are costly and time consuming, a faster and cheaper way to obtain these values was investigated. The assumption was that, given knowledge of the cross-section, it should be possible to compute the specimen behavior. Since the braid distribution changes from specimen to specimen, a more general way to obtain the required values had to be found. The program "composer" was developed, which requires only the number of the different braids and the taping as input and computes the stiffness, the density, and the final diameter of the bundle with very high precision. With the measured values and the fit to the real shape, it was shown that the simulation approach reflects the precise behavior of cables and hoses. Since the stiffness of the individual braids is laborious to measure, a measurement setup was created in which the stiffness has a large impact on the shape; with known density, the stiffness of the specimen can then be reconstructed precisely. Thus, a fast and inexpensive way of obtaining the stiffness of a cable was devised. Poisson's ratio of cables and bundles cannot be measured with a tensile test, since the inner structure is very complex. For hoses, the variation of the inner diameter was additionally measured during the tensile test.
The resulting values were reasonable, but their accuracy could not be proven. For cables and hoses, an attempt was made to obtain Poisson's ratio via computation of the cross-section, but the influence of individual variations and the interaction of the braids is very high. Therefore, a setup was constructed in which the torsional stiffness can be measured. For cables and hoses, the individual cross-sections and taping lead to varying results; for hoses, good and repeatable values for Poisson's ratio were obtained, as expected. The low influence of Poisson's ratio in the range between 0 and 0.5 was demonstrated. We therefore decided, following the advice of [Old06] and our own experience, to set Poisson's ratio for cables and bundles to 0.25. Based on the measurability and the capabilities of the developed program composer, a procedure for obtaining material properties of bundles was designed: 1. measurement of the braid density via pycnometer or from mass, length, and diameter; 2. empirical reconstruction of the stiffness with the designed setup; 3. composing the bundle with the program composer; 4. adding a factor for the taping and transferring the values to the simulation. The model of the cable simulation has been improved as follows: the main influences in the simulation of cables and hoses are the simulation method, the material properties, and the geometric constraints. To obtain higher accuracy, an investigation of the correct material properties is indispensable. The scientific determination of material properties for the simulation of cables, bundles, and hoses was performed for the first time. The influence of geometric constraints was analyzed and documented. The next steps are the analysis of pre-deformation and further investigations into the determination of Poisson's ratio with a more precise torsion test. All analyses were carried out with the simulation approach fleXengine; a comparison with other simulation methods would be of high interest.
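To illustrate the kind of composition step such a tool performs, the following sketch combines per-braid data into bundle properties. It is a minimal sketch under simplifying assumptions (a loose, untaped bundle whose braids bend individually; a user-chosen packing factor); the struct fields, function name, and formulas are illustrative and not taken from the thesis.

/* Illustrative sketch of combining per-braid data into bundle properties,
 * in the spirit of the "composer" tool described above. All formulas are
 * simplifying assumptions, not the thesis' actual model. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct braid { int count; double diameter_m, density_kg_m3, e_modulus_pa; };

void compose_bundle(const struct braid *b, int n, double packing_factor,
                    double *diameter_m, double *density_kg_m3,
                    double *bend_stiffness_nm2)
{
    double area = 0.0, mass_per_len = 0.0, ei = 0.0;
    for (int i = 0; i < n; ++i) {
        double a  = M_PI * pow(b[i].diameter_m / 2.0, 2.0);   /* braid cross-section   */
        double iy = M_PI * pow(b[i].diameter_m, 4.0) / 64.0;  /* area moment (circle)  */
        area         += b[i].count * a;
        mass_per_len += b[i].count * a * b[i].density_kg_m3;
        ei           += b[i].count * b[i].e_modulus_pa * iy;  /* braids bend individually */
    }
    double packed_area = area / packing_factor;               /* packing_factor < 1, e.g. 0.8 */
    *diameter_m        = 2.0 * sqrt(packed_area / M_PI);
    *density_kg_m3     = mass_per_len / packed_area;
    *bend_stiffness_nm2 = ei;
}

In such a scheme, the taping could be reflected by an additional correction factor on the bending stiffness, which is roughly what step 4 of the procedure above suggests.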
The present study focuses on geometric tasks and the levels of cognitive demand ("Anforderungsbereiche") formulated in the national educational standards for mathematics in primary education, which have been binding since 2004. These levels describe the cognitive demands placed on pupils when working on tasks, distinguishing between "reproducing" ("Reproduzieren"), "making connections" ("Zusammenhänge herstellen") and "generalising and reflecting" ("Verallgemeinern und Reflektieren") (KMK, 2005a, p. 13).
Among other things, the three levels of demand are intended to give teachers the opportunity to develop a demand-oriented task culture. Furthermore, they are meant to encourage the integration of tasks from all three levels into lessons and to counteract one-sided teaching.
Since the levels of demand have not yet been empirically validated and are not clearly distinguished from task difficulty in the publications of the Kultusministerkonferenz (KMK, 2005a, p. 13; KMK, 2005b, p. 17; KMK, 2004b, p. 13), the present study first examined whether geometric tasks can be unambiguously assigned to the three levels.
Second, it was investigated to what extent the cognitive demands contained in the geometric tasks are related to the empirical difficulty of the tasks, to pupils' mathematical performance, to gender, and to the demands of the tasks set in class.
Against the background of the calculation and skill orientation often attributed to German mathematics teaching (Baumert et al., 2001, p. 296; Granzer & Walther, 2008, p. 9) and the associated strengths of German pupils in routine tasks and weaknesses in tasks with higher cognitive demands (Grassmann et al., 2014, p. 11; Reiss & Hammer, 2013, p. 82; Schütte, 2008, p. 41), the distribution of the written geometric textbook and classroom tasks collected in the study across the three levels of demand was also analysed.
By examining geometric tasks, the quantitative share of geometry in textbooks and in fourth-grade lessons could be determined on a sample basis, thereby updating and supplementing the state of research on the status of geometry teaching (Maier, 1999; Backe-Neuwald, 2000; Roick, Gölitz & Hasselhorn, 2004).
Will Eisner's graphic novels testify to a deep identification with Judaism as ethnicity, religion and culture, and reflect Judaism in all its facets. It is particularly noteworthy that the development of his complete works runs parallel to the history of the emancipation of the Jewish population in New York City. The volume clusters the Jewish aspects of Eisner's work into, for example, factors of collective memory, dealing with and fighting against antisemitism, and religious particularities.
The competition for the best technologies to realise autonomous driving is in full swing worldwide.
Despite great efforts, however, autonomous navigation in structured and, above all, unstructured environments has not yet been solved.
A decisive building block in this field is the perception and analysis of the environment using suitable sensors and appropriate evaluation of the sensor data.
In particular, imaging techniques in the part of the spectrum visible to humans are widely used in both practice and research.
However, this exploits only a fraction of the electromagnetic spectrum and consequently ignores a large part of the information available for environment perception.
To make better use of the available spectrum, other research fields have for decades employed so-called spectral sensors, which analyse the electromagnetic spectrum much more finely and over a larger range than classical colour cameras. Due to technical limitations, however, these systems could only capture static scenes. Thanks to the so-called snapshot mosaic filter technique, recent developments in sensor technology now enable the spectral sampling of dynamic scenes.
This dissertation investigates the use and suitability of the snapshot mosaic technique for environment perception and scene analysis in the context of autonomous navigation in structured and unstructured environments. It examines whether the recorded spectral data offer an advantage over classical RGB or greyscale data with respect to semantic scene analysis and classification.
First, a suitable preprocessing pipeline is developed that computes spectral values from the raw sensor data. Subsequently, the construction of novel datasets with spectral data is described. These datasets serve as the basis for evaluating various classifiers from classical machine learning.
Building on this, methods and architectures from the field of deep learning are presented. Selected architectures are used to investigate whether they can also be trained with spectral data. Furthermore, the use of deep learning methods for data compression is addressed. In a next step, the compressed data are used to train network architectures that have so far only been compatible with RGB data. Finally, it is analysed whether the high-dimensional spectral data offer advantages over RGB data in scene analysis.
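As an illustration of the preprocessing step mentioned above, the following sketch extracts one low-resolution image per band from a raw snapshot mosaic frame. The 4x4 pattern, 8-bit depth, and plain subsampling (no interpolation or reflectance calibration) are assumptions chosen for illustration, not details taken from the dissertation.

/* Minimal demosaicking sketch for a snapshot mosaic sensor.
 * Assumed: a 4x4 filter mosaic yielding 16 spectral bands and
 * simple subsampling without interpolation. */
#include <stdint.h>

#define MOSAIC 4                      /* 4x4 pattern -> 16 bands */
#define BANDS  (MOSAIC * MOSAIC)

/* raw:  width x height mosaic image (row-major)
 * cube: BANDS images of size (width/MOSAIC) x (height/MOSAIC) */
void demosaic_subsample(const uint8_t *raw, int width, int height,
                        uint8_t *cube)
{
    int bw = width / MOSAIC, bh = height / MOSAIC;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int band = (y % MOSAIC) * MOSAIC + (x % MOSAIC); /* position in pattern */
            int bx = x / MOSAIC, by = y / MOSAIC;            /* pixel in band image */
            cube[(band * bh + by) * bw + bx] = raw[y * width + x];
        }
    }
}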
Research in model-based object recognition and object localization has a promising future; building recognition in particular offers a wide range of applications. Determining the position and orientation of the observer relative to a building is a central component of building recognition.
The core of this work is the development of a system for model-based pose estimation that operates independently of the application domain. Model-based pose estimation from building images is chosen as the application domain. As a preparatory step, model-based recognition of dominoes and playing cards is realized. An application-independent control strategy interprets application-specific models in order to localize them in the image and to determine the pose with their help. Explicitly represented model knowledge is used, so that image features can be assigned to model components. These correspondences make it possible to recover the camera pose from a single monocular image. The procedure is independent of the application and can also handle models of other rigid objects, provided they conform to the defined model representation. Determining the pose of a model from a single image, which may contain noise and occlusions, requires a systematic comparison of the model with the image data. Quantitative and qualitative evaluations confirm the accuracy of the estimated building poses.
This work additionally presents a semi-automatic procedure for generating a building model. The building model used, which contains both semantic and geometric knowledge, satisfies the requirements of object recognition and pose estimation and still follows existing standards, is a prerequisite for the pose estimation procedure. The guiding principle of the model representation is that it remains interpretable for humans. A semi-automatic approach was chosen because a fully automatic implementation can hardly achieve the required precision. The developed procedure achieves the precision required for pose estimation while reducing user interaction to a minimum. A qualitative evaluation confirms the precision achieved in the generation of the building model.
There has been little research on out-of-school places of learning and their effectiveness in the context of ESD education measures. With the help of a multi-stage analysis, this study identifies out-of-school places of learning with reference to the ESD education concept in the Rhineland-Palatinate study area. To this end, qualitative literature analyses were first used to generate ESD criteria, which were operationalised as a methodological instrument in the form of an ESD checklist for out-of-school places of learning. The data obtained in this way provide the basis for the creation of a geographically oriented learning location database with ESD reference. A cartographic visualisation of the data results in a spatial distribution pattern: there are districts and cities that are well supplied with ESD learning locations, but also real ESD learning location deserts where there is a need to catch up. Furthermore, there is an accumulation of ESD learning sites in areas close to forests.
A guideline-based explorative interview with two ESD experts provides additional insights into the question of how ESD has been implemented in the federal state of Rhineland-Palatinate, the extent to which there is a need for optimisation, and which continuing measures are being taken for ESD outside schools within the framework of Agenda 2030.
In addition, a quantitative questionnaire study was carried out with 1358 pupils at 30 out-of-school places of learning after participation in an educational measure, in which environmental awareness, attitudes towards environmental behaviour and local learning were also considered. By including non-ESD learning locations, a comparative study on the effectiveness of ESD learning locations became possible. The statistical data evaluation leads to a variety of interesting results. Counterintuitively, for instance, the type of learning location (ESD or non-ESD learning location) is not a significant predictor of the environmental awareness and environmental behaviour of the surveyed students, whereas communication structures within educational measures at extracurricular learning locations, multimediality and action orientation, and the duration of educational measures have a significant influence.
Keywords: extracurricular learning locations, education for sustainable development (ESD), ESD criteria, learning location landscape Rhineland-Palatinate, ESD learning locations, environmental awareness, environmental behaviour.
This thesis addresses the automated identification and localization of a time-varying number of objects in a stream of sensor data. The problem is challenging due to its combinatorial nature: if the number of objects is unknown, the number of possible object trajectories grows exponentially with the number of observations. Random finite sets are a relatively new theory that has been developed to derive principled and efficient approximations. It is based around set-valued random variables that contain an unknown number of elements which appear in arbitrary order and are themselves random. While extensively studied in theory, random finite sets have not yet become a leading paradigm in practical computer vision and robotics applications. This thesis explores random finite sets in visual tracking applications. The first method developed in this thesis combines set-valued recursive filtering with global optimization. The problem is approached in a min-cost flow network formulation, which has become a standard inference framework for multiple object tracking due to its efficiency and optimality. A main limitation of this formulation is its restriction to unary and pairwise cost terms, which makes the integration of higher-order motion models challenging. The method developed in this thesis addresses this limitation by applying a Probability Hypothesis Density filter. The Probability Hypothesis Density filter was the first practically implemented state estimator based on random finite sets. It circumvents the combinatorial nature of data association by propagating an object density measure that can be computed efficiently, without maintaining explicit trajectory hypotheses. In this work, the filter recursion is used to augment measurements with an additional hidden kinematic state that is used to construct more informed flow network cost terms, e.g., based on linear motion models. The method is evaluated on public benchmarks, where a considerable improvement is achieved compared to network flow formulations that are based on static features alone, such as the distance between detections and appearance similarity. A second part of this thesis focuses on the related task of detecting and tracking a single robot operator in crowded environments. Different from the conventional multiple object tracking scenario, the tracked individual can leave the scene and reappear after a longer period of absence. Therefore, a re-identification component is required that picks up the track on re-entrance. Based on random finite sets, the Bernoulli filter is an optimal Bayes filter that provides a natural representation for this type of problem. In this work, it is shown how the Bernoulli filter can be combined with a Probability Hypothesis Density filter to track the operator and non-operators simultaneously. The method is evaluated on a publicly available multiple object tracking dataset as well as on custom sequences that are specific to the targeted application. Experiments show reliable tracking in crowded scenes and robust re-identification after long-term occlusion. Finally, a third part of this thesis focuses on appearance modeling as an essential aspect of any method that is applied to visual object tracking scenarios. A feature representation that is robust to pose variations and changing lighting conditions is therefore learned offline, before the actual tracking application.
This thesis proposes a joint classification and metric learning objective in which a deep convolutional neural network is trained to identify the individuals in the training set. At test time, the final classification layer can be stripped from the network, and appearance similarity can be queried using cosine distance in representation space. This framework represents an alternative to direct metric learning objectives that have required sophisticated pair or triplet sampling strategies in the past. The method is evaluated on two large-scale person re-identification datasets, where competitive results are achieved overall. In particular, the proposed method generalizes better to the test set than a network trained with the well-established triplet loss.
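As a small illustration of the query step described above, the sketch below compares two feature vectors by cosine distance; the vector dimensionality and contents are placeholders, and the network that produces the embeddings is not part of the sketch.

/* Cosine distance between two embeddings, as used for appearance
 * similarity queries once the classification layer is removed. */
#include <math.h>

/* Returns 1 - cos(angle); 0 means identical direction, 2 means opposite. */
double cosine_distance(const float *a, const float *b, int dim)
{
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (int i = 0; i < dim; ++i) {
        dot += (double)a[i] * b[i];
        na  += (double)a[i] * a[i];
        nb  += (double)b[i] * b[i];
    }
    return 1.0 - dot / (sqrt(na) * sqrt(nb) + 1e-12); /* epsilon avoids division by zero */
}

If the embeddings are length-normalised during training, as is common for cosine-based objectives, the distance reduces to one minus a plain dot product.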
In recent years, e-government has concentrated on the administrative aspects of administrative modernisation. In a next step, e-discourses will gain importance as an instrument of citizen-friendliness and a means of e-democracy and e-participation. With growing acceptance, such e-discourses will quickly reach a complexity that can no longer be mastered by the participants, and many impressions that can be gained from face-to-face discussions will be missing. This thesis therefore aims at the conception and prototypical implementation of an instrument (discourse meter) that enables the participants, and in particular the moderators of an e-discourse, to keep an overview of the e-discourse at any time and thereby attain discourse awareness. Discourse awareness of the present informs about the current activity in the e-discourse, and discourse awareness of the past about past activity, which makes trends visible. The focus of discourse awareness lies on a quantitative view of the activity in the e-discourse. From the model of the e-discourse developed in this thesis, the questions of discourse awareness are derived, whose concretisation is the basis for the implementation of the discourse meter. Discourse sensors attached to the model of the e-discourse record the actions in the e-discourse as discourse events, which the discourse meter represents in various forms of visualisation. The discourse meter concept thus offers the moderators of the e-discourse discourse awareness of the present in the form of monitoring and discourse awareness of the past in the form of queries (quantitative analysis).
Climate change is an existential threat to human survival, the social organization of society, and the stability of ecosystems. It is thereby profoundly frightening. In the face of threat, people often want to protect themselves instead of engaging in mitigating behaviors. When psychological resources are insufficient to cope, people often respond with different forms of denial. In this dissertation, I contribute original knowledge to the understanding of the multifaceted phenomenon of climate denial from a psychological perspective.
There are four major gaps in the literature on climate denial: First, the spectrum of climate denial as a self-protective response to the climate crisis has not received attention within psychology. Second, basic psychological need satisfaction, a fundamental indicator of human functioning and the ability to cope with threat, has not been investigated as a predictor of climate denial. Third, relations of the spectrum of climate denial to climate-relevant emotions, specifically climate anxiety, have not been examined empirically. Fourth, it has not been investigated how the spectrum of climate denial relates to established predictors of climate denial, namely right-wing ideological convictions and male gender. To address those gaps, I investigate what the spectrum of climate denial looks like in the German context and how it relates to basic psychological need satisfaction and frustration, pro-environmental behavior, climate anxiety, ideological conviction, and gender.
Five manuscripts reveal that climate denial exists on a spectrum in the German context, ranging from the distortion of facts (interpretive climate denial, specifically denial of personal and global outcome severity) to the denial of the implications of climate change (implicatory climate denial, specifically avoidance, denial of guilt, and rationalization of one's own involvement). Across analyses, low basic psychological need satisfaction predicted the spectrum of climate denial, which was negatively related to pro-environmental behavior. Climate denial was generally negatively related to climate anxiety, except for a positive association of avoidance and climate anxiety. Right-wing ideological conviction was the strongest predictor of climate denial across the spectrum. However, low need satisfaction and male gender were additional weaker predictors of implicatory climate denial.
These findings suggest that the spectrum of climate denial serves many psychological functions. Climate denial possibly serves both as a self-protective strategy to downregulate emotions and as a way to protect oneself from loss of privilege. In short, it represents a barrier to climate action that may only be resolved once people have sufficient psychological resources to face the threat of climate change and cope with their underlying self-protective, emotional responses.
In a world where language defines the boundaries of one's understanding, the words of Austrian philosopher Ludwig Wittgenstein resonate profoundly. Wittgenstein's assertion that "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt" (Wittgenstein 2016: v. 5.6) underscores the vital role of language in shaping our perceptions. Today, in a globalized and interconnected society, fluency in foreign languages is indispensable for individual success. Education must break down these linguistic barriers, and one promising approach is the integration of foreign languages into content subjects.
Teaching content subjects in a foreign language, a practice known as Content and Language Integrated Learning (CLIL), not only enhances language skills but also cultivates cognitive abilities and intercultural competence. This approach expands horizons and aligns with the core principles of European education (Leaton Gray, Scott & Mehisto 2018: 50). The Kultusministerkonferenz (KMK) recognizes the benefits of CLIL and encourages its implementation in German schools (cf. KMK 2013a).
With the rising popularity of CLIL, textbooks in foreign languages have become widely available, simplifying teaching. However, the appropriateness of the language used in these materials remains an unanswered question. If textbooks impose excessive linguistic demands, they may inadvertently limit students' development and contradict the goal of CLIL.
This thesis focuses on addressing this issue by systematically analyzing language requirements in CLIL teaching materials, emphasizing receptive and productive skills in various subjects based on the Common European Framework of Reference. The aim is to identify a sequence of subjects that facilitates students' language skill development throughout their school years. Such a sequence would enable teachers to harness the full potential of CLIL, fostering a bidirectional approach where content subjects facilitate language learning.
While research on CLIL is extensive, studies on language requirements for bilingual students are limited. This thesis seeks to bridge this gap by presenting findings for History, Geography, Biology, and Mathematics, allowing for a comprehensive understanding of language demands. This research endeavors to enrich the field of bilingual education and CLIL, ultimately benefiting the academic success of students in an interconnected world.
Foliicolous lichens are among the most abundant epiphytes in tropical rainforests and one of the few groups of organisms that characterize these forests. Tropical rainforests are increasingly affected by anthropogenic disturbance resulting in forest destruction and degradation. However, not much is known about the effects of anthropogenic disturbance on the diversity of foliicolous lichens. Understanding such effects is crucial for the development of appropriate measures for the conservation of these organisms. In this study, foliicolous lichen diversity was investigated in three tropical rainforests in East Africa. Godere Forest in southwest Ethiopia is a transitional rainforest with a mixture of Afromontane and Guineo-Congolian species. The forest is secondary and has been affected by shifting cultivation, semi-forest coffee management and commercial coffee plantations. Budongo Forest in western Uganda is a Guineo-Congolian rainforest consisting of primary and secondary forests. Kakamega Forest in western Kenya is a transitional rainforest with a mixture of Guineo-Congolian and Afromontane species. The forest is a mosaic of near-primary forest, secondary forests of different seral stages, grasslands, plantations, and natural glades.
Background: Somatoform symptoms are a prevalent and disabling condition in primary care, causing high medical care utilization. Objective: To compare the short- and long-term effects of cognitive-behavioral outpatient group therapy with those of a relaxation group and a waiting-list control group on physical symptoms, anxiety, depression, functional health, symptom-specific cognitions and illness behavior. Methods: 135 subjects were treated and assessed in a randomized controlled design. The manualized interventions comprised eight sessions. Results: The cognitive-behavioral group treatment led to lower levels of somatoform symptoms (SOMS-7) and enhanced mental health (SF-12). There were no differential effects between cognitive-behavioral therapy and relaxation treatment on any of the analysed variables. Conclusions: This brief cognitive-behavioral group therapy has beneficial effects on outpatients with somatoform symptoms. To enhance effect sizes and facilitate differential effects, future studies should consider applying an increased therapy dosage.
While the 1960s and 1970s still knew permanent education (Council of Europe), recurrent education (OECD) and lifelong education (UNESCO), over the past 20 years, lifelong learning has become the single emblem for reforms in (pre-) primary, higher and adult education systems and international debates on education. Both highly industrialized and less industrialized countries embrace the concept as a response to the most diverse economic, social and demographic challenges - in many cases motivated by international organizations (IOs).
Yet, literature on the nature of this influence, the diffusion of the concept among IOs and their understanding of it is scant and usually focuses on a small set of actors. Based on longitudinal data and a large set of education documents, the work identifies rapid diffusion of the concept across a heterogeneous, expansive and dynamic international field of 88 IOs in the period 1990-2013, which is difficult to explain with functionalist accounts.
Based on the premises of world polity theory, this paper argues that what diffuses resembles less the bundle of systemic reforms usually associated with the concept in the literature and more a surprisingly detailed model of a new actor: the lifelong learner.
The present work examines the influence of forest and logging roads on runoff generation and soil erosion rates within a forested catchment in the Laacher See nature reserve. To this end, existing erosion and accumulation features were mapped in the field, and erosion simulations were carried out with a small portable rainfall simulator. Finally, the erosion potential was modelled on the basis of the simulation results.
The analysis of existing erosion and accumulation features in the field indicated soil erosion rates from road surfaces of between 27.3 and 93.5 t ha⁻¹ a⁻¹, which is of the same order of magnitude as erosion rates under intensive arable land use.
The simulation runs showed that permanent forest roads exhibit markedly altered infiltration behaviour. On natural forest soils, an average of 96% of the precipitation infiltrated; on forest roads, this share dropped to an average of 14% to 7%. The results on skid trails were particularly striking, where a considerable influence of soil compaction by machine traffic was demonstrated: the share of infiltrated precipitation fell to 31% in the wheel tracks, while 76% still infiltrated between the tracks.
During the simulation runs, maximum sediment amounts of 446 g m⁻² were eroded, corresponding to a mean soil erosion rate of 4.96 g m⁻² min⁻¹. These high rates were measured on permanent roads with little surface reinforcement. Skid trails showed the lowest erosion values, with a maximum of 37 g m⁻² eroded, equivalent to an erosion rate of 0.41 g m⁻² min⁻¹. On average, the eroded sediment amounts were 167 to 319 g m⁻² for roads and 17 g m⁻² for skid trails. Comparative measurements on forest sites, where a mean soil loss of about 5 g m⁻² was determined, confirmed an increased erodibility for every type of road.
The models were calibrated on the basis of the erosion rates measured in the field. For the study area, the results of the ABAG / DIN 19708 showed a mean annual soil erosion risk of 2.4 to 5.8 t ha⁻¹ a⁻¹ for permanent roads and 0.5 t ha⁻¹ a⁻¹ for skid trails. Compared with the mean of largely undisturbed forest areas in the study area of 0.1 t ha⁻¹ a⁻¹, this again indicates an increased erosion potential. The physically based modelling of the rainfall experiments with WEPP gave satisfactory results for estimating runoff behaviour; for permanent roads, deviations of at most -5% were found. In contrast, the runoff modelling on skid trails and the general modelling of soil erosion during the rainfall experiments were still error-prone, which can be attributed to the input data being relatively limited for a physically based model.
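For orientation, the ABAG (DIN 19708) referenced above follows the multiplicative form of the Universal Soil Loss Equation; the expression below shows that standard form with its usual symbols and is not a result taken from this work:

A = R · K · L · S · C · P

where A is the mean annual soil loss (t ha⁻¹ a⁻¹), R the rainfall erosivity factor, K the soil erodibility factor, L the slope length factor, S the slope steepness factor, C the cover and management factor, and P the erosion control practice factor.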
It was demonstrated that forest roads have a substantial influence on the water balance and on soil erosion processes. The retention of precipitation is reduced and soil erosion processes are intensified. Poorly reinforced roads showed strongly increased soil loss, which can lead to ecological damage; the erosion can also impair trafficability. These consequences illustrate the relevance of considering runoff and soil erosion processes on forest and logging roads. The present work is the first study to investigate runoff and soil erosion processes for forest road networks in Central Europe.
Placing questions before or after the material constitutes different reading situations, and readers may adapt to these situations by applying appropriate reading strategies. Reading strategies induced by question location have been intensively explored in the context of text comprehension. (1) However, it is still not sufficiently known whether text plays the same role as pictures when readers apply different reading strategies. To answer this research question, three reading strategies were experimentally manipulated by displaying a question before or after materials that blend text and pictures: (a) unguided processing of text and pictures without a question, (b) information gathering to answer a question after prior experience with text and pictures, and (c) comprehending text and pictures to solve a question that is known in advance. (2) Moreover, it is unclear whether readers prefer text or pictures when the presented questions differ in difficulty. (3) Furthermore, it is still uncertain whether students from the higher school tier (Gymnasium) rely more on text or on pictures than students from the lower school tier (Realschule). (4) Finally, it is rarely examined whether higher graders are better able than lower graders to apply reading strategies in text processing and picture processing.
Two experiments were conducted to investigate the use of text and pictures from the perspectives of task orientation, question difficulty, school tier and grade. For a 2x2(x2x2x2) mixed design using eye tracking, participants were recruited from grade 5 (N = 72) and grade 8 (N = 72). In Experiment 1, thirty-six 5th graders were recruited from the higher tier (Gymnasium) and thirty-six from the lower tier (Realschule). In Experiment 2, thirty-six 8th graders were recruited from the higher tier and thirty-six from the lower tier. Their task was to comprehend materials combining text and pictures and to answer the questions. A Tobii XL60 eye tracker recorded their eye movements and their answers to the questions. Eye tracking indicators such as accumulated fixation duration, time to first fixation and transitions between different Areas of Interest were analyzed and reported. The results reveal that students process text differently from pictures when they follow different reading strategies. (1) Consistent with Hypothesis 1, students mainly use text to construct their mental model during unguided spontaneous processing of text and pictures. They seem to rely mainly on the pictures as external representations when trying to answer questions after prior experience with the material, and they attend to both text and pictures when questions are presented before the material. (2) Inconsistent with Hypothesis 2, students attend more to both text and pictures as question difficulty increases; however, the increase in focus on pictures is larger than that on text when the presented question is difficult. (3) Contrary to Hypothesis 3, the study finds that higher-tier students do not differ from lower-tier students in text processing; instead, students from the higher tier attend more to pictures than students from the lower tier. (4) Contrary to Hypothesis 4, 8th graders outperform 5th graders mainly in text processing, and only a subtle difference between 5th and 8th graders is found in picture processing.
To sum up, text processing differs from picture processing when different reading strategies are applied. In line with the Integrative Model of Text and Picture Comprehension by Schnotz (2014), text is likely to play the major part in guiding the processing of meaning or in general reading, whereas pictures are used as external representations for information retrieval or selective reading. When questions are difficult, pictures are emphasized due to their advantage in visualizing the internal structure of information. Compared to lower-tier students (poorer problem solvers), higher-tier students (good problem solvers) are more capable of comprehending pictures rather than text. Eighth graders are more efficient than 5th graders in text processing rather than in picture processing. This also suggests that, in designing school curricula, more attention should be paid to students' competence in picture comprehension and text-picture integration in the future.
Based on dual-process models of information processing, the present research addressed how explicit disgust sensitivity is re-adapted according to implicit disgust sensitivity via self-perception of automatic behavioral cues. Contrary to preceding studies (Hofmann, Gschwendner, & Schmitt, 2009), which concluded that there was a "blind spot" for self- but not for observer perception of automatic behavioral cues, the present research found a re-adaptation process for self-perceivers and observers. In Study 1 (N = 75), the predictive validity of an indirect disgust sensitivity measure was tested with a double-dissociation strategy. Study 2 (N = 117) reinvestigated the hypothesis that self-perception of automatic behavioral cues, predicted by an indirect disgust sensitivity measure, leads to a re-adaptation of explicit disgust sensitivity measures. Departing from the approach of Hofmann et al. (2009), the self-perception procedure was modified by (a) feeding back the behavior several times while only a small number of cues had to be rated in each feedback condition, (b) using disgust sensitivity as a domain with clearly unequivocal cues of automatic behavior (facial expression, body movements) and describing these cues unambiguously, and (c) using a specific explicit disgust sensitivity measure in addition to a general explicit disgust sensitivity measure. In Study 3 (N = 130), the findings of Study 2 were replicated, and display rules and need for closure were additionally investigated as moderators of predictive validity and cue utilization. The moderator effects suggest that both displaying a disgusted facial expression and self-perception of one's own disgusted facial expression are subject to a self-serving bias, indicating that the facial expression may not be an automatic behavior. Practical implications and implications for future research are discussed.
Leaf litter breakdown is a fundamental process in aquatic ecosystems, being mainly mediated by decomposer-detritivore systems that are composed of microbial decomposers and leaf-shredding, detritivorous invertebrates. The ecological integrity of these systems can, however, be disturbed, amongst others, by chemical stressors. Fungicides might pose a particular risk as they can have negative effects on the involved microbial decomposers but may also affect shredders via both waterborne toxicity and their diet; the latter by toxic effects due to dietary exposure as a result of fungicides’ accumulation on leaf material and by negatively affecting fungal leaf decomposers, on which shredders’ nutrition heavily relies. The primary aim of this thesis was therefore to provide an in-depth assessment of the ecotoxicological implications of fungicides in a model decomposer-detritivore system using a tiered experimental approach to investigate (1) waterborne toxicity in a model shredder, i.e., Gammarus fossarum, (2) structural and functional implications in leaf-associated microbial communities, and (3) the relative importance of waterborne and diet-related effects for the model shredder.
Additionally, knowledge gaps were addressed concerning potential differences in the ecotoxicological impact of inorganic fungicides (which are also authorized for organic farming in large parts of the world) and organic fungicides, the mixture toxicity of these substances, the field relevance of their effects, and the appropriateness of current environmental risk assessment (ERA).
In the course of this thesis, major differences in the effects of inorganic and organic fungicides on the model decomposer-detritivore system were uncovered; for example, the palatability of leaves for G. fossarum was increased by inorganic fungicides but deteriorated by organic substances. Furthermore, non-additive action of fungicides was observed, rendering mixture effects of these substances hardly predictable. While the relative importance of the waterborne and the diet-related effect pathway for the model shredder seems to depend on the fungicide group and the exposure concentration, it was demonstrated that neither pathway may be ignored, as the two act additively. Finally, it was shown that effects can be expected at field-relevant fungicide levels and that current ERA may provide insufficient protection for decomposer-detritivore systems. To safeguard aquatic ecosystem functioning, this thesis thus recommends including leaf-associated microbial communities and long-term feeding studies using detritus feeders in ERA testing schemes, and identifies several knowledge gaps whose filling seems mandatory to develop further reasonable refinements of fungicide ERA.
Real-time operating systems for mixed-criticality systems must support different types of software, such as real-time applications and general-purpose applications, and, at the same time, must provide strong spatial and temporal isolation between independent software components. Therefore, state-of-the-art real-time operating systems focus mainly on predictability and bounded worst-case behavior. However, general-purpose operating systems such as Linux often feature more efficient, but less deterministic, mechanisms that significantly improve the average execution time. This thesis addresses the combination of these two contradicting requirements and presents thread synchronization mechanisms with efficient average-case behavior that do not sacrifice predictability and worst-case behavior.
The thesis explores and evaluates the design space of fast paths in the implementation of typical blocking synchronization mechanisms, such as mutexes, condition variables, counting semaphores, barriers, or message queues. The key technique is to avoid unnecessary system calls, as system calls have high costs compared to other processor operations available in user space, such as low-level atomic synchronization primitives. In particular, the thesis explores futexes, the state-of-the-art design for blocking synchronization mechanisms in Linux, which handles the uncontended case of thread synchronization with atomic operations in user space and calls into the kernel only to suspend and wake up threads. The thesis also proposes non-preemptive busy-waiting monitors that use an efficient priority ceiling mechanism to prevent the lock-holder preemption problem without system calls, together with corresponding low-level kernel primitives to construct efficient wait and notify operations.
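The futex fast path described here can be illustrated with a compact sketch in the spirit of Drepper's well-known three-state futex mutex. It is an educational sketch under simplifying assumptions (Linux, a 32-bit lock word, no error handling, no timeouts, no priority protocols) and is not code from the thesis.

/* Futex-based mutex sketch (Linux). Lock word states:
 * 0 = unlocked, 1 = locked without waiters, 2 = locked, possibly waiters. */
#define _GNU_SOURCE
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int lock_word = 0;   /* assumed to be a 32-bit int, as futex requires */

static long futex(atomic_int *uaddr, int op, int val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void mutex_lock(void)
{
    int c = 0;
    /* Fast path: a single compare-and-swap in user space, no system call. */
    if (atomic_compare_exchange_strong(&lock_word, &c, 1))
        return;
    /* Slow path: mark the lock as contended (2) and sleep until woken. */
    if (c != 2)
        c = atomic_exchange(&lock_word, 2);
    while (c != 0) {
        futex(&lock_word, FUTEX_WAIT_PRIVATE, 2);  /* block while word == 2 */
        c = atomic_exchange(&lock_word, 2);        /* retry acquisition     */
    }
}

void mutex_unlock(void)
{
    /* Fast path: if the lock was uncontended (state 1), one atomic exchange
     * releases it; only state 2 requires a wake-up system call. */
    if (atomic_exchange(&lock_word, 0) == 2)
        futex(&lock_word, FUTEX_WAKE_PRIVATE, 1);  /* wake one waiter */
}

On the fast path, both lock and unlock complete with a single atomic instruction in user space; FUTEX_WAIT re-checks the lock word inside the kernel, so a release that happens between the user-space check and the system call cannot lead to a lost wake-up.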
The evaluation shows that the presented approaches improve average-case performance to a level comparable to state-of-the-art approaches in Linux. At the same time, a worst-case timing analysis shows that the approaches require only constant or bounded temporal overheads at the operating-system kernel level. Exploiting these fast paths is therefore a worthwhile approach when designing systems that not only have to fulfill real-time requirements, but also have to serve best-effort workloads.