Carabids, which are widely distributed in agricultural landscapes, are natural enemies of various pests, including slugs. Semi-natural habitats are known to affect carabids and thus their potential to support natural pest control.
The impact of semi-natural habitats on carabids and slugs was investigated within different non-crop habitats (chapter 2). Most carabids and Deroceras reticulatum preferred herbaceous semi-natural habitats, while Arion spp. occurred mainly in woody habitats. An increase in predatory carabid abundance, which was linked to an increasing amount of semi-natural habitats in the landscape, together with a decrease in Arion spp. densities, indicated a high potential for slug control in structurally rich landscapes.
The effects of semi-natural habitats on predatory carabids and slugs were investigated in 18 wheat fields (chapter 3). Predatory carabid species richness was positively affected by an increasing amount of semi-natural habitats in the landscape, whereas predatory carabid abundance was influenced neither by the adjacent habitat type nor by the proportion of semi-natural habitats in the landscape. The target pest species showed divergent patterns: Arion spp. densities were highest in structurally poor landscapes near woody margins, while D. reticulatum was not affected by habitat type or landscape, reflecting its adaptation to agriculture. The results indicate increased control of Arion spp. by carabids in landscapes with a high amount of semi-natural habitats.
The effects of semi-natural habitats and the influence of the farming system on carabid distribution were tested in 18 pumpkin fields (chapter 4). Carabid species richness generally increased with decreasing distance to the field margins, whereas carabid abundance responded differently depending on the adjacent habitat type. The farming system had no effect on carabids, and landscape heterogeneity affected carabids only in organic pumpkin fields.
Slug and slug egg predation by three common carabid species was tested in single- and double-species treatments in the laboratory (chapter 5). The results show additive and synergistic effects depending on the carabid species. In general, semi-natural habitats can enhance the potential for slug control by carabids; this applies especially to arionid slugs. Semi-natural habitats can support carabid communities by providing shelter, oviposition and overwintering sites, as well as complementary food sources. It is therefore important to maintain a certain amount of non-crop habitats in agricultural landscapes.
The flexible integration of information from distributed and complex information systems poses a major challenge for organisations. The ontology-based information integration concept SoNBO (Social Network of Business Objects), developed and presented in this dissertation, addresses these challenges. In an ontology-based concept, the data structure in the source systems (e.g. operational application systems) is described with the help of a schema (= ontology). The ontology and the data from the source systems can be used to create a (virtualised or materialised) knowledge graph, which is used for information access. The schema can be flexibly adapted to a company's changing information integration needs. SoNBO differs from existing concepts known from the Semantic Web (OBDA = Ontology-based Data Access, EKG = Enterprise Knowledge Graph) both in the structure of the company-specific ontology (= Social Network of Concepts) and in the structure of the user-specific knowledge graph (= Social Network of Business Objects), and it makes use of social principles known from Enterprise Social Software. Following a Design Science Research approach, the SoNBO framework was developed and the findings documented in this dissertation. The framework provides guidance for the introduction of SoNBO in a company, and the knowledge gained from the evaluation (in the company KOSMOS Verlag) is used to demonstrate its viability. The results (the SoNBO concept and the SoNBO framework) are based on a synthesis of the findings from a structured literature review and an investigation of the status quo of ontology-based information integration in practice: for the status quo in practice, the basic idea of SoNBO is demonstrated in an in-depth case study of the engineering office Vössing, which has been using a self-developed SoNBO application for a few years; the status quo in the academic literature is presented in the form of a structured literature analysis of ontology-based information integration approaches. This dissertation thus adds to theory in the field of ontology-based information integration and provides an evaluated artefact (the SoNBO framework) for practice.
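To make the core idea of ontology-based access to a virtualised knowledge graph concrete, here is a minimal Python sketch in which a small schema (ontology) maps records from a simulated source system onto business objects and resolves links on demand; all table, concept, and field names are invented for illustration and are not taken from the dissertation.

```python
# Minimal sketch of ontology-based access (all names hypothetical):
# a schema (ontology) describes concepts and their links; business
# objects are materialised from the source system on demand.

SOURCE = {  # simulated operational source system
    "customers": {1: {"name": "ACME", "project_ids": [10]}},
    "projects": {10: {"title": "Bridge", "customer_id": 1}},
}

ONTOLOGY = {  # concept -> source table and typed links (via foreign keys)
    "Customer": {"table": "customers",
                 "links": {"projects": ("Project", "project_ids")}},
    "Project": {"table": "projects",
                "links": {"customer": ("Customer", "customer_id")}},
}

def business_object(concept, key):
    """Materialise one business object from the source system."""
    record = SOURCE[ONTOLOGY[concept]["table"]][key]
    return {"concept": concept, "key": key, **record}

def related(obj, link):
    """Resolve a link lazily (virtualised graph: nothing is copied)."""
    target, fk = ONTOLOGY[obj["concept"]]["links"][link]
    keys = obj[fk] if isinstance(obj[fk], list) else [obj[fk]]
    return [business_object(target, k) for k in keys]

customer = business_object("Customer", 1)
print([p["title"] for p in related(customer, "projects")])  # ['Bridge']
```

Because links are resolved only when traversed, adapting the schema to new integration needs means editing the ontology, not copying data, which is the flexibility the virtualised variant aims at.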
Cultural eutrophication due to excessive inputs of nutrients seriously threatens aquatic ecosystems worldwide and is one of the major anthropogenic stressors on aquatic biota in European rivers. In streams and shallow rivers, its effects include excessive periphyton growth, which causes biological clogging and thereby oxygen depletion in the hyporheic zone. The result is a serious degradation of habitat quality for benthic invertebrates as well as for the eggs and larvae of gravel-spawning fish. Unlike in standing waters, efficient tools for controlling eutrophication in rivers are lacking. However, top-down control of the food-web by manipulating fish stocks, similar to the biomanipulation successfully applied in lakes, offers a promising approach to mitigating the effects of eutrophication in shallow rivers, especially those in which major reductions in nutrient inputs are not feasible. The overall aim of this thesis was to assess the potential for top-down control by two large cypriniform fish, the common nase (Chondrostoma nasus), the only obligate herbivorous fish species in European rivers, and the omnivorous European chub (Squalius cephalus), to mitigate the effects of eutrophication in medium-sized rivers. I therefore conducted field experiments on different spatial and temporal scales in the hyporhithral zone of a eutrophic gravel-bed river. Generally, the results of those experiments revealed the crucial role of fish-mediated top-down effects in river food webs. In a 4-year reach-scale experiment, the key contribution of my thesis, the enhancement of fish densities significantly increased both oxygen availability and water exchange in the upper layer of the hyporheic zone, even though the top-down effects of the fish on periphyton biomass were relatively small. These findings were supported by those of a 4-week mesocosm experiment, which also provided insights into the mechanisms underlying the mitigation of eutrophication effects by nase and chub. The top-down effects of both fish species reduced hyporheic oxygen depletion, suggesting a reduction of biological clogging. The positive effects of herbivorous nase on hyporheic oxygen availability could be attributed to benthic grazing, whereas the reduction of hyporheic oxygen depletion in the presence of omnivorous chub was best explained by the enhanced bioturbation induced by the fish’s benthic foraging. Overall, the results of my thesis demonstrate that biomanipulation achieved by enhancing herbivorous and omnivorous fish stocks can mitigate the effects of eutrophication in medium-sized European rivers. The results may be the first step towards the establishment of biomanipulation as a supportive management tool for eutrophication control in running waters and therefore as a strategy to preserve aquatic biodiversity.
The aquatic environment is exposed to multiple environmental pressures and mixtures of chemical substances, among them petroleum and petrochemicals, metals, and pesticides. Aquatic invertebrate communities are used as bioindicators to reflect long-term and integral effects. Information on the presence of species can be supplemented with information on their traits. SPEAR-type bioindicators integrate such trait information on the community level.
This thesis aimed at enhancing the specificity of SPEAR-type bioindicators towards particular groups of chemicals, namely mixtures of oil sands-derived compounds, hydrocarbons, and metals.
To develop a bioindicator for discontinuous contamination with oil-derived organic toxicants, a field study was conducted in the Canadian oil sands development region in Northern Alberta. The traits 'physiological sensitivity towards organic chemicals' and 'generation time' were integrated to develop the bioindicator SPEARoil, which reflects the community sensitivity towards oil sands-derived contamination in relation to fluctuating hydrological conditions.
Following the SPEARorganic approach, a physiological sensitivity ranking of taxa was developed for hydrocarbon contamination originating from crude oil or petroleum distillates. For this purpose, ecotoxicological information from acute laboratory tests was enriched with results from rapid and mesocosm tests. The derived sensitivity values (S_hydrocarbons) can be used in SPEAR-type bioindicators.
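As a rough illustration of how a SPEAR-type index aggregates a binary sensitivity trait over a community, the following Python sketch computes the abundance-weighted fraction of taxa classified as 'at risk'; the log(4x+1) weighting follows one commonly published SPEAR formulation, and the taxa, counts, and at-risk classifications are invented for the example.

```python
import math

# Hypothetical community sample: taxon -> (abundance, at_risk?).
# The 'at_risk' flag would come from a trait database (e.g. high
# physiological sensitivity combined with a long generation time).
community = {
    "Baetis": (120, True),
    "Gammarus": (45, True),
    "Chironomus": (300, False),
    "Lymnaea": (10, False),
}

def spear(community):
    """Abundance-weighted fraction of at-risk taxa (in %)."""
    num = sum(math.log(4 * x + 1) for x, risk in community.values() if risk)
    den = sum(math.log(4 * x + 1) for x, _ in community.values())
    return 100 * num / den

print(f"SPEAR = {spear(community):.1f}%")
```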
To reflect metal contamination in streams specifically via bioindicators, Australian field studies were re-evaluated with a focus on the traits 'physiological metal sensitivity' and 'feeding type'. Metal sensitivity values, however, explained community effects in the field only weakly. Instead, the trait 'feeding type' was strongly related to metal exposure. The fraction of predators in a community can thus serve as an indicator of metal contamination in the field.
Furthermore, several metrics reflecting exposure to chemical cocktails in the environment were compared using existing pesticide datasets. Exposure metrics based on the 5% fraction of species sensitivity distributions were found to perform best, closely followed by Toxic Unit metrics based on the most sensitive species of a community or on Daphnia magna.
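To make the compared metrics concrete, here is a minimal sketch of the standard Toxic Unit calculation (TU = measured concentration divided by an acute effect concentration); the compounds, concentrations, and EC50 values are invented, and summarising a mixture by the log10 of its maximum TU is one common convention assumed here, not a prescription from the thesis.

```python
import math

# Hypothetical sample: compound -> (measured concentration, acute
# EC50 for Daphnia magna), both in the same units (e.g. µg/l).
sample = {
    "chlorpyrifos": (0.05, 0.1),
    "copper": (5.0, 20.0),
}

def max_log_tu(sample):
    """log10 of the largest concentration/EC50 ratio in the mixture."""
    return math.log10(max(c / ec50 for c, ec50 in sample.values()))

print(f"maxTU = {max_log_tu(sample):.2f}")  # 0 means conc. equals EC50
```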
This study had two main aims. The first was to investigate the quality of lesson plans. Two important features of lesson plans were used as the basis for determining their quality: adaptability to preconditions and cognitive activation of students. The former refers to how the planning teacher considers the diversity of students' pre-existing knowledge and skills. The latter refers to how the planning teacher sequences deep learning tasks and laboratory activities to promote students' cognitive activation.
The second aim of the study was to explore teachers' thinking about, and explanation of, externally generated feedback data on their students' performance. The emphasis here was on understanding how the teachers anticipate planning differentiated lessons to accommodate the variations in students' learning outcomes revealed by the feedback data.
The study followed a qualitative approach with multiple sources of data. Concept maps, questionnaires, an online lesson planning tool, standardized tests, and semi-structured interviews were the main data collection instruments. Participants were four physics teachers teaching different grade levels. To generate feedback for the participating teachers, a test was administered to 215 students. The teachers were asked to plan five lessons for their ongoing practice. The analysis showed that the planned lessons were not adapted to the diversity in students' pre-existing knowledge and skills, and that they had limitations with regard to the cognitive activation of students. The interview data also revealed that the participating teachers do not normally consider differentiating lessons to accommodate differences in students' learning, and that they place little emphasis on cognitive activation. The analysis of the planned lessons showed variation in the teachers' approaches to integrating laboratory activities into the sequence of the lessons, ranging from complete absence through a demonstrative to an investigative approach. Moreover, the findings from the interviews indicated differences between the participating teachers' espoused theory (what they said during the interviews) and their theory-in-use (what is evident from the planned lessons). The interview data further demonstrated that the teachers did not interpret the data, identify learning needs, or draw meaningful information from the data for adapting (or differentiating) instruction. They attributed their students' poor performance to task difficulty, students' ability, and students' motivation and interest. The teachers attempted to use the item-level and subscale data only to compare the relative position of their class with the reference group; they did not read beyond the data, for example by identifying students' learning needs and planning differentiated instruction based on individual students' performance.
One way to enable the surface of a material to withstand mechanical and electrochemical loads and to perform well in service is to deposit a thin film or coating, and such a thin film deposition is carried out in this project. The coating material chosen is titanium nitride (TiN), a ceramic material known for its high hardness (>10 GPa) and good corrosion resistance. The deposition method selected is high power impulse magnetron sputtering (HiPIMS), which results in coatings of high quality with enhanced properties. Sputtering is a physical process in which surface atoms are removed or dislodged by energetic particle bombardment. The term magnetron indicates that a magnetic field is used to increase the efficiency of the sputtering process. In HiPIMS, a high power is applied in pulses of low duty cycle to a cathode that consists of the coating material and is sputtered. As a result of the high power, the sputtered material is ionized, making it possible to control these species with electric and magnetic fields and thereby to improve and tune the coating properties. The drawback of HiPIMS, however, is a low deposition rate.
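The relation between peak and average power that defines HiPIMS is simple arithmetic: the time-averaged power equals the pulse power scaled by the duty cycle. The following sketch uses invented but plausible pulse parameters (not values from the thesis) to show how a kW-range peak power on the target reduces to a dcMS-like average power.

```python
# Hypothetical HiPIMS pulse parameters (not from the thesis):
peak_power_w = 50_000        # power during the pulse, W
pulse_length_s = 100e-6      # 100 microseconds
frequency_hz = 100           # pulse repetition rate

duty_cycle = pulse_length_s * frequency_hz          # fraction of time "on"
average_power_w = peak_power_w * duty_cycle

print(f"duty cycle    = {duty_cycle:.1%}")          # 1.0%
print(f"average power = {average_power_w:.0f} W")   # 500 W, dcMS-like
```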
In this project, it is first demonstrated that TiN coatings can be deposited using HiPIMS with an optimized deposition rate by varying the magnetic field strength. A low magnetic field strength (here 22 mT) was found to result in a deposition rate similar to that of conventional magnetron sputtering, in which the average power is applied continuously (also called direct current magnetron sputtering, dcMS). The high deposition rate at low magnetic field strength was attributed to a reduced back-attraction probability of the sputtered species. The magnetic field strength showed no noticeable influence on the mechanical properties; the proposed explanation is that the considered peak current density interval of 1.22-1.72 A·cm⁻² does not involve dramatic changes in the plasma dynamics.
In a second part, using the optimized deposition rate, the optimal chemical composition of TiN was determined. It was shown that the chemical composition of TiN does not significantly influence the corrosion performance but considerably impacts the mechanical properties. It was also shown that the corrosion resistance of the coatings deposited using HiPIMS was higher than that of the coatings deposited using dcMS.
The third study examined the effect of post-deposition annealing on the properties of TiN coatings deposited using HiPIMS. The hardness of the coatings peaked at 400 °C, reaching 24.8 GPa. Above 400 °C, however, the hardness decreased owing to the oxidation of TiN, which led to the formation of TiN-TiO2 composites with inferior mechanical properties.
The microscopic properties of the coatings, such as crystal orientation, residual stresses, and average grain size, were determined from X-ray diffraction data, and the roughness was measured using atomic force microscopy. These properties were found to vary with the magnetic field strength, the chemical composition, and the annealing temperature.
Content:
The dissertation is embedded in an empirical-qualitative research setting in which team teaching as a teaching method is the focus of the research. The empirical data are collected on the one hand through participant observation in the research field and on the other through a group interview with teachers who gathered experience with team teaching as a teaching method over a certain observation period. The field research underlying this work uses 'theoretical sampling', established by Glaser and Strauss in 1967 in the context of an empirical investigation.
Structure:
The introduction describes the research interest, the state of research, and the central research question of the dissertation. The second chapter contains the theoretical part, with a differentiated definition of team teaching as a teaching method. A general definition of teaching follows, so that the specific characteristics of team teaching can be worked out in detail. The empirical data are collected through participant observation in upper-level classes of a special-needs school focusing on learning and social-emotional development; the theoretical part therefore presents this special-needs focus and the related curricular aspects. The significance of team teaching as a teaching method for the classroom setting in a special-needs school with a social-emotional focus is also located in the theoretical part.
The third chapter describes the qualitative and quantitative survey instruments and formulates hypotheses. The fourth chapter presents the methods used to prepare the data material. The interpretation of the research results follows in the fifth chapter. The qualitative and quantitative analysis of the empirical data is presented in the sixth chapter. The dissertation closes in the seventh chapter with a conclusion and an outlook.
This thesis is concerned with an issue of considerable importance to the development of revision skills: the role of teacher feedback. Prompted by the concern to develop a model of instruction which will help students write to the best of their capacities, the present study forms a proposal: an interactive model of revision. The study researches whether the kind of feedback proposed in this model is indeed a helpful tool for revision and whether the kind of negotiated revision that occurs is a vehicle for learning. The first section of the thesis reviews different areas of literature which are relevant to the study. More specifically, Chapter 2 presents the historical and theoretical foundations of different writing instructional practices and sheds light on issues concerning the use of the process approach. It also reviews research based on sociocognitive theoretical perspectives in an attempt to delineate the impact of interpersonal or social activity on individual performance and progress. Chapter 3 examines issues associated with the process approach in particular and illustrates how theory and method come together in a process writing classroom. Chapter 4 presents the differences in revising behaviours between experienced and inexperienced writers in both L1 and L2 contexts and the various ways these differences have been justified. It also highlights a number of issues which have been identified as contributing to effective revision. Particular attention is paid to the role that teacher feedback has to play as a means of promoting substantive student revision with an instructional emphasis on fluency, organisation and language. Chapter 5 presents an interactive model of revision, which envisions a communicative exchange between two partners, the student-writer and the teacher-reader, collaborating in order to develop awareness of revision strategies and establish criteria for effective writing. Chapter 6 investigates the epistemological basis of the research and presents a set of research questions and hypotheses, which guided the investigation. Chapter 7 frames the context of the research and details the methods used to collect the data from the study. The study involved 100 Year 7 students in two gymnasia in Koblenz, Germany. During the time of the investigation, the students wrote and revised five tasks. Three of these tasks were revised after receiving teacher feedback, which focused on aspects such as appropriacy and sufficiency of information, organization, coherence and grammatical accuracy. The study investigates the effects of this kind of focused feedback on the students' revisions and explores the relationship between revision and text improvement. Large quantitative and qualitative data sets were generated during the research. The quantitative data was based on the student documents (1000 original and revised drafts) whereas the qualitative data emerged from student questionnaires and seven case studies. Chapter 8 presents descriptions of the data analyses. More specifically, it describes the initial and final coding of the revisions traced in the student documents. Then it focuses on the type of qualitative analysis employed in the case studies in order to investigate the relationship between revision and text improvement. The final section of the chapter describes the questionnaire analysis, which was carried out to investigate attitudes, benefits and constraints from the implementation of the model.
Chapter 9 examines the statistical results from the analysis of the students' revisions. More specifically, it explores the revisions made by the students across tasks and the relationships between the features of the teacher feedback and these revisions. The analysis highlights patterns in the development of revision skills and positive correlations of student revisions with features of the teacher feedback. Chapter 10 looks at the descriptive data from the case studies of seven individual student writers. The analysis of this data illustrates how the specific students negotiated the revisions and sheds more light on the relationship between feedback, revision and text improvement. Chapter 11 contains the analysis of the students' answers to the questionnaire, which provide illuminative information about the feedback-related attitudes. In Chapter 12, the thesis reaches its final destination. The journey over the paths of literature exploration, data gathering and data analysis ends with reflections on the messages that emerge from the data analysis. The conclusion reached is that young students can learn how to revise their writing and focused feedback is a viable pedagogic option for teaching revision. In addition to discussing the findings, this final section considers the pedagogical implications for the teaching of writing and suggests possible avenues for further work.
Since the Bologna reform, the federal government and the federal states have aimed at a continuous improvement of teaching quality at school, which is closely connected with the professionalization of future teachers and with teacher education. Most studies on quality improvement take a predominantly university perspective and rarely relate to the subject of sport (physical education). The present study starts from these two points and leads to the main question: are there differences between teachers' and learners' perceptions of the professionalization of sport teachers across the phases of teacher education in Rheinland-Pfalz?
This question is answered with the help of 101 guided interviews evaluated according to Grounded Theory. Teachers at universities, at the state study seminars, and at schools were interviewed, as well as learners, i.e. trainee teachers and students. Throughout the study, a "missing relation to school practice" consistently crystallized in all groups as the key element (core category) of the first and second phases of teacher education. The interviewees, who belong to different school types, offer relevant and concrete proposals for improvement, some specific to sport and some applicable across subjects. One main focus is establishing contact with everyday school life earlier and, at the same time, gathering teaching experience with school learning groups in order to get to know their different motor abilities and skills. The proposed improvements concern the university phase as well as the training period in the study seminars and schools, and those involved consider a more intensive interlinking of the individual institutions necessary. Mutual, continuous cooperation towards professionalization in sport teacher education, and thus the optimization of sport teacher training, is very important to all involved.
The intention of this thesis was to characterise the effect of naturally occurring multivalent cations such as calcium and aluminium on the structure of soil organic matter (SOM) as well as on the sorption behaviour of SOM for heavy metals such as lead.
The first part of this thesis describes the results of experiments in which the Al and Ca cation content was varied for samples originating from soils and peats of different regions in Germany. The second part focusses on SOM-metal cation precipitates, used to study rigidity as a function of cation content. In the third part, the effects of various cation contents in SOM on the binding strength of Pb cations were characterised using a cation exchange resin as the desorption method.
It was found for the soil and peat samples as well as the precipitates that matrix rigidity was affected by both the type and the content of the cation. The influence of Ca on rigidity was less pronounced than that of Al and of the Pb used in the precipitation experiments. For each sample, one cation content was identified at which matrix rigidity was most pronounced; this specific cation content lies below the cation saturation expected from the cation exchange capacity. These findings resulted in a model describing the relation between cation type, cation content, and the degree of networking in SOM. For all treated soil and precipitate samples, a glass-transition-like step transition was observed, characterised by the step transition temperature T*. It is known from the literature that this type of step transition is due to bridges between water molecules and organic functional groups in SOM. In contrast to the glass transition temperature, this thermal event reverses slowly, over days or weeks, depending on the re-conformation of the water molecules. Changes of T* with different cation compositions in the samples are therefore explained by the formation of water-molecule-cation bridges between SOM functional groups. No influence of the different cation compositions in the soil samples on the desorption kinetics of lead was observed. It can therefore be assumed that the observed changes in matrix rigidity are largely reversible upon changes in water status or pH, or upon the input of agitation energy by shaking.
The Nandi forests (South and North Nandi) are situated in the Rift Valley Province of Kenya, very close to Kakamega forest. Previous records show that the Kakamega and Nandi forests were connected, forming one big U-shaped forest block, until the beginning of the 1900s. Owing to human pressure, three separate forests have now formed from the formerly continuous block. Although they were once one forest, information on the Nandi forests is very scanty compared with that on Kakamega forest. The species composition and diversity, plant communities, and population structure of the Nandi forests have not been studied, and no information is available on the floristic similarity of South and North Nandi forests. Furthermore, the natural regeneration potential (seedling bank) of these forests is not well studied or documented. This study therefore aims to fill these gaps.
In this study, a total of 76 quadrats (49 in South Nandi and 27 in North Nandi) were used to collect data. In South Nandi forest, 27 of the quadrats were laid out in the better-preserved part of the forest (at Kobujoi) and the remaining 22 in the heavily disturbed part (Bonjoge). The quadrats were arranged on transects of 1 to 1.5 km running parallel to the slope; the distance between quadrats was 100 m and between transects 500 m. The main quadrat measured 400 m2 (20 x 20 m) and contained five small plots (3 x 3 m) placed at the four corners and in the centre. Every woody plant (climbers, shrubs and trees) taller than one meter and with a diameter at breast height (dbh) greater than two centimeters was measured and recorded. Seedlings and herbaceous plants were sampled in the smaller plots. Individual plants were identified to species level; when identification in the field was not possible, voucher specimens were prepared and later identified at the East African Herbarium, National Museums of Kenya, Nairobi. Clustering and ordination were performed using the ecological software packages PC-ORD and CANOCO, respectively, both based on species abundance data. The Shannon diversity index and evenness were computed using PC-ORD, while similarity indices, Fisher's alpha, rarefaction, and species richness estimation (nonparametric estimators) were computed using EstimateS. Indicator species analysis was undertaken using PC-ORD. Basal area and height class distributions at the forest or site level (Bonjoge and Kobujoi), and dbh class distributions for selected tree species, were used to evaluate population structure.
Furthermore, the importance value (IV) of woody plant species was calculated. SPSS version 16 was used for parametric (normally distributed data) and nonparametric (non-normally distributed data) comparisons of means, correlation, and regression analysis.
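For readers unfamiliar with the diversity measures named above, the following Python sketch computes the Shannon diversity index H' = -Σ p_i ln p_i and Pielou's evenness J' = H'/ln S from abundance counts; the counts are invented for illustration, not data from the study.

```python
import math

# Hypothetical abundance counts per species in one quadrat.
abundances = [120, 45, 30, 10, 5]

def shannon(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i)."""
    n = sum(abundances)
    return -sum((x / n) * math.log(x / n) for x in abundances if x > 0)

h = shannon(abundances)
evenness = h / math.log(len(abundances))  # Pielou's J' = H' / ln(S)
print(f"H' = {h:.3f}, J' = {evenness:.3f}")
```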
In this study, a total of 321 vascular plant species from 92 families and 243 genera were identified in the Nandi forests (both South and North Nandi). In South Nandi forest, 253 plant species from 82 families and 201 genera were recorded, while in North Nandi 181 species from 67 families and 155 genera were recorded. Second-order jackknife estimators gave the highest species richness estimates for South and North Nandi forests, 284 and 209 species, respectively. For the highly disturbed and less disturbed parts of South Nandi forest, 138 and 172 vascular plant species were recorded, respectively. Asteraceae, Rubiaceae and Euphorbiaceae are the three most species-rich families of the Nandi forests. In terms of the different diversity measures (alpha and beta diversity, Fisher's alpha, Shannon diversity, and evenness), South Nandi is more diverse than North Nandi forest. The Sörensen and Jaccard (classic) indices, as well as their abundance-based counterparts, showed low species similarity between South and North Nandi forests. The cluster analysis yielded three different plant communities, a result supported by the ordination.
South and North Nandi forests have inverted-J height class distributions, showing that a larger proportion of woody plant individuals is found in the lower height classes. A similar pattern is observed when the diameters of all woody plants are considered together. However, different dbh class distributions (seven types) were identified when selected tree species were analysed separately. The basal area of South Nandi forest is significantly lower than that of North Nandi forest (Mann-Whitney U = 358, p < 0.001). Similarly, Bonjoge has a significantly lower basal area than Kobujoi (t = 3.77, p < 0.01). The number of woody plant seedlings in South Nandi forest is significantly higher than in North Nandi (Mann-Whitney U = 362.5, p < 0.001), and Bonjoge has significantly fewer seedlings than Kobujoi (t = 4.24, p < 0.001). Most species in both forests are able to resprout from stumps after physical damage, which aids the regeneration of the forests in addition to the seedling banks. This study fills some of the information gaps about the Nandi forests, especially concerning floristic composition, population structure, natural regeneration, and human impacts on this ecosystem.
The concept of hard and soft news (HSN) is regarded as one of the most important concepts in journalism research. Despite this popularity, two major research voids remain. First, the concept lacks conceptual clarity: it is used interchangeably with related concepts such as sensationalism, which has led to fuzzy demarcations of HSN, and there is still no agreement on which dimensions the concept comprises. Second, little is known about the factors that influence the production of news in terms of its hard or soft nature. The present dissertation therefore casts a twofold glance at the HSN concept: it aims to assess the conceptual status of the concept and the production of hard and soft news.
At the outset, this dissertation delineates the theoretical basis for the three manuscripts, presenting considerations on concepts in the social sciences in general and on hard and soft news in particular, as well as on the production of news, particularly hard and soft news. The first paper proposes a theoretical framework model to distinguish HSN from related concepts. Based on a literature review of five concepts in total, this model suggests a hierarchy in which these concepts can be discerned according to their occurrence in media content. The second paper focuses on the inner coherence of the HSN concept in its most recent academic understanding. The results of a factorial survey with German newspaper journalists showed that four out of five dimensions of the HSN concept indeed comprised what the journalists understood by it; the most recent academic understanding is thus to a great extent coherent. The third study sheds light on the production of HSN, focusing on the influence of individual journalists' and audience characteristics on whether news is presented in a hard or soft way. The findings of a survey with simulated decision scenarios among German print journalists showed that the HSN dimensions were susceptible to different journalistic influences and that a perceived politically uninterested audience led to softer coverage. The dissertation concludes by connecting these findings with the considerations on concept evaluation and the production of news. Implications for research on and with the HSN concept are presented, before concluding with limitations and suggestions for future research.
In the recent past, the concept of the 'dispositif', mostly traced back to Michel Foucault, has enjoyed a veritable boom in the humanities, social sciences, and cultural studies, much as the concept of 'discourse' did before it. Its use, however, exhibits a series of simplifications and distortions: it is often applied quite sweepingly to denote merely the architectural and/or spatial circumstances or contexts of social, medial, or other situations, without attending to the questions of knowledge, power, and subjectivity that were so important to Foucault. The concept is thus in danger of losing its theoretical and analytical potency.
In "Im Dispositiv. Macht, Medium, Wissen", the author therefore first presents a clarification of the concept of the dispositif that systematically incorporates Foucault's underlying theoretical horizon. To this end, the foundations of discourse theory are explained first, and their structural analogies to Foucault's findings on the analytics of power are then made visible with the help of a new terminology.
The concept of the dispositif is prominent above all in media studies, where it is used to highlight the social embedding of individual media. It is striking, however, that the approach widely taken there, understanding media as dispositifs, obscures the view of media-specific logics. In a second step, the author therefore demonstrates the structural analogies between mediality, knowledge, and power and thus arrives at an integrative concept of the dispositif that models the three spheres as dynamically interlinked, interdependent, and reciprocal fields of culture.
Streams are closely coupled with their riparian areas. Insects emerging from streams can be important prey in the riparian zone. Such aquatic subsidies can cause predators to switch prey or can increase predator abundances, which can impact the whole terrestrial food web. Stressors associated with agricultural land use can alter insect communities in water and on land, resulting in complex response patterns of terrestrial predators that rely on prey from both systems.
This thesis comprises studies on the impact of aquatic insects on a terrestrial model ecosystem (Objective 1, Chapter 2), the influence of agricultural land use on the traits and communities of riparian spiders (Objective 2, Chapter 3), and the impact of agricultural land use on the contribution of different prey to spider diet (Objective 3, Chapter 4).
In chapter 2, I present a mesocosm experiment examining the effects of aquatic subsidies on a simplified terrestrial food web consisting of plants, two types of herbivores (leafhoppers and weevils), and predators (spiders). I focused on the prey choice of the spiders by excluding predator immigration and reproduction. In accordance with predator switching, survival of leafhoppers increased in the presence of aquatic subsidies. By contrast, the presence of aquatic subsidies indirectly reduced weevils and herbivory.
In chapter 3, I present the results on the taxonomic and trait responses of riparian spider communities to gradients of agricultural stressors and environmental variables, with a particular emphasis on pesticides. To capture spiders with different traits and survival strategies, we used multiple collection methods. Spider community composition was best explained by in-stream pesticide toxicity and shading of the stream bank, a proxy for habitat quality. Species richness and the number of spider individuals, as well as community ballooning ability, were negatively associated with in-stream pesticide toxicity. In contrast, the mean body size and shading preference of spider communities responded most strongly to shading, whereas mean niche width (habitat preference for moisture and shading) responded most strongly to other environmental variables.
In chapter 4, I describe aquatic-terrestrial predator-prey relations along gradients of agricultural stressors and environmental variables. I sampled spiders as well as their aquatic and terrestrial prey along streams with an assumed pesticide pollution gradient and determined their stable carbon and nitrogen isotope signatures. Potential aquatic prey biomass correlated positively with the contribution of aquatic prey to the diet of T. montana, while the contribution of aquatic prey to the diet of P. amentata increased with in-stream toxicity.
Overall, this thesis contributes to the emerging discipline of cross-ecosystem ecology and shows that aquatic-terrestrial linkages and riparian food webs can be influenced by land-use-related stressors. Future manipulative field studies on aquatic-terrestrial linkages that consider the quality of prey organisms are required to foster a mechanistic understanding of such cross-ecosystem effects. Knowledge of these linkages is important for understanding the consequences of anthropogenic stressors and for preventing further losses of ecosystems and their biodiversity.
Enterprise Collaboration Systems (ECS) are increasingly deployed as core components of the digital workplace, with companies pursuing different goals for internal collaboration when implementing this new class of software. In practice, these goals are often not formulated clearly enough. Traditional controlling metrics focused on schedule adherence and cost are, moreover, unsuited to measuring the realisation of the individual benefits of an ECS. This research describes the development and application of the Benefits Scorecards for Collaboration Platforms in Enterprises (SCoPE) framework, which serves as a framework for the metrics-based benefits measurement of ECS and draws on NASA's Goal-Question-Metric approach and the Balanced Scorecard of Kaplan & Norton. The results contribute to a broader understanding of metrics-based benefits analysis for ECS in organisations. The Benefits SCoPE framework was developed in close cooperation with experts from 16 user companies within the IndustryConnect initiative. In a design-oriented approach, a selected mix of methods was applied, including expert interviews, focus groups, workshops, and card sorting. The benefits that user companies expect from an ECS are reformulated and broken down into specific questions that can be answered with metrics. In total, 313 benefit-oriented questions from three user companies were identified in this way. The categorisation of these questions reveals, on the one hand, the companies' shared understanding of the decisive factors for realising ECS benefits and, on the other, the benefit goals the companies pursue through the use of the ECS. The development and collection of metrics to quantitatively answer selected questions is demonstrated with functional prototypes on an operational collaboration platform that has been in use for more than seven years and has more than 5,000 registered users. Metric development, the choice of the data collection method, the execution of the measurement, and the interpretation of the results are supported in the framework by the use of Benefits Scorecards. The individual orchestration of several Benefits Scorecards represents the structure of a classification system for ECS benefits analysis in user companies and illustrates the procedure for collecting the corresponding data.
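To illustrate the Goal-Question-Metric structure the framework draws on, the following minimal Python sketch breaks one benefit goal down into questions that metrics can answer; the goal, questions, and metric names are invented examples, not items from the study.

```python
# Minimal Goal-Question-Metric (GQM) structure (invented example):
# a benefit goal is refined into questions, each answerable by metrics.
gqm = {
    "goal": "Improve cross-team knowledge sharing via the ECS",
    "questions": [
        {
            "question": "Is content reused outside the authoring team?",
            "metrics": ["share of views from other teams",
                        "cross-team shares per document"],
        },
        {
            "question": "Does platform adoption grow over time?",
            "metrics": ["monthly active users",
                        "new communities per quarter"],
        },
    ],
}

for q in gqm["questions"]:
    print(f"{gqm['goal']} -> {q['question']}: {', '.join(q['metrics'])}")
```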
The period of upheaval in initial vocational training (with particular regard to vocational schools) is affecting school-related, didactic, and pedagogical concepts. The barely surveyable technological, engineering, and scientific developments of recent decades have radically changed the content of industrial-technical training, as well as the growing demands on professional, methodological, emotional, and personal competence, and will continue to do so. Even though vocational training attempts to respond by developing new occupational profiles and adapted curricula, what is primarily at stake is a new understanding of learning. Gone are the times when vocational skills, once acquired, lasted a lifetime. More urgently than ever, the rapid process of technological change demands lifelong learning competence. In most cases, the mere willingness to acquire new competences on one's own initiative during working life is no longer sufficient; learners must also be enabled to do so. As a consequence, vocational schools show an unmistakable turn towards open forms of learning. From this it can be concluded that previous concepts cannot deliver the acquisition of demanding action and learning competences. The question arises to what extent project-based teaching, as a teaching concept, is able not only to impart the required new action competences but also to awaken learners' willingness to learn and continue qualifying themselves independently and autonomously throughout their lives. Engaging in project-based teaching can certainly be seen as a venture, with regard to both its course and its outcome. DE BIE/LOUWERSE (1977, 211) offer an apt description: "It cannot be stressed enough that project-based teaching demands greater effort, more commitment, more discipline, and a greater tolerance of frustration than traditional teaching. In project-based teaching there is no fixed system and no lecturer who takes over the preparation and implementation, and one therefore cannot project shortcomings onto them or shift frustrations onto them. Project-based teaching makes greater demands than traditional teaching; it therefore also requires 'more mature' participants, both students and lecturers." With this case study of project-based teaching at an industrial vocational school, the author takes up a central topic of industrial and vocational pedagogy. He structures his case study in six chapters. After a general consideration of learning, the first section focuses on present-day requirements and, above all, on the concept of action competence required by the vocational school curriculum. The two following chapters deal with project-based teaching. Part two first works out the foundations and, through a historical review, points to the substantive points of contact of this form of learning. The difficulty posed by the numerous authors on project work, with their differing standpoints, becomes only too clear in the clarification of terms. Building on this, chapter three examines project-based teaching as a didactic model.
FLECHSIG's conception of didactic models shows that project-based teaching contributes substantially to the individual development of the personality and offers an exemplary wealth of learning opportunities for students and teachers. Part four contains the design of the case study on "project-oriented teaching". The project "Lampe" is to be understood as a contribution to improving teaching quality within the school development of the vocational school. With the project "Lampe" the author pursues two aims: on the one hand, the experience gained so far with independent learning processes in isolated cases is to be bundled and transferred into a complex task; on the other hand, he strives to come as close as possible to the requirements of classical project-based teaching. The investigation is guided by the following main thesis: "Adult students who learn their trade in a dedicated training workshop with full-time trainers, hold an intermediate school-leaving certificate ('Mittlere Reife'), and can demonstrate above-average performance during their training have a positive attitude towards open forms of teaching and learning (in this case, project learning)." In part five, the author gathered the information decisive for the empirical analysis through an opinion barometer filled in anonymously by every student at the end of each project week. The entire population was included in the analysis. The case study proceeded as follows: the central research questions fed into the opinion barometer; the questionnaire categories it contained were checked for their composition by factor analysis and, based on the result, regrouped for the analysis. Correlation testing of the questionnaire components revealed the relationships between the questionnaire and the items named in the central research questions. Finally, the correlating variables were examined, taking school education into account, using analysis of variance, and the main thesis was assessed. Despite their high information density, the extensive result data are made highly accessible through their presentation in tables, line charts, and histograms. Part six offers an outlook on the future design of the project "Lampe".
Connected vehicles will have a tremendous impact on tomorrow's mobility solutions. Such systems rely heavily on timely information delivery to ensure functional reliability, security, and safety. However, the host-centric communication model of today's networks stands in the way of efficient data dissemination at scale, especially in networks characterized by a high degree of mobility. The Information-Centric Networking (ICN) paradigm has evolved as a promising candidate for the next generation of network architectures. Based on a loosely coupled communication model, the in-network processing and caching capabilities of ICNs promise to solve the challenges posed by connected vehicular systems. In such networks, a special class of caching strategies that place a consumer's anticipated content actively at the right network nodes in time promises to reduce the data delivery time. This thesis contributes to the research on active placement strategies in information-centric and computation-centric vehicle networks for providing dynamic access to content and computation results. By analyzing different vehicular applications and their requirements, novel caching strategies are developed to reduce the time of content retrieval. The caching strategies are compared and evaluated against the state of the art both in extensive simulations and in real-world deployments. The results show performance improvements in content retrieval (the availability of specific data increased by up to 35% compared to state-of-the-art caching strategies) and reductions in delivery times (roughly doubling the number of data retrievals from neighboring nodes). However, storing content actively in connected vehicle networks raises questions regarding security and privacy. In the second part of the thesis, an access control framework for information-centric connected vehicles is presented. Finally, open security issues and research directions in executing computations at the edge of connected vehicle networks are discussed.
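As a toy illustration of the active placement idea, rather than any specific strategy from the thesis, the following Python sketch pre-fetches content a consumer is predicted to request next into a node's content store, so that a later request is served locally instead of from the producer; the names, the content naming scheme, and the prediction rule are all invented.

```python
from collections import OrderedDict

class ContentStore:
    """Tiny LRU content store of an ICN node."""
    def __init__(self, capacity=4):
        self.capacity, self.items = capacity, OrderedDict()

    def put(self, name, data):
        self.items[name] = data
        self.items.move_to_end(name)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict least recently used

    def get(self, name):
        return self.items.get(name)

def predict_next(name):
    """Invented predictor: map segment /seg/N to /seg/N+1 (e.g. a
    vehicle moving along a road requests map tiles in order)."""
    prefix, n = name.rsplit("/", 1)
    return f"{prefix}/{int(n) + 1}"

ORIGIN = {f"/map/tile/{i}": f"tile-{i}" for i in range(10)}  # producer

def request(store, name):
    data = store.get(name)
    if data is None:                           # cache miss: fetch remotely
        data = ORIGIN[name]
        store.put(name, data)
    # Active placement: also pre-fetch the anticipated next content.
    nxt = predict_next(name)
    if nxt in ORIGIN and store.get(nxt) is None:
        store.put(nxt, ORIGIN[nxt])
    return data

node = ContentStore()
request(node, "/map/tile/3")
print(node.get("/map/tile/4"))                 # 'tile-4' already cached
```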
Non-contact anterior cruciate ligament (ACL) injuries are a major problem in modern football (soccer). The stud design of football shoes is suspected to be one important risk factor for ACL injuries.
The aim of this thesis was therefore to investigate whether or not the stud design of football shoes influences the loads occurring in the ACL. As direct measurements and subject tests are not possible (for ethical reasons), mechanical tests of the shoe-surface interaction are the only way to answer the research question. Realistic loading of the football shoes during the experimental tests is therefore of major importance for obtaining reliable and meaningful measurement results.
Therefore, the kinematics and ground reaction forces of real ACL injury situations were determined by means of the Poser method, and the resulting joint moments were calculated via computer simulation using an inverse dynamics approach. The results of the Poser analysis and the computer simulation formed the basis for an experimental setup comparing four different stud designs using a novel pneumatically driven test device called TrakTester.
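To sketch the inverse dynamics step in its simplest form, a 2D quasi-static approximation rather than the full simulation used in the thesis, the net moment about a joint can be estimated as the cross product of the lever arm from the joint to the centre of pressure with the ground reaction force; all numbers below are invented.

```python
# 2D quasi-static sketch: net knee moment from the ground reaction
# force (GRF), ignoring segment inertia (a simplifying assumption).
knee = (0.10, 0.45)        # joint position, m
cop = (0.18, 0.0)          # centre of pressure on the ground, m
grf = (-150.0, 1800.0)     # ground reaction force, N

# Lever arm from the knee to the point of force application.
rx, ry = cop[0] - knee[0], cop[1] - knee[1]

# 2D cross product r x F gives the moment about the joint (N*m);
# a full inverse dynamics solution would add segment mass and
# angular acceleration terms, working iteratively from the foot up.
moment = rx * grf[1] - ry * grf[0]
print(f"net knee moment = {moment:.1f} N*m")   # 76.5 N*m here
```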
The measurement data showed significant differences in the measured forces and torques between the different stud designs. To estimate the influence of the stud design on ACL loading, risk potentials were derived from the measurement data for each loading scenario, also taking medical and biomechanical knowledge into account. These risk potentials lead to the conclusion that the stud design influences the loading of the ACL, but they depend substantially on the specific boundary conditions and the loading scenario. This thesis largely contradicts the assumption that football shoes with bladed studs pose a higher risk to the ACL than shoes with conventional round studs. In summary, the new method developed in this thesis, in combination with the TrakTester, enables a considerably more realistic investigation of the shoe-surface interaction than previous approaches. Beyond serving as boundary conditions for the experimental setup, the results of the Poser analysis and the computer simulation also provide many biomechanical insights into the injury mechanism of non-contact ACL injuries.
Because silver nanoparticles (Ag NPs) are broadly applied in consumer products, their leaching results in the continuous release of Ag NPs into the natural aquatic environment. Bacterial biofilms, as the prominent life form of microorganisms in the aquatic environment, are therefore most likely confronted with Ag NPs as a pollutant stressor. Notwithstanding the significant ecological relevance of bacterial biofilms in aquatic systems, and although Ag NPs are expected to accumulate within these biofilms in the environment, knowledge of the environmental and ecological impact of Ag NPs still lags behind the industrial growth of nanotechnology. The aim of this thesis was therefore to assess the effects of Ag NP exposure on bacterial biofilms at ambient Ag NP concentrations and under environmentally relevant conditions. A comprehensive set of methods was applied to study whether and how Ag NPs of two different sizes (30 and 70 nm) affect bacterial biofilms, i.e. both monospecies biofilms and freshwater biofilms, at environmentally relevant concentrations (600-2400 µg l⁻¹). In the first part of this work, a newly developed assay to test the mechanical stability of monospecies biofilms of the freshwater model bacterium Aquabacterium citratiphilum was validated. In the first study, investigating the impact of Ag NPs on the mechanical stability of bacterial biofilms, sublethal effects on the mechanical stability of the biofilms were observed, with negative implications for biostabilization. Furthermore, as monitoring the ecotoxicity of Ag NPs in natural freshwater environments remains challenging, a mesocosm study was performed to allow a detailed investigation of the effects of Ag NPs on freshwater biofilms under realistic environmental conditions. Several approaches to analyzing the biofilms as a whole in response to Ag NP treatment provided insights into the resilience of bacterial freshwater biofilms. However, t-RFLP fingerprinting combined with phylogenetic studies based on the 16S gene revealed a shift in the bacterial community composition, in which Ag NP-sensitive bacteria were replaced by more Ag NP-tolerant species with enhanced adaptability to Ag NP stress. This shift within the bacterial community may be associated with detrimental effects on the functioning of these biofilms with respect to nutrient loads, the transformation and/or degradation of pollutants, and biostabilization. Overall, bringing together the key findings of this thesis, four general effects of Ag NP treatment were identified that can be extrapolated to natural freshwater biofilms: (i) Comamonadaceae were identified as Ag NP-tolerant, (ii) the biofilms showed a particularly resilient behaviour, (iii) the two applied size fractions of Ag NPs exhibited similar effects independent of their sizes and synthesis methods, and (iv) bacterial biofilms show a high uptake capacity for Ag NPs, which indicates cumulative enrichment.
In the past, museums mostly presented visitors with established bodies of knowledge. Today this is often different: not only unquestioned knowledge but also the process leading to it is put on display. The information presented is in part no longer consistent and free of contradiction but conflicting. How, though, do visitors react to conflicting information? How do their personal preconditions support a more differentiated view? How can museums influence their visitors' information processing through a particular spatial arrangement of information, even though museum information is received voluntarily, in a self-determined way, and without coercion or pressure? These questions were investigated in four on-site field studies:
Study 1 characterises museum visitors by personality profiles built from the personal characteristics considered relevant for processing conflicting information (epistemological beliefs, tolerance of ambiguity, self-efficacy beliefs).
Study 2 examines the influence of these personal characteristics and of situational interest on the tendency to process conflict, without taking the visitor's abilities and capacity into account.
Study 3 tests the effects of situational interest, epistemological beliefs, tolerance of ambiguity, and self-efficacy beliefs on conflict-processing performance.
Study 4, finally, investigates the effect of spatially close information presentation.
Museum visitors can be characterised by the personality profiles "self-doubting sceptical", "self-confident trusting", and "stability-believing". An overwhelming majority show a readiness to perceive conflicts, and a large proportion are furthermore willing to process conflicting information at a deeper level. A spatially close (thematically continuous) presentation supports the tendency towards deep processing. About three quarters of the visitors could recognise when conflicting information is logically contradictory and were able to form coherence at above-chance levels for both consistent and conflicting information. Positive self-efficacy beliefs, mediated by high situational interest, support the tendency to process information more deeply and increase the ability to form coherence. More mature epistemological beliefs on the dimensions "certainty" and "changeability" promote coherence formation.
This thesis presents an analysis of game-based touristic applications. In tourism, actions can only be motivated intrinsically. This thesis therefore first investigates specific concepts of intrinsic motivation. It shows how game-based motivation can be produced deliberately and answers the question of whether game-based motivation can be transferred to non-game-based applications.
Using these results, different touristic applications were developed and evaluated.
All applications aimed to add value to the touristic experience. The applications are classified by their mobility: this work covers completely mobile, completely stationary and hybrid systems. Different ways of adding value are presented in this work: game-based exploration, knowledge transfer and social interaction between tourists.
Finally, an authoring tool for game-based touristic tours on smartphones is presented.
Education and training of the workforce have become an important competitive factor for companies because of the rapid technological changes in the economy and the correspondingly ever shorter innovation cycles. Traditional training methods, however, are limited in their ability to meet a company's demand for education and training, which continues to grow and change ever faster. Therefore, the use of technology-based training programs (that is, courseware) is increasing, because courseware enables self-organized and self-paced learning and, through integration into daily work routines, allows optimal transfer of knowledge and skills, resulting in high learning outcomes. To realize these prospects, high-quality courseware is required, with quality being defined as supporting learners optimally in achieving their learning goals. Developing high-quality courseware, however, usually requires more effort and takes longer than developing other programs, which limits the timely availability of such courseware at the required quality.
This dissertation therefore deals with the research question of how courseware has to be developed in order to produce high-quality courseware with less development effort and shorter project duration. In addition to its high quality, this courseware should be optimally aligned to the characteristics and learning goals of the learners as well as to the planned usage scenarios for the knowledge and skills being trained. The IntView Method for the systematic and efficient development of high-quality courseware was defined to answer the research question of this dissertation. It aims at increasing the probability of producing courseware in time without exceeding project schedules and budgets while developing a high-quality product optimally focused on the target groups and usage scenarios.
In a detailed analysis of existing courseware development approaches, as well as production approaches from related fields such as multimedia, web, or software engineering, those execution variants of all activities and activity steps required to develop high-quality courseware were identified that in their interplay constitute the most efficient way to develop courseware. The IntView Method integrates these variants into a systematic process. The main part of the proposed method is therefore a systematic process for engineering courseware that encompasses all courseware lifecycle phases and integrates the activities and methods of all disciplines involved in courseware engineering, including lifecycle-encompassing quality assurance, into a consolidated process. This process is defined as a lifecycle model as well as a derived process model in the form of a dependency model in order to optimally support courseware project teams in coordinating and synchronizing their project work. In addition to the models, comprehensive, ready-to-apply enactment support materials are provided, consisting of worksheets and document templates that include detailed activity descriptions and examples.
The evaluation of the IntView Method showed that the method, together with the enactment support materials, enables efficient as well as effective courseware development. The projects and case studies conducted in the context of this evaluation demonstrate, on the one hand, that the method is easily adaptable to the production of different kinds of courseware and to different project contexts and, on the other hand, that it can be used efficiently and effectively.
This thesis was motivated by the need to advance knowledge on the variability and dynamics of energy fluxes in lakes and reservoirs, and on the physical processes that regulate these fluxes on both the air and the water side of the air-water interface.
In the first part, I re-examine how mechanical energy, resulting from its major source, the vertical wind energy flux, distributes into the various types of water motions, including turbulent flows and surface and internal waves. Although only a small fraction of the wind energy flux from the atmosphere is transferred to the water, it is crucial for physical, biogeochemical and ecological processes in lentic ecosystems. Based on extensive air- and water-side measurements collected at two small water bodies (< 10 km2), we estimated the energy fluxes and the energy content in surface and internal waves. Overall, the estimated energy fluxes and energy content agree well with results reported for larger water bodies, suggesting that the energetics driving water motions in enclosed basins are similar, independent of basin size. Our findings highlight the importance of the surface waves, which receive the largest fraction of the wind energy flux, a fraction that increases strongly and nonlinearly for wind speeds exceeding 3 m s-1. We found that existing parameterizations of wave height as a function of wind speed and fetch length did not reproduce the wave amplitudes measured in lakes. On average, the highest energy content was observed in basin-scale internal waves, while the energy content of high-frequency internal waves exhibited seasonal variability and varied among the aquatic systems. During our analysis, we discovered diurnal variability of the energy dissipation rates in the studied lake, suggesting biogenic turbulence generation, which appears to be a widespread phenomenon in lakes and reservoirs.
In the second part of the thesis, I addressed current knowledge gaps related to the bulk transfer coefficients (also known as the drag coefficient, the Stanton number and the Dalton number), which are of particular importance for the bulk estimation of the surface turbulent fluxes of momentum, sensible heat and latent heat in the atmospheric boundary layer. Their inaccurate representation may lead to significant errors in flux estimates, affecting, for example, weather and climate predictions or estimates of near-surface current velocities in lake hydrodynamic models. Although the bulk transfer coefficients have been extensively studied over the past several decades (mainly in marine and large-lake environments), there has been no systematic analysis of measurements obtained at lakes of different sizes. A significant increase of the transfer coefficients at low wind speeds (< 3 m s-1) has been observed in several studies but has, to date, remained unexplained. We evaluated
the bulk transfer coefficients using flux measurements from 31 lakes and reservoirs. The estimates were generally within the range reported in previous studies for large lakes and oceans. All transfer coefficients increased substantially at low wind speeds, which was found to be associated with the presence of gusts and capillary waves (except for the Dalton number). We found that the Stanton number is systematically higher than the Dalton number. This challenges the assumption made in the Bowen-ratio method, which is widely used for estimating evaporation rates from micrometeorological measurements. We found that the variability of the transfer coefficients among the lakes could be associated with lake surface area. For flux parameterizations at lake surfaces, it is recommended to account for variations in the drag coefficient and the Stanton number due to wind gustiness and capillary wave roughness, while the Dalton number can be treated as constant at all wind speeds.
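To make the role of these coefficients concrete, the standard bulk formulas estimate the surface fluxes from mean wind speed and the air-water temperature and humidity differences. The following Python sketch illustrates this textbook parameterization; all constants and coefficient values are illustrative assumptions, not the values derived in the thesis.

```python
RHO_A = 1.2    # air density (kg m-3), illustrative
CP = 1005.0    # specific heat of air (J kg-1 K-1)
LV = 2.45e6    # latent heat of vaporization (J kg-1)

def bulk_fluxes(u, t_s, t_a, q_s, q_a, cd=1.3e-3, ch=1.1e-3, ce=1.1e-3):
    """Bulk estimates of momentum (tau), sensible (h) and latent (le) heat fluxes.
    u: wind speed (m s-1); t_s, t_a: water surface and air temperature;
    q_s: saturation specific humidity at the surface temperature, q_a: specific
    humidity of the air (kg kg-1). cd, ch, ce are the drag coefficient, Stanton
    number and Dalton number (dimensionless, example values)."""
    tau = RHO_A * cd * u**2                 # momentum flux (N m-2)
    h = RHO_A * CP * ch * u * (t_s - t_a)   # sensible heat flux (W m-2)
    le = RHO_A * LV * ce * u * (q_s - q_a)  # latent heat flux (W m-2)
    return tau, h, le
```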
In the third part of the thesis, I address the key drivers of the near-surface turbulence that controls gas exchange in a large regulated river. Like all inland waters, rivers are an important natural source of greenhouse gases. The effect on gas exchange of the widespread alteration and regulation of river flow for human demands is largely unknown. In particular, near-surface turbulence in regulated rivers has rarely been measured and its drivers have not been identified. While in lakes and reservoirs near-surface turbulence is mainly related to atmospheric forcing, in shallow rivers and streams it is generated by bottom friction of the gravity-forced flow. The studied large regulated river represents a transition between these two cases: atmospheric forcing and gravity were each the dominant driver of near-surface turbulence for a similar fraction of the measurement period. Based on validated scalings, we derived a simple model for estimating the relative contributions of wind and bottom friction to near-surface turbulence in lotic ecosystems with different flow depths. Large diel variability in the near-surface energy dissipation rates due to flow regulation leads to the same variability in gas exchange. This suggests that estimates of gas fluxes from rivers are biased when measurements are performed predominantly during daytime.
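A simple way to compare the two turbulence sources is to combine standard law-of-the-wall scalings for the dissipation rate near the surface and above the bed. The sketch below is a schematic illustration of that idea under those textbook scalings only; it is not the validated model from the thesis, and the input friction velocities are assumed to be known.

```python
KAPPA = 0.41  # von Karman constant

def dissipation_contributions(u_star_wind, u_star_bottom, z, depth):
    """Law-of-the-wall estimates of the dissipation rate (W kg-1) at depth z
    below the surface (0 < z < depth). Wind-driven turbulence scales with
    distance from the surface, bottom-generated turbulence with distance
    from the bed. u_star_wind: water-side friction velocity from wind stress;
    u_star_bottom: friction velocity of the bottom boundary layer (m s-1)."""
    eps_wind = u_star_wind**3 / (KAPPA * z)
    eps_bottom = u_star_bottom**3 / (KAPPA * (depth - z))
    ratio = eps_wind / eps_bottom  # > 1: atmospheric forcing dominates
    return eps_wind, eps_bottom, ratio
```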
In addition, a novelty of all the analyses described above is the use of turbulent surface fluxes measured directly by the eddy-covariance technique, at the time of writing the most advanced method available. Overall, this thesis is of potential interest for a broad range of scientific disciplines, including limnology, micrometeorology and open-channel hydraulics.
An assistance system has been developed for the purpose of supporting the surgeon during the repositioning phase of intramedullary nailing of femoral shaft fractures. As a basic principle, the high forces generated by the femoral muscles and ligaments are taken up by a linear sledge and a threaded rod. Schanz screws are used as the bone-machine interface to move the bone fragments directly. Two more Schanz screws are used for fine-tuning the orientation and position of the fragments according to the well-known joystick technique. The screws are fixed to two articulated arms, one passive and one fully robotic with a manipulator. Thanks to the serial kinematic configuration of the system, only minimal space in the surgeon's working area is occupied. The central control unit runs a real-time operating system and could be implemented as an embedded system with an ARM Cortex-M0 microcontroller at its heart. This enables real-time computation and motor control of each joint value of the robotic arm using inverse kinematics. The iterative FABRIK algorithm was chosen as the inverse kinematics solver. An optical force-torque sensor serves as the innovative, single user interface for the surgeon: the robotic arm follows the surgeon's motions whenever the surgeon interacts with the sensor. Using the proposed demonstrator system, a positioning resolution of <0.1 mm could be accomplished. Thus, by using the proposed solution during intramedullary nailing of femoral shaft fractures, a considerable gain in the positioning precision of bone fragments can be achieved. Furthermore, a substantial reduction of the surgeon's x-ray exposure is possible when applying the proposed approach. This approach also offers the chance to reduce the cost of femoral fracture therapy by reducing the required time and staff.
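For orientation, FABRIK (Forward And Backward Reaching Inverse Kinematics) works directly on joint positions, alternately re-anchoring the chain at the target and at the base until the end effector converges. The following Python sketch shows the basic unconstrained textbook algorithm for a serial chain; it is a generic illustration with an assumed interface, not the embedded implementation of the demonstrator, which additionally handles joint constraints and motor control.

```python
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """Iterative FABRIK inverse kinematics for a serial joint chain.
    joints: (n+1, 3) array of joint positions, joints[0] is the fixed base.
    target: (3,) desired end-effector position. Returns updated positions."""
    p = np.asarray(joints, dtype=float).copy()
    t = np.asarray(target, dtype=float)
    d = np.linalg.norm(np.diff(p, axis=0), axis=1)  # fixed segment lengths
    base = p[0].copy()
    if np.linalg.norm(t - base) > d.sum():          # target unreachable:
        for i in range(len(d)):                     # stretch chain towards it
            r = np.linalg.norm(t - p[i])
            p[i + 1] = p[i] + d[i] * (t - p[i]) / r
        return p
    for _ in range(max_iter):
        if np.linalg.norm(p[-1] - t) < tol:
            break
        p[-1] = t                                   # backward reaching pass
        for i in range(len(p) - 2, -1, -1):
            r = np.linalg.norm(p[i] - p[i + 1])
            p[i] = p[i + 1] + d[i] * (p[i] - p[i + 1]) / r
        p[0] = base                                 # forward reaching pass
        for i in range(len(p) - 1):
            r = np.linalg.norm(p[i + 1] - p[i])
            p[i + 1] = p[i] + d[i] * (p[i + 1] - p[i]) / r
    return p
```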
The publication of freely available and machine-readable information has increased significantly in recent years. Especially the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF): anybody can simply publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected by links. Linked Data entities are identified by URIs, which can be dereferenced to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints, it is possible to create a federation of distributed RDF data sources which acts like one big data store.
In contrast to the federation of classical relational database systems, there are some differences for federated RDF data. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory which can be used to discover RDF data sources or to ask for sources which offer specific data. The federation of distributed and linked RDF data sources has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on query details and the information that is available about the data sources. Furthermore, the minimization of query execution time requires optimization techniques that take into account the execution cost of query operators and the network communication overhead for contacting individual data sources. In this thesis, solutions for these problems are discussed. Moreover, SPLENDID is presented, a new federation infrastructure for distributed RDF data sources which uses optimization techniques based on statistical information.
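To make the idea of accessing a SPARQL endpoint concrete, the following minimal Python sketch sends a query to a single public endpoint using the SPARQLWrapper library (assumed to be installed); the endpoint URL and query are illustrative examples and are not part of the SPLENDID infrastructure itself, whose job would be to select endpoints and distribute sub-queries automatically.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query one SPARQL endpoint directly; a federation layer such as SPLENDID
# would decompose a query and route sub-queries to suitable sources.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")  # example endpoint
endpoint.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Koblenz> rdfs:label ?label .
    } LIMIT 5
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```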
Ecological assessment approaches based on benthic invertebrates in Euphrates tributaries in Turkey
(2019)
Sustainable water management requires methods for assessing ecological stream quality. Many years of limnological research are needed to provide a basis for developing such methods. However, research of this kind is still lacking in Turkey. Therefore, the aim of this doctoral thesis was to provide basic research in the field of aquatic ecology and to present methods for the assessment of ecological stream quality based on benthic invertebrates. For this purpose, I selected 17 tributaries of the Euphrates with a similar typology/stream order and with varying levels of pollution, or not affected by pollution at all. The characterisation of the natural mountain streams was the first important step in the analysis of ecological quality. Based on community indices, I found that the five selected streams had a very good ecological status. I also compared the different biological indications, collected on two occasions, once in spring (May) and once in autumn (September), to determine the optimal sampling time. The macroinvertebrate composition differed considerably between the two seasons, with the number of taxa and the Shannon index being significantly higher in autumn than in spring. In the final step, I examined the basal resources of the macroinvertebrates in the reference streams with an isotope analysis. I found that FPOM and biofilm were the most relevant basal resources of benthic invertebrates. Subsequently, based on the similarity of their community structures, I divided the 17 streams into three quality classes, supported by four community indices (EPT [%], EPTCBO [%], number of individuals, evenness). In this process, 23 taxa were identified as indicators for the three quality classes. In the next step, I presented two new or adapted indices for the assessment of quality class. Firstly, I adapted the Hindu Kush-Himalaya biotic index to the catchment area of the Euphrates and created a new, ecoregion-specific score list (Euph-Scores) for 93 taxa. The weighted ASPT values, renamed the Euphrates Biotic Score (EUPHbios) in this study, differentiated the quality classes more sharply than the other ASPT values considered. This modified index has thus proved to be very effective and easy to implement in practical applications. As a second biological index, I suggested the proportion of habitat specialists. To calculate this index, the habitat preferences of the 20 most common benthic invertebrates were identified using the new habitat score. The proportion of habitat specialists differed significantly among the three quality classes, with higher values in natural streams than in polluted streams. The methods and results presented in this doctoral thesis can be used in a multi-metric index for a Turkish assessment programme.
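ASPT-type indices follow a simple scheme: each scoring taxon in a sample receives a tolerance score, and the index is the average score per scoring taxon. The Python sketch below illustrates this calculation; the score values are made-up examples for illustration, not the Euph-Scores derived in the thesis.

```python
# Hypothetical tolerance scores per taxon (illustrative, not Euph-Scores)
SCORES = {"Heptageniidae": 10, "Perlidae": 10, "Hydropsychidae": 5,
          "Baetidae": 4, "Chironomidae": 2, "Oligochaeta": 1}

def aspt(taxa_in_sample):
    """Average Score Per Taxon: the sum of the scores of the scoring taxa
    present in the sample, divided by the number of scoring taxa."""
    scoring = [SCORES[t] for t in taxa_in_sample if t in SCORES]
    return sum(scoring) / len(scoring) if scoring else 0.0

print(aspt(["Heptageniidae", "Baetidae", "Chironomidae"]))  # -> 5.33...
```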
Changing Lives through Relocation - Ethnography of a Socialized Housing Project in the Philippines
(2018)
Changing lives through relocation: this is the vision of a socialized housing project in the Philippines which is the subject of this dissertation. It is a polyphonic ethnography that investigates the transition process from an informal, marginalized, self-organized lifestyle in squatter areas and dumpsite settlements to an institutionalized and policy-based life in the socialized housing project. The transition process is analyzed from a material-culture perspective, taking into account selected aesthetic theories. With this approach, the dissertation aims to contribute to the study of the aesthetic dimension of everyday life from an ethnographic perspective. Aesthetics are applied at three levels: (1) On the theoretical level, the concepts of social aesthetics and atmospheres enrich the analysis of the interrelation between the residents and their new house in the socialized housing project. (2) On the methodological level, the relevance of aesthetic experiences for the ethnographic knowledge process is discussed. And (3) on the descriptive level, selected and thematically relevant sensory and aesthetic experiences of the researcher inform the ethnographic account in the form of memoirs. By incorporating aesthetic theories, the dissertation gives a more holistic account of the dynamics at work in the transition process. It shows that the change of lifestyle is experienced sensorily through the daily physical engagement with the new material environment, i.e. its specific materiality and infrastructure, its affordances and constraints giving rise to new experiences and needs. In this process, the residents become aware of the differences between their past and present ways of life and thus experience the new challenges through their physical being in the new material environment. On the one hand, this evokes a nostalgic attitude towards their previous life; on the other hand, it also gives form to ideas of a possible future.
Especially in applications with intense thermal and corrosive stress, phosphates are increasingly used as so-called chemical binders for high-performance ceramics. However, the totality of the reaction pathways during the binding mechanism following thermally induced curing, and thus the mode of action of phosphate binders, has not been clearly established in the literature. In this work, fundamental binding mechanisms were characterised on the basis of an extensive structural-analytical test setup (solid-state NMR, X-ray diffraction analysis, SEM-EDX) applied to an exemplary phosphate-bonded Al₂O₃-MgAl₂O₄ high-temperature ceramic composition, taking various inorganic phosphates into account. Mechanical and physicochemical property investigations (STA, dilatometry, DMA, cold bending strength) also revealed the influence of the phosphates used on the property development of the refractory ceramics with respect to setting behaviour, bending strength and thermal expansion, which were correlated with structural changes. It was shown that, when phosphates are used, the binding mechanisms in a temperature-controlled regime (20 °C ≤ T ≤ 1500 °C) fundamentally consist of two parallel reaction sequences, and the phosphate phases developing within the ceramic body were evaluated quantitatively and qualitatively with respect to their binding effect. On the one hand, the formation of a strength-increasing binding network of aluminium phosphates, mostly of amorphous structure, was identified and characterised. This bond-promoting, three-dimensional aluminium phosphate network builds up homogeneously and continuously during the initialisation and cross-linking phases, guided by temperature, via multiple cross-linking reactions. On the other hand, these reaction sequences are complemented by parallel structural transformations of non-actively binding phosphate species such as magnesium, calcium or zirconium phosphates, which merely represent thermal transformation reactions of the starting phosphates. Increasingly at T > 800 °C, the phosphate binding network undergoes solid-state reactions with MgAl₂O₄, forming and agglomerating magnesium orthophosphate sinter structures. The formation of these low-melting high-temperature phases leads to a partial breakdown of the binding network.
Modern agriculture is a dominant land use in Europe, although it has been associated with negative effects on biodiversity in agricultural landscapes. One species-rich insect group in agro-ecosystems is the Lepidoptera (moths and butterflies); however, the populations of a number of Lepidoptera species are currently declining. The aims of this thesis were to assess the amount and structure of field margins in agricultural landscapes, to study the effects of realistic field margin input rates of agrochemicals (fertilizers and pesticides) on Lepidoptera, and to provide information on moth pollination services.
In general, field margins are common semi-natural habitat elements in agro-ecosystems; however, data on the structure, size, and width of field margins is limited. An assessment in two German agricultural landscapes (4,000 ha each) demonstrated that many of the evaluated field margins were less than 3 m wide (Rhineland‐Palatinate: 85% of margin length; Brandenburg: 45% margin length). In Germany, risk mitigation measures (such as buffer zones) to reduce pesticide inputs to terrestrial non-crop habitats do not have to be established by farmers next to narrow field margins. Thus, narrow field margins receive inputs of agrochemicals, especially via overspray and spray drift. These field margins were used as a development habitat for caterpillars, but the mean abundance of caterpillars was 35 – 60% lower compared with that in meadows. Caterpillars were sensitive to realistic field margin input rates of insecticide (pyrethroid, lambda-cyhalothrin) in a field experiment as well as in laboratory experiments. Moreover, 40% fewer Hadena bicruris eggs were observed on Silene latifolia plants treated with this insecticide compared with control plants, and the flowers of these insecticide-treated plants were less likely to be pollinated by moths. In addition, realistic field margin input rates of herbicides can also affect Lepidoptera. Ranunculus acris L. plants treated with sublethal rates of a sulfonylurea herbicide were used as host plants for Mamestra brassicae L. caterpillars, which resulted in significantly lower caterpillar weights, increased time to pupation, and increased overall development time compared with caterpillars feeding on control plants. These results might have been caused by lower nutritional value of the herbicide-treated plants or increased concentrations of secondary metabolites involved in plant defense. Fertilizer applications slightly increased the caterpillar abundance in the field experiment. However, fertilizers reduce plant diversity in the long term and thus, most likely, also reduce caterpillar diversity.
Moths such as Noctuidae and Sphingidae have been observed to act as pollinators for numerous plant species, including a number of Orchidaceae and Caryophyllaceae. Although in temperate agro-ecosystems moths are less likely to act as the main pollinators for crops, they can pollinate non-crop plants in semi-natural habitats. Currently, the role of moths as pollinators appears to be underestimated, and long-term research focusing on ecosystems is necessary to address temporal fluctuations in their abundance and community composition.
Lepidoptera represent a diverse organism group in agricultural landscapes and fulfill essential ecosystem services, such as pollination. To better protect moths and butterflies, agrochemical inputs to (narrow) field margin habitats should be reduced, for example, via risk mitigation measures and agri-environmental schemes.
Amphibian populations are declining worldwide for multiple reasons, such as habitat destruction and climate change. An example of an endangered European amphibian is the yellow-bellied toad Bombina variegata. Its populations have been declining for decades, particularly at the northern and western range margins. One of the extant northern range centres is the Westerwald region in Rhineland-Palatinate, Germany. To implement informed conservation activities for this threatened species, knowledge of its life-history strategy is crucial. This study therefore focused on different developmental stages to test predictions of life-history theory. It addressed (1) developmental, (2) demographic and (3) genetic issues of Bombina variegata as a model organism: (1) Carry-over effects from the larval environment to terrestrial stages, and the associated vulnerability to predators, were investigated using mesocosm approaches, fitness tests and predation trials. (2) The dynamics and demography of B. variegata populations were studied using capture-mark-recapture analysis and skeletochronology. The study was complemented by (3) an analysis of the genetic diversity and structuring of B. variegata populations using 10 microsatellite loci. In order to reveal general patterns and characteristics among B. variegata populations, the study focused on three geographical scales: local (i.e. a former military training area), regional (i.e. the Westerwald region) and continental (i.e. the geographical range of B. variegata). The study revealed carry-over effects of the larval environment on metamorph phenotype and behaviour, causing variation in fitness in the early terrestrial stage of B. variegata. Metamorph size and condition are crucial factors for survival, as small-sized individuals were particularly prone to predator attacks. Yellow-bellied toads show a remarkable fast-slow continuum in the life-history trait longevity. A population's position within this continuum may be determined by local environmental stochasticity, i.e. an extrinsic source of variation, and by the efficiency of chemical antipredator protection, i.e. an intrinsic source of variation. Extreme longevity seems to be an exception in B. variegata, and senescence was absent in this study. Weather variability affected reproductive success and thus population dynamics. The dispersal potential was low, and short-term fragmentation of populations caused significant genetic differentiation at the local scale. Long-term isolation resulted in increased genetic distance at the regional scale. At the continental scale, populations inhabiting the marginal regions were deeply structured, with reduced allelic richness. As a consequence of environmental changes, short-lived and isolated B. variegata populations at the range margin may face an increased risk of extinction. Conservation measures should thus improve the connectivity among local populations and reinforce annual reproductive success. Further research on the intraspecific variation in B. variegata skin toxins is required to reveal potential effects on palatability and thus longevity.
Animations can be used in instructional contexts to convey knowledge about subject matter involving processes or sequences of events. Dynamic content can thus be depicted explicitly and does not have to be mentally constructed by the learner, but merely followed along with the animation, which should have a positive effect on knowledge acquisition. At the same time, animations, with their particular property of depicting temporal sequences, pose particular challenges for learners. The human information-processing system is subject to certain limitations regarding the perception of speed: speeds that are too fast or too slow, for example, are difficult to perceive and accordingly cannot be processed cognitively. The resulting aim of this work was a systematic investigation of the effect of different presentation speeds on the perception and understanding of dynamic subject matter presented in an animation.
Four experimental studies were conducted to answer the research questions of this work. The pilot study aimed to evaluate both the learning material and the newly developed knowledge test. Study 1 addressed the influence of presentation speed on knowledge acquisition when learning with an interactive animation.
Studies 2 and 3 examined the influence of different orders of speeds on knowledge acquisition. The goal was a systematic assessment of the perceptual and cognitive processing of dynamic information at two different speeds by means of eye tracking (Study 2) and repeated tests of knowledge acquisition between the individual learning phases (Study 3).
The results of the studies suggest that at slow speed, knowledge about events at a subordinate temporal level was acquired, and that the faster an animation was viewed, the larger the share of knowledge acquired at a superordinate temporal level (Study 1); however, the results do not allow definitive conclusions about the influence of speed on knowledge acquisition at different temporal hierarchy levels. With regard to the learning benefits of different ways of sequencing speeds, the results were also inconclusive. The analysis of the eye-tracking data, however, suggests that the order "slow - fast" accommodates the learners' conditions better than the order "fast - slow".
As the load-bearing column of the human body, the spine is exposed to high loads during many movements. Incorrect loading and overloading often cause permanent damage. It is therefore of interest to determine the loads occurring within the spine. A modern and reliable method for determining these loads is the construction of a computational model.
In the present work, a multi-body system (MBS) model of the lumbar spine was created. With the help of the model, the forces and moments transmitted in all inner structures can be calculated and the kinematics of movements can be simulated. The basic structure of the model is formed by the bony structures, assumed to be rigid bodies, of the five lumbar vertebrae L1 to L5, the sacrum and the ilium, which were obtained by segmenting a CT dataset of a cast of the vertebral surfaces of an average-sized European. The elastic elements of the spine were implemented in the model taking their physical properties into account. The modelling of the intervertebral discs was based on experimental measurements carried out specifically for this purpose. The characteristic force-deformation behaviour of the ligaments was taken from the literature.
In addition to the physical behaviour of an individual ligament, the implementation in the computer model also takes into account, via a weighting factor, the interaction of all ligaments in the complex ligamentous apparatus. The facet joints were realised through contact modelling in the cartilage layers. In addition, a model of an implant system used for the dynamic stabilisation of the lumbar spine was developed. The models were validated by comparison with in-vitro data. Besides the intact spine, degenerative damage to the intervertebral disc and its surgical treatment by nucleotomy and dynamic stabilisation were also considered. The simulation results show very good agreement with the experimentally determined measurements. By applying the computer models, the effects of various surgical interventions, such as interlaminotomy, hemilaminectomy and laminectomy, on the different structures of the lumbar spine could be calculated. A further field of application was the investigation of the instantaneous centre of rotation. In addition to determining the path of the centre of rotation in the intact spine, the effects of a degeneratively damaged and surgically treated intervertebral disc on the course of the instantaneous centre of rotation could be calculated and simulated.
Social Business Documents: An Investigation of their Nature, Structure and Long-term Management
(2018)
Business documents contain valuable information. In order to comply with legal requirements, to serve as organisational knowledge and to prevent risks, they need to be managed. However, changes in the technology with which documents are produced have introduced new kinds of documents and new ways of interacting with documents. In particular, Web 2.0 led to the development of Enterprise Collaboration Systems (ECS), which enable employees to use wiki, blog or forum applications for conducting their business. Part of the content produced in ECS can be called Social Business Documents (SBD). Compared to traditional digital documents, SBD are different in their nature and structure: they are, for example, less well structured and do not follow a strict lifecycle. These characteristics bring along new management challenges. However, the research literature currently lacks investigations of the characteristics of SBD, their peculiarities and their management.
This dissertation uses document theory and documentary practice as theoretical lenses to investigate the new challenges of the long-term management of SBD in ECS. Using an interpretative, exploratory, mixed-methods approach, the study comprises two major research parts. First, the nature and structure of Social Business Documents are addressed by analysing them within four different systems, using four different modelling techniques for each. The findings are used to develop general SBD information models, outlining the basic underlying components, structure, functions and included metadata, as well as a broad range of SBD characteristics. The second phase comprises a focus group, a case study including in-depth interviews, and a questionnaire, all conducted with industry representatives. The focus group revealed that the kind of SBD used for specific content and the actual place of storage differ between organisations, and that there are currently almost no established management practices for SBD. The case study provided deep insights into general document management activities and investigated requirements, challenges and actions for managing SBD. Finally, the questionnaire consolidated and deepened the previous findings. It provides insights into the value of SBD, their current management practices, and the challenges and needs of their management. Although all participating organisations store information worth managing in SBD, most are not addressing them with management activities, and many challenges remain.
Together, the investigations contribute to both practice and theory. The progress in practice is summarised in a framework addressing the long-term management of Social Business Documents. The framework identifies and outlines the requirements, challenges and actions for SBD management, and it indicates the dependencies between the different aspects. Furthermore, the findings advance theory within documentary practice by discussing the extension of document types to include SBD. Existing problems are outlined along the definitions of records, and the new characteristics of documents made possible by Social Business Documents are taken into account.
The first group revised within my study is Ochralea Clark, 1865 (Hazmi & Wagner 2010a). I checked the type specimens of most species originally described in Ochralea, and there is no doubt that this genus is clearly distinct from Monolepta. Weise (1924) synonymised Galeruca nigripes (Olivier, 1808) with O. nigricornis Clark, 1865, and the valid name of the species is O. nigripes (Olivier, 1808). Of the ten species originally described in this genus, only this species remains valid, and O. pectoralis is a new synonym of O. nigripes. Additionally, Monolepta wangkliana Mohamedsaid, 2000 is very closely related to O. nigripes and needs to be transferred to Ochralea. The second genus whose revision has been published is Arcastes Baly, 1865 (Hazmi & Wagner 2010b). I checked the genitalic characters of A. biplagiata and of most of the type species of the other Arcastes. Arcastes biplagiata possesses a peculiar shape of the median lobe and asymmetrically arranged endophallic structures. These peculiar characters are very useful to delimit this genus from the others. Therefore, only three valid species remain in Arcastes, while two new synonyms were found and four other species need to be transferred to other genera. When checking the genitalic characteristics of the type species of Arcastes sanguinea, the median lobe as well as the spermatheca of this species showed strong differences to A. biplagiata. The species was redescribed and transferred to a monotypic new genus, Rubrarcastes Hazmi & Wagner, 2010c. The fourth genus that was revised is Neolepta Jacoby, 1884. It was originally described on the basis of only two species, N. biplagiata and N. fulvipennis. Jacoby did not designate a type species of the genus; Maulik (1936) did so later, designating N. biplagiata. In his original description, Jacoby only commented that Neolepta is very close and similar to Monolepta Chevrolat, 1837 and Candezea Chapuis, 1879. Subsequent authors described a further eight species and transferred one species from Luperodes, bringing the total number of described species in Neolepta to eleven. I checked the genitalic characters of the type, N. biplagiata, and found that the median lobe is not incised apically and that more strongly sclerotised ventral carinae with an apical hook close to the apex occur. Of all described species, only two are closely related to the generotype, N. sumatrensis (Jacoby, 1884) new combination and N. quadriplagiata Jacoby, 1886, and these will remain in this group after the revision. All other species need to be transferred to other genera, including the newly described Paraneolepta and Orthoneolepta. The last distinct paper of this thesis presents the results on Monolepta Chevrolat, 1837. The massive number of Monolepta species from the entire Oriental Region, with about 260 described species names, makes this a long-term project that is not practicable within a PhD study. I have therefore focused on the species of Monolepta known from the Sundaland area. A comprehensive revision, including the study of the primary types of the described species, has never been done for Monolepta from this sub-region, while new species have also been described in the last decade (e.g. Mohamedsaid 1993, 1997, 1998, 1999, 2000a,b, 2001, 2002, 2005).
On the basis of the most current species lists of Mohamedsaid (2001, 2004, 2005) and Kimoto (1990), the number of valid species described from this region is about 72. After my revision, only thirteen valid species remain in Monolepta in the sense of the generotype M. bioculata (Wagner 2007), while seven species were found to be new synonyms, three have already been transferred to other genera, and a further 49 species need to be transferred to other genera.
Augmented reality (AR) applications typically extend the user's view of the real world with virtual objects.
In recent years, AR has gained increasing popularity and attention, which has led to improvements in the required technologies. AR has become available to almost everyone.
Researchers have made great progress towards the goal of believable AR, in which the real and virtual worlds are combined seamlessly.
They mainly focus on issues like tracking, display technologies and user interaction, and give little attention to visual and physical coherence when real and virtual objects are combined. For example, virtual objects should not only respond to the user's input; they should also interact with real objects. Generally, AR becomes more believable and realistic if virtual objects appear fixed or anchored in the real scene, appear indistinguishable from the real scene, and respond to any changes within it.
This thesis examines three challenges in the field of computer vision in pursuit of the goal of a believable combined world in which virtual objects appear and behave like real objects.
Firstly, the thesis concentrates on the well-known tracking and registration problem. The tracking and registration challenge is discussed and an approach is presented to estimate the position and viewpoint of the user so that virtual objects appear fixed in the real world. Appearance-based line models, which keep only relevant edges for tracking purposes, enable absolute registration in the real world and provide robust tracking. On the one hand, there is no need to spend much time creating suitable models manually. On the other hand, the tracking can deal with changes within the object or the scene to be tracked. Experiments have shown that the use of appearance-based line models improves the robustness, accuracy and re-initialization speed of the tracking process.
Secondly, the thesis deals with the subject of reconstructing the surface of a real environment and presents an algorithm to optimize an ongoing surface reconstruction. A complete 3D surface reconstruction of the target scene
offers new possibilities for creating more realistic AR applications. Several interactions between real and virtual objects, such as collisions and occlusions, can be handled with physical correctness. Whereas previous methods focused on improving surface reconstructions offline after a capturing step, the presented method de-noises, extends and fills holes in the reconstruction during the capturing process. Thus, users can explore an unknown environment without preparation tasks such as moving around and scanning the scene beforehand, and without having to deal with the underlying technology in advance. In experiments, the approach provided realistic results in which known surfaces were extended and holes were filled in plausibly for different surface types.
Finally, the thesis focuses on handling occlusions between the real and virtual worlds more realistically by re-interpreting the occlusion challenge as an alpha matting problem. The presented method overcomes limitations of state-of-the-art methods by estimating a blending coefficient for each pixel of the rendered virtual scene, instead of calculating only per-pixel visibility. In several experiments and comparisons with other methods, occlusion handling through alpha matting worked robustly and overcame the limitations of low-cost sensor data; it also outperformed previous work in terms of quality, realism and practical applicability.
The method can deal with noisy depth data and yields realistic results in regions where foreground and background are not strictly separable (e.g. caused by fuzzy objects or motion blur).
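Interpreting occlusion as alpha matting amounts to blending real and virtual pixels with a per-pixel coefficient instead of making a hard visible/occluded decision. The numpy sketch below shows only this compositing step, assuming an alpha map has already been estimated; estimating the matte itself is the hard part addressed in the thesis.

```python
import numpy as np

def composite(virtual_rgb, real_rgb, alpha):
    """Per-pixel alpha blending of the rendered virtual scene over the
    camera image: out = alpha * virtual + (1 - alpha) * real.
    alpha is an (H, W) map in [0, 1]; a binary alpha map would reproduce
    classical hard occlusion handling."""
    a = alpha[..., np.newaxis]          # broadcast over the colour channels
    return a * virtual_rgb + (1.0 - a) * real_rgb

# Example: a smooth matte yields a soft transition at fuzzy object
# boundaries instead of a hard, jagged cut-out.
h, w = 4, 4
virtual = np.ones((h, w, 3)) * [1.0, 0.0, 0.0]   # red virtual object
real = np.ones((h, w, 3)) * [0.0, 0.0, 1.0]      # blue camera image
alpha = np.linspace(0, 1, h * w).reshape(h, w)   # smooth example matte
print(composite(virtual, real, alpha).shape)     # (4, 4, 3)
```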
Mathematical models of species dispersal and the resilience of metapopulations against habitat loss
(2021)
Habitat loss and fragmentation due to climate and land-use change are among the biggest threats to biodiversity, as the survival of species relies on suitable habitat area and the possibility to disperse between different patches of habitat. To predict and mitigate the effects of habitat loss, a better understanding of species dispersal is needed. Graph theory provides powerful tools to model metapopulations in changing landscapes with the help of habitat networks, where nodes represent habitat patches and links indicate the possible dispersal pathways between patches.
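This graph representation can be made concrete in a few lines: patches become nodes with coordinates, and a link is drawn whenever two patches lie within a species' dispersal range. The sketch below, with made-up coordinates and an assumed distance threshold, illustrates the construction using networkx.

```python
import math
import networkx as nx

# Hypothetical habitat patches with (x, y) coordinates in km
patches = {"A": (0.0, 0.0), "B": (1.2, 0.5), "C": (3.0, 0.2), "D": (3.5, 1.8)}
DISPERSAL_RANGE = 2.0  # km, illustrative species trait

g = nx.Graph()
g.add_nodes_from(patches)
for u, (x1, y1) in patches.items():
    for v, (x2, y2) in patches.items():
        if u < v and math.dist((x1, y1), (x2, y2)) <= DISPERSAL_RANGE:
            g.add_edge(u, v)  # possible dispersal pathway

print(sorted(g.edges))  # [('A', 'B'), ('B', 'C'), ('C', 'D')]
```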
This thesis adapts tools from graph theory and optimisation to study species dispersal on habitat networks, as well as the structure of habitat networks and the effects of habitat loss. In chapter 1, I give an introduction to the thesis and the topics it covers. Chapter 2 then gives a brief summary of the tools used in the thesis.
In chapter 3, I present our model of possible range shifts for a generic species. Based on a graph-based dispersal model for a generic aquatic invertebrate with a terrestrial life stage, we developed an optimisation model that describes dispersal directed towards predefined habitat patches and yields the minimum time until these patches are colonised, given the landscape structure and the species' dispersal capabilities. We created a time-expanded network based on the original habitat network and solved a mixed-integer program to obtain the minimum colonisation time. The results provide maximum possible range shifts and can be used to estimate how fast newly formed habitat patches can be colonised. Although specific to this simulation model, the general idea of deriving a surrogate can in principle be adapted to other simulation models.
Next, in chapter 4, I present our model to evaluate the robustness of metapopulations. Based on a variety of habitat networks and different generic species characterised by their dispersal traits and habitat demands, we modeled the permanent loss of habitat patches and subsequent metapopulation dynamics. The results show that species with short dispersal ranges and high local-extinction risks are particularly vulnerable to the loss of habitat across all types of networks. On this basis, we then investigated how well different graph-theoretic metrics of habitat networks can serve as indicators of metapopulation robustness against habitat loss. We identified the clustering coefficient of a network as the only good proxy for metapopulation robustness across all types of species, networks, and habitat loss scenarios.
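For reference, the clustering coefficient used as this proxy is a standard graph metric. Assuming the common unweighted definition, the local coefficient of node $i$ and the network average are

\[
C_i = \frac{2 e_i}{k_i (k_i - 1)}, \qquad \bar{C} = \frac{1}{n} \sum_{i=1}^{n} C_i,
\]

where $k_i$ is the degree of node $i$ and $e_i$ is the number of links among the neighbours of $i$; a $\bar{C}$ close to 1 indicates that habitat patches form tightly interconnected clusters.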
Finally, in chapter 5, I utilise the results obtained in chapter 4 to identify the areas in a network that should be improved in terms of restoration to maximise the metapopulation robustness under limited resources. More specifically, we exploit our findings that a network’s clustering coefficient is a good indicator for metapopulation robustness and develop two heuristics, a Greedy algorithm and a deducted Lazy Greedy algorithm, that aim at maximising the clustering coefficient of a network. Both algorithms can be applied to any network and are not specific to habitat networks only.
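A minimal version of such a greedy heuristic can be sketched with networkx: in each round, add the candidate edge that yields the largest gain in average clustering. This sketch only illustrates the greedy principle; it does not reproduce the thesis' algorithms, and its exhaustive candidate evaluation is exactly the cost that a Lazy Greedy variant is designed to reduce.

```python
import networkx as nx

def greedy_add_edges(graph, budget):
    """Greedily add `budget` new edges, each time choosing the edge that
    maximally increases the average clustering coefficient."""
    g = graph.copy()
    for _ in range(budget):
        candidates = [(u, v) for u in g for v in g
                      if u < v and not g.has_edge(u, v)]
        if not candidates:
            break
        def gain(edge):
            # Tentatively insert the edge, measure clustering, then undo.
            g.add_edge(*edge)
            c = nx.average_clustering(g)
            g.remove_edge(*edge)
            return c
        g.add_edge(*max(candidates, key=gain))
    return g

g = greedy_add_edges(nx.cycle_graph(8), budget=3)
print(nx.average_clustering(g))
```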
In chapter 6, I summarize the main findings of this thesis, discuss their limitations and give an outlook on future research topics.
Overall, this thesis develops frameworks to study the behaviour of habitat networks and introduces mathematical tools to ecology, thus narrowing the gap between mathematics and ecology. While all models in this thesis were developed with a focus on aquatic invertebrates, they can easily be adapted to other metapopulations.
We are living in a world where environmental crises are coming to a head. To curb the aggravation of these problems, a socio-ecological transformation within society is needed, going along with changes in human behavior. How to encourage such behavior changes on an individual level is the core issue of this dissertation. It takes a closer look at the role of individuals as consumers, whose purchase decisions have a more or less harmful impact on the environment. Using the example of plastic pollution, it takes up a current environmental problem and focuses on an understudied behavioral response to this problem, namely reduction behavior. More concretely, this dissertation examines which psychological factors can encourage the mitigation of plastic packaging consumption. Plastic packaging accounts for the biggest share of current plastic production and is associated with products of daily relevance. Despite growing awareness of plastic pollution in society, behavioral responses do not follow accordingly, and plastic consumption is still very high. As habits are often a pitfall when implementing more resource-saving behavior, this dissertation further examines whether periods of discontinuity can open a 'window of opportunity' to break old habits and facilitate behavior change. Four manuscripts approach this matter from the general to the specific. Starting with a literature review, a summary of 187 studies addresses the topic of plastic pollution and human behavior from a societal-scientific perspective. Based on this, a cross-sectional study (N = 648) examines the determinants of plastic-free behavior intentions in the private sphere and the public sphere by structural equation modeling. Two experimental studies in a pre-post design build upon this by integrating the determinants into intervention studies. In addition, it was evaluated whether an intervention presented during Lent (N = 140) or during the action month 'Plastic Free July' (N = 366) can create a 'window of opportunity' to mitigate plastic packaging consumption. The literature review emphasized the need for research on behavioral solutions to reduce plastic consumption. The empirical results revealed moral and control beliefs to be the main determinants of reduction behavior. Furthermore, the time point of an intervention influenced the likelihood of trying out the new behavior. The studies provided initial evidence that a 'window of opportunity' can facilitate change towards pro-environmental behavior within the application field of plastic consumption. Theoretical and practical implications of creating the right opportunity for individuals to contribute to a socio-ecological transformation are finally discussed.
The effects of cognitive-behavioural interventions in patients with multiple somatoform symptoms are only moderate and thus clearly below the effect sizes reported in psychotherapy outcome research. So far, however, it has not been possible to identify clearly replicable patient-side predictors that may account for the success or failure of cognitive-behavioural outpatient therapy for somatoform complaints. In a longitudinal design, the role of symptom intensity and number, sociodemographic variables, comorbid mental disorders, and illness- and therapy-related attitudes and behaviours in predicting short-term treatment outcome was examined in 78 patients (with at least two somatoform bodily complaints) who took part in an outpatient group intervention. Bivariate analyses showed a significant association of the number of symptoms assessed at the beginning of treatment, anxiety, dysfunctional cognitions and the utilisation of medical resources with treatment outcome. Age, gender, educational level and the presence of a comorbid anxiety disorder or a depressive episode were not associated with treatment outcome. In multiple regression analyses, however, the significant associations could only be confirmed for the number of symptoms and, with limitations, for anxiety. The results are discussed against the background of the current state of empirical research with regard to their practical relevance for differential treatment indication.
Scientific and public interest in epidemiology and the mathematical modelling of disease spread has increased significantly due to the current COVID-19 pandemic. Political action is influenced by forecasts and evaluations of such models, and the whole of society is affected by the corresponding containment measures. But how are these models structured?
Which methods can be used to apply them to the respective regions, based on real data sets? These questions are certainly not new. Mathematical modelling in epidemiology using differential equations has been researched for quite some time and is mainly carried out by means of numerical computer simulations. These models are constantly being refined and adapted to the corresponding diseases. However, it should be noted that the more complex a model is, the more unknown parameters it includes. A meaningful adaptation to data thus becomes very difficult. The goal of this thesis is to design applicable models using the examples of COVID-19 and dengue, to fit them adequately to real data sets, and thus to perform numerical simulations. For this purpose, the mathematical foundations are first presented, together with a theoretical outline of ordinary differential equations and optimization. The parameter estimations are performed by means of adjoint functions. This procedure represents a combination of static and dynamic optimization. The objective function corresponds to a least-squares method with the L2 norm and depends on the parameters sought. This objective function is coupled to constraints in the form of ordinary differential equations and is minimized numerically using Pontryagin's maximum (minimum) principle and optimal control theory. In the case of dengue, owing to the transmission path via mosquitoes, a model reduction of an SIRUV model to an SIR model with a time-dependent transmission rate is performed by means of time-scale separation. The SIRUV model includes uninfected (U) and infected (V) mosquito compartments in addition to the susceptible (S), infected (I) and recovered (R) human compartments known from the SIR model. The unknown parameters of the reduced SIR model are estimated using data sets from Colombo (Sri Lanka) and Jakarta (Indonesia). Based on this parameter estimation, the predictive power of the model is checked and evaluated. In the case of Jakarta, the model is additionally provided with a mobility component between the individual city districts, based on commuter data. The transmission rates of the SIR models also depend on meteorological data, as correlations between these and dengue outbreaks have been demonstrated in previous data analyses. For the modelling of COVID-19, we use several SEIRD models, which, in comparison to the SIR model, also take into account the latency period and the number of deaths via exposed (E) and dead (D) compartments. Based on these models, a parameter estimation with adjoint functions is performed for Germany. This is possible because, since the beginning of the pandemic, the cumulative numbers of infected persons and deaths
are published daily by Johns Hopkins University and the Robert-Koch-Institute. Here, a SEIRD model with a time delay regarding the deaths proves to be particularly suitable. In the next step, this model is used to compare the parameter estimation via adjoint functions with a Metropolis algorithm. Analytical effort, accuracy and calculation speed are taken into account. In all data fittings, one parameter each is determined to assess the estimated number of unreported cases.
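As a rough illustration of the type of data fitting described above, the sketch below fits the transmission and recovery rates of a plain SIR model to synthetic incidence data using a generic least-squares routine; the thesis replaces this black-box optimizer with the adjoint-based approach, and all names and numbers here are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir_rhs(t, y, beta, gamma):
    # Classic SIR dynamics with transmission rate beta and recovery rate gamma.
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def simulate(params, t, y0):
    beta, gamma = params
    sol = solve_ivp(sir_rhs, (t[0], t[-1]), y0, t_eval=t, args=(beta, gamma))
    return sol.y[1]  # infected compartment over time

# Hypothetical observed infection counts (the thesis uses JHU / RKI data).
t_obs = np.arange(0, 30)
y0 = [9900.0, 100.0, 0.0]
i_obs = simulate((0.4, 0.1), t_obs, y0) + np.random.normal(0, 5, t_obs.size)

# L2 least-squares fit of (beta, gamma); the thesis minimizes the same type
# of objective, but via adjoint functions coupled to the ODE constraints.
fit = least_squares(lambda p: simulate(p, t_obs, y0) - i_obs,
                    x0=(0.5, 0.2), bounds=([0, 0], [2, 1]))
print("estimated beta, gamma:", fit.x)
```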
The aim of this doctoral thesis was to examine whether acrylate-based emulsion polymers can be used as novel photocatalysts or catalysts.
Given the nature and properties of emulsion polymers, it can be assumed that their use as catalysts enables a new kind of chemical catalysis, combining the advantages of heterogeneous and homogeneous catalysis while minimizing their respective drawbacks. During the practical work, the use of emulsion polymers as photocatalysts proved particularly promising.
The photocatalytically active molecules are to be bound covalently to or into the polymer strand. A first subgoal of this work was therefore to synthesize prototypical catalyst molecules bearing an acrylate substituent that can react in a radical polymerization. Ruthenium polypyridine complexes were chosen as photocatalysts, being suited for both inter- and intramolecular photocatalytic production of hydrogen from water. For organocatalytic purposes, an L-proline derivative was synthesized, which, however, was not tested for its polymerizability.
In a first step, the prototypical 2,2'-bipyridine ligands were synthesized. In the course of this, an improved synthesis route for 4-bromo-2,2'-bipyridine was worked out. The functionalization was ultimately achieved by a Horner-Wadsworth-Emmons reaction following a one-pot synthesis of 4-formyl-2,2'-bipyridine. The prototypical photocatalysts showed moderate success (TON: 37-136, 6 h, 10% H2O, 470 nm) with respect to photocatalytic hydrogen production, so the corresponding catalytic systems should be improved at this point.
The polymerization reaction could be carried out for two intermolecular and two intramolecular photocatalysts. It was noticeable that the intermolecular photocatalysts polymerize better than the intramolecular ones, presumably owing to the solubility of the substances in the monomer ethyl methacrylate.
The photocatalytically functionalized emulsion polymers showed a photocatalytic activity (TON: 9-101, 6 h, 10% H2O, 470 nm) similar to that of the respective starting materials themselves. It was demonstrated, however, that emulsion polymers can be used as photocatalysts, even though further work is needed to optimize the systems.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting
existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions that we later refine: What knowledge resources can be systematically reused for recovering structured knowledge and how? What vocabulary already exists in literature that is used to express conceptual knowledge? How can we reuse the
online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying literature, ii.) mining Wikipedia, iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of vocabulary for describing software technology usage with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
The formulation of the decoding problem for linear block codes as an integer program (IP) with a rather tight linear programming (LP) relaxation has made a central part of channel coding accessible for the theory and methods of mathematical optimization, especially integer programming, polyhedral combinatorics and also algorithmic graph theory, since the important class of turbo codes exhibits an inherent graphical structure. We present several novel models, algorithms and theoretical results for error-correction decoding based on mathematical optimization. Our contribution includes a partly combinatorial LP decoder for turbo codes, a fast branch-and-cut algorithm for maximum-likelihood (ML) decoding of arbitrary binary linear codes, a theoretical analysis of the LP decoder's performance for 3-dimensional turbo codes, compact IP models for various heuristic algorithms as well as ML decoding in combination with higher-order modulation, and, finally, first steps towards an implementation of the LP decoder in specialized hardware. The scientific contributions are presented in the form of seven revised reprints of papers that appeared in peer-reviewed international journals or conference proceedings. They are accompanied by an extensive introductory part that reviews the basics of mathematical optimization, coding theory, and the previous results on LP decoding that we rely on afterwards.
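To make the decoding problem concrete, the sketch below performs brute-force ML decoding of the small [7,4] Hamming code; on a binary symmetric channel this reduces to minimum Hamming distance decoding. The generator matrix and received word are illustrative only, and the thesis solves exactly this problem at realistic scale via IP/LP techniques rather than enumeration.

```python
import numpy as np
from itertools import product

# Generator matrix of the [7,4] Hamming code (illustrative example;
# the thesis treats arbitrary binary linear codes and turbo codes).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def ml_decode(received):
    # On a binary symmetric channel, ML decoding is equivalent to finding
    # the codeword with minimum Hamming distance to the received word.
    best, best_dist = None, None
    for msg in product([0, 1], repeat=G.shape[0]):
        cw = np.mod(np.array(msg) @ G, 2)
        dist = int(np.sum(cw != received))
        if best_dist is None or dist < best_dist:
            best, best_dist = cw, dist
    return best

received = np.array([1, 0, 1, 1, 0, 1, 1])  # hypothetical noisy word
print(ml_decode(received))
```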
We consider variational discretization of three different optimal control problems.
The first is a parabolic optimal control problem governed by space-time measure controls. This problem has a nice sparsity structure, which motivates our aim to achieve maximal sparsity on the discrete level. Due to the measures on the right-hand side of the partial differential equation, we consider a very weak solution theory for the state equation and need an embedding into the continuous functions for the pairings to make sense. Furthermore, we employ Fenchel duality to formulate the predual problem and give results on the solution theory of both the predual and the primal problem. Later on, the duality is also helpful for the derivation of algorithms, since the predual problem can be differentiated twice so that we can apply a semismooth Newton method. We then retrieve the optimal control by duality relations.
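As a hedged sketch of the type of model problem meant here (the precise functional-analytic setting of the thesis differs in detail), one may think of a sparse measure-control problem of the following form, where u is the space-time measure control, y the state, y_d the desired state and alpha > 0 the weight of the sparsity-promoting measure-norm term:

```latex
\min_{u \in \mathcal{M}(\Omega_T)} \;
  \frac{1}{2}\,\lVert y - y_d \rVert_{L^2(\Omega_T)}^2
  + \alpha \,\lVert u \rVert_{\mathcal{M}(\Omega_T)}
\quad\text{subject to}\quad
\partial_t y - \Delta y = u \ \text{in } \Omega_T,\qquad
y = 0 \ \text{on } \partial\Omega \times (0,T),\qquad
y(\cdot,0) = 0 .
```

The measure norm plays the role of an L^1-type penalty and is what makes the optimal control concentrate on Dirac measures; Fenchel duality applied to this non-smooth term yields the twice differentiable predual problem mentioned above.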
For the state discretization we use a Petrov-Galerkin method employing piecewise constant states and piecewise linear and continuous test functions in time. For the space discretization we choose piecewise linear and continuous functions. As a result the controls are composed of Dirac measures in space-time, centered at points on the discrete space-time grid. We prove that the optimal discrete states and controls converge strongly in L^q and weakly-* in the space of measures M, respectively, to their smooth counterparts, where q ∈ (1, min{2, 1+2/d}] and d is the spatial dimension. The variational discrete version of the state equation with the above choice of spaces yields a Crank-Nicolson time stepping scheme with half a Rannacher smoothing step.
Furthermore, we compare our approach to a full discretization of the corresponding control problem, namely a discontinuous Galerkin method for the state discretization, where the discrete controls are piecewise constant in time and Dirac measures in space. Numerical experiments highlight the sparsity features of our discrete approach and verify the convergence results.
The second problem we analyze is a parabolic optimal control problem governed by bounded initial measure controls. Here, the cost functional consists of a tracking term corresponding to the observation of the state at final time. Instead of a regularization term for the control in the cost functional, we consider a bound on the measure norm of the initial control. As in the first problem we observe a sparsity structure, but here the control resides only in space at initial time, so we focus on the space discretization to achieve maximal sparsity of the control. Again, due to the initial measure in the partial differential equation, we rely on a very weak solution theory of the state equation.
We employ a dG(0) approximation of the state equation, i.e. we choose piecewise linear and continuous functions in space, which are piecewise constant in time, for our ansatz and test space. Then, the variational discretization of the problem together with the optimality conditions induces maximal discrete sparsity of the initial control, i.e. Dirac measures in space. We present numerical experiments to illustrate our approach and investigate the sparsity structure of the initial control.
As a third problem we choose an elliptic optimal control problem governed by functions of bounded variation (BV) in one space dimension. The cost functional consists of a tracking term for the state and a BV-seminorm in terms of the derivative of the control. We derive a sparsity structure for the derivative of the BV control. Additionally, we utilize the mixed formulation for the state equation.
A variational discretization approach with piecewise constant discretization of the state and piecewise linear and continuous discretization of the adjoint state yields that the derivative of the control is a sum of Dirac measures. Consequently the control is a piecewise constant function. Under a structural assumption we even get that the number of jumps of the control is finite. We prove error estimates for the variational discretization approach in combination with the mixed formulation of the state equation and confirm our findings in numerical experiments that display the convergence rate.
In summary, we confirm the use of variational discretization for optimal control problems with measure controls that exhibit inherent sparsity. We are able to preserve the sparsity on the discrete level without discretizing the control variable.
The increase in plastic particles (< 5 mm) in the environment is a global problem that correlates directly with the increasing production volume and variety. Through direct input (primary) or through the degradation of macroplastics (secondary), particles enter the environmental compartments water and/or soil via conventional material transport paths. The research and development work on the sustainable removal of microplastic particles (inert organic chemical stressors, IOCS) from wastewater is based on the construction of polymer inclusion compounds. IOCS describe a group of organic chemical molecules which demonstrate a high level of persistence upon entry into the ecosystem and whose degradation is limited.
Following the principle of Cloud Point Technology, a novel separation technique has been developed which induces particle growth in microplastics and, owing to the increase in volume, allows an easier separation from the water than the current state of the art. The concept for the sustainable removal of microplastics by Herbort and Schuhen is based on a three-step synthesis. This concept was further optimized as part of the research and adapted to the criteria of resource efficiency and profitability. The fundamental research is premised on the hypothesis that short-range van der Waals forces and localized hydrophobic interactions between precursors and/or material and the IOCS to be bound can induce fixation through the formation of an inclusion compound with particle growth. Through the addition of silicon-based, ecotoxicologically irrelevant coagulation and inclusion units, it is possible to initiate molecular self-organization with the hydrophobic stressors in a water-induced aggregation process. This results in adhesive particle growth around and between the polymer particles. Subsequently, the polymer extract can be separated from aquatic media through simple and cost-effective filtration processes (e.g. sand trap, grease trap), since the microplastic agglomerates have an up to 10,000 times larger volume.
Water scarcity is already an omnipresent problem in many parts of the world, especially in sub-Saharan Africa. The dry years 2018 and 2019 showed that water resources are finite in Germany as well. Projections and predictions for the next decades indicate that renewal rates of existing water resources will decline due to the growing influence of climate change, but that water extraction rates will increase due to population growth. It is therefore important to find alternative and sustainable methods to make optimal use of the water resources currently available. For this reason, the reuse of treated wastewater for irrigation and recharge purposes has become one focus of scientific research in this field. However, it must be taken into account that wastewater contains so-called micropollutants, i.e., substances of anthropogenic origin. These are, e.g., pharmaceuticals, pesticides and industrial chemicals which enter the wastewater, but also metabolites that are formed in the human body from pharmaceuticals or personal care products. Through the treatment in wastewater treatment plants (WWTPs) as well as through chemical, biological and physical processes in the soil passage during the reuse of water, these micropollutants are transformed to new substances, known as transformation products (TPs), which further broaden the number of contaminants that can be detected within the whole water cycle.
Despite the fact that the presence of human metabolites and environmental TPs in untreated and treated wastewater has been known for many years, they are rarely included in common routine analysis methods. Therefore, a first goal of this thesis was the development of an analysis method based on liquid chromatography - tandem mass spectrometry (LC-MS/MS) that covers a broad spectrum of frequently detected micropollutants including their known metabolites and TPs. The developed multi-residue analysis method contained a total of 80 precursor micropollutants and 74 metabolites and TPs of different substance classes. The method was validated for the analysis of different water matrices (WWTP influent and effluent, surface water and groundwater from a bank filtration site). The influence of the MS parameters on the quality of the analysis data was studied. Despite the high number of analytes, a sufficient number of data points per peak was maintained, ensuring a high sensitivity and precision as well as a good recovery for all matrices. The selection of the analytes proved to be relevant, as 95% of the selected micropollutants were detected in at least one sample. Several micropollutants were quantified that are not in the focus of other current multi-residue analysis methods (e.g. oxypurinol). The relevance of including metabolites and TPs was demonstrated by the frequent detection of, e.g., clopidogrel acid and valsartan acid at higher concentrations than their precursors, the latter even being detected in samples of bank filtrate water.
By integrating metabolites, which are produced in the body by biological processes, as well as biological and chemical TPs, the multi-residue analysis method is also suitable for elucidating degradation mechanisms in treatment systems for water reuse that, e.g., use a soil passage for further treatment. In the second part of the thesis, samples from two treatment systems based on natural processes were analysed: a pilot-scale above-ground sequential biofiltration system (SBF) and a full-scale soil aquifer treatment (SAT) site. In the SBF system mainly biological degradation was observed, which was clearly demonstrated by the detection of biological TPs after the treatment. The efficiency of the degradation was improved by an intermediate aeration, which created oxic conditions in the upper layer of the following soil passage. In the SAT system a combination of biodegradation and sorption processes occurred. The different behaviour of some biodegradable micropollutants compared to the SBF system revealed the influence of redox conditions and the microbial community. An advantage of the SAT system over the SBF system was found in the sorption capacity of the natural soil. Especially positively charged micropollutants showed attenuation due to ionic interactions with negatively charged soil particles. Based on the physicochemical properties at ambient pH, the degree of removal in the investigated systems and the occurrence in the source water, a selection of process-based indicator substances was proposed.
Within the first two parts of this thesis, a micropollutant was frequently detected at elevated concentrations in WWTP effluents which was not previously in the focus of environmental research: the antidiabetic drug sitagliptin (STG). STG showed low degradability in biological systems, and thus it was investigated to what extent chemical treatment by ozonation can ensure its attenuation. STG contains an aliphatic primary amine as the principal point of attack for the ozone molecule. There is only limited information about the behaviour of this functional group during ozonation, and thus STG served as an example for other micropollutants containing aliphatic primary amines. A pH-dependent degradation kinetic was observed due to the protonation of the primary amine at lower pH values. At pH values in the range 6 - 8, which is typical for the environment and for WWTPs, STG showed second-order rate constants in the range of 10^3 M^-1 s^-1 and thus belongs to the group of readily degradable substances. However, complete degradation can only be expected at significantly higher pH values (> 9). The transformation of the primary amine moiety into a nitro group was observed as the major degradation mechanism for STG during ozonation. Other mechanisms involved the formation of a diketone, bond breakages and the formation of trifluoroacetic acid (TFA). Investigations at a pilot-scale ozonation plant using the effluent of the biological treatment of a municipal WWTP as source water confirmed the results of the laboratory studies: STG could not be removed completely even at high ozone doses, and the nitro compound was formed as the main TP and remained stable during further ozonation and subsequent biological treatment. It can therefore be assumed that under realistic conditions both a residual concentration of STG and the formed main TP, as well as other stable TPs such as TFA, can be detected in the effluents of a WWTP consisting of conventional biological treatment followed by ozonation and subsequent biological polishing steps.
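For orientation, the quoted rate constant can be translated into time scales with the standard second-order rate law; the ozone concentration used below is a purely hypothetical value chosen for illustration, not a result of the thesis:

```latex
-\frac{d[\mathrm{STG}]}{dt} = k_{\mathrm{O_3}}\,[\mathrm{O_3}]\,[\mathrm{STG}],
\qquad
t_{1/2} = \frac{\ln 2}{k_{\mathrm{O_3}}\,[\mathrm{O_3}]} .
```

With k_O3 ≈ 10^3 M^-1 s^-1 and an assumed constant ozone concentration of 10^-5 M, this gives t_1/2 ≈ 0.693/(10^3 · 10^-5) s ≈ 69 s, consistent with the classification as readily degradable.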
Microbial pollution of surface waters poses substantial risks for public health, amongst others during recreational use. Microbial pollution was studied at selected sampling sites in the rivers Rhine, Moselle and Lahn (Germany) on the basis of commonly used fecal indicator organisms (FIO) indicating bacterial (Escherichia coli, intestinal enterococci) and viral (somatic coliphages) fecal contamination. In addition, blaCTX-M antibiotic resistance genes (ARG) were quantified at two sites in the river Lahn and were used as markers for tracking the spread of antibiotic resistance in the aquatic environment. The impact of changes in climate-related parameters on FIO was examined by studying monitoring results of contrasting flow conditions at the rivers Rhine and Moselle. Analyses at all studied river sites clearly indicate that high discharge and precipitation enhance the influx of FIO, ARG and thus potentially (antibiotic-resistant) pathogens into rivers. In contrast, a decrease in hygienic microbial pollution was observed under high solar irradiation and increasing water temperatures. Based on identified contributing key factors, multiple linear regression (MLR) models for five sites along a stretch of the river Lahn were established that allow a timely assessment of fecal indicator abundances. An interaction between abiotic and biotic factors (i.e. enhanced grazing pressure) considerably contributed to the formation of seasonal patterns among FIO abundances. This was enhanced during extraordinary low-flow conditions in rivers with pronounced trophic interactions, clearly hampering a transfer of model approaches between rivers of different biological and hydrological characteristics. Bacterial indicators were more strongly influenced by grazing pressure than phages. Hence, bacterial indicators alone do not sufficiently describe viral pollution in rivers. blaCTX-M genes were omnipresent in Lahn river water and corresponded to distribution patterns of FIO, indicating fecal sources. Agriculture and wastewater treatment plant effluents contributed to ARG loads, and participants in non-bathing water sports were found to be at risk of ingesting antibiotic-resistant bacteria (ARB) including ARG, bearing the risk of infection or colonization. Results of the present study highlight the need to be aware of such risks not only in designated bathing waters. ARG abundance at both riverine sampling sites could largely be explained by E. coli abundance and may thus also be incorporated into multiple regression models using E. coli-specific environmental predictors. It can be expected that the frequency of short-term microbial pollution events will increase over the next decades due to climate change. Several challenges were identified with regard to the implementation of early warning systems to protect the public from exposure to pathogens in rivers. Most importantly, the concept of the Bathing Water Directive (Directive 2006/7/EC) itself as well as the lack of harmonization in the regulatory framework at European Union (EU) level are major drawbacks and require future adjustments to reliably manage health risks related to microbial water pollution in waters used in multifunctional ways.
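A minimal sketch of the kind of MLR model described above, with entirely made-up numbers and a hypothetical predictor set (discharge, precipitation, solar irradiation, water temperature) taken from the key factors named in the text; the thesis models were built from real monitoring data and site-specific predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monitoring table: rows = sampling dates at one Lahn site.
# Columns follow the key factors named above; all values are made up.
X = np.array([
    # discharge [m3/s], precip [mm/48h], irradiation [W/m2], water temp [°C]
    [12.0,  0.0, 250.0, 18.5],
    [45.0, 22.0,  80.0, 14.0],
    [30.0,  8.0, 120.0, 15.5],
    [10.0,  0.0, 300.0, 21.0],
])
# log10-transformed E. coli concentrations [MPN/100 mL], also made up.
y = np.log10([800.0, 12000.0, 4500.0, 500.0])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)         # sign pattern should mirror the text:
print("prediction:", model.predict(X[:1]))  # + for discharge/precip, - for sun/temp
```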
In this thesis we examined the question whether personality traits of early child care workers influence process quality in preschool.
Research has shown that in educational settings such as preschool, pedagogical quality affects children’s developmental outcome (e.g. NICHD, 2002; Peisner-Feinberg et al., 1999). A substantial part of pedagogical quality known to be vital in this respect is the interaction between teacher and children (e.g., Tietze, 2008). Results of prior classroom research indicate that the teachers’ personality might be an important factor for good teacher-child interaction (Mayr, 2011). Thus, personality traits might play a vital role for the interaction in preschool. Therefore, the aims of this thesis were to a) identify pivotal personality traits of child care workers, b) assess ideal levels of the identified personality traits and c) examine the relationship between pivotal personality traits and process quality. On that account, we conducted two requirement analyses and a video study. The results of these studies showed that subject matter experts (parents, child care workers, lecturers) partly agreed as to which personality traits are pivotal for child care workers. The experts also showed high consensus with regard to the minimum, ideal and maximum personality trait profiles. Furthermore, child care workers whose profiles lay closer to the experts’ ideal also showed higher process quality. In addition, regression analyses showed that the child care workers’ levels of the Big Two (Communion and Agency) related significantly to their process quality.
For a comprehensive understanding of evolutionary processes and for providing reliable prognoses about the future consequences of environmental change, it is essential to reveal the genetic basis underlying adaptive responses. The importance of this goal increases in light of ongoing climate change, which confronts organisms worldwide with new selection pressures and requires rapid evolutionary change to avoid local extinction. Freshwater ectotherms such as daphnids are particularly threatened by this. Unraveling the genetic basis of local adaptation is complicated by the interplay of forces affecting patterns of genetic divergence among populations. Due to their key position in freshwater communities, their cyclic parthenogenetic mode of reproduction and their resting propagules (which form biological archives), daphnids are particularly suited for this purpose.
The aim of this thesis was to assess the impact of local thermal selection on the Daphnia longispina complex and to reveal the underlying genetic loci. Therefore, I compared genetic differentiation among populations containing Daphnia galeata, Daphnia longispina and their interspecific hybrids across time, space, and species boundaries. I revealed strongly contrasting patterns of genetic differentiation between selectively neutral and functional candidate gene markers, between the two species, and among samples from different lakes, suggesting (together with a correlation with habitat temperatures) local thermal selection acting on the candidate gene TRY5F and indicating adaptive introgression. To reveal the candidate genes’ impact on fitness, I performed association analyses among data on genotypes and phenotypic traits of D. galeata clones from seven populations. The tests revealed a general temperature effect as well as inter-population differences in phenotypic traits and implied a possible contribution of the candidate genes to life-history traits. Finally, utilizing a combined population transcriptomic and reverse ecology approach, I introduced a methodology with a wide range of applications in evolutionary biology and revealed that local thermal selection was probably a minor force in shaping sequence and gene expression divergence among four D. galeata populations, but contributed to sequence divergence between two populations. I identified many transcripts possibly under selection or contributing strongly to population divergence, a large proportion thereof putatively under local thermal selection, and showed that genetic and gene expression variation is not depleted specifically in temperature-related candidate genes.
In conclusion, I detected signs of local adaptation in the D. longispina complex across space, time, and species barriers. Populations and species remained genetically divergent, although increased gene flow possibly contributed, together with genotypes recruited from the resting egg bank, to the maintenance of standing genetic variation. Further work is required to accurately determine the influence of introgression and the effects of candidate genes on individual fitness. While I found no evidence suggesting a response to intense local thermal selection, the high resilience and adaptive potential regarding environmental change I observed suggest positive future prospects for the populations of the D. longispina complex. However, overall, due to the continuing environmental degradation, daphnids and other aquatic invertebrates remain vulnerable and threatened.
Purpose: Leader-follower communication plays an important role in all organizations. However, research on school leadership has so far not focused on communication competency (CC) and its impact on school quality (e.g. on teachers' commitment or job satisfaction); there is a striking lack of theory and empirical data. Therefore, this paper describes the development and validation of a standardized measure for the CC of school principals. It is based on a theoretical framework with LMX as a communication-based approach to leadership and an interpersonal communication process model. Design/Methodology: CC was conceptualized as a 3-dimensional construct. Five studies were conducted with German teachers and school principals from all types of schools (total N=1649): selection of items via expert ratings, exploring the dimensionality, cross-validation of the structure, test-retest reliability, external validity. Results: Analyses led to a 3-factor solution - Altercentrism, Knowledge and Willingness to Communicate, Self-Regulation - with 14 items and good internal consistencies. CFA, exploring the nomological network, and invariance measurement supported construct validity. Limitations: Empirical studies still have to be conducted to evaluate the application in leader-follower contexts other than school. Research/Practical Implications: The measure and the theoretical framework can be applied for analyzing principal-teacher interactions in detail and for detecting problems in communication processes. The measure can also be used for feedback processes and for demand-oriented trainings. Originality/Value: For the first time, this paper explicitly combines the concepts of CC and leadership in schools and provides a measure within a theoretical framework to assess principals' CC. Such a valid instrument is the prerequisite for further investigating the importance of CC in the context of (school) leadership.
Schule der Phantasie
(2007)
This thesis pursues three aspects:
1. An account of the development of the Schule der Phantasie. Here the material is gathered from direct observation and from official and semi-official sources and placed in a chronological and logical context. This is the first attempt at a comprehensive account that is both historical and takes stock of the status quo, as far as the decidedly patchy source material permits.
2. In developing a theory of the Schule der Phantasie, two main strands are pursued: first, a foundation is to be built from its constitutive factors; second, didactic consequences are to be derived from it. The aim is less a didactic manual introduced 'through the back door', as it were; rather, it is about the justification, or justifiability, of an attitude that reveals itself in many observations of the Schule der Phantasie. The basis for this, however, is not a value-free quantitative empirical survey of all existing and observable phenomena from which a theory is then derived. Instead, the point is to recognize qualities, to order them as idealized types of evidence and to place them in a meaningful context. This always involves a pre-understanding and a value judgement.
3. The look into a possible future of the Schule der Phantasie is directed at perspectives within and outside our existing school system, which, unless stated otherwise, is always taken to be the Bavarian one. In order to draw on experience and not to reinvent the wheel, aspects of comparable pedagogical and art-educational institutions are also compared and consulted.
The amount of information on the Web is constantly increasing, and there is also a wide variety of information available, such as news, encyclopedia articles, statistics, survey data, stock information, events, bibliographies etc. The information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity and quality, and by its distributed nature. The two primary techniques by which users on the Web look for information are (1) using Web search engines and (2) browsing the links between information. The dominant mode of information presentation is mainly static, in the form of text, images and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large and complex types of information, and (2) large amounts of data can be explored interactively using their attributes, which can support and expand the cognition process of the user. So far, interactive visualizations are still not an integral part of the search process on the Web. The technical standards and interaction paradigms required to make interactive visualization usable by the masses are introduced only slowly by standardization organizations. This work examines how interactive visualizations can be used for the linking and search process of heterogeneous information on the Web. Based on principles from the areas of information retrieval (IR), information visualization and information processing, a model is created which extends the existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application. In four different application scenarios, aspects of the model are instantiated and evaluated in user tests or examined by examples.
Problem solving is an essential (cognitive) activity in everyday life as well as in professional life. This ability is already demanded and fostered in primary school, for example when solving mathematical word problems. Nevertheless, word problems continue to cause students across grade levels considerable difficulties. Solving word problems requires a multitude of cognitive operations, especially when the problems are non-routine. When working on word problems, students can draw on different representations, which can be distinguished as internal and external as well as depictive (picture-like) and descriptive (language-like). Since there is hardly any empirical research on how students deal with challenging word problems, and since very little is known about the self-generated representations used in working on word problems, the present study is partly exploratory in design. It focuses on the spontaneous and individual solution processes of students of different grade levels when working on challenging word problems, primarily investigating which forms of representation students draw on during their solution processes.
A total of 268 students of different grade levels (third and fourth graders from primary school; sixth and ninth graders from a Gymnasium) were asked in individual sessions to solve five challenging word problems each. These problems were originally developed for primary school, yet do not seem to lose their demanding character for students of higher grades. Various materials were available to the individual students for solving the problems. The experimenters provided no assistance whatsoever. After each problem, the students were questioned about their approach in a semi-structured interview. Each student's individual approach was recorded on video.
The video material obtained was quantified by three trained observers using a self-developed coding system and finally analysed statistically.
In general, all students had difficulties working on the five word problems; not a single student solved all five correctly. Nevertheless, the number of correct solutions increased significantly with grade level, although the gains differed in size depending on the respective problem and its demands.
With regard to the self-generated forms of representation, the following results emerged, among others. First, the primary school students, in contrast to the Gymnasium students, operated very strongly in an internal descriptive mode, i.e., they attempted to master the entire problems by mental arithmetic. Second, students of higher grade levels drew more readily on different forms of representation, indicating a more flexible handling of them. Third, the use of multiple representations in constructing the mathematical model does not seem to lead to a correct solution per se, but it prevents, regardless of grade level, a completely wrong treatment or solution of the problem. With respect to the solution approach, no clear result could be obtained: depending on the demands of the problem (and probably also on individual abilities), holistic and decomposing approaches appear to be differently suited with regard to success. Subsequent studies must verify the results reported here. The results suggest, however, that there is still need for action concerning students' handling of demanding word problems. The emphasis placed in mathematics instruction on descriptive forms of representation in the sense of calculations and equations seems questionable. Rather, a school culture should be established that regards the flexible use of different forms of representation, and in particular the use of depictive representations, as legitimate and indeed necessary.
In this work, the influence of non-aqueous milling liquids on metal-ceramic powder mixtures in the wet milling process was examined. Al-Al₂O₃ and Cr-Al₂O₃ powder mixtures were selected in order to study the influence of the milling liquids on the milling of metal-ceramic powder mixtures with a ductile (aluminium) and a brittle (chromium) metal component.
The study deals with the educational aspiration and the transmission of values in families of Iranian origin in Germany. The aim is to determine what the families' educational aspiration can be traced back to. The case study focuses on three families who immigrated to Germany from their country of origin after the Islamic Revolution of 1979. The survey takes a two-generation perspective within each family: individual interviews were conducted with one member of the parents' generation and one member of the children's generation. In this way, jointly experienced situations, such as flight or participation in the German education system, and concepts such as education or educational success, are described and defined from each perspective. In addition, the interviewees provide information about the grandparents' generation, so that the data material as a whole covers three generations. This qualitative exploration is preceded by a nationwide survey in which 70 persons of Iranian origin participated. The work is furthermore characterized by a resource-oriented research approach. The data collected answer the central questions:
What are the causes of the educational aspiration of immigrant families of Iranian origin in Germany?
What role do the languages of the country of origin and the country of immigration play in the transmission of values? The study places the interviewees' individual educational histories within the scholarly discourse on educational aspiration and on education, migration and multilingualism research. It rests on an interdisciplinary and thus multi-perspective theoretical construct, characterized by a research approach that spans history, countries and generations. A central finding is that an understanding of historical developments in the country of origin is required to comprehend educational behaviour in the context of families of Iranian origin. Only by contextualizing individual and societal dimensions can it be understood why, in the case of this minority group, low economic capital, resulting from the parents' occupational disruptions after their flight to the country of immigration, has no negative effect on the development of the cultural capital of subsequent generations. The study thus shows that, for the families interviewed, common explanatory models, such as the correlation between low economic capital and low cultural capital, do not apply; rather, cultural capital is passed on from generation to generation. A further result of the study is the three-level model developed in it, which comprises historical-societal, political and familial developments and attitudes at the micro, meso and macro levels. This provides a basis for explaining the transmission behaviour of other immigrant groups.
Refractory dry-vibratable mixes, which consist of a mineral filling material and an organic or inorganic binder system, are widely used for linings in industrial aggregates where a very high temperature resistance is required (e.g. in the steel industry). During lining, all compounds are mixed and hardening is initiated chemically or thermally. The time span required for hardening is of special relevance for the application of refractory dry-vibratable mixes: it should be long enough for adequate processability, but at the same time avoid overly long downtimes. Prediction or regulation of the hardening time, necessary for ideal processing, is currently limited. On the one hand, this is a result of the lack of an appropriate method for the time-dependent determination of the hardening process. On the other hand, the mechanisms responsible for this very complex process have not yet been investigated in detail, and the effect of influencing factors, such as the temperature or the composition of the refractory dry-vibratable mixes, is poorly documented.
To contribute to the understanding of the hardening mechanism of refractory dry-vibratable mixes, the aim of the present work was to develop an appropriate test method for the time-dependent investigation of this process. This was realized by means of dynamic-mechanical analysis. In addition, the hardening mechanism was described for a refractory dry-vibratable mix with a binder system consisting of a waterglass and a phosphate hardener (AlPO4 and BPO4), using supplementary gravimetric investigations and determining the solubility behavior of the phosphates. By means of X-ray diffraction analysis, nuclear magnetic resonance spectroscopy and scanning electron microscopy, the impact of the hardening mechanism on the crystalline and amorphous structure was studied. It could be shown that, depending on the phosphate used, hardening leads to network structures that differ in their cross-linking density. These structural characteristics correlate with the speed of the hardening reactions. In addition, the impact on selected properties (thermal linear deformation, temperature-dependent phase development and phase transition) could be deduced.
Studies in recent years have demonstrated that adolescents and young adults have a deficient data protection competence; however, children and adolescents between the ages of ten and 13 were mostly not the focus of these studies. Therefore, the guiding question of this work is how data protection competence is developed in children and adolescents at a young age, in order to be able to infer suitable educational concepts for this age group. At the beginning of the work, a data protection competence model is derived from a media competence model, which serves as the basis for the subsequent field investigation. A survey was carried out at general secondary schools in Rhineland-Palatinate, which shows that the respondents have a sufficiently developed Risk Assessment Competence, but insufficiently developed Knowledge, Selection and Usage Competence and Implementation Competence. Recommendations for action are given in the last part of the work, containing learning goal descriptions that could be implemented in an educational framework, in order to address this issue.
The largest population of the anadromous Allis shad (A. alosa) of the 19th century was found in the River Rhine and has to be considered extinct today. To facilitate the return of A. alosa to the River Rhine, an EU LIFE project was initiated in 2007. The overall objective of this thesis was to assist aquaculture and stocking measures at the River Rhine, as well as to support the restoration and conservation of populations of Allis shad in Europe.
By culturing the free-swimming nematode T. aceti in a solution of cider vinegar, we developed a cost-effective live food organism for the larviculture of fish. As indicated by experiments with C. maraena, T. aceti cannot be regarded as an alternative to Artemia nauplii. However, it has to be considered a suitable supplemental feed in the early rearing of C. maraena, providing essential fatty acids and thereby optimizing growth.
Mass-marking practices with oxytetracycline, as applied in the restocking of Allis shad, were also evaluated. In experiments with D. rerio we demonstrated that water hardness can detrimentally affect mortality during marking and has to be considered crucial in the development of marking protocols for freshwater fish.
In order to become independent of wild spawners, an ex-situ broodstock facility for Allis shad was established in 2011. Upon examination of two complete year classes of this broodstock, we found a high prevalence of various malformations, which could be traced back to distinct cysts developing one month post hatch. Despite applying a variety of clinical tests, we could not identify any infectious agents causing these malformations. The observed malformations are probably a consequence of suboptimal feeding practices or of the physico-chemical rearing environment.
The decline of stocks of A. alosa in Europe has largely been explained by the increase of river temperatures as a consequence of global warming. By investigating the temperature physiology of larval Allis shad, we demonstrated that A. alosa ranges among the most thermo-tolerant species in Europe and that correlations between rising temperatures and the disappearance of this species have to be understood in a synecological context, integrating a variety of stressors other than temperature. By capturing and examining juvenile and adult Allis shad from the River Rhine, we demonstrated the first natural reproduction of A. alosa in the River Rhine in nearly 100 years and the success of stocking measures within the framework of the LIFE project.
Within this thesis, time-evaluated predicate/transition nets (t-pr/t-nets) have been developed for the purpose of modelling, simulating and verifying complex real-time systems. To this end, t-pr/t-nets integrate concepts to model timing constraints and can be analysed by means of structural analysis, such as the calculation of s- and t-invariants as well as the identification of traps and co-traps. The applicability of t-pr/t-nets to model, simulate and verify complex systems in the domain of safety-critical real-time systems is demonstrated for the Earliest-Deadline-First protocol (EDF) and the Priority-Inheritance protocol (PIP): both protocols are modelled as t-pr/t-nets, and the resulting nets are verified using structural analysis methods. Given the considerable complexity of the EDF and PIP, the applicability of structural analysis methods to their verification shows that t-pr/t-nets are appropriate for modelling, simulating and verifying complex systems in the field of safety-critical real-time systems.
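For readers unfamiliar with the verified protocols, the sketch below traces the EDF rule itself (always run the ready job with the earliest absolute deadline) for two hypothetical periodic tasks. It is a plain procedural illustration of the scheduling behaviour the t-pr/t-nets formalize, not the net-based model of the thesis.

```python
import heapq

def edf_schedule(tasks, horizon):
    """Trace EDF scheduling of periodic tasks (assuming period == deadline).

    tasks: list of (name, period, wcet) tuples; a minimal illustration of
    the protocol modelled by the t-pr/t-nets, not the nets themselves.
    """
    ready = []   # heap of (absolute_deadline, name, remaining_time)
    trace = []
    for t in range(horizon):
        for name, period, wcet in tasks:
            if t % period == 0:  # a new job of this task is released
                heapq.heappush(ready, (t + period, name, wcet))
        if ready:
            deadline, name, remaining = heapq.heappop(ready)
            trace.append(name)   # run the earliest-deadline job for one tick
            if remaining > 1:
                heapq.heappush(ready, (deadline, name, remaining - 1))
        else:
            trace.append("idle")
    return trace

print(edf_schedule([("T1", 4, 1), ("T2", 5, 2)], 20))
```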
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers’ everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results: statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose heterogeneous analyses, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute these improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision counts in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method by benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them, and making the implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to accompany their work with 'fake' simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats and the data analysis methodologies used. We show that this kind of simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
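A toy sketch of the incrementalization idea, assuming hypothetical commit records: if the per-revision abstraction maps into a monoid (an associative merge with a neutral element), the aggregate over a repository can be updated revision by revision instead of being recomputed from scratch, which is the algebraic core of the scalability method described above.

```python
from collections import Counter

def summarize(commit):
    # Per-revision abstraction: map one commit to a summary value.
    # The commit dict is hypothetical; a real miner would parse a repository.
    return Counter(commit["touched_languages"])

def merge(a, b):
    # Monoid merge: associative combine; the empty Counter is the neutral element.
    return a + b

class IncrementalMiner:
    """Caches the running aggregate so each new revision costs one merge."""
    def __init__(self):
        self.aggregate = Counter()

    def add_revision(self, commit):
        self.aggregate = merge(self.aggregate, summarize(commit))

miner = IncrementalMiner()
miner.add_revision({"touched_languages": ["java", "xml"]})
miner.add_revision({"touched_languages": ["java"]})
print(miner.aggregate)  # Counter({'java': 2, 'xml': 1})
```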
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible for the sensors. Algorithms are tested and validated on different autonomous robot platforms and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
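A minimal sketch of Gibbs sampling on a Potts-style Markov random field over a terrain grid, with random numbers standing in for the real per-cell features; the thesis parallelizes such sweeps on CPU/GPU and uses far richer unary terms, so everything below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, LABELS = 8, 8, 3                  # terrain grid and number of classes
unary = rng.random((H, W, LABELS))      # hypothetical per-cell class scores
BETA = 1.5                              # smoothness weight of the Potts prior

labels = rng.integers(0, LABELS, (H, W))

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield i + di, j + dj

for _ in range(50):                     # Gibbs sweeps over the grid
    for i in range(H):
        for j in range(W):
            # Energy of each label: data term plus Potts disagreement term.
            energy = -unary[i, j].copy()
            for ni, nj in neighbors(i, j):
                for l in range(LABELS):
                    if labels[ni, nj] != l:
                        energy[l] += BETA
            # Sample the cell label from its conditional distribution.
            p = np.exp(-(energy - energy.min()))
            labels[i, j] = rng.choice(LABELS, p=p / p.sum())

print(labels)
```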
In the transition to dual vocational training, adolescents and young adults with a migration background have poorer chances of entering company-based vocational training. A growing need for research is being articulated that focuses on how the corresponding selection decisions are made within companies. The dissertation takes up this question and, by means of a biographical approach, deals with the personnel decision-makers in the companies, the so-called gatekeepers. The central question is whether and how biographical experiences matter for selection decisions in favour of young immigrants.
Manmade dams have been constructed for centuries for multiple purposes, and in the past decades they have been built at a fast pace, with the hotspots in tropical and subtropical regions. However, studies that explore hydrodynamics in these regions are scarce, and the available knowledge is biased toward the rich literature for temperate regions. Lakes and reservoirs share the same controlling mechanisms for physical processes and primary production; hence, analyses that were initially conceptualized for lakes are frequently applied to reservoirs. Nevertheless, longitudinal gradients in reservoirs challenge the application of these approaches.
Degradation of water quality in reservoirs is a major concern, and it is expected to be aggravated by climate change. Therefore, studies that explore the mechanisms controlling water quality are essential for the maintenance of these systems, especially in tropical and subtropical regions. The aim of this thesis is to comprehend the role of hydrodynamic processes in the fate of nutrients in reservoirs and its implications for water quality in a subtropical region, with a focus on the relevance of different density current patterns. To this end, analyses combining field measurements and numerical simulations were performed in a small-to-medium-sized subtropical drinking water reservoir over a complete seasonal cycle. Measurements were conducted combining several approaches: traditional sampling, sensors with high temporal and spatial resolution, and remote sensing. In addition, hydrodynamic models were set up and calibrated to reproduce the observations and to simulate scenarios that assisted in the analysis.
Results showed that the different flow paths of density currents did not influence phytoplankton dynamics. In the regions where the main nutrient supply was the river inflow (upstream), the density currents did not vary, the euphotic zone usually covered the entire depth, and vertical mixing was observed on a daily basis, rendering the flow path of the density currents irrelevant. In the downstream regions, the remobilization of nutrients from the sediment was the main source for primary production. Even though density currents had a seasonal pattern in the downstream region, thermal stratification conditions drove the variations in chlorophyll-a concentrations, with peaks after vertical mixing. This mechanism was favoured by the frequent anoxic conditions in the hypolimnion, which enhanced the dissolution of reactive phosphorus from the sediment. Anoxic conditions were reached easily because the sediment in the downstream area was rich in organic matter. Phytoplankton produced in the upstream area was transported by the density currents, and for this reason large concentrations of chlorophyll-a were observed below the euphotic zone. Furthermore, the extensive measurements of temperature and flow velocities, together with the hydrodynamic models, provided insights into the hydrodynamics of reservoirs: the relevant processes occurred along the longitudinal axis, and mixing conditions varied along it. The relevance of inflow conditions, regarding the presence of structures such as forebays and pre-dams and the degree of stream shading in the catchment, was assessed. Turbulence and internal waves showed features different from those documented for high latitudes. These findings can assist in the management of reservoirs, based on the comprehension of the physical processes.
In a randomized controlled design, the efficacy of two outpatient psychological tinnitus therapies (a 12-week psychological tinnitus therapy (PTT) and a 4-week brief psychological tinnitus therapy (KPTT)) was compared. Efficacy was assessed by means of the following parameters: tinnitus distress, subjective loudness, duration of tinnitus perception, tinnitus annoyance, coping ability, mood, depressiveness, sleep and locus of control. Ninety patients with chronic tinnitus took part in the study and were randomly assigned to the PTT, the KPTT or a waiting-list control group (WKG). Data were collected at three measurement points: before the start of therapy, after the end of therapy, and 6 months after the end of therapy. Both interventions lead to a significant reduction in tinnitus distress, subjectively perceived loudness, duration of tinnitus perception and tinnitus annoyance. Furthermore, both therapies lead to the learning and application of effective coping strategies and thus also to an increase in perceived control. All reported improvements remain stable 6 months after the end of therapy. Since no significant changes were observed in the WKG after 4 and after 12 weeks, the changes in the therapy groups on the above-mentioned variables can be regarded as therapy effects. Neither intervention leads to a significant change in sleep or mood. Tinnitus duration emerged as the only significant predictor of therapy success: while the PTT reduces tinnitus distress irrespective of tinnitus duration, the KPTT is the more effective the shorter the tinnitus duration. The presence of a comorbid mental disorder, the initial distress level and the readiness for change do not influence therapy efficacy. Since the two outpatient psychological tinnitus therapies differ neither in their efficacy nor in their long-term effects, it can be concluded that both are well suited for the outpatient psychological treatment of patients with chronic tinnitus. Owing to its lower time and cost requirements, however, the shorter KPTT can be rated as more efficient.
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and significant driver of employment. Yet the literature suggests that creative workers are among the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success is believed to be the byproduct of individual ability and commitment rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that can deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods.
By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequality among cohorts of scientists who started their careers at different points in time. Furthermore, structural inequality in the music industry is assessed by analysing the social and cultural relations that emerge from live performances and music releases. The findings hint at the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality in creative careers, such as the Matthew Effect and the Hipster Paradox. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
The term Gebrauchsgrafik (commercial graphic art) is essentially a collective term for those graphic works which, in contrast to free graphic art, take part in the deliberate artistic shaping and design of practical tasks. Around 1900, the main field of activity of commercial graphic art as a purpose-bound form of artistic expression was advertising in the form of posters, advertisements, packaging, company logos, and job printing. All of these brought the versatile formal repertoire of Art Nouveau (Jugendstil) to a broad public. As a new medium around 1900, commercial graphic art is closely connected with the technical progress of the industrial revolution: in the face of increasing competition through the mass production of consumer goods, artistically appealing advertising promised competitive advantages. Thanks to the invention of lithography around 1799, commercial graphic art was widely available and affordable. It fitted into the concept of Art Nouveau as a medium that helps shape all areas of life, acting as an interface between commerce, art, and culture. Three different sectors that remain important for the region to this day are examined as examples: tourism, viticulture, and industry. In addition, Palatine Art Nouveau artists who worked in the field of commercial graphic art are presented. A representative number of commercial graphic works of Art Nouveau in various forms can thus be documented for the Palatinate. As a remote Bavarian province, it took note of the currents of the time and implemented them in its own way. An independent Palatine Art Nouveau, however, did not develop.
The integration potential of sport, using the specific example of wheelchair basketball, by Simone Janda. "Integration is a permanent task. In this respect, there can be no goal in the sense of a final result. Rather, the aim is to understand ever better the processes that lead to successful integration, to develop them further and, above all, to adapt them to constantly changing conditions" (DOSB 2009: 5).
Despite the traditional attribution of socially integrative functions to sporting activity, there are considerable empirical as well as theoretical deficits (cf. Hoffmann 2002), which are addressed in the present work. A differentiated analysis considers various aspects of integration and the conditions under which integrative sporting activity takes place.
Since sport is credited with a special integrative function, and since these social processes take place above all in groups, this work examines the potential of a team sport that is equally accessible to people with and without disabilities: wheelchair basketball. In addition to the physical exertion, social factors come into play here to a particular degree; they can strengthen the sense of community within the team and integrate individuals into a social structure.
From the problem outlined above, the following overarching research question arises: What constitutes the integration potential of a sport (using the example of wheelchair basketball)? This raises the question of which conditions a sport must fulfil in order to be practised integratively, and which difficulties and hurdles must be overcome in order to do equal justice to people with and without disabilities and to meet their specific needs. In this study, data were collected through a combination of qualitative (interviews) and quantitative (questionnaire) research methods. Linking different research methods makes it possible to capture measurements from different angles and individual perspectives as completely as possible. The benefit for science lies in producing an up-to-date analysis of the situation in the form of a model, in order to further develop and improve the constantly changing framework conditions and to offer people with disabilities a maximally inclusive range of sports.
The distributed setting of RDF stores in the cloud poses many challenges. One such challenge is how the data placement on the compute nodes can be optimized to improve query performance. To address this challenge, several evaluations in the literature have investigated the effects of existing data placement strategies on query performance. A common drawback of these evaluations is that it is unclear whether the observed behaviour was caused by the data placement strategies (if different RDF stores were evaluated as a whole) or actually reflects the behaviour of distributed RDF stores (if cloud processing frameworks like Hadoop MapReduce were used for the evaluation). To overcome these limitations, this thesis develops a novel benchmarking methodology for data placement strategies that uses a data-placement-strategy-independent distributed RDF store to analyze the effect of the data placement strategies on query performance.
With this evaluation methodology, the frequently used data placement strategies were evaluated. The evaluation challenged the commonly held belief that data placement strategies that emphasize local computation, such as minimal edge-cut cover, lead to faster query executions. The results indicate that queries with a high workload may be executed faster on hash-based data placement strategies than on, e.g., minimal edge-cut covers. The analysis of the additional measurements indicates that vertical parallelization (i.e., a well-distributed workload) may be more important than horizontal containment (i.e., minimal data transport) for efficient query processing.
Moreover, to find a data placement strategy with high vertical parallelization, the thesis tests the hypothesis that collocating small connected triple sets on the same compute node, while balancing the number of triples stored on the different compute nodes, leads to high vertical parallelization. Specifically, the thesis examines two such data placement strategies: the first, called overpartitioned minimal edge-cut cover, was found in the literature; the second is the newly developed molecule hash cover. The evaluation revealed a balanced query workload and a high horizontal containment, which led to a high vertical parallelization. As a result, these strategies showed a better query performance than the frequently used data placement strategies.
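To make the contrast between the two placement families concrete, the following minimal Python sketch assigns triples to compute nodes either by hashing whole triples or by keeping each subject's triples together. It is illustrative only: the function names and the simplified notion of a "molecule" as all triples sharing one subject are assumptions, not the thesis's actual algorithms.

    import hashlib
    from collections import defaultdict

    def node_of(key, num_nodes):
        """Stable hash of a string key onto a compute node id."""
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_nodes

    def hash_placement(triples, num_nodes):
        """Spread triples evenly by hashing the whole triple.
        Favours a balanced workload (vertical parallelization)."""
        nodes = defaultdict(list)
        for t in triples:
            nodes[node_of("|".join(t), num_nodes)].append(t)
        return nodes

    def molecule_placement(triples, num_nodes):
        """Keep each subject's triples (a tiny 'molecule') on one node,
        balancing molecules over nodes by hashing the subject.
        Favours local joins (horizontal containment)."""
        nodes = defaultdict(list)
        for s, p, o in triples:
            nodes[node_of(s, num_nodes)].append((s, p, o))
        return nodes

    triples = [("ex:alice", "ex:knows", "ex:bob"),
               ("ex:alice", "ex:age", '"34"'),
               ("ex:bob", "ex:knows", "ex:carol")]
    print(hash_placement(triples, 4))
    print(molecule_placement(triples, 4))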
This dissertation deals with the opportunities and restrictions that parties face in an election campaign at the supranational level of the EU. Using the communication science concepts of agenda-setting (focus: media) and agenda-building (focus: political parties), the first part of the study, based on the 2014 election campaign for the European Parliament (EP), analyses to what extent political parties put the EU on the agenda. Second, it examines whether parties used their structural advantage of being able to influence the media agenda at the supranational level in the context of the EP election campaign. Third, it examines whether parties can gain an advantage for the visibility of their campaigns by rejecting EU integration and the associated conflictual communication. Fourth and finally, it explores whether agenda-building can influence the ranking of specific policy issues on the media agenda in the European context.
First, the analyses show that a European political focus in election campaign communication can no longer be found only among the small (eurosceptic) parties. Second, parties have a good chance of being present in media coverage if they pursue a European political focus in their campaign communication. Third, a negative tone in party communication turns out not to be decisive for the parties' visibility in the election campaign. Fourth, a clear positioning on political issues also confronts parties with restrictions on the further development of a European thematic agenda. After a discussion of these results, the paper concludes with an assessment of the limitations of the analysis and an outlook on further research approaches.
The PhD thesis offers a long-term investigation of German TV debates, asking to what extent the topics dealt with in the debates correspond to the political problems perceived by the voters, and whether the actor-specific topics discussed in the TV debates influence the voters' perception of these problems and the perceived responsiveness of the political actors. For this purpose, the contents and effects of the chancellor debates 2002-2013 and of the TV debates at the state level between 1997 and 2016 are examined. The empirical investigation is based on the implementation and evaluation of content analyses, the evaluation of opinion polls, and the analysis of experimental data collected on the occasion of the chancellor debates in 2002, 2009 and 2013. The analyses show that TV debates are not mere show events: the focus of TV debates is on political content rather than entertaining elements, the (camp-specific) responsiveness of the format has increased over time, and successful persuasion - as one of two theme management strategies - especially by challengers can lead viewers to a better assessment of the effectiveness of the political system.
Response shift denotes a change in the internal standards against which subjective concepts such as health-related quality of life (HRQOL) are evaluated in the course of coming to terms with a severe or chronic illness. Cancer is regarded as a trigger of response shift. Response shift can occur in three different forms: as recalibration, reprioritization, and reconceptualization of HRQOL. An assessment of these three response-shift processes in a homogeneous sample of prostate cancer (PCa) patients in oncological rehabilitation has not yet been carried out in Germany. HRQOL data were collected from a total of 212 PCa patients in follow-up rehabilitation using the EORTC QLQ-C30, both before the start of the programme and 3 months after its start, and were included in the response-shift analysis. The processes of recalibration, reprioritization, and reconceptualization were assessed by means of confirmatory factor analysis (CFA). Recalibration can be tested at the level of the measurement errors and intercepts; reprioritization and reconceptualization can be examined via changes in the factor loadings. The procedure is an approach from longitudinal measurement invariance testing of variables and parameters. The present work investigated whether a response shift occurs in the total sample and, if so, in which form (research question 1). Furthermore, subgroup analyses (median splits) were used to examine the influence on response shift of, on the one hand, the extent of change in global quality of life (research question 2) and, on the other hand, the variables benefit finding, self-efficacy, and positive and negative affectivity (research question 3). A baseline model of HRQOL at the two measurement points could be identified for the PCa patients that was deemed suitable for investigating response shift in the present sample (χ²/df = 1.50, RMSEA = 0.05, CFI = 0.98, TLI = 0.97). A so-called null model, in which all response-shift-relevant parameters were assumed to be equal at both measurement points, showed an unacceptable model fit (χ²/df = 5.30, RMSEA = 0.14, CFI = 0.76, TLI = 0.71). Starting from the null model, a response-shift model was derived through nested model comparisons, which again showed a good fit (χ²/df = 1.31, RMSEA = 0.04, CFI = 0.99, TLI = 0.98). In the total sample, recalibration processes were identified on the physical, cognitive, and role functioning scales, which affected the calculation of the observed effects at the scale level. It could be shown that response shift has a considerable influence on the observed effects at the scale level (research question 1). PCa patients thus rated their physical and role functioning more positively at the second measurement point without this being attributable to changes in HRQOL. Independently of the change in HRQOL between the measurement points, the patients rated their cognitive functioning more negatively at the post-measurement than at the first measurement. The subgroup analyses show that the extent of change in global quality of life influences the number of response-shift processes; the same applies to the variables benefit finding, self-efficacy, and positive affectivity.
However, differences in the extent of response-shift effects between the various subgroups could not be clearly demonstrated (research questions 2 and 3). Response shift should be given more consideration in studies of HRQOL in cancer. This presupposes, however, that the methods for detecting the phenomenon are developed further. The longitudinal CFA approach used here is a suitable procedure for detecting response shift in simple pre-post measurements. The results of this work appear practically relevant for evaluating the effectiveness of oncological rehabilitation measures: if scales that are affected by response shift are used for evaluation, a change in the underlying construct must not simply be inferred. In the present work, the change in HRQOL would tend to be overestimated on the basis of the manifest values of the physical and role functioning scales, whereas it would tend to be underestimated on the basis of the cognitive functioning scale. The CFA approach is therefore well suited to testing the sensitivity to change of quality-of-life instruments and to developing them further accordingly. In addition, the method offers possibilities for examining substantive factors that influence a response shift.
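The nested model comparisons mentioned above rest on the χ²-difference test. The following minimal Python sketch shows the mechanics; the numbers are invented and only loosely echo the fit statistics reported here.

    from scipy.stats import chi2

    def chi_square_difference(chisq_constrained, df_constrained,
                              chisq_free, df_free):
        """Nested-model comparison: a significant difference means the
        equality constraints (e.g., equal intercepts across occasions,
        i.e., no recalibration) must be rejected."""
        delta_chisq = chisq_constrained - chisq_free
        delta_df = df_constrained - df_free
        p_value = chi2.sf(delta_chisq, delta_df)
        return delta_chisq, delta_df, p_value

    # Invented illustration: constrained (null) model vs. freer model
    print(chi_square_difference(530.0, 110, 150.0, 100))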
The presence of anthropogenic chemicals in the natural environment may impact both habitats and human use of natural resources. In particular the contamination of aquatic resources by organic compounds used as pharmaceuticals or household chemicals has become evident. The newly identified environmental pollutants, also known as micropollutants, often have i) unknown ecotoxicological impacts, ii) unknown partitioning mechanisms, e.g. sorption to sediments, and iii) limited regulation to control their emission. Furthermore, like any compound, micropollutants can be transformed while in the environmental matrix to unknown transformation products (TPs), which add to the number of unknown chemicals to consider and thus increase the complexity of risk management. Transformation is at the same time a natural mechanism for the removal of anthropogenic compounds, either by complete degradation (mineralisation) or to innocuous TPs. However, how transformation occurs in real-world conditions is still largely unknown. During the transport of micropollutants from household wastewater to surface water, a large amount of transformation can occur during wastewater treatment—specifically during biological nitrifying–denitrifying treatment processes. The thesis considers the systematic optimisation of laboratory investigative techniques, application of sensitive mass-spectrometry-based analysis techniques and the monitoring of full-scale wastewater treatment plants (WWTPs) to elucidate transformation processes of five known micropollutants.
The first of the five compounds investigated was the antibiotic trimethoprim. Incubation experiments were conducted at different analyte spike concentrations and different sludge-to-wastewater ratios. Using high-resolution mass spectrometry, a total of six TPs were identified for trimethoprim. The types of TPs formed were clearly influenced by the spike concentration; to the best of our knowledge, such effects have not been described in the literature before. At the lower spike concentration, a relatively stable final TP was formed (2,4-diaminopyrimidine-5-carboxylic acid, DAPC), which could account for almost all of the transformed trimethoprim. The results were compared to the process in a reference reactor. Both the detection of TPs (e.g., DAPC) and the modelling of the removal kinetics led to the conclusion that only the experimental results at the low spike concentrations mirrored the real reactor. This demonstrates the limits of using elevated spike concentrations in incubation experiments.
Three phenolic micropollutants, the antiseptic ortho-phenylphenol (OPP), the plastics additive bisphenol A (BPA) and the psychoactive drug dextrorphan, were investigated with regard to the formation of potentially toxic nitrophenolic TPs. Nitrite is an intermediate of the nitrification-denitrification process occurring in activated sludge and was found to cause nitration of these phenols. To elucidate the processes, incubation experiments were conducted in purified water in the presence of nitrite, with OPP as the test substance. Scavenger experiments indicated that the reactive species HNO2 and N2O3 and the radicals ·NO and ·NO2 were likely involved. Under conditions found at WWTPs, the wastewater is usually at neutral pH, and nitrite, being an intermediate, is usually present at low concentration. Incubation experiments inoculated with sludge from a conventional WWTP showed that the three phenolic micropollutants OPP, BPA and dextrorphan were quickly transformed to biological TPs; nitrophenolic TPs were only formed after an artificial increase of the nitrite concentration or a lowering of the pH. However, nitrophenolic TPs can be formed as sample preparation artefacts, since acidification or freezing for preservation creates optimal conditions for the reaction to take place.
The final micropollutant studied was the pain reliever diclofenac, which is on the EU watch list due to ecotoxicological effects on rainbow trout. Its transformation was compared in two different treatment systems, one employing a reactor with suspended carriers as a biofilm growth surface, the other employing conventional activated sludge. In the biofilm-based system, the transformation pathway produced many TPs, each at relatively low concentration; many of these were intermediates that were further degraded to unknown tertiary TPs. In the conventional activated sludge system, some of the same reactions took place, but all at much slower rates. The main difference between the two systems thus lay in the reaction rates rather than in the transformation pathways. Municipal WWTPs were monitored to verify these results. In the biofilm system, a 10-day monitoring campaign confirmed an 88% removal of diclofenac and the formation of the same TPs as those observed in the laboratory experiments. The proposed environmental quality standard of 0.05 μg/L might thus be met without the need for additional treatment processes such as activated carbon filtration or ozonation.
The present research work deals with the positioning and provider-internal communication of the innovative IT architecture SOA (service-oriented architecture). The central goals of this exploratory, empirical work, which is situated in the context of research on innovation success factors, consist in answering the following two guiding research questions:
Research question 1: Which conditions contribute to a successful positioning of SOA? Research question 2: Which conditions contribute to successful provider-internal communication regarding SOA? To examine these two research questions, a two-stage Delphi study was conducted. First, a qualitative survey wave (N=53) was carried out to identify the conditions for SOA positioning and for provider-internal SOA communication. In total, 122 SOA positioning conditions were identified in the first survey wave, comprising 65 conditions on the provider side, 35 conditions on the customer side, 19 conditions on the SOA side, and 3 conditions in the wider environment. For provider-internal SOA communication, 31 conditions were identified. The SOA positioning conditions and provider-internal SOA communication conditions identified in the first wave were subjected to a quantitative analysis in the second survey wave (N=83). The present study thus provides conditions that contribute both to a successful SOA positioning and to a successful provider-internal SOA communication.
The results of this work are summarized and placed in a theoretical context. The methodological approach is also critically discussed and the quality of the data assessed. Finally, an outlook on future fields of research is given.
Development of a computer model of the lumbar spine for determining mechanical loads
(2009)
The aim of this work was to create a multibody simulation (MBS) model of the human lumbar spine for determining the mechanical loads on internal body structures. The surfaces of the vertebral bodies were generated as CAD surfaces from CT data of human cadaver specimens and form the basic framework of the model. The exact positioning of the facet joints is thereby predetermined, as are the attachment points and directions of the ligamentous structures. Elastic intervertebral discs were introduced between the rigid vertebral bodies, their centres being defined as the respective centres of rotation of the corresponding functional units. This simultaneously fixes the lever arms to the attachment points of the individual ligaments. The mechanical behaviour of these various structures was implemented in the model via physiological equations or characteristic curves. For the facet joints, a contact-force approach in the horizontal direction was introduced. For the force developed when the ligaments are stretched, individual characteristic curves from the literature were used. For the deformation of the intervertebral discs, the force development follows a mechanical relation depending on the deformation and the deformation rate; the corresponding material constants in the equations were determined from experimental measurement data in the literature. The build-up of torques when the discs are rotated about the three possible axes of rotation is likewise based on characteristic curves from the literature. Fitting these experimentally determined curves, mechanical equations were developed and implemented in the model which specify the torque developed for a given rotation of the disc. The model was validated, on the one hand, via the equilibrium condition, according to which the sum of all forces and torques with respect to the centre of mass of a functional unit must be zero; this state could be clearly demonstrated with the model. On the other hand, selected measurement results from the literature could be reproduced in good approximation by the model calculations. Here, however, there is the difficulty that measurements on cadaver specimens were always carried out in an isolated state and in a defined experimental setup with loading in only one direction. Within the model, the structures form a movable assembly and are thus subject to manifold mechanical influences, which corresponds much more closely to the reality in the human body. Nevertheless, the material behaviour of the elastic elements within the model reflects, in order of magnitude, the results of the most diverse experimental measurements from the literature. To simulate different loading situations, the lumbar spine model was subjected, in several case studies, to a constant external force of varying magnitude. After a short phase of movement of all substructures, a new state of equilibrium was reached in every computed case. For all implemented structures, such as intervertebral discs, ligaments and facet joints, the time course of the load increase as well as the load in the final state could be calculated. A check showed that all results lay within the physiologically healthy range.
This demonstrates that the present model is an instrument with which, within the accuracy of the model, the loads on the internal structures under external force can be calculated reliably. The applications of such a model are manifold. By varying parameters, the most diverse situations can be simulated, for example the effect of degenerated intervertebral discs with completely different material behaviour on the surrounding healthy substructures. Further clinical pictures such as weak ligaments, spondylolisthesis, bone changes, or the influence of surgical measures such as the fusion of individual segments or the insertion of implants can also be simulated, allowing quantitative statements about the change in the loading of the adjacent structures. As an example of a medical application, the case of a degenerated intervertebral disc was presented: the disc was surgically removed and replaced by a fusion implant. With the help of the simulation, the effect of the fusion on the deformation of the adjacent discs and the altered force development was shown.
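The disc force law described above, a force depending on both deformation and deformation rate, can be illustrated with a minimal Python sketch of a Kelvin-Voigt-type element settling under a constant external load. The constants are placeholders, not the calibrated values of the model.

    # Kelvin-Voigt-type element: F_disc = k*x + c*v, integrated with a
    # semi-implicit Euler scheme until equilibrium under a constant load.
    k = 1.0e6      # stiffness [N/m] (placeholder)
    c = 5.0e3      # damping [N*s/m] (placeholder)
    m = 5.0        # effective mass of the moving segment [kg]
    F_ext = 500.0  # constant external force [N]

    x, v, dt = 0.0, 0.0, 1.0e-4
    for _ in range(200_000):          # 20 s of simulated time
        F_disc = k * x + c * v
        v += (F_ext - F_disc) / m * dt
        x += v * dt

    print(f"deformation: {x:.6f} m, static expectation F_ext/k = {F_ext / k:.6f} m")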
Over the last three decades, researchers of Cognitive Metaphor Theory have shown conclusively that metaphor is motivated rather than arbitrary and is often used to systematically map out conceptual territory. This cognitive semantic proposal holds the potential for alternative L2 teaching strategies. As an abstract domain, business discourse is naturally rich in metaphors and is additionally filled with consciously used metaphorical language to strategically manipulate clients and business partners. Business English courses especially stand to profit from metaphor-oriented language teaching, as (future) managers aim to quickly improve their language performance to be prepared for international business communication. In using metaphors, speakers as well as hearers conceptualize and thus experience one thing in terms of another. Having been made aware of the conceptual linkage, students are immediately equipped with a whole set of vocabulary they may already have learned for a concrete domain and are then able to elaborate in the more abstract area of business discourse. Enhanced metaphor awareness may thus prove to be a valuable vehicle for vocabulary acquisition as well as for vocabulary retention. This thesis is subdivided into ten chapters. With each successive chapter, the focus increasingly sharpens on the main hypothesis that metaphor awareness raising and explicit teaching in the business English classroom assist the students to 'dip into their savings' and transfer already acquired vocabulary to abstract business discourse, and thus to become more proficient business communicators. After an introduction to the main objectives, chapter two critically looks at the different strands of Cognitive Linguistic contributions to metaphor theory made within the last three decades and discusses the structure, function and processing of figurative language in order to single out the aspects relevant to language classroom applications. Chapter three narrows the perspective to socio-economic discourse as the target domain in focus and surveys the conceptual metaphors that have been identified for this target domain, namely the source domains most productive for the target and therefore most valuable for the language classroom. In chapter four, Cognitive Linguistic findings are put in contact with language didactics; i.e., the Cognitive Linguistic basis is discussed in the context of language teaching and learning theories, and a first classification of metaphor teaching within the theoretical framework of language didactics is proposed. Ten cornerstones summarize the theoretical output of the previous chapters, and the respective didactic consequences are considered. Theories of cognitive psychology pertaining to noticing, processing and storing metaphors are systematically revisited and expanded to formulate further didactic implications for metaphor teaching. The consequences drawn from both linguistic and didactic theory are translated into a list of ten short guidelines identifying the essentials for the explicit integration of metaphors into the language classroom. In chapter five, those experimental studies that have already been conducted in the field of Cognitive Linguistic-inspired figurative language teaching are systematically summarized, and their possible contributions to a didactic framework for metaphor teaching are investigated. Chapters six to nine then present a piece of original research.
Starting out from five research questions tackling receptive and productive vocabulary acquisition and retention as well as the influence of and on the learners' level of language proficiency, a three-fold study was designed and conducted in a regular business English classroom, and the results are discussed in detail. The last chapter deals again with specific implications for teaching. Earlier statements about and claims for the language classroom are revisited and refined on the basis of the theoretical linguistic, didactic and empirical findings, and an agenda for further empirical investigations is sketched out.
The investigation is based on theoretical findings on transition and on foreign language teaching. In the study, primary school teachers and grammar school (Gymnasium) teachers in Saarland were surveyed on aspects relevant to foreign language teaching. From the results, consequences for education policy and for teaching practice were derived.
In this study, the influence of soil moisture and soil type on selected pests and diseases was characterized: Phytophthora infestans (Mont.) de Bary, Fusarium graminearum Schwabe and F. culmorum (W.G. Smith) Saccardo, as well as the larvae of the most important Elateridae, Agriotes lineatus Linnaeus, A. obscurus Linnaeus, A. sputator Linnaeus, A. sordidus Illiger and A. ustulatus Schaller (known as wireworms). The aim was to integrate soil modules into prediction models for agricultural and horticultural pests and diseases, to optimise the scheduling of disease controls, to detect periods of high-intensity attacks and to reduce the number of preventive treatments. The measurement of soil moisture is time-intensive, and soil moisture additionally varies strongly within small areas. Therefore, this parameter did not have a high impact in prediction models for agricultural and horticultural pests and diseases in the past. In this study, the possibilities for a site-specific simulation of soil moisture were investigated. The soil moisture simulation model SIMPEL (HÖRMANN 1998) was adapted to agricultural conditions, and the site-specific simulation of soil moisture was realised on the basis of radar-measured precipitation data from the German Meteorological Service (DWD) and of interpolated weather data (ZEUNER 2007). The comparison of the simulated soil moisture data with the field measurements showed a highly significant correlation (alpha = 0.01) and an average difference of 3.8%. A site-specific simulation of soil moisture is therefore possible and can be included in prediction models for agricultural and horticultural pests and diseases. The results of laboratory and field experiments as well as analyses of monitoring data showed that the influence of soil parameters on the three selected pests and diseases varied. Whereas the investigations on the influence of soil moisture on P. infestans identified no clear correlation, considerable effects of soil moisture and soil type on wireworms as well as on F. graminearum and F. culmorum could be detected. For the prediction of the appearance of wireworms in the upper soil layer in relation to soil moisture and soil type, the simulation model SIMAGRIO-W was developed. The validation of the new model showed highly significant effects of soil moisture and soil type on the appearance of wireworms. The influence of soil type on F. graminearum and F. culmorum was integrated into the existing prediction model FUS-OPT. A correlation between airborne spores of F. graminearum and simulation results from the modified model FUS-OPT was highly significant in a Pearson test. The results of this study show that the implementation of soil moisture and soil type in prediction models for agricultural and horticultural pests and diseases is feasible. The developed and modified prediction models can be used by the governmental crop protection services in Germany through www.isip.de.
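As a rough illustration of the idea behind a daily soil-water balance, the following Python sketch implements a generic "bucket" model. It is explicitly not the SIMPEL formulation, only the general balance principle: storage gains rain, loses evapotranspiration scaled by wetness, and spills to drainage above capacity.

    def soil_water_balance(precip, et_pot, capacity, s0=0.5):
        """Toy daily bucket model of topsoil moisture.

        precip and et_pot are daily precipitation and potential
        evapotranspiration [mm]; capacity is the storage capacity [mm].
        Returns relative soil moisture (fraction of capacity) per day.
        """
        storage = s0 * capacity
        series = []
        for p, et in zip(precip, et_pot):
            storage += p
            storage -= et * (storage / capacity)  # actual ET limited by wetness
            storage = min(max(storage, 0.0), capacity)  # drainage / dryness caps
            series.append(storage / capacity)
        return series

    print(soil_water_balance(precip=[0, 12, 0, 3, 0],
                             et_pot=[4, 3, 5, 4, 4], capacity=60.0))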
Taxonomy and Systematics of Spiny-Backed Treefrogs, Genus Osteocephalus (Amphibia: Anura: Hylidae)
(2015)
The pan-Amazonian treefrog genus Osteocephalus is poorly understood on both the taxonomic and the phylogenetic level. The status of several frogs already or not yet referred to the genus is unclear, and the relationships within the genus and with respect to related genera are not understood. In this work, O. cabrerai (Cochran and Goin, 1970) from Colombia and Peru is redescribed and O. festae (Peracca, 1904) from the foothills of the Andes in Ecuador is revalidated. Hyla inframaculata Boulenger, 1882, from the lower Amazon in Brazil, is reallocated to Osteocephalus, and O. elkejungingerae (Henle, 1981) from the Andean foothills in Peru is shown to be a synonym of Hyla mimetica (Melin, 1941), the valid name being O. mimeticus. Hyla vilarsi Melin, 1941 is considered a valid species in the genus Osteocephalus and revalidated from the synonymies of several other frogs. Three new species, O. castaneicola from northern Bolivia and southern Peru, O. duellmani from a sub-Andean mountain range in southern Ecuador, and O. camufatus from central Amazonian Brazil, are described. A phylogenetic analysis based on up to nine mitochondrial genes and one nuclear gene reveals the paraphyly of the genus as previously understood with respect to the genus Tepuihyla. A new taxonomy is proposed, securing the monophyly of Osteocephalus and Tepuihyla by rearranging and redefining the content of both genera. A new genus, Dryaderces, is erected for the sister group of Osteocephalus. The colouration of newly metamorphosed individuals is proposed as a morphological synapomorphy for Osteocephalus. Five monophyletic species groups within Osteocephalus are recognized, three species of Osteocephalus (O. germani, O. phasmatus, O. vilmae) and three species of Tepuihyla (T. celsae, T. galani, T. talbergae) are synonymized, and three species are reallocated (Hyla helenae to Osteocephalus, O. exophthalmus to Tepuihyla, and O. pearsoni to Dryaderces gen. n.). Furthermore, nine putative new species are flagged (an increase to 138% of the currently recognized diversity), an indication that species numbers are largely underestimated, with most hidden diversity centred on widespread and polymorphic nominal species. The evolutionary origin of breeding strategies within Osteocephalus is discussed in the light of this new phylogeny, and a novel type of amplexus (gular amplexus) is described.
This doctoral thesis concerns the theoretical basis, development and validation of a multipart instrument that provides students of educational sciences with feedback about their competence development (instrument KIPBI) and a structured report on their experiences in practical training (instrument EIPRA). It thus supports the reflection capability of student teachers. Both instruments are part of an online self-assessment (Tour 3-RLP), which is similar to the Career Counselling for Teachers (CCT, see cct-germany.de) but, unlike this, focuses not on ability but on competence development. The instruments serve to promote the professional development of student teachers. This is a goal of the reformed concept of teacher training in Rhineland-Palatinate, the so-called KMK standards for teacher training and education, the Rhineland-Palatinate "Framework School Quality", as well as of handouts and manuals for the implementation of in-depth practical training in teacher education. The first part of this dissertation describes the theoretical framework for the development of instruments for the "CCT Tour 3-RLP", starting with the issue of professionalization in the teaching profession. The historical dimensions and their associated approaches to teacher education research are manifold, ranging from the aspect of "teacher personality" to the implementation of teaching standards and standards for teacher education and output orientation. While in the 1960s and 1970s the personality approach was pivotal in teacher education research, nowadays expertise development and models of skills development are discussed as central issues. Therefore, the theory section describes the influence of previous research paradigms and their semantic content on the current trend. As one part of the analysis, the construct of "teacher self-efficacy" is explained, along with topological and typological models of competence development.
Furthermore, the practical elements of teacher education and their contribution to the professionalization of future teachers are presented. Professional reflection is assumed to be a basis for developing expertise; it is therefore necessary to investigate how theoretical knowledge can be "transferred" to practical performance. A unifying theory connecting the integration hypothesis (assuming that knowledge is directly transferable into practice) with the differentiation hypothesis (teacher skills form their own knowledge base, independent of both theoretical and practical knowledge) is provided by cognitive psychology, particularly by research on expertise. The endpoint of the theoretical discourse is the evaluation of the cited theoretical positions, their meaning for the Rhineland-Palatinate reform of teacher education and training, and the role that the "Tour 3-RLP" is going to play in this context. In the empirical part, the methodical and methodological steps for the validation of the instruments are discussed. First, the Swiss standards for the teaching profession (developed by the Pädagogische Hochschule Schwyz, Central Switzerland) are presented; they can serve as a model for building a competency-oriented tool that refers to competency stages.
An analysis of various methods for setting cut-scores with the aim of developing appropriate competency levels is a further foundation for the procedures developed in the context of the empirical investigations. The results of the pilot study and of a follow-up study conducted with both instruments (KIPBI and EIPRA) show that the instruments for competence measurement fulfil the requirements of psychometric criteria (such as a scale structure) and tend to support student teachers' self-reflection. The core concern of both instruments is the promotion of students, realized in the online self-assessment through appropriate feedback structures and related recommendations for action. The final chapter of this thesis discusses the results of the validation and implementation study. This final view is devoted to the question of whether competency modeling and standard-based approaches to professionalization are the only possible ways to map and explain skills development. The role of reflection appears important for both the integral and the differential approach and interconnects them.
How to begin? This short question addresses a problem that is anything but simple, especially when regarding something as sophisticated and multilayered as musical theatre. However, scholars of this vast research area have mostly neglected this question so far. This study analyses and compares the initial sections of late Victorian popular musical theatre and is therefore a contribution to several fields of research: the analysis of initial sections of musical theatre in general, the analysis of the music of popular musical theatre in particular, and therefore operetta studies. The 1890s are especially interesting times for popular musical theatre in London: The premiered works include the last collaborations of Gilbert and Sullivan as well as offshoots of Savoy opera; but the so-called ‘naughty nineties’ also saw the emergence of a new genre, musical comedy, which captured the late Victorian zeitgeist like no other. This new form of theatrical entertainment was carefully and consciously constructed and promoted as modern and fashionable, walking a fine line between respectability and mildly risqué excitement.
Because a deep understanding of the developments and new tendencies concerning popular musical theatre in the 1890s is crucial in order to interpret differences as well as similarities, the analyses of the opening numbers are preceded by a detailed discussion of the relevant genres: comic opera, musical comedy, musical play and operetta. Since the producers of the analysed works wanted to distance themselves from former and supposedly old-fashioned traditions, this book also considers influences from their British predecessors, but also from Viennese operetta and French opéra bouffe.
Homonegative discrimination, such as the denial of leadership qualities or of higher salaries, concerns not only lesbians and gay men but also individuals who are perceived as lesbian or gay (Fasoli et al., 2017). Hence, it is assumed that straight people in particular can become victims of homonegative discrimination (Plöderl, 2014). The perception of sexual orientation is indeed stereotype-driven (e.g., Cox et al., 2015), but little is known about how accurate these stereotypes are, particularly those referring to speech. Despite a variety of sociophonetic and social psychological research related to sexual orientation and gender, an encompassing understanding of how sexual orientation is expressed and perceived is missing.
The present thesis aims to fill these gaps. Its two major aims are a) the examination of the accuracy of speech stereotypes in the context of sexual orientation and b) the development of a model of how sexual orientation is interpersonally construed. Overall, the thesis comprises five manuscripts with the following aspects in common: they integrate social psychological and sociophonetic perspectives, share a social identity approach, and primarily focus on speech rather than facial appearance. Moreover, the studies mostly investigated German participants who were native speakers of German.
Manuscript 1 establishes the Traditional Masculinity/Femininity Scale as a reliable and valid instrument for assessing gender-role self-concept. Its development was necessary because existing scales insufficiently captured self-ascribed masculinity/femininity (e.g., Abele, 2003; Evers & Sieverding, 2014). Manuscripts 2, 3, and 4 address the (in)accuracy of speech stereotypes regarding stereotype content and the suggested within-group homogeneity, using different methodological approaches. On the one hand, relevant acoustic parameters of lesbian/gay and straight women and men were averaged for each group. On the other hand, voice morphing was applied in order to create prototypical and natural-sounding voice averages (Kawahara et al., 2008). Lesbian and straight women differed in none of the analyzed acoustic parameters, gay and straight men in only one. In contrast, a fine-grained psychological analysis yielded various evidence of acoustic within-group heterogeneity. In particular, the exclusivity of sexual orientation and the gender-role self-concept were acoustically indexicalized, which suggests that speech stereotypes are inaccurate. However, voice averages do carry perceivable sexual orientation information. Hence, speech stereotypes can be considered exaggerations of tiny kernels of truth. In Manuscript 5, the previous literature on the interpersonal construction of sexual orientation is integrated into a model: the Expression and Perception of Sexual Orientation Model (EPSOM). This model postulates an indirect route and describes how sexual orientation information is transmitted from producer to perceiver by proposing three mediating components. Thereby, the model offers an explanation of why sexual orientation can be perceived with above-chance but far-from-perfect accuracy.
Overall, the present thesis provides meaningful impulses for advancing research on social markers of sexual orientation and gender. It offers a model of how sexual orientation is expressed and perceived, shows the benefits of combining sociophonetic and social psychological approaches, and points out the value of applying novel methods and technologies. Beyond that, it offers useful implications for practice. Speech stereotypes in the context of sexual orientation can be rejected as inaccurate; for example, native German straight men do not nasalize more or less than gay men. Thereby, the present thesis contributes to an erosion of stereotypes and a potential reduction of homonegative discrimination.
This work is about three subjects: virtualisation, real-time computing, and parallel computing. Taken by itself, each of these subjects has already been well researched; however, when considering all three together, as is necessary when looking at embedded systems, numerous questions as well as new possibilities arise. In this work we develop models describing the behaviour and requirements of real-time applications which execute in a hierarchy of processes, as they do when running in a virtual machine. Also, the real-time capabilities of existing virtual machines are evaluated, and new interfaces for the virtualisation of multiprocessor machines which take into account the characteristics of embedded systems - specifically real-time computing - are defined, implemented and tested. This enables the safe, secure and efficient coexistence of programs with largely differing time constraints within separate virtual machines on a single, common multiprocessor computer.
The dissertation "Schreibprozesse in der Wikipedia" ("Writing Processes in Wikipedia") examines individual and collaborative writing activities in Wikipedia on the basis of a linguistic text analysis. Using selected articles as examples, it considers article initiations, patterns of action in drafting and revising texts, processes of planning, structuring and evaluating, as well as controversies, edit wars and vandalism, as fundamental aspects of writing in collaborative online contexts. Controversies in particular emerge as a factor so far neglected in writing research, one that affects not only the writing processes in Wikipedia but also the entire representation, genesis and, above all, negotiation of knowledge.
A taxonomic revision of the genus Pteris in tropical Africa revealed 26 species. An identification key to the species is provided. Morphological characters were used to prepare a cladistic analysis of the relevant taxa. Each species was evaluated with regard to its IUCN Red List status: only Pteris mkomaziensis was considered Near Threatened; all other species were assessed as Least Concern. An inventory of the ferns of Kakamega Forest (Kenya) and Budongo Forest (Uganda) revealed 85 species in Kakamega and 66 species in Budongo. Life-form spectra were analysed, and the ferns were studied for their value as bioindicators.
Demographic and societal change processes threaten increasingly to erode informal care as a mainstay of the care system in Germany. While the number of people in need of care will rise in the future, the number of those who could take on care tasks is decreasing. Moreover, women, who today still bear the main burden of care responsibility, will increasingly be in employment in the future.
Against this background, the following problem arises: taking on informal care tasks frequently creates a compatibility problem between care and employment. Carers therefore often reduce their working hours or give up employment entirely. In the context of the German social insurance system, this means that informal carers expose themselves to increased social risks if they do not fully exploit their employment and earning potential on the labour market. Furthermore, indirect costs arise for the public purse, attributable to reduced income tax and social insurance contributions and/or increased transfer payments. These so-called fiscal costs have so far received insufficient attention in the scientific and socio-political discourse.
The aim of the present work is therefore to empirically project the fiscal costs of informal care in the German welfare state. To this end, the effects of taking on care duties on labour supply are first evaluated and then transferred into a tax and social insurance model in order to quantify the fiscal costs of informal care. Finally, the fiscal costs are projected empirically with the help of a dynamic population model.
The results show that informal care gives rise to considerable (additional) fiscal costs in the form of forgone income tax and social insurance contributions and/or increased transfer payments. Moreover, from the state's perspective, informal care provided by women is less costly than that provided by men.
Let a base b >= 10 and a digit a0 from the set {0, ..., b − 1} be given. We investigate whether there are infinitely many primes whose b-adic digit representation does not contain the digit a0. Furthermore, we estimate from above and below the number of such primes that are smaller than X = b^k.
This generalizes Maynard's proof for the case b = 10, and we also make use of the tools employed in his work. Among other things, we need the Hardy-Littlewood circle method as well as various sieve methods in order to control the minor arcs.
Finally, we see that Maynard's statement can be transferred to arbitrary bases b >= 10 and excluded digits a0 from {0, ..., b − 1} above all when the two eigenvalues of largest absolute value of certain matrices, parametrized by b and a0, satisfy specific estimates. In the last chapter, we prove that these estimates are satisfied in the case b >= 102. For the cases b = 10 and b = 11, a Mathematica code confirming the estimates is also provided.
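For orientation, the expected order of magnitude follows from a simple count. This is a heuristic consistent with Maynard's base-10 result, not the thesis's proof: among the integers below X = b^k, those avoiding the digit a0 number on the order of (b−1)^k, so a Cramér-type guess for the prime count is

    \#\{n < b^k : n \text{ avoids } a_0\} \asymp (b-1)^k = X^{\log(b-1)/\log b},
    \qquad
    \pi_{a_0}(X) \asymp \frac{X^{\log(b-1)/\log b}}{\log X},

which is the order of magnitude that upper and lower bounds of the kind described above are designed to bracket.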
Different techniques (weight loss, electrochemical, and spray corrosion measurements) were used to evaluate four sarcosine derivatives as corrosion inhibitors, together with one commercial compound as a synergist. The base metal was low-carbon steel CR4, tested under different conditions. Neutral water and 0.1 M NaCl served as the main working media. The protective film was formed on the steel surface by direct adsorption of the tested substances during immersion. Corrosion protection improved markedly, with a direct correlation to the molecular weight and carbon chain length of the tested compounds. The protection of steel CR4 against corrosion in 0.1 M NaCl increased with rising concentration of the selected sarcosine compounds. The best inhibitor across all tested concentrations and all evaluation systems was Oleoylsarcosine (O), with efficiencies up to 97 % in potentiodynamic polarization (PP), 83 % in electrochemical impedance spectroscopy (EIS), and 85 % in weight loss (WL) at 100 mmol/L, the highest concentration tested here. The second-best inhibitor was Myristoylsarcosine (M), with efficiencies up to 82 % in PP, 69 % in EIS, and 75 % in WL at the highest concentration. The inhibitor with the shortest hydrocarbon chain in this series, Lauroylsarcosine (L), showed the weakest inhibition compared to O and M: its efficiencies were slightly above 50 % at 75 and 100 mmol/L and below 50 % at 25 and 50 mmol/L in all evaluation systems. Furthermore, the overall efficiency increased with longer dip-coating times during the immersion of steel CR4, as shown at 50 mmol/L for all derivatives; this survey indicated 10 min as the best time with respect to cost and protection efficiency. The commercial inhibitor Oleyl-Imidazole (OI) significantly improved the effectiveness of Cocoylsarcosine (C), which contains the natural mixture of carbon chain lengths from coconut oil (C8 - C18), when used in combination (C+OI, 1:1 molar ratio): the efficiency increased from 47 % to 91 % in PP, from 40 % to 84 % in EIS, and from 45 % to 82 % in WL at the highest concentration. Spray corrosion tests were used to evaluate all sarcosine substances on steel CR4 in a more realistic system. The best inhibitor after a 24 h test was O, followed by the combination C+OI and by M, with efficiencies up to 99 %, 80 %, and 79 %, respectively. These results indicate good stability of the protective film formed by the inhibitors even after 24 h. All evaluation systems used in this investigation were in good agreement and yielded the same inhibitor ranking. Furthermore, the adsorption of the tested compounds is assumed to follow a Langmuir-type isotherm. Response surface methodology (RSM), an optimization method based on a Box-Behnken design (BBD), was used to find the optimum efficiency of inhibitor O for protecting steel CR4 against corrosion in salt water. Four independent variables were used: inhibitor concentration (A), dip-coating time (B), temperature (C), and NaCl concentration (D), each at three levels: lower (-1), mid (0), and upper (+1). According to the results, temperature has the greatest individual effect on the protection process, followed by the inhibitor concentration itself.
In this investigation an optimum efficiency of 99 % was calculated for the following parameter and level combination: upper level (+1) for inhibitor concentration, dip-coating time, and NaCl concentration, and lower level (-1) for temperature.
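For reference, the inhibition efficiencies quoted above are conventionally computed as relative changes of corrosion rate, corrosion current, or charge-transfer resistance; a hedged sketch of the standard definitions (corrosion-literature forms, which the thesis may state in equivalent variants):

```latex
% Common definitions of inhibition efficiency in the three evaluation systems:
\[
  \eta_{\mathrm{WL}} = \frac{w_{0}-w_{\mathrm{inh}}}{w_{0}}\cdot 100\,\%,\qquad
  \eta_{\mathrm{PP}} = \frac{i_{\mathrm{corr},0}-i_{\mathrm{corr,inh}}}{i_{\mathrm{corr},0}}\cdot 100\,\%,\qquad
  \eta_{\mathrm{EIS}} = \frac{R_{\mathrm{ct,inh}}-R_{\mathrm{ct},0}}{R_{\mathrm{ct,inh}}}\cdot 100\,\%
\]
% w: weight loss, i_corr: corrosion current density, R_ct: charge-transfer
% resistance; subscript 0: blank solution, inh: with inhibitor. For the
% Langmuir isotherm mentioned above, the surface coverage is taken as
% \theta = \eta/100, and c/\theta plotted against c should be linear.
```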
Modelling the setting behaviour of PCE-plasticized, CA-cement-bonded refractory castables
(2021)
Refractory castables are used as linings in industrial high-temperature units, for example in the iron and steel industry. After a castable has been mixed and cast into moulds or formwork, it must set and develop sufficient strength. The kinetics of the setting processes, and thus the hardening behaviour, vary strongly with the composition of the castable, above all with regard to the binder and the additives. In manufacturing practice, components or linings are frequently damaged: castables that have not yet set sufficiently begin to flow, corners and edges break during demoulding, or components crack under mechanical load during handling or transport. Such damage results from misjudging the setting progress and the corresponding strength development of the castables, which in turn can be traced back to gaps in the state of the art.
For PCE-plasticized, CA-cement-bonded corundum castables with an Al₂O₃ or Al₂O₃-SiO₂ matrix, the deficits of the state of the art for this class of castables are identified: they concern the plasticizing effect and the initial stiffening of the castables, the hydration of the CA cement within the castables, and the strength development of the castables. From this, the corresponding research needs are derived.
Setting-kinetics investigations are carried out on two PCE-plasticized (one PCE with a short backbone and long side chains, one with a long backbone and short side chains) and CA-cement-bonded (70 % Al₂O₃) castables with a reactive-alumina-based and a reactive-alumina/microsilica-based matrix. Using various setting-kinetics measurement methods, such as ultrasonic velocity or electrical conductivity, and several complementary measurements, such as the ζ-potential, the setting behaviour of the castables is examined and the deficits in the state of the art are resolved.
In detail, the state of the art was extended by the following findings:
• Plasticization of refractory castables with PCE molecules: the plasticizing effect and the initial stiffening were found to be governed primarily by the structure of the plasticizer molecules.
PCE molecules with long side chains plasticize mainly sterically. Mediated by Ca²⁺ from the CA cement, the adsorption of the PCE molecules is enhanced. Free PCE molecules can adsorb on CAH phases (hydration products) and thus maintain the flowability of the castable for a certain time.
PCE molecules with short side chains plasticize electrosterically. When Ca²⁺ dissolves from the CA cement, a Ca-PCE gel forms and the castable matrix coagulates correspondingly, so the castable stiffens soon after mixing.
• Hydration of CA cement: the hydration of CA cement in the castables is essentially influenced by the side-chain length of the PCE molecules and by the presence of citric acid.
In castables plasticized with long-side-chain PCE molecules, the CA cement can hydrate largely unhindered. The CAH phase fraction increases in two stages separated by a dormant period. The dormant period of CA cement hydration was attributed to inhibited dissolution of the Al-O-H passivation layer on the CA cement at moderate pH (pH = 12.3).
In castables plasticized with short-side-chain PCE molecules and citric acid, the hydration reaction is strongly inhibited by the formation of a Ca-citrate-PCE-Al(OH)₄ gel. It is presumed that the binding of Ca²⁺ in this gel causes a very pronounced dormant period (pH < 12.3) with poor solubility of the passivation layer, and also suppresses the precipitation of CAH phases.
• Strength development of CA-cement-bonded castables:
Coagulation causes the first strength increase of the castables, up to σB < 1 MPa. The main strength increase then takes place owing to drying by hydration and evaporation. A further part of the strength increase is caused by the reduction of porosity through expansive CA cement hydration and by the high specific surface area, which the CA cement hydration increases further.
From the individual setting-kinetics effects, together with the state of the art and the new findings, microstructural setting models and strength-development models of the castables are derived. As a result, the setting progress and the strength development of the two castables can be assessed at any point during setting. Some of the setting and strength-forming mechanisms can be transferred to other castable compositions.
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks, which can store large amounts of data without any single entity being in control of ensuring the data's security. The Semantic Web shares these characteristics, as it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore equally important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanisms implementing these requirements; it only contains a rather abstract representation of security.
This thesis fills this gap by introducing a security mechanism for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms go beyond the very basics of implementing each requirement and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs, which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must be available only as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. The thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
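To illustrate the general shape of such a signature framework, here is a minimal, hedged Java sketch of the canonicalize-hash-sign pipeline; the toy canonicalization (sorting N-Triples lines) deliberately ignores blank-node relabelling, and all names are illustrative rather than taken from the thesis' framework:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Arrays;

// Hedged sketch of the generic "canonicalize -> hash -> sign" pipeline that
// graph-signing frameworks instantiate with exchangeable algorithms.
public class GraphSigner {

    // Toy canonicalization: a deterministic statement order. A real
    // canonicalization must additionally handle blank-node labelling.
    static String canonicalize(String nTriples) {
        String[] lines = nTriples.strip().split("\n");
        Arrays.sort(lines);
        return String.join("\n", lines);
    }

    public static void main(String[] args) throws Exception {
        String graph = "<urn:a> <urn:knows> <urn:b> .\n"
                     + "<urn:b> <urn:knows> <urn:c> .";

        KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Sign the canonical form (SHA256withRSA hashes internally).
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(canonicalize(graph).getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();

        // Verification re-canonicalizes and checks the signature.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(canonicalize(graph).getBytes(StandardCharsets.UTF_8));
        System.out.println("valid: " + verifier.verify(signature));
    }
}
```

Making the canonicalization, hash, and signature functions exchangeable is precisely the kind of configurability the abstract attributes to the formal framework.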
Inland waters play an active role in the global carbon cycle. They collect carbon from upstream landmasses and transport it downstream until it finally reaches the ocean. Along this path, manifold processing steps occur, resulting in (permanent) retention of carbon by sediment burial as well as loss by evasion to the atmosphere. Constraining these carbon fluxes and their anthropogenic perturbation is an urgent need. In this context, attention must be paid to a widespread feature of inland waters: their partial desiccation, which exposes formerly inundated sediments to the atmosphere; such systems are referred to as dry inland waters. One observed feature of dry inland waters is disproportionately high carbon dioxide (CO2) emissions. However, this observation was so far based on local case studies, and knowledge of the global prevalence and fundamental mechanisms of these emissions is lacking. Against this background, this thesis aims to provide a better understanding of the magnitude and mechanisms of carbon emissions from dry inland waters on the global and local scale and to assess the impact of dry inland waters on the global carbon cycle. The specific research questions of this thesis were: (1) How do gaseous carbon emissions from dry inland waters integrate into the global carbon cycle and into global greenhouse gas (GHG) budgets? (2) What effects do seasonal and long-term drying have on the carbon cycling of inland waters? The thesis revealed that dry inland waters emit disproportionately large amounts of CO2 on a global scale and that these emissions share common drivers across ecosystems. Quantifying global reservoir drawdown and upscaling carbon fluxes to the global scale suggests that reservoirs emit more carbon than they bury, challenging the current understanding of reservoirs as net carbon sinks. On the local scale, this thesis revealed that both heterogeneous emission patterns between different habitats and the seasonal variability of carbon emissions from the drawdown area need to be considered. Further, this thesis showed that the re-mobilization of buried carbon upon permanent desiccation of water bodies can explain the observed emission rates, supporting the hypothesis of a positive feedback loop between climate change and the desiccation of inland waters. Overall, the present thesis highlights the importance of adding emissions from dry inland waters as a pathway to the global carbon cycle of inland waters.
Ray tracing acceleration through dedicated data structures has long been an important topic in computer graphics. In general, two different approaches are proposed: spatial and directional acceleration structures. The thesis at hand presents an innovative combined approach of these two areas, which enables a further acceleration of the tracing process of rays. State-of-the-art spatial data structures are used as base structures and enhanced by precomputed directional visibility information based on a sophisticated abstraction concept of shafts within an original structure, the Line Space.
In the course of the work, novel types of precomputed visibility information are proposed: a binary value that indicates whether a shaft is empty, and a single candidate that approximates the actual surface inside the shaft (the representative candidate). It is shown how the binary value is used in a simple but effective empty-space skipping technique, which yields a ray tracing performance gain of up to 40% over the pure base data structure, regardless of the spatial structure actually used. In addition, it is shown that this binary visibility information provides a fast technique for computing soft shadows and ambient occlusion based on blocker approximations. Although the results contain a certain approximation error, which is also presented and discussed, a further tracing acceleration of up to 300% compared to the base structure is achieved. As an extension of this approach, the representative candidate precomputation is demonstrated, which is used to accelerate the indirect lighting computation, resulting in a significant performance gain at the expense of image errors. Finally, techniques based on two-stage structures and a usage heuristic are proposed and evaluated. These reduce memory consumption and approximation errors while maintaining the performance gain, and also enable further possibilities such as object instancing and rigid transformations.
All performance and memory values as well as the approximation errors are measured, presented, and discussed. Overall, the Line Space is shown to yield a considerable improvement in ray tracing performance at the cost of higher memory consumption and possible approximation errors. The presented findings thus demonstrate the capability of the combined approach and open up further possibilities for future work.
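To make the empty-space skipping idea concrete, here is a minimal, hedged Java sketch of a binary shaft table; the class name, the flattened patch indexing, and the API are illustrative assumptions, not the thesis' actual data layout:

```java
// Hedged sketch of the binary Line Space: one bit per shaft between two
// discretized boundary patches of a node's bounding box.
public class BinaryLineSpace {
    private final boolean[][] occupied;  // occupied[entryPatch][exitPatch]

    public BinaryLineSpace(int patchCount) {
        occupied = new boolean[patchCount][patchCount];
    }

    // Offline precomputation: mark every shaft that contains geometry,
    // e.g. by rasterizing or sampling the node's primitives.
    public void markOccupied(int entryPatch, int exitPatch) {
        occupied[entryPatch][exitPatch] = true;
        occupied[exitPatch][entryPatch] = true;  // a shaft is direction-symmetric
    }

    // Traversal-time test: 'false' proves the ray misses everything inside
    // the node, so the underlying spatial structure can be skipped entirely;
    // 'true' falls through to the regular (e.g. BVH) traversal.
    public boolean mayHit(int entryPatch, int exitPatch) {
        return occupied[entryPatch][exitPatch];
    }
}
```

The entry and exit patches would be obtained by intersecting the ray with the node's bounding box and discretizing the two hit points; only a positive lookup pays the cost of the full traversal.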
Studies on the toxicity of chemical mixtures find that components at levels below their no-observed-effect concentrations (NOECs) may still cause toxicity through the combined effects of the mixed chemicals. However, chemical risk assessment frequently focuses on individual substances, although most living organisms are substantially exposed to chemical mixtures rather than single substances. The concentration addition (CA) and independent action (IA) models are often applied to predict the mixture toxicity of similarly and dissimilarly acting chemicals, respectively. However, living organisms and the environment may be exposed to both types of chemicals at the same time and location. In addition, experimental acquisition of toxicity data for every conceivable mixture is infeasible, since the number of chemical combinations is extremely large. Therefore, an integrated model is needed that predicts mixture toxicity from single components with various modes of toxic action (MoAs). The objectives of the present study were to analyze the challenges in predicting mixture toxicity in the environment and to develop integrated computational models that overcome the limitations of the existing prediction models for non-interactive mixtures. To this end, four sub-topics were addressed. Firstly, the applicable domains and limitations of existing integrated models were analyzed and grouped into three categories; current approaches to assessing mixture toxicity exist, but new research concepts are needed to overcome their limitations, which recent studies have begun to address. These approaches are discussed with particular emphasis on computational approaches that predict the toxicity of chemical mixtures from the toxicological data of the individual chemicals. Secondly, a case study and a computational simulation showed that the Key Critical Component (KCC) and Composite Reciprocal (CR) methods (as described in the European Union (EU) draft technical guidance notes for calculating the Predicted No Effect Concentration (PNEC) and Derived No Effect Level (DNEL) of mixtures) can yield significantly different results. As the third and fourth sub-topics, two integrated addition models were developed and successfully applied to overcome the inherent limitation of the CA and IA models of being theoretically applicable only to either similarly or dissimilarly acting chemicals: i) a Partial Least Squares-based Integrated Addition Model (PLS-IAM) and ii) a Quantitative Structure-Activity Relationship-based Two-Stage Prediction (QSAR-TSP) model. It was shown that PLS-IAM can be useful for estimating mixture toxicity when toxicity data of similar mixtures with the same composition are available. QSAR-TSP showed the potential to overcome the critical limitation of the conventional TSP model, which requires knowledge of the MoAs of all chemicals. This study thus demonstrated the potential of advanced integrated models (e.g., PLS-IAM and QSAR-TSP) that handle non-interactive constituents with different MoAs, improving the reliability of conventional models and simplifying the procedure for mixture risk assessment.
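For reference, the two models named above have standard textbook forms (general definitions, not thesis-specific notation):

```latex
\[
  \text{CA:}\;\; \sum_{i=1}^{n}\frac{c_{i}}{EC_{x,i}} = 1,
  \qquad
  \text{IA:}\;\; E(c_{\mathrm{mix}}) = 1-\prod_{i=1}^{n}\bigl(1-E(c_{i})\bigr).
\]
% CA: a mixture elicits effect level x when the component concentrations c_i,
% scaled by the single-substance effect concentrations EC_{x,i}, sum to one.
% IA: the effects of dissimilarly acting components combine multiplicatively
% on the no-effect probabilities.
```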
The present study was conducted to investigate the influence of employee satisfaction on burnout and the relationships between them. It also examined the extent to which organisational variables play a role in the relationship between employee satisfaction and burnout among flight attendants. For this purpose, the airline industry, specifically cabin crew as its representative, was chosen and a comprehensive job analysis was carried out.
The multiple dimensions of employee satisfaction were measured with the employee satisfaction questionnaire FEMAZU, and the facets of burnout with the burnout screening scales (BOSS) "occupation" and "physical complaints". The organisational variables were elicited in a group discussion and formulated as statements for use in the questionnaire, which was then validated in a pretest. The online survey covered 420 flight attendants in German-speaking countries. The quantitative survey was complemented by a qualitative method in order to obtain more detailed information on the complexity of the onset and course of burnout and indications of the influence of organisational variables.
Based on the representative sample of the present study, a strong relationship between employee satisfaction and the various facets of burnout was demonstrated, with predominantly significant results. Individual goals, working conditions, task area, empowerment, commitment, emotion regulation, and work-life balance were identified as essential influencing variables. Given the very particular working conditions of flight attendants, airlines should pay attention to a balanced work-life balance among their employees; the results showed clear relationships with employee satisfaction and burnout. To prevent the corresponding absence-related costs, the present study provides a recommendation for action. Owing to the high flexibility demands of cabin crew work and the numerous physiological stressors such as noise, jet lag, shift work, and time-zone changes, it is particularly important for companies that rely on long-term employees to prevent frequent and long illness-related absences. Greater roster stability, more influence on roster design, and sufficient rest time would, for example, improve the compatibility of work and private life.
Other variables identified as important in the literature, such as job security and marital status, showed little or no significance. Finally, factors that could have influenced the results were discussed in detail, and recommendations for practice and future research were derived. The research outlook in particular shows that the relationship between employee satisfaction and burnout, examined here with a sample of flight attendants, still requires extensive research in order to achieve improvements and corresponding cost reductions, especially for industry. Closer cooperation between industry and academia would be desirable and beneficial for both sides.
With the increasing importance and urgency of climate change, companies are challenged to contribute to sustainable development, especially by younger generations. However, existing corporate contributions have been criticized as insufficient, which could be particularly caused by a lack of employee engagement in corporate sustainability. In this context, gamification has been proposed and increasingly investigated in recent years as a promising, innovative tool to motivate sustainable employee behaviors in the workplace. However, there are few studies and applicable gamification solutions that address more than one specific sustainability issue and thus take a holistic perspective on sustainable behaviors in the workplace. Moreover, previous research lacks a comprehensive understanding of how different gamification elements elicit specific psychological effects, how these manifest in behavioral changes, and how these, in turn, cumulatively result in measurable corporate outcomes. The path from gamification as "input" to corporate sustainability as "output" thus remains unexplored.
This dissertation fills this gap by conceptualizing, designing, and evaluating a holistic gamified intervention that supports employees in various sustainable behaviors in their daily activities. The project uses a design science research approach that closely involves employees in the incremental development of the solution. As part of the iterative design process, this dissertation presents six studies to extend the theoretical understanding of gamification for sustainable employee behaviors. First, a comprehensive review of existing research on gamification for sustainable employee behavior is provided, analyzing gamification designs and results of previous studies and outlining an agenda for further research (Study 1). Theoretical foundations of research on gamification, serious games, and game-based learning (Study 2) and empirical design principles for gamification and persuasive systems (Study 3) are then systematically reviewed as a basis for the successful design of gamified applications. Subsequently, empirical studies explore employees’ motivations for sustainable behavior and illuminate their expectations for design features (Study 4), and identify contextual challenges and design dilemmas when implementing gamification in an organizational context (Study 5). Finally, a quantitative field study (Study 6) explores how different gamification designs influence sustainable employee behavior and corporate sustainability in organizations. Based on the findings, this dissertation presents a comprehensive framework of gamification for sustainable employee behavior that incorporates design, individual behavior, and organizational perspectives. Finally, building on these insights, it provides practical recommendations for designing gamification to encourage sustainable employee behavior at work.
Technological advances in integrated semiconductor technology, which have also contributed to the increased performance of camera sensors, have so far focused primarily on sensor speed and resolution. This continuous development, however, directly affects the physical behaviour of a camera and thus also the achievable geometric accuracy of a photogrammetric 3D reconstruction. The latter has not yet been a focus of research and is the task this thesis addresses from the perspective of photogrammetry and metrology. Current investigations and experience from industrial projects show that the geometric-physical behaviour of digital cameras is not yet modelled adequately for the highest photogrammetric demands. Direct statements about the achievable accuracy for given hardware have therefore been insufficient, and the imprecise modelling also reduces the reliability of the results. For developers of precise camera-based measurement methods, it follows that an optimal estimate of the geometric accuracy, and thus the full exploitation of the measuring camera, requires suitable mathematical models that describe the geometric-physical behaviour as well as possible. This thesis describes how the achievable accuracy of a bundle adjustment can be approximated a priori using the EMVA1288 standard. An important subtask in this context is the creation of an optimal measurement configuration. This includes investigations of the commonly used calibration bodies and the elimination of systematic errors before and after the bundle adjustment. A method based on statistical learning is described and examined to detect these systematic effects. Only when all accuracy-reducing influences are accounted for does the sensor's contribution become visible in the measurement data and thus mathematically parameterizable. The sensor's influence on the achievable accuracy of the bundle adjustment is described in three steps. The first step describes the relationship between selected EMVA1288 parameters and the uncertainty of a grey value. The second step models this grey-value uncertainty as the centre uncertainty of a target; to describe this uncertainty within the bundle adjustment, a stochastic model based on the EMVA1288 standard is proposed. Starting from the noise of the target centre, the third step approximates the uncertainty in object space by way of example using physically oriented simulations. The effectiveness of the proposed methods is demonstrated with real calibrations. Finally, the proposed methods and the results achieved are discussed, and an outlook on future investigations is given.
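For orientation, the linear camera model of the EMVA1288 standard links the temporal grey-value noise to the mean signal via the photon transfer relation (standard form; the thesis' exact parametrization may differ):

```latex
\[
  \sigma_{y}^{2} \;=\; \sigma_{y,\mathrm{dark}}^{2}
  \;+\; K\,\bigl(\mu_{y}-\mu_{y,\mathrm{dark}}\bigr),
\]
% where \mu_y and \sigma_y^2 are the mean and temporal variance of the grey
% value, K is the overall system gain (DN per electron), and the dark term
% collects dark noise and quantization noise. This grey-value uncertainty is
% what step one maps to the centre uncertainty of a target.
```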
Software is vital for modern society. The efficient development of correct and reliable software is of ever-growing importance. An important technique to achieve this goal is deductive program verification: the construction of logical proofs that programs are correct. In this thesis, we address three important challenges for deductive verification on its way to a wider deployment in the industry: 1. verification of thread-based concurrent programs 2. correctness management of verification systems 3. change management in the verification process. These are consistently brought up by practitioners when applying otherwise mature verification systems. The three challenges correspond to the three parts of this thesis (not counting the introductory first part, providing technical background on the KeY verification approach). In the first part, we define a novel program logic for specifying correctness properties of object-oriented programs with unbounded thread-based concurrency. We also present a calculus for the above logic, which allows verifying actual Java programs. The calculus is based on symbolic execution resulting in its good understandability for the user. We describe the implementation of the calculus in the KeY verification system and present a case study. In the second part, we provide a first systematic survey and appraisal of factors involved in reliability of formal reasoning. We elucidate the potential and limitations of self-application of formal methods in this area and give recommendations based on our experience in design and operation of verification systems. In the third part, we show how the technique of similarity-based proof reuse can be applied to the problems of industrial verification life cycle. We address issues (e.g., coping with changes in the proof system) that are important in verification practice, but have been neglected by research so far.
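KeY verifies Java programs against JML specifications by symbolic execution; as a hedged, illustrative sketch of the input format (a sequential toy example, not the concurrent programs treated in the first part, and not taken from the thesis):

```java
// Toy JML-annotated Java method of the kind KeY proves deductively
// (class and method names are invented for this sketch).
public class Account {
    private /*@ spec_public @*/ int balance;

    /*@ public normal_behavior
      @   requires amount > 0 && balance <= Integer.MAX_VALUE - amount;
      @   ensures  balance == \old(balance) + amount;
      @   assignable balance;
      @*/
    public void deposit(int amount) {
        balance += amount;
    }
}
```

From such a contract, KeY symbolically executes the method body and discharges the resulting proof obligations in its program logic.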
This dissertation investigates the emergence of dialects in a multi-agent simulation based on neural networks that is developed within this thesis. First, the linguistic foundations of language are illustrated: besides discussing some important definitions of language, a summary of the evolutionary steps of language evolution is given, followed by an overview of the elements of modern human languages, including the ways in which language changes. Subsequently, examples of the communicative behaviour of socially learning animals and the dialects it forms are presented.
The computational and mathematical foundations are then explained. Besides the terms model and simulation, these comprise the setup of multi-agent simulations and the functioning of artificial neural networks. Building on these foundations, the model of this dissertation is derived and described in detail. Results drawn from several hundred simulation runs are then presented: each destructive factor defined in the model is examined separately, and its domain is divided into intervals with different effects on the outcome of the simulation.
Furthermore, interdependences between the individual factors are shown, as well as the process of language merging after a prior dialect split. Results and outlook are followed by the specification, design, architecture, a detailed account of the implementation, and a user guide of the tool named DiaLex. DiaLex is a Java-based tool that allows users to simulate and analyse the influence of different destructive factors on dialect formation within a common language used by one or more communities of agents.
Scientific experimentation in special needs schools for pupils with intellectual disabilities
(2022)
Scientific experimentation in schools for pupils with intellectual disabilities: at schools with this special needs focus, pupils only rarely conduct scientific experiments. Yet even these pupils can carry out experiments when given learning materials suited to them; this study identifies the design criteria for such materials. The study also records how the pupils' experimental competence develops over a school year.