Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is used either to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of development. Such studies fall into the fields of Empirical Software Engineering (ESE) and Mining Software Repositories (MSR). Studies working with repository data frequently focus on their results. Results are statements derived from the data, or tools that assist in software development. This dissertation focuses instead on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, on declarative methods to perform heterogeneous analyses, and on higher-order methods used to operationalize threats to methods that operate on repositories. We summarize these as technical and methodological improvements for producing future empirical results more effectively. We contribute the following improvements. We propose a method to improve, in a theoretically founded way, the scalability of functions that abstract over repositories with a high number of revisions. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method through benchmarks, comparing a prototype against MSR/ESE competitors.
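The algebraic idea behind such incrementalization can be sketched as follows: if a repository abstraction is a fold of an associative operation with an identity element (a monoid) over per-revision summaries, new revisions can update a cached result without reprocessing the full history. All names here are illustrative, not the dissertation's actual interface.

```python
# Hypothetical sketch of incremental repository abstraction via a monoid:
# map each revision to a summary, combine summaries associatively.

def lines_added(revision):
    """Map one revision (here: a plain dict) to a monoid element."""
    return revision["added"]

def combine(a, b):
    """Associative operation of the monoid (here: addition)."""
    return a + b

IDENTITY = 0  # neutral element of the monoid

def summarize(revisions):
    """Non-incremental baseline: fold over the full history."""
    total = IDENTITY
    for rev in revisions:
        total = combine(total, lines_added(rev))
    return total

class IncrementalSummary:
    """Incremental variant: each new revision updates the cached value
    without reprocessing the whole repository."""
    def __init__(self):
        self.value = IDENTITY

    def push(self, revision):
        self.value = combine(self.value, lines_added(revision))

history = [{"added": 10}, {"added": 3}, {"added": 7}]
inc = IncrementalSummary()
for rev in history:
    inc.push(rev)
assert inc.value == summarize(history) == 20
```

Associativity is what allows the per-revision work to be reordered, parallelized, or cached, which is the essence of scaling such abstractions.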
We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, using concepts from declarative logic programming combined with ideas from megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with Datalog-like languages, originating from architecture recovery, source-code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of such a method in a case study. We propose a higher-order method to improve the disambiguation of threats to MSR/ESE methods. We focus on better disambiguating threats through simulation, by operationalizing the reasoning about threats and making their impact on a valid data-analysis methodology explicit. We encourage researchers to build "fake" simulations of their MSR/ESE scenarios to operationalize relevant insights about alternative plausible results, negative results, potential threats, and the data-analysis methods used. We show that this kind of simulation-based testing contributes to disambiguating threats in published MSR/ESE research.
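A Datalog-style analysis over a heterogeneous stack can be illustrated with a bottom-up fixpoint over facts extracted by per-technology extractors; the fact names and the toy rule below are invented for illustration and are not the dissertation's actual schema or language.

```python
# Facts from different technologies merged into one relational fact base:
depends = {
    ("app.jar", "lib.jar"),    # extracted from a build script
    ("lib.jar", "core.jar"),   # extracted from a build script
    ("Main.java", "app.jar"),  # extracted from source-code analysis
}

def transitive_closure(edges):
    """Evaluate the Datalog rule  reach(X,Z) :- reach(X,Y), edge(Y,Z)
    bottom-up until a fixpoint is reached."""
    reach = set(edges)
    while True:
        new = {(x, z) for (x, y1) in reach for (y2, z) in edges if y1 == y2}
        if new <= reach:
            return reach
        reach |= new

reach = transitive_closure(depends)
# The derived dependency crosses technology boundaries:
assert ("Main.java", "core.jar") in reach
```

The point of the declarative formulation is that the same rule applies uniformly once heterogeneous artifacts are mapped into a common fact representation.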
Herein, the particle size distributions (PSDs) and shape analysis of particles bioproduced in vivo from aqueous Au³⁺ and Eu³⁺ solutions by the cyanobacterium Anabaena sp. are examined in detail at the nanoscale. Generally, biosynthesis is affected by numerous parameters. Therefore, it is challenging to find the key set points for generating tailored nanoparticles (NPs). PSDs and shape analysis of the Au- and Eu-NPs were performed with ImageJ using high-resolution transmission electron microscopy (HR-TEM) images. As the HR-TEM image analysis reflects only a fraction of the detected NPs within the cells, additional PSDs of the complete cell were performed to determine the NP count and to evaluate the different accuracies. Furthermore, local PSDs were carried out at five randomly selected locations within a single cell to identify local hotspots or agglomerations. The PSDs show that particle size depends mainly on contact time, while the particle shape is hardly affected. The particles formed are distributed quite evenly within the cells. HR-PSDs for Au-NPs show an average equivalent circular diameter (ECD) of 8.4 nm (24 h) and 7.2 nm (51 h). In contrast, Eu-NPs preferably exhibit an average ECD of 10.6 nm (10 h) and 12.3 nm (244 h). Au-NPs are classified predominantly as “very round”, with an average reciprocal aspect ratio (RAR) of ~0.9 and a Feret major axis ratio (FMR) of ~1.17. Eu-NPs mainly belong to the “rounded” class, with a smaller RAR of ~0.6 and an FMR of ~1.3. These results show that an increase in contact time is not accompanied by average particle growth for Au-NPs, but by a doubling of the particle number. Anabaena sp. is capable of biosorbing and bioreducing dissolved Au³⁺ and Eu³⁺ ions from aqueous solutions, generating nano-sized Au and Eu particles, respectively.
Therefore, it is a low-cost, non-toxic and effective candidate for a rapid recovery of these sought-after metals via the bioproduction of NPs with defined sizes and shapes, providing a high potential for scale-up.
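The shape descriptors named in this abstract follow directly from standard per-particle measurements of the kind ImageJ reports (projected area, fitted-ellipse axes, Feret diameters); the example particle values below are invented for illustration.

```python
import math

def ecd(area_nm2):
    """Equivalent circular diameter: diameter of a circle whose area
    equals the particle's projected area."""
    return math.sqrt(4.0 * area_nm2 / math.pi)

def reciprocal_aspect_ratio(minor_axis, major_axis):
    """RAR = minor/major axis of the fitted ellipse; 1.0 is perfectly round."""
    return minor_axis / major_axis

def feret_major_axis_ratio(feret_max, major_axis):
    """FMR = maximum Feret diameter over the ellipse major axis."""
    return feret_max / major_axis

# Invented example particle with a projected area of 55.4 nm²:
d = ecd(55.4)
assert abs(d - 8.4) < 0.1  # an ECD of ~8.4 nm, as in the Au-NP average
```

Reporting ECD alongside RAR and FMR separates the size information from the roundness information, which is why the abstract can state that contact time changes size but hardly shape.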
This thesis explores and examines the effectiveness of traditional machine learning (ML), neural network (NN), and state-of-the-art deep learning (DL) models for identifying mental distress indicators in social media discourse from Reddit and Twitter, platforms heavily used by teenagers. Different NLP vectorization techniques such as TF-IDF, Word2Vec, GloVe, and BERT embeddings are employed with ML models such as Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM), followed by NN models such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM), to methodically analyze their impact as feature representations. DL models such as BERT, DistilBERT, MentalRoBERTa, and MentalBERT are fine-tuned end-to-end for the classification task. The thesis also compares text preprocessing techniques such as tokenization, stopword removal, and lemmatization to assess their impact on model performance. Systematic experiments with different configurations of vectorization and preprocessing techniques across model types and categories were conducted to find the most effective configurations and to gauge the models' strengths, limitations, and ability to detect and interpret mental distress indicators in text. The analysis reveals that the MentalBERT DL model significantly outperformed all other models: its specific pretraining on mental health data and rigorous end-to-end fine-tuning gave it an edge in detecting nuanced linguistic indicators of mental distress in complex contextual text. These insights underscore the high potential of ML and NLP technologies for developing AI systems for mental health analysis.
This thesis lays the foundation for future work, demonstrating the need for a collaborative approach among experts from different domains, and for exploring next-generation large language models, to develop robust, clinically approved mental health AI systems.
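The TF-IDF vectorization paired with the classical ML models above can be sketched in a few lines of pure Python; a real experiment would use a library implementation (e.g. scikit-learn's `TfidfVectorizer`), and the toy documents below are invented.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per tokenized document.
    tf = raw count / document length, idf = log(N / df), a common variant."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({
            t: (c / len(doc)) * math.log(n / df[t])
            for t, c in counts.items()
        })
    return vectors

docs = [["i", "feel", "fine"],
        ["i", "feel", "hopeless"],
        ["all", "fine"]]
vecs = tfidf(docs)
# "hopeless" appears in only one document, "i" in two, so "hopeless"
# receives the higher weight in the second document:
assert vecs[1]["hopeless"] > vecs[1]["i"]
```

This also illustrates the limitation that motivates the contextual embeddings in the thesis: TF-IDF weights each token independently of its context.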
Coat color and pattern are distinguishing features in mammalian carnivores, shaped by climatic cycles and habitat type. They can be expressed in various ways, such as gradients, polymorphisms, and rare color variants. Although natural selection explains much of the phenotypic variation found in the wild, genetic drift and heterozygote deficiency, as prominent in small and fragmented populations, may also affect phenotypic variability through the fixation of recessive alleles. The aim of this study was to test whether rare color variants in the wild could relate to a deficiency of heterozygotes, resulting from habitat fragmentation and small population size. We present an overview of all rare color variants in the order Carnivora, and compiled demographic and genetic data of the populations where they did and did not occur, to test for significant correlations. We also tested how phylogeny and body weight influenced the presence of color variants with phylogenetic generalized linear mixed models (PGLMMs). We found 40 color-variable species and 59 rare color variants. In 17 phenotypically variable populations for which genetic diversity data were available, the average allelic richness (AR) was 4.18, with HO = 0.59, HE = 0.66, and FIS = 0.086. We found that variable populations displayed a significant reduction in heterozygosity and allelic richness compared to non-variable populations across species. We also found a significant negative correlation between population size and inbreeding coefficients. Therefore, it is possible that small effective size had phenotypic consequences on the extant populations. The high frequency of the rare color variants (averaging 20%) also implies that genetic drift is locally overruling natural selection in small effective populations. As such, rare color variants could be added to the list of phenotypic consequences of inbreeding in the wild.
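The inbreeding coefficient reported here is conventionally computed from observed (HO) and expected (HE) heterozygosity as Wright's FIS = 1 − HO/HE; the values below are illustrative, not the study's per-population data.

```python
def f_is(h_obs, h_exp):
    """Wright's inbreeding coefficient FIS = 1 - HO/HE;
    positive values indicate a heterozygote deficiency."""
    return 1.0 - h_obs / h_exp

# A population with fewer heterozygotes than expected:
assert round(f_is(0.59, 0.66), 3) == 0.106
```

Note that the mean of per-population FIS values generally differs from the FIS of the averaged HO and HE, which is consistent with the reported mean FIS (0.086) not equaling 1 − 0.59/0.66.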
The production of isolated metallic nanoparticles with multifunctionalized properties, such as size and shape, is crucial for biomedical, photocatalytic, and energy storage or remediation applications. This study investigates the initial particle formation of gold nanoparticles (AuNPs) bioproduced in the cyanobacterium Anabaena sp., using high-resolution transmission electron microscopy images for digital image analysis. The developed method enabled the discovery of cerium nanoparticles (CeNPs), which were biosynthesized in the cyanobacterium Calothrix desertica. The particle size distributions (PSDs) for AuNPs and CeNPs were analyzed. After 10 h, the average equivalent circular diameter for AuNPs was 4.8 nm, while for CeNPs it was approximately 5.2 nm after 25 h. The initial shape of AuNPs was sub-round to round, while the shape of CeNPs was rounder due to their amorphous structure and their formation being restricted to heterocysts. The local PSDs indicate that the maturation of AuNPs begins in the middle of vegetative cells and near the cell membrane, compared to the other regions of the cell.
Well-being is essential for all people. Therefore, important factors influencing people’s well-being must be investigated. Well-being is multifaceted and defined as, for example, psychological, emotional, mental, physical, or social well-being. Here, we focus on psychological well-being. The study aimed to analyze different aspects of connectedness as potential predictors of psychological well-being. For this purpose, we conducted a study examining the psychological well-being of 184 participants (130 women, 54 men, age: M = 31.39, SD = 15.24) as well as their connectedness with oneself (self-love), with others (prosocialness), with nature (nature connectedness), and with the transcendent (spirituality). First, significant positive correlations appeared between psychological well-being and self-love, nature connectedness, and spirituality. Furthermore, correlations between the four aspects of connectedness were significant, except for the relationship between self-love and prosocialness. A regression analysis revealed that self-love and nature connectedness positively predicted participants’ psychological well-being, while spirituality and prosocialness did not explain any incremental variance. The strong relationship between self-love and well-being was partly mediated by nature connectedness. Hence, self-love, understood as a positive attitude of self-kindness, should be considered in more detail to enhance psychological well-being. Besides this, a more vital connectedness to the surrounding nature could benefit people’s well-being.
The title compound, [Fe(C5H5)(C21H24NO2)], which is produced by the oxidation of 1-(4-tert-butylphenyl)-2-ethyl-3-ferrocenylpyrrole, crystallizes as a racemic mixture in the centrosymmetric space group P21/n. The central heterocyclic pyrrole ring system subtends dihedral angles of 13.7 (2)° with respect to the attached cyclopentadienyl ring and of 43.6 (7)° with the major component of the disordered phenyl group bound to the N atom. The 4-tert-butylphenyl group, as well as the non-substituted Cp ring are disordered with s.o.f. values of 0.589 (16) and 0.411 (16), respectively. In the crystal, molecules with the same absolute configuration are linked into infinite chains along the b-axis direction by O—H···O hydrogen bonds between the hydroxy substituent and the carbonyl O atom of the adjacent molecule.
This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which encodes important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in various fields such as autonomous driving, robotics, and environmental monitoring.
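The evaluation metric in this pipeline, the root mean square error over corresponding points of the registered clouds, can be sketched as follows (the paper additionally normalizes it; the point data below are invented).

```python
import math

def rmse(points_a, points_b):
    """RMSE over paired 3D points, e.g. after applying an estimated
    extrinsic calibration to one of the two point clouds."""
    assert len(points_a) == len(points_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b))
    return math.sqrt(sq / len(points_a))

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
est = [(0.07, 0.0, 0.0), (1.07, 0.0, 0.0)]  # a uniform 7 cm offset
assert abs(rmse(ref, est) - 0.07) < 1e-9    # matches the reported 7 cm
```

In practice the correspondences themselves come from the FPFH feature matching step, which is exactly why the distinct scanning patterns make the problem hard.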
Focusing on the triangulation of detective fiction, masculinity studies and disability studies, "Investigating the Disabled Detective – Disabled Masculinity and Masculine Disability in Contemporary Detective Fiction" shows that disability challenges common ideals of (hegemonic) masculinity as represented in detective fiction. After a theoretical introduction to the relevant focal points of the three research fields, the dissertation demonstrates that even the archetypal detectives Dupin and Holmes undermine certain nineteenth-century masculine ideals with their peculiarities. Shifting to contemporary detective fiction and adopting a literary disability studies perspective, the dissertation investigates how male detectives with a form of neurodiversity or a physical impairment negotiate their masculine identity in light of their disability in private and professional contexts. It argues that the occupation as a detective supports the disabled investigator to achieve ‘masculine disability’. Inversing the term ‘disabled masculinity’, predominantly used in research, ‘masculine disability’ introduces a decisively gendered reading of neurodiversity and (acquired) physical impairment in contemporary detective fiction. The term implies that the disabled detective (re)negotiates his masculine identity by implementing the disability in his professional investigations and accepting it as an important, yet not defining, characteristic of his (gender) identity. By applying this approach to five novels from contemporary British and American detective fiction, the dissertation demonstrates that masculinity and disability do not negate each other, as commonly assumed. Instead, it emphasises that disability allows the detective, as much as the reader, to rethink masculinity.
X-ray computed tomography (XRT) is a three-dimensional (3D), non-destructive, and reproducible investigation method capable of visualizing and examining internal and external structures of components independent of the material and geometry. In this work, XRT with its unique abilities complements conventionally utilized examination methods for the investigation of microstructure weakening induced by hydrogen corrosion and furthermore provides a new approach to corrosion research. The motivation for this is the current inevitable transformation to hydrogen-based steel production. Refractories of the system Al2O3-SiO2 are significant as lining materials. Two exemplary material types A and B, which differ mainly in their Al2O3:SiO2 ratio, are examined here using XRT. Identical samples of the two materials are measured, analyzed, and then compared before and after hydrogen attack. In this context, hydrogen corrosion-induced porosity and its spatial distribution and morphology are investigated. The results show that sample B has a higher resistance to hydrogen-induced attack than sample A. Furthermore, the 3D representation revealed a differential porosity increase within the microstructure.
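The porosity quantification typically derived from a segmented XRT volume reduces to a voxel fraction; the tiny invented binary volume below stands in for a real tomogram, where pores are segmented to 1 and solid to 0.

```python
def porosity(volume):
    """Pore fraction of a segmented tomogram given as nested lists of
    0 (solid) / 1 (pore) voxels: pore voxels / total voxels."""
    voxels = [v for plane in volume for row in plane for v in row]
    return sum(voxels) / len(voxels)

# Invented 2x2x2 volumes before and after hydrogen attack:
before = [[[0, 0], [0, 1]], [[0, 0], [0, 0]]]  # 1 pore voxel of 8
after  = [[[0, 1], [0, 1]], [[1, 0], [0, 0]]]  # 3 pore voxels of 8
assert porosity(before) == 0.125
assert porosity(after) > porosity(before)      # corrosion-induced increase
```

Comparing such fractions per subregion, rather than only globally, is what reveals the differential (spatially resolved) porosity increase mentioned in the abstract.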
In this work, the biodegradable polymers polylactic acid (PLA) and polyhydroxybutyrate (PHB), which are produced from renewable raw materials, were coated with amorphous hydrogenated carbon (a-C:H) films at different angular positions and with various thicknesses. Like conventional polymers, biopolymers often have surface properties unsuitable for industrial purposes, e.g. low hardness. For some applications it is therefore necessary and advantageous to modify the surface properties of biopolymers while retaining the main properties of the substrate material. A suitable surface modification is the deposition of thin a-C:H films. Their properties depend substantially on the sp²/sp³ hybridization ratio of the carbon atoms and on the hydrogen content. In the present work, the sp²/sp³ ratio was to be controlled by varying the coating geometry. Since coatings at 0°, directly in front of the plasma source, exhibit a higher sp³ fraction and indirectly coated samples (180°) a higher sp² fraction, this work shows that it is possible to control the sp²/sp³ ratio. For this purpose, the individual samples were placed at angles of 0, 30, 60, 90, 120, 150, and 180° in front of the plasma source and coated for durations of 2.5, 5.0, 7.5, and 10.0 minutes. For the 0° angle, this yielded film thicknesses of 25, 50, 75, and 100 nm. The a-C:H films were all deposited by radio-frequency plasma-enhanced chemical vapor deposition with acetylene as the carbon and hydrogen source, after the samples had been pretreated with an oxygen plasma for 10 minutes. After this O₂ treatment and the a-C:H deposition, the surfaces are examined with macroscopic and microscopic measurement methods and the data subsequently analyzed.
The surface morphology is recorded by scanning electron microscopy and atomic force microscopy. In this way, information about film stability and surface roughness can also be collected. Contact angle (CA) measurements determine not only the wettability but also the contact angle hysteresis, by pumping the droplet volume up and down. By measuring the CA of different liquids and comparing them, the surface free energy (SFE) and its polar and dispersive components are determined. Changes in the barrier properties are checked by water vapor transmission rate (WVTR) tests. Chemical analysis of the surface is carried out on the one hand by Fourier-transform infrared spectroscopy with specular reflection and on the other hand by synchrotron-based techniques such as near-edge X-ray absorption fine structure and X-ray photoelectron spectroscopy. When analyzing the surfaces after the O₂ treatment, which was initially assumed to serve only to clean and activate the surface for the a-C:H coating, it was found that the changes are more drastic than originally assumed. If PLA is treated at 0° for 10 minutes, for example, the roughness increases fivefold. With increasing angle it decreases again until, at 180°, it corresponds to the initial value. For PHB, in contrast, a similar value is measured throughout. For both polymers it can be shown that the polar component of the SFE increases. In the WVTR, a decrease in permeability is observed for PLA and an increase over the initial value for PHB. The chemical surface analysis shows that the O₂ treatment has hardly any effect on the surface bonds.
Overall, this work shows that the O₂ treatment affects the surface properties and cannot be regarded solely as a cleaning and activation process. With direct a-C:H coating (at 0°), film failure due to intrinsic stress is observed at 10.0 minutes for both PLA and PHB. For PHB this can also be seen, to a lesser extent, at 30°. The permeability of the polymers is reduced by 47% with a five-minute coating, and the 10.0-minute film continues this effect despite the cracks that occur. For both polymer types, the deposition of a-C:H films shows a dominance of sp³ bonds for direct coating. With increasing angle this decreases, and for indirect coatings sp² bonds become dominant. This result is similar for all film thicknesses; only the angle at which the dominant bond type changes differs. It is shown that it is possible to control the surface properties through an angle-dependent coating and thus to control the sp²/sp³ ratio.
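Splitting the surface free energy into polar and dispersive components from contact angles of several test liquids is commonly done with the Owens–Wendt (OWRK) evaluation; the sketch below uses that method with invented liquid data and angles, and the thesis may well use a different evaluation scheme.

```python
import math

def owrk(liquids):
    """liquids: list of (theta_deg, gamma_l, gamma_l_disp, gamma_l_polar)
    in mN/m. Fit the Owens-Wendt line
        (1+cos θ)·γ_l / (2√γ_l^d) = √γ_s^d + √γ_s^p · √(γ_l^p/γ_l^d)
    by least squares; returns (dispersive, polar) parts of the solid's SFE."""
    xs, ys = [], []
    for theta, g, gd, gp in liquids:
        xs.append(math.sqrt(gp / gd))
        ys.append((1 + math.cos(math.radians(theta))) * g / (2 * math.sqrt(gd)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept ** 2, slope ** 2  # γ_s^d, γ_s^p

# Invented example with water (72.8 mN/m; d=21.8, p=51.0) and
# diiodomethane (50.8 mN/m; fully dispersive) and made-up angles:
disp, polar = owrk([(70.0, 72.8, 21.8, 51.0), (40.0, 50.8, 50.8, 0.0)])
assert disp > 0 and polar > 0
```

An increase in the fitted slope term after O₂ treatment would correspond to the growth of the polar SFE component reported above.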
In business relationships, e.g. in international railway operations, large amounts of data can be exchanged between the parties involved. For such exchanges, it is expected that the risk of being defrauded, e.g. by receiving forged data, is marginal, and that costs are reasonable and profit predictable. Since the exchanged data can be used for important business decisions, a party has a strong incentive to manipulate the data in its favor. To prevent this kind of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. Combined with a fair exchange protocol, it can be ensured that the integrity and authenticity of the data are preserved even across an exchange with another party. At the same time, such a protocol ensures that the data is exchanged only together with the agreed consideration, e.g. a payment, but also that the payment is made only if the integrity and authenticity of the data are given as previously agreed. To guarantee fairness, however, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as the trusted third party, current research proposes decentralizing the trusted third party, e.g. by using a distributed-ledger-based fair exchange protocol. When assessing the fairness of such an exchange, however, current approaches neglect the costs incurred by the parties performing the fair exchange. This can lead to a violation of the outlined expectation of reasonable costs, especially when distributed ledgers are involved, whose use is usually associated with non-negligible costs.
Moreover, the speed of typical distributed-ledger-based fair exchange protocols is limited, which stands in the way of widespread use.
To overcome these challenges, in this thesis we present the foundation for a data exchange platform that enables a fully decentralized fair exchange at reasonable cost and speed. As a theoretical foundation, we introduce the concept of cost fairness, which takes costs into account in the fairness assessment by requiring that a party following the fair exchange protocol never suffers unilateral disadvantages. We prove that cost fairness cannot be achieved with typical public distributed ledgers but requires tailor-made distributed ledger instances, which are usually not fully decentralized. However, we show that most unilateral costs are caused by attacks of a dishonest party in the course of a grieving attack. To enable a fair exchange at reasonable cost and speed, we present FairSCE, a distributed-ledger-based fair exchange protocol that uses distributed ledger state channels and contains a mechanism to protect against grieving attacks, reducing the possible unilateral costs to a minimum. Based on our evaluation of FairSCE, the worst-case costs for the data exchange are known even in the presence of malicious parties, which allows an estimate of the possible benefit and thus a preliminary estimate of the economic utility. To enable unambiguous verification of the correctly transferred data while masking sensitive parts of the data, we additionally present an approach for hashing hierarchically structured data that can be used to ensure the integrity and authenticity of the transferred data.
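The idea of hashing hierarchically structured data so that sensitive subtrees can be masked without invalidating the overall hash can be sketched Merkle-style: each inner node hashes the hashes of its children, so a hidden subtree can be replaced by its precomputed hash. This mirrors the idea named in the abstract; the thesis's concrete construction may differ, and the `__masked__` marker and data are invented here.

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def tree_hash(node):
    """Leaves hash their JSON value; inner dicts hash the map of child
    hashes. A node of the form {"__masked__": "<hash>"} stands in for a
    hidden subtree and contributes its precomputed hash unchanged."""
    if isinstance(node, dict):
        if set(node) == {"__masked__"}:
            return node["__masked__"]
        child = json.dumps({k: tree_hash(v) for k, v in node.items()},
                           sort_keys=True)
        return h(child.encode())
    return h(json.dumps(node).encode())

doc = {"waybill": {"id": 42, "route": "A-B"}, "price": 1000}
root = tree_hash(doc)

# Mask the commercially sensitive price, keeping only its hash:
masked = {"waybill": {"id": 42, "route": "A-B"},
          "price": {"__masked__": tree_hash(1000)}}
assert tree_hash(masked) == root  # the root hash still verifies
```

The receiving party can thus verify integrity and authenticity of the disclosed parts against a signed root hash without ever seeing the masked values.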
How to begin? This short question addresses a problem that is anything but simple, especially when regarding something as sophisticated and multilayered as musical theatre. However, scholars of this vast research area have mostly neglected this question so far. This study analyses and compares the initial sections of late Victorian popular musical theatre and is therefore a contribution to several fields of research: the analysis of initial sections of musical theatre in general, the analysis of the music of popular musical theatre in particular, and therefore operetta studies. The 1890s are especially interesting times for popular musical theatre in London: The premiered works include the last collaborations of Gilbert and Sullivan as well as offshoots of Savoy opera; but the so-called ‘naughty nineties’ also saw the emergence of a new genre, musical comedy, which captured the late Victorian zeitgeist like no other. This new form of theatrical entertainment was carefully and consciously constructed and promoted as modern and fashionable, walking a fine line between respectability and mildly risqué excitement.
Because a deep understanding of the developments and new tendencies concerning popular musical theatre in the 1890s is crucial in order to interpret differences as well as similarities, the analyses of the opening numbers are preceded by a detailed discussion of the relevant genres: comic opera, musical comedy, musical play and operetta. Since the producers of the analysed works wanted to distance themselves from former and supposedly old-fashioned traditions, this book also considers influences from their British predecessors, but also from Viennese operetta and French opéra bouffe.
Reducing gender bias in STEM is key to generating more equality and contributing to a more balanced workforce in this field. Spatial ability and its components are cognitive processes crucial to success in STEM education and careers. Significant gender differences have consistently been found in mental rotation (MR), the ability to mentally transform two- and three-dimensional objects. The aim of this pilot study is to examine factors in psychological assessment which may contribute to gender differences in MR performance. Moreover, the findings will inform the development of new approaches to assessment using computer adaptive testing (CAT). (1) Background: The study examines the impact of emotional regulation on MR performance in primary school children with a mean age of 9.28 years. (2) Methods: Skin conductance was measured to assess the impact of emotional reactivity (ER) on performance during an MR task. (3) Results: Patterns of ER influence response time (RT) on specific items in the task. (4) Conclusions: Identifying the effects of emotional arousal and issues of test construction such as stereotyped stimuli and item difficulty in tests of spatial ability warrants ongoing investigation. It is vital to ensure that these factors do not compromise the accurate measurement of performance and inadvertently contribute to the gender gap in STEM.
Examining the role of post-event processing in test anxiety—Pilot testing in three student samples
(2024)
This work investigates the occurrence of post-event processing (PEP) in the context of test anxiety; PEP involves rumination and self-critical thinking following an event and is commonly observed in social anxiety. Three short-term longitudinal studies in student samples examined whether PEP occurs after exams and how it is associated with test anxiety. University students (N = 35 in Study 1, N = 146 in Study 2, and N = 37 in Study 3) completed measures of trait and state test anxiety before an actual exam; PEP related to the exam was assessed at various time points afterward. Results revealed that PEP occurred to a meaningful extent after exam situations. Overall, it was positively associated with trait and state test anxiety, although some variations in the relations were found across the three studies. These findings underscore the relevance of PEP in the context of test anxiety, as PEP might contribute to maintaining test anxiety in the long term. Implications for future studies are discussed.
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e. it holds in the initial state of the system and is invariant under updates of the system. Sometimes this is not the case for the condition itself, but for a stronger condition it is. In this thesis we propose a method for goal-directed invariant strengthening.
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices in a graph relies on properties on their distances to other nodes. One possibility to prove properties of those graphs or to analyze relations between the graph classes is to prove or disprove that one graph class is contained in the other. In this thesis we propose a method for checking inclusions between geometric graph classes.
Recent studies show that biofilm substances in contact with nanoplastics play an important role in the aggregation and sedimentation of nanoplastics. Consequences of these processes are changes in biofilm formation and stability and changes in the transport and fate of pollutants in the environment. Having a deeper understanding of the nanoplastics–biofilm interaction would help to evaluate the risks posed by uncontrolled nanoplastic pollution. These interactions are impacted by environmental changes due to climate change, such as the acidification of surface waters. We apply fluorescence correlation spectroscopy (FCS) to investigate the pH-dependent aggregation tendency of non-functionalized polystyrene (PS) nanoparticles (NPs) due to intermolecular forces with model extracellular biofilm substances. Our biofilm model consists of bovine serum albumin (BSA), which serves as a representative for globular proteins, and the polysaccharide alginate, which is a main component in many biofilms, in solutions containing Na+ with an ionic strength realistic for freshwater conditions. Biomolecule concentrations ranging from 0.5 g/L up to a maximum of 21 g/L are considered. We use non-functionalized PS NPs as a representative of mostly negatively charged nanoplastics. BSA promotes NP aggregation through adsorption onto the NPs and BSA-mediated bridging. In BSA–alginate mixtures, the alginate hampers this interaction, most likely due to alginate–BSA complex formation. In most BSA–alginate mixtures, as in alginate alone, NP aggregation is predominantly driven by weaker, pH-independent depletion forces. The stabilizing effect of alginate is only weakened at high BSA contents, when the electrostatic BSA–BSA attraction is not sufficiently screened by the alginate. This study clearly shows that it is crucial to consider correlative effects between multiple biofilm components to better understand NP aggregation in the presence of complex biofilm substances.
Single-component biofilm model systems based on comparing the total organic carbon (TOC) content of the extracellular biofilm substances, as usually considered, would have led to a misjudgment of the stability towards aggregation.
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used across a number of industries, including education. AI applications are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding ChatGPT integration into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags; sarcastic text is then detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embedding uses the Word2Vec model to capture semantic relationships between words.
The preprocessed text and extracted features are input into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. Results indicate that the proposed framework outperforms conventional sentiment analysis models. The HR-RAN technique achieved a precision of 98.98%, recall of 99.23%, F-measure of 99.10%, accuracy of 98.88%, and specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
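The text-cleanup step described above (handling emoticons, URLs, and tags) can be sketched with stdlib regexes. This is only a hedged illustration of that one step; the function name and the exact patterns are assumptions, and the framework's actual pipeline (XLHN sarcasm filtering, NTDMF topic modelling, HR-RAN classification) is far richer:

```python
import re

def preprocess(text):
    """Illustrative cleanup of raw review text before feature extraction."""
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"<[^>]+>", " ", text)        # drop HTML tags
    text = re.sub(r"[@#]\w+", " ", text)        # drop handles / hashtags
    return " ".join(text.split())               # normalise whitespace
```

In a full pipeline, the cleaned sentences would then be grouped by similarity and passed to topic modelling and classification.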
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT's ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models like BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e., ways of instructing ChatGPT, this work also analyzed how well the model understands complex linguistic nuances in the given texts, using qualitative metrics. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence, not just left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its context understanding. Despite this, ChatGPT performed comparably to XLNet and BERT in three-label scenarios and outperformed the remaining models; fine-tuned models excelled in five-label cases. Moreover, ChatGPT has shown impressive knowledge of the language. Chain-of-Thought (CoT), which prompts with step-by-step instructions, was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable.
Addressing ChatGPT’s limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
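The Chain-of-Thought technique named above can be illustrated by how such a prompt might be assembled; the wording below is an assumption for illustration, not the thesis's actual prompt, and `cot_sentiment_prompt` is a hypothetical helper name:

```python
LABELS_3 = ["Negative", "Neutral", "Positive"]

def cot_sentiment_prompt(review, labels=LABELS_3):
    """Build a CoT-style sentiment prompt (illustrative wording)."""
    return (
        "Classify the sentiment of the following course review.\n"
        f"Allowed labels: {', '.join(labels)}.\n"
        "Think step by step: first note the sentiment-bearing phrases, "
        "then weigh them, then output exactly one label.\n\n"
        f"Review: {review}\nAnswer:"
    )
```

The string would be sent to the model; the five-label variant simply passes the longer label list.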
Predictive process monitoring is becoming increasingly established as a tool for supporting business operations in companies. However, most software applications available today require extensive technical know-how on the part of the operator and are therefore unsuitable for most real-world scenarios. This thesis therefore presents a prototypical implementation of a predictive process monitoring dashboard in the form of a web application. The system is based on the PPM Camunda plugin introduced by Bartmann et al. (2021) and allows users to easily create metrics, visualizations displaying these metrics, and dashboards in which the visualizations can be arranged. A usability test with test users of varying computer skills is conducted to confirm the application's usability.
Challenges of Implementing Innovation Strategies at Large Organizations: A case of Lotte Group
(2023)
For many decades, one of the most important focuses of research has been determining whether there is a correlation between the size of an organization and its level of innovation. Unlike small companies, large companies often have well-established structures that are hard to change, and change management seems to be much more difficult, especially where innovation is concerned. Nevertheless, there are many examples proving the opposite: some large organizations, like Apple and Amazon, consistently show great innovation efforts and keep changing in a positive way. Therefore, the aim of this thesis is to discuss how large organizations can implement innovation despite having considerable drawbacks compared to SMEs. Through a qualitative research approach, the researcher explored essential information on the innovation strategies that large companies use in order to innovate, and how they can overcome existing challenges, by studying the working processes of Lotte Group, one of the biggest companies in Korea.
Over the past ten years, policymakers worldwide have increasingly focused on the creative industries as a significant driver of growth and employment in cities. The literature shows, however, that creative workers are among the most vulnerable workforces in today's economy. Due to the highly deregulated and strongly individualized environment, failure or success tends to be attributed to individual skills and commitment, while structural or collective aspects are neglected. This thesis is devoted to temporal, spatial, and social aspects of digital behavioral data in order to show that there are in fact structural and historical factors that affect the careers of individuals and groups. To this end, the thesis offers a computational social science research framework that combines theoretical and empirical knowledge from years of research on inequality with computational methods for handling complex and large-scale digital data. The thesis begins by presenting a novel method for gender inference that draws on image search and face recognition. The analysis of the collaborative behavior and citation networks of male and female computer scientists highlights some of the historical biases and disadvantages that women face in their scientific careers. To further elaborate the temporal aspects of inequality, the share of vertical and horizontal inequality is examined in different cohorts of scientists who began their careers at different points in time. Furthermore, some of the underlying mechanisms and processes of inequality in creative professions are analyzed, such as the Matthew effect and the hipster paradox. Finally, this thesis shows that online platforms such as Wikipedia can reflect and reinforce existing biases.
The diversity within amphibian communities in cultivated areas in Rwanda and within two selected, taxonomically challenging groups, the genera Ptychadena and Hyperolius, were investigated in this thesis. The amphibian community of an agricultural wetland near Butare in southern Rwanda comprised 15 anuran species. Rarefaction and jackknife analyses corroborated that the complete current species richness of the assemblage had been recorded, and the results of acoustic niche analysis suggested species saturation of the community. Surveys at many other Rwandan localities showed that the species recorded in Butare are widespread in cultivated and pristine wetlands. The species were readily distinguishable using morphological, bioacoustic, and molecular (DNA barcoding) features, but only eight of the 15 species could be assigned unambiguously to nominal species. The remaining species represented undescribed or currently unrecognized taxa, including three species of Hyperolius, two Phrynobatrachus species, one Ptychadena species, and one species of Amietia. The diversity of the Ridged Frogs in Rwanda was investigated in two studies (Chapters III and IV). Three species of Ptychadena were recorded in wetlands in the catchment of the Nile. They can be distinguished by morphological characters (morphometrics and qualitative features) as well as by their advertisement calls and genetics. The Rwandan species of the P. mascareniensis group was shown to differ from the topotypic population as well as from other genetic lineages in sub-Saharan Africa, and an old available name, P. nilotica, was resurrected from synonymy for this lineage. Two further Ptychadena species were identified among voucher specimens from Rwanda deposited in the collection of the RMCA, P. chrysogaster and P. uzungwensis. Morphologically they can be unambiguously distinguished from each other and the three other Rwandan species.
A key based on qualitative morphological characters was developed, which allows unequivocal identification of specimens of all species that have been recorded from Rwanda. DNA was isolated from a Rwandan voucher specimen of P. chrysogaster, and the genetic analysis corroborated the species' distinct status.
A species of Hyperolius collected in the Nyungwe National Park was compared to all other Rwandan species of the genus and to morphologically or genetically similar species from neighbouring countries. Its distinct taxonomic status was justified by morphological, bioacoustic, and molecular evidence and it was described as a new species, H. jackie. A species of the H. nasutus group collected at agricultural sites in Rwanda was described as a new species in the course of a revision of the species of the Hyperolius nasutus group. The group was shown to consist of 15 distinct species which can be distinguished from each other genetically, bioacoustically, and morphologically.
The aerial performance, i.e. parachuting, of the Disc-fingered Reed Frog, Hyperolius discodactylus, was described. It represents a novel observation of a behaviour that has been known from a number of Southeast Asian and Neotropical frog species. Parachuting frogs, including H. discodactylus, exhibit certain morphological characteristics and, while airborne, assume a distinct posture which is best-suited for maneuvering in the air. Another study on the species addressed the validity of the taxon H. alticola which had been considered either a synonym of H. discodactylus or a distinct species. Type material of both taxa was re-examined and the status of H. alticola reassessed using morphological data from historic and new collections, call recordings, and molecular data from animals collected on recent expeditions. A northern and a southern genetic clade were identified, a divide that is weakly supported by diverging morphology of the vouchers from the respective localities. No distinction in advertisement call features could be recovered to support this split and both genetic and morphological differences between the two geographic clades are marginal and not always congruent and more likely reflect population-level variation. Therefore it was concluded that H. alticola is not a valid taxon and should be treated as a synonym of H. discodactylus.
On the recognition of human activities and the evaluation of its imitation by robotic systems
(2023)
This thesis addresses the problem of action recognition through the analysis of human motion and the benchmarking of its imitation by robotic systems.
For our action recognition related approaches, we focus on presenting approaches that generalize well across different sensor modalities. We transform multivariate signal streams from various sensors to a common image representation. The action recognition problem on sequential multivariate signal streams can then be reduced to an image classification task for which we utilize recent advances in machine learning. We demonstrate the broad applicability of our approaches formulated as a supervised classification task for action recognition, a semi-supervised classification task for one-shot action recognition, modality fusion and temporal action segmentation.
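A minimal sketch of the signal-to-image idea described above: each sensor channel becomes one image row, with values min-max normalised to the 0-255 grey range so that an image classifier can consume the sequence. The exact encoding used in the thesis is not specified here, so `signals_to_image` is an illustrative assumption rather than the thesis's method:

```python
def signals_to_image(signals):
    """Render a multivariate signal stream as a grayscale image.

    signals: list of channels, each a list of floats of equal length.
    Returns a 2D list: rows = channels, columns = time steps, values 0-255.
    """
    image = []
    for channel in signals:
        lo, hi = min(channel), max(channel)
        span = (hi - lo) or 1.0  # avoid division by zero for flat channels
        image.append([int(255 * (v - lo) / span) for v in channel])
    return image
```

The resulting array could then be fed to any off-the-shelf image classifier.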
For action classification, we use an EfficientNet Convolutional Neural Network (CNN) model to classify the image representations of various data modalities. Further, we present approaches for filtering and the fusion of various modalities on a representation level. We extend the approach to be applicable for semi-supervised classification and train a metric-learning model that encodes action similarity. During training, the encoder optimizes the distances in embedding space for self-, positive- and negative-pair similarities. The resulting encoder allows estimating action similarity by calculating distances in embedding space. At training time, no action classes from the test set are used.
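The one-shot use of the metric-learning encoder above can be illustrated with a toy nearest-reference classifier: given one labelled reference embedding per unseen action class, a query is assigned the label of the closest reference in embedding space. The encoder itself is treated as a black box here; `one_shot_classify` and the toy vectors are hypothetical, for illustration only:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_shot_classify(query_emb, reference_embs):
    """Assign the label whose single reference embedding is closest.

    reference_embs: dict mapping label -> one reference embedding.
    """
    return min(reference_embs,
               key=lambda lbl: euclidean(query_emb, reference_embs[lbl]))
```

Because only distances are needed, no reference class has to appear in the encoder's training set, matching the one-shot setting described in the text.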
Graph Convolutional Networks (GCNs) generalize the concept of CNNs to non-Euclidean data structures and have shown great success for action recognition when operating directly on spatio-temporal sequences like skeleton sequences. GCNs have recently shown state-of-the-art performance for skeleton-based action recognition but are currently widely neglected as a foundation for the fusion of various sensor modalities. We propose incorporating additional modalities, like inertial measurements or RGB features, into a skeleton graph via fusion on two different dimensionality levels. On the channel dimension, modalities are fused by introducing additional node attributes. On the spatial dimension, additional nodes are incorporated into the skeleton graph.
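The two fusion levels can be sketched on a tiny skeleton graph: channel-level fusion appends extra attributes to each joint node's feature vector, while spatial-level fusion adds a new node (e.g. for an IMU) with its own edge into the adjacency matrix. Function names, shapes, and the attachment scheme are illustrative assumptions, not the thesis's implementation:

```python
def fuse_channel(node_features, extra_attrs):
    """Channel-level fusion: concatenate extra attributes per node."""
    return [f + e for f, e in zip(node_features, extra_attrs)]

def fuse_spatial(adjacency, node_features, new_node_feat, attach_to):
    """Spatial-level fusion: add one modality node, linked to one joint."""
    n = len(adjacency)
    for row in adjacency:          # grow every row by one column
        row.append(0)
    new_row = [0] * (n + 1)
    new_row[attach_to] = 1         # edge new node -> joint
    adjacency[attach_to][n] = 1    # edge joint -> new node (undirected)
    adjacency.append(new_row)
    node_features.append(new_node_feat)
    return adjacency, node_features
```

A GCN layer applied afterwards would propagate the fused information along the (now extended) graph structure.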
Transformer models showed excellent performance in the analysis of sequential data. We formulate the temporal action segmentation task as an object detection task and use a detection transformer model on our proposed motion image representations. Experiments for our action recognition related approaches are executed on large-scale publicly available datasets. Our approaches for action recognition for various modalities, action recognition by fusion of various modalities, and one-shot action recognition demonstrate state-of-the-art results on some datasets.
Finally, we present a hybrid imitation learning benchmark. The benchmark consists of a dataset, metrics, and a simulator integration. The dataset contains RGB-D image sequences of humans performing movements and executing manipulation tasks, as well as the corresponding ground truth. The RGB-D camera is calibrated against a motion-capturing system, and the resulting sequences serve as input for imitation learning approaches. The resulting policy is then executed in the simulated environment on different robots. We propose two metrics to assess the quality of the imitation. The trajectory metric gives insights into how close the execution was to the demonstration. The effect metric describes how close the final state was reached according to the demonstration. The Simitate benchmark can improve the comparability of imitation learning approaches.
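In the spirit of the trajectory metric described above, a simple stand-in is the mean pointwise distance between the demonstrated and the executed trajectory; the benchmark's actual metric definitions may differ, and `trajectory_error` is an illustrative name:

```python
import math

def trajectory_error(demo, execution):
    """Mean pointwise Euclidean distance between two aligned trajectories.

    demo, execution: equal-length sequences of (x, y, z) points.
    Lower is better: 0.0 means the execution exactly traced the demo.
    """
    return sum(math.dist(a, b) for a, b in zip(demo, execution)) / len(demo)
```

An effect-style metric would instead compare only the final states of the two trajectories.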
FinTech remains an underexplored phenomenon in both academic and practical environments. Between (1) “Sustainable FinTech”, the application of information technology as innovation in established financial services providers’ business operations, and (2) “Disruptive FinTech”, the provision of financial products and services by non-incumbents, which in most cases are information technology entrepreneurs, the former receives more attention. In order to contribute to the Disruptive FinTech category, this thesis strives to examine the entrepreneurial strategy framework applied to technology players taking part in the Vietnamese financial market.
Digital Transformation Maturity of Vietnam Aviation Industry: The Effect of Organizational Readiness
(2023)
The paper studies digital transformation maturity in the context of the aviation industry in Vietnam. Digital transformation can mean enhancing existing processes, finding new opportunities within existing business domains, or finding new opportunities outside existing business domains. In the post-COVID-19 era, digital transformation will play a vital role in the recovery, with support from digital technology leveraging the communication and implementation of new projects and changes.
Digital transformation and digital transformation maturity are sometimes used interchangeably, but they are two distinct concepts. This paper explains the differences and applies digital transformation maturity as a scale for digital transformation in the report.
Due to the lack of empirical research on the relationship between digital transformation maturity and organizational readiness, the study explores four components of organizational readiness: digital leadership, digital culture, digital capabilities, and digital partnering.
The paper focuses on exploring which factors influence entrepreneurial intention among students in the construction industry, specifically students of Hanoi Construction University and Hanoi Architecture University, and on examining the impact of those factors. The study also proposes some solutions for entrepreneurship in the construction field in Vietnam based on these findings, as input for future work. The Theory of Planned Behavior is used as the theoretical framework for this study. Both qualitative and quantitative methods are employed. A questionnaire is administered to students of the two universities mentioned above. Then, an exploratory factor analysis (EFA) is performed to test the validity of the constructs. The research findings identify the factors influencing entrepreneurial intention and the strength of their impact, and propose some solutions to improve entrepreneurship in the construction field in Vietnam.
Digital transformation is a prevailing trend in the world, especially in dynamic Asia. Vietnam has recorded remarkable changes in its economy as domestic enterprises have made new strides in the digital transformation process. MB Bank, one of the prestigious financial groups in Vietnam, also takes advantage of digital transformation as an opportunity to break through and become a large-scale technology enterprise, with benefits such as improving customer experience, increasing its customer base and customer satisfaction, enhancing competitiveness, and building trust and loyalty among customers. However, MB's transformation process also involves many challenges that require the bank to have appropriate policies to handle them. It can be said that MB Bank is a typical case study of digital transformation in the banking sector in Vietnam.
Given the growing importance and urgency of climate change, companies are being called upon, particularly by the younger generations, to contribute to sustainable development. Companies' contributions to date, however, have been criticized as insufficient, which may be due in particular to a lack of employee engagement with sustainability. In this context, gamification has been proposed and increasingly researched in recent years as a promising, innovative tool to motivate sustainable employee behavior in the workplace. However, there are only few studies and applicable gamification solutions that address more than one specific sustainability topic and thus take a holistic perspective on sustainable behavior in the workplace. Moreover, prior research lacks a comprehensive understanding of how different gamification elements evoke specific psychological effects, how these manifest in behavioral changes, and how these in turn cumulatively result in measurable corporate outcomes. The path from gamification as "input" to corporate sustainability as "output" has thus remained unexplored.
This dissertation closes this gap by conceptualizing, designing, and evaluating a holistic gamified intervention that supports employees in various sustainable behaviors in their daily activities. The project follows a design science research approach that closely involves employees in the step-by-step development of the solution. As part of the iterative design process, this dissertation presents six studies to expand the theoretical understanding of gamification for sustainable employee behavior. First, a comprehensive overview of existing research on gamification for sustainable employee behavior is provided, analyzing gamification designs and findings of prior studies and outlining an agenda for further research (Study 1). Next, theoretical foundations of research on gamification, serious games, and game-based learning (Study 2) and empirical design principles for gamification and persuasive systems (Study 3) are systematically examined as a basis for the successful design of gamified applications. Subsequently, empirical studies explore employees' motivations for sustainable behavior and shed light on their expectations regarding design features (Study 4), and reveal contextual challenges and design dilemmas in implementing gamification in an organizational context (Study 5). Finally, a quantitative field study (Study 6) examines how different gamification designs influence sustainable employee behavior and corporate sustainability metrics in organizations. Based on the results, this dissertation presents a comprehensive framework for gamification for sustainable employee behavior that incorporates design, individual behavior, and corporate perspectives. Building on this, practical recommendations are presented for designing gamification to promote sustainable employee behavior in the workplace.
Counts of SARS-CoV-2-related deaths have been key numbers for justifying severe political, social and economic measures imposed by authorities worldwide. A particular focus thereby was the concomitant excess mortality (EM), i.e. fatalities above the expected all-cause mortality (AM). Recent studies, inter alia by the WHO, estimated the SARS-CoV-2-related EM in Germany between 2020 and 2021 as high as 200 000. In this study, we attempt to scrutinize these numbers by putting them into the context of German AM since the year 2000. We propose two straightforward, age-cohort-dependent models to estimate German AM for the ‘Corona pandemic’ years, as well as the corresponding flu seasons, out of historic data. For Germany, we find an overall negative EM of about −18 500 persons for the year 2020, and a minor positive EM of about 7000 for 2021, unveiling that officially reported EM counts are an exaggeration. In 2022, the EM count is about 41 200. Further, based on NAA-test-positive related death counts, we are able to estimate how many Germans have died due to rather than with CoViD-19; an analysis not provided by the appropriate authority, the RKI. Through 2020 and 2021 combined, our due estimate is no more than 59 500. Varying NAA test strategies heavily obscured SARS-CoV-2-related EM, particularly within the second year of the proclaimed pandemic. We compensated for changes in test strategies by assuming that age-cohort-specific NAA-conditional mortality rates during the first pandemic year reflected SARS-CoV-2-characteristic constants.
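The basic excess-mortality idea underlying the study can be sketched as follows: expected AM is estimated per age cohort from historic years, and EM is observed minus expected, summed over cohorts. This is only a hedged illustration; the paper proposes two more elaborate age-cohort-dependent models, and all numbers below are invented:

```python
def expected_mortality(historic):
    """Baseline expected deaths per cohort: mean of historic yearly counts.

    historic: dict mapping cohort label -> list of yearly death counts.
    (The paper's actual baseline models are more sophisticated than a mean.)
    """
    return {cohort: sum(v) / len(v) for cohort, v in historic.items()}

def excess_mortality(observed, expected):
    """Total EM: observed minus expected, summed over all cohorts."""
    return sum(observed[c] - expected[c] for c in observed)

# Invented toy data for two cohorts over three historic years:
historic = {"0-64": [20000, 21000, 22000], "65+": [80000, 82000, 84000]}
expected = expected_mortality(historic)
em = excess_mortality({"0-64": 21500, "65+": 85000}, expected)
```

A negative total, as the study reports for 2020, would mean fewer deaths than the baseline predicts.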
X-ray computer tomography (XRT) is a three-dimensional, nondestructive, and thus reproducible examination method that allows for the investigation of internal and external structures of objects. Due to its characteristics, the XRT technique has increasingly established itself as an alternative examination method and is also applied in the field of mineral processing. Within this work, XRT is used to investigate the influence of hydrochloric acid leaching of iron-rich bauxites on grain composition. Acid leaching is a promising method for the beneficiation of iron-rich bauxites for refractories. Many studies have already established that leaching with hydrochloric acid can reduce the Fe₂O₃ content in bauxites. However, apart from the influence of the leaching process on the composition of the bauxites, aspects such as the influence of the acid on the exact grain constitution or the porosity behavior have rarely been considered so far. To address these open questions, XRT analysis was used to examine and characterize various bauxites. By comparing identical grains before and after leaching, it was observed that in gibbsite bauxites the acid penetration is deeper, and the volume decreases significantly. In diasporic and boehmitic bauxites, clear leaching edges can be seen in which the iron content has been reduced.
Leichte Sprache (LS, German "easy language") is a simplified variety of German used to provide barrier-free texts for a broad spectrum of people, including persons with low literacy, learning difficulties, intellectual or developmental disabilities (IDD), and/or complex communication needs (CCN). LS authors are typically proficient in standard German and do not belong to this group of people. Our goal is to enable the target group itself to participate in written discourse. This requires a special writing system whose linguistic support and software-ergonomic design meet the specific needs of the target group. EasyTalk is a system based on computational natural language processing (NLP) for assistive writing in an extended variant of LS (ELS). It provides users with a personalizable vocabulary with customizable communication symbols and supports them according to their personal skill level through interactive user guidance while writing. Intuitive formulations for linguistic decisions minimize the grammatical knowledge required to create correct and coherent complex content. Simple dialogs communicate with a natural-language paraphrase generator that provides context-sensitive suggestions for sentence components and correctly inflected word forms. In addition, EasyTalk encourages users to add text elements that improve the comprehensibility of the text for its readers (e.g., time and place specifications) and its coherence (e.g., explicit discourse connectors). To tailor the system to the needs of the target group, the development of EasyTalk followed the principles of user-centered design (UCD).
Accordingly, the system was matured in iterative development cycles, combined with targeted evaluations of specific aspects by groups of experts from the fields of CCN, LS, and IT, as well as L2 learners of German. A case study in which members of the target group tested free writing with the system confirmed that adults with low reading, writing, and computer skills and with IDD and/or CCN can write their own personal texts in ELS using EasyTalk. The positive feedback from all tests inspires long-term studies with EasyTalk and further development of the prototype system, such as the implementation of a so-called writing workshop.
In recent years, public interest in epidemiology and the mathematical modelling of disease spread has increased, mainly driven by the COVID-19 pandemic, which has emphasized the urgent need for accurate and timely modelling of disease transmission. However, even before that, mathematical modelling had been used to describe the dynamics and spread of infectious diseases, which is vital for developing effective interventions and controls, e.g., vaccination campaigns and social restrictions such as lockdowns. The forecasts and evaluations provided by these models influence political actions and shape the measures implemented to contain the virus.
This research contributes to the understanding and control of disease spread, specifically for Dengue fever and COVID-19, making use of mathematical models and various data analysis techniques. The mathematical foundations of epidemiological modelling and several concepts for spatio-temporal diffusion are presented, including ordinary differential equation (ODE) models, an original human-vector model for Dengue fever, and the standard SEIR model (with the optional inclusion of an equation for deceased persons), which is suited to describing COVID-19. Additionally, multi-compartment models, fractional diffusion models, partial differential equation (PDE) models, and integro-differential models are used to describe the spatial propagation of the diseases.
We make use of different optimization techniques to fit the models to medical data and estimate the relevant parameters, or to find optimal control strategies for containing diseases, using both Metropolis and Lagrangian methods. Reasonable estimates for the unknown parameters are found, especially in the initial stages of a pandemic, when little to no information is available and the majority of the population has not yet come into contact with the disease. The longer a disease is present, the more complex the modelling becomes: additional factors (vaccination, different virus types, etc.) appear and reduce the estimation and prediction quality of the mathematical models.
While it is possible to create highly complex models with numerous equations and parameters, such an approach presents several challenges, including difficulties in comparing and evaluating data, an increased risk of overfitting, and reduced generalizability. Therefore, we also consider criteria for model selection based on fit and complexity, as well as the sensitivity of the model with respect to specific parameters. This also provides valuable information on which political interventions should be emphasized under possible variations of parameter values.
Furthermore, the presented models, particularly the optimization using the Metropolis algorithm for parameter estimation, are compared with other established methods. The quality of model calculation, as well as computational effort and applicability, play a role in this comparison. Additionally, the spatial integro-differential model is compared with an established agent-based model. Since the macroscopic results align very well, the computationally faster integro-differential model can now be used as a proxy for the slower and non-traditionally optimizable agent-based model, e.g., in order to find an apt control strategy.
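The compartmental dynamics described in this abstract can be illustrated with a minimal forward-Euler integration of a standard SEIR model. This is a generic textbook sketch; the parameter values (beta, sigma, gamma) and the Euler scheme are assumptions for illustration, not the models, data, or methods of the thesis.

```python
# Minimal forward-Euler integration of a standard SEIR model.
# Illustrative sketch only; parameter values are assumed, not taken from the thesis.

def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One Euler step of the SEIR ordinary differential equations."""
    n = s + e + i + r
    ds = -beta * s * i / n          # susceptibles become exposed
    de = beta * s * i / n - sigma * e  # exposed progress to infectious
    di = sigma * e - gamma * i      # infectious recover
    dr = gamma * i
    return (s + dt * ds, e + dt * de, i + dt * di, r + dt * dr)

def simulate(days=160, dt=0.1, beta=0.3, sigma=0.2, gamma=0.1):
    s, e, i, r = 0.99, 0.0, 0.01, 0.0   # fractions of the population
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
    return s, e, i, r

if __name__ == "__main__":
    s, e, i, r = simulate()
    print(s, e, i, r)  # total s+e+i+r stays (numerically) constant
```

Because the four derivatives sum to zero, the total population is conserved, which is a useful sanity check for any compartmental implementation.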
In a world where language defines the boundaries of one's understanding, the words of Austrian philosopher Ludwig Wittgenstein resonate profoundly. Wittgenstein's assertion that "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt" (Wittgenstein 2016: v. 5.6) underscores the vital role of language in shaping our perceptions. Today, in a globalized and interconnected society, fluency in foreign languages is indispensable for individual success. Education must break down these linguistic barriers, and one promising approach is the integration of foreign languages into content subjects.
Teaching content subjects in a foreign language, a practice known as Content and Language Integrated Learning (CLIL), not only enhances language skills but also cultivates cognitive abilities and intercultural competence. This approach expands horizons and aligns with the core principles of European education (Leaton Gray, Scott & Mehisto 2018: 50). The Kultusministerkonferenz (KMK) recognizes the benefits of CLIL and encourages its implementation in German schools (cf. KMK 2013a).
With the rising popularity of CLIL, textbooks in foreign languages have become widely available, simplifying teaching. However, the appropriateness of the language used in these materials remains an unanswered question. If textbooks impose excessive linguistic demands, they may inadvertently limit students' development and contradict the goal of CLIL.
This thesis focuses on addressing this issue by systematically analyzing language requirements in CLIL teaching materials, emphasizing receptive and productive skills in various subjects based on the Common European Framework of Reference. The aim is to identify a sequence of subjects that facilitates students' language skill development throughout their school years. Such a sequence would enable teachers to harness the full potential of CLIL, fostering a bidirectional approach where content subjects facilitate language learning.
While research on CLIL is extensive, studies on language requirements for bilingual students are limited. This thesis seeks to bridge this gap by presenting findings for History, Geography, Biology, and Mathematics, allowing for a comprehensive understanding of language demands. This research endeavors to enrich the field of bilingual education and CLIL, ultimately benefiting the academic success of students in an interconnected world.
Potential impacts of invasive crayfish on native benthic fish: shelter use and agonistic behaviour
(2023)
Spinycheek crayfish (Faxonius limosus) and signal crayfish (Pacifastacus leniusculus) are successful North American invasive crayfish species distributed throughout Europe. Both species compete with native benthic fish for shelter. In a laboratory approach, we assessed competition for shelter and antagonistic interactions between these invasive crayfish species and the native benthic fish species, stone loach (Barbatula barbatula) and bullhead (Cottus gobio). This allows for studying the potential impacts of invasive crayfish on native benthic fish. Spinycheek crayfish and signal crayfish were able to gain control of the shelter and could successfully displace both benthic fish species. For stone loach, the presence of crayfish significantly decreased their shelter use and caused several behavioural changes such as reduced activity and increased hiding behaviour outside the shelter. Although the shelter use by bullheads was not reduced, they displayed similar behavioural changes, if less intense. Invasive crayfish species showed remarkable combative interactions against both species of benthic fishes, evidenced by the high number of aggressive interactions, especially concerning stone loach. Our results highlight the pronounced dominance of invasive crayfish over benthic fish in terms of shelter competition and aggressive interactions under laboratory conditions, which consequently might promote the latter’s exposure to predation.
Increasingly, problematic smartphone use behavior (PSU) and excessive consumption are reported. In this study, an experiment was developed to investigate the influence of screen coloration using the grayscale setting on smartphone usage time in repeated measurements. We also investigated how individuals' perceived suffering correlates with smartphone usage time and PSU, and whether differences exist by smartphone usage type (social, process, habitual). 240 subjects completed a questionnaire about smartphone usage time, PSU, perceived suffering, and smartphone usage types. Afterward, their smartphones were switched to the grayscale setting for at least 24 h; thereafter, 92 of these participants completed the second questionnaire. Analyses showed that the grayscale setting decreases usage time and that there is a positive correlation between PSU, smartphone usage duration, and perceived suffering. The types of use (process and habitual) influence one's perceived suffering. Thus, it shows that individuals are aware of their PSU and suffer from it. Using the grayscale setting is effective in reducing smartphone use time.
Artificial neural networks are a popular field of artificial intelligence research. The increasing size and complexity of these huge models entails certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to select efficient architectures for different tasks. Solving these problems proves challenging, and the lack of insightful representations of neural networks perpetuates this state of affairs. Against the background of these difficulties, a novel 3D visualization technique is presented. Properties of trained neural networks are computed using established methods from the field of neural network optimization. Batch normalization is used together with fine-tuning and feature extraction to estimate the influence of the components of a neural network. Combining these influence values with methods such as edge bundling, ray tracing, 3D impostors, and a special transparency technique yields a 3D model representing a neural network. The validity of the computed influence values is demonstrated and the potential of the developed visualization is examined.
The trends of Industry 4.0 and the further development toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With that higher need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in this setting is the demand for wireless Ultra-Reliable Low-Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Within this setting, this thesis provides insights regarding the whole network stack, with the focus always on industrial applications. Starting at the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed, and based on that data an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks depend highly on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. From an application perspective, latency and reliability are key performance parameters. Thus, measurements of these two parameters in factory environments are performed using Wireless Local Area Network (WLAN, IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals based on auto-correlated data containing rare events is developed. Subsequently, four performance improvements for wireless networks on the factory floor are proposed. Of those optimizations, three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and lastly one optimizes the usage of information within cellular networks.
Finally, this thesis concludes with an outlook on open research questions. This includes open questions remaining in the context of Industry 4.0 as well as those around 6G. Among the research topics of 6G, the two most relevant concern the ideas of a network of networks and of overcoming best-effort IP.
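For the interval-construction problem mentioned above, a common generic approach to confidence intervals for the mean of autocorrelated measurements (such as latency series) is the moving-block bootstrap, sketched below. This is a standard textbook technique shown purely for illustration; it is not the method developed in the thesis, and all parameter values are assumptions.

```python
# Generic moving-block bootstrap for the mean of an autocorrelated series.
# Illustrative standard technique; NOT the interval-construction method
# developed in the thesis. Block length and replicate count are assumed.
import random

def block_bootstrap_ci(data, block_len=50, n_boot=2000, alpha=0.05, seed=0):
    """Percentile confidence interval for the mean of an autocorrelated series.

    Resampling whole contiguous blocks (instead of single values) preserves
    the short-range dependence structure within each block.
    """
    rng = random.Random(seed)
    n = len(data)
    n_blocks = n // block_len
    means = []
    for _ in range(n_boot):
        sample = []
        for _ in range(n_blocks):
            start = rng.randrange(n - block_len + 1)  # random block start
            sample.extend(data[start:start + block_len])
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Note that a plain percentile block bootstrap can still struggle with rare events (e.g. sporadic latency spikes), which is precisely the complication the abstract points to.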
Harvesting Season?
(2022)
For about twenty years there have been efforts to persuade customers to buy groceries online. Initially, the market structures of digital grocery retail were highly fragmented and poorly coordinated. Through technological progress over the past decade, digital grocery shopping has become more attractive. The adoption of these services, however, varies greatly between regions. In Germany in particular, digital grocery retail has stagnated at a comparatively low level. Against this background, this dissertation analyzes both the retailer-side market structures and the expectations and barriers of German consumers regarding this service.
The year 2020 marks a turning point for online grocery retail when, in the course of political measures to reduce COVID-19 infections, daily routines such as grocery shopping were subject to strong regulation. At the same time, despite this opportunity, digital grocery retail has not yet managed to establish itself across Germany. This can be attributed to a lack of investment as well as inadequate digitalization measures. What is needed is a stronger synchronization between digital and stationary offerings, better integration of digital grocery services at the regional level, and tailored, target-group-appropriate digital solutions to efficiently reduce usage barriers. The COVID-19 pandemic illustrated the importance of stable food supply chains and distribution. Further research should help develop digital grocery retail into a sustainable and stable supplement to the stationary offering.
Modern agriculture relies heavily on the use of pesticides to control various pests and maximize crop yields. Despite detailed regulations for pesticide use, based on a complex risk assessment system, the widespread use of these biologically active substances has been shown to pose a threat to the environment. In surface waters, pesticide pollution has been observed to exceed concentrations considered environmentally acceptable and to adversely affect stream ecology, raising the question of whether the current risk assessment ensures sustainable pesticide use. To answer this question, the comprehensive "Kleingewässer-Monitoring" (KgM, small water body monitoring) investigated the occurrence of pesticides and the associated ecological effects in 124 streams throughout Germany in 2018 and 2019.
Based on five scientific publications resulting from the KgM, this thesis evaluates pesticide pollution in streams, its ecological effects, and the regulatory implications. More than 1,000 water samples were analyzed for over 100 pesticide analytes to characterize their occurrence (Publication 1). The measured concentrations and effects were used to validate the environmental concentrations and effect thresholds predicted in the risk assessment (Publication 2). By jointly analyzing real pesticide application data and pesticide mixtures measured in streams, the disregard of environmental pesticide mixtures in the risk assessment was evaluated (Publication 3). The risk potential of mixtures in streams was additionally investigated using a suspect screening for 395 chemicals and a battery of in vitro bioassays (Publication 4). Finally, the KgM results were used to assess the suitability of governmental monitoring programs for identifying pesticide risks in surface waters (Publication 5).
The results of this thesis demonstrate the widespread occurrence of pesticides in non-target stream ecosystems. The water samples contained a multitude of pesticides, occurring in complex mixtures, primarily as short-term concentration peaks after rainfall events (Publications 1 & 4). The respective peak pesticide concentrations were associated with the decline of sensitive invertebrate species and exceeded the legally permitted concentrations in about 80% of agricultural streams; these threshold values were themselves partly considered insufficient to protect the invertebrate community (Publication 2). The co-occurrence of pesticides in streams led to a risk that the single-substance-oriented risk assessment underestimated by a factor of about 3.2 in realistic worst-case scenarios. This is further amplified by the high frequency at which non-target organisms are exposed to pesticides (Publication 3). Water samples taken after rainfall caused distinct effects in the bioassays, only a small part of which could be explained by the many detected analytes, pointing to the relevance of unknown chemical or biological mixture components (Publication 4). Finally, regulatory surface water monitoring under the Water Framework Directive (WFD) was found to substantially underestimate pesticide risks, as it would have missed about three quarters of the critical pesticides and more than half of the water bodies at risk (Publication 5).
In essence, this thesis provides a new level of validation for the risk assessment of pesticides in aquatic ecosystems by evaluating pesticide occurrence and environmental effects at a hitherto unique scale. Overall, the results show that the current agricultural use of pesticides leads to substantial effects on stream ecology that exceed what the risk assessment tolerates. This thesis identified the underestimation of pesticide exposure, the potential inadequacy of legal threshold values, and the general inertia of the authorization process as the main reasons why the corresponding legally anchored ecological objectives are demonstrably not being met at present. To ensure sustainable pesticide use, the thesis proposes substantial changes to the risk assessment. Monitoring programs such as the KgM, which go beyond current governmental monitoring efforts, will continue to be required to keep pesticide regulators constantly informed about the validity of their prospective risk assessment, which will always be subject to uncertainty.
In recent decades it has become clear that the world is facing an unprecedented, human-caused biodiversity crisis. Amphibians are among the most threatened groups of species: according to the IUCN, 41% of amphibian species are considered threatened. Yet even in species classified as "least concern", population declines can be observed at the local level. With land-use change and pesticide use, two of the main causes of this threat are directly linked to intensive agriculture. Knowledge of the situation of amphibians in agricultural landscapes is therefore crucial for conservation measures. In the present work, amphibian populations in the viticultural area around Landau in der Pfalz (Germany) were studied with regard to habitat use, pesticide exposure, biometric traits, and age and genetic population structure. Since agricultural land often fragments landscapes, I investigated whether this also applies to vineyards and whether the pesticides applied contribute to fragmentation, i.e., whether there is chemical landscape fragmentation. By radio-tracking common toads (Bufo bufo), I was able to show that this species can be found directly in vineyards but generally tends to avoid them. The analysis of the genetic structure of common frogs (Rana temporaria) revealed that vineyards act as a barrier for amphibians. To find out whether pesticides contribute to the resulting landscape fragmentation, I conducted a choice experiment in the laboratory, in which I found avoidance behaviour towards contaminated soils, which could lead to chemical landscape fragmentation. By combining telemetry data with pesticide application data from local winegrowers, I was able to show that a large proportion of common toads come into contact with pesticides.
Furthermore, it could be shown that the agricultural landscape, and here probably pesticides, negatively affects the reproductive capacity of common toads. Studying palmate newts (Lissotriton helveticus), I found that adult newts from ponds in the agricultural landscape are smaller than individuals from forest ponds. Since no difference in age structure was found, these size differences could indicate suboptimal conditions for larvae and/or juveniles, even though ponds in the agricultural landscape may be suitable habitats for adult newts. I conclude that the best measure for protecting amphibians in agricultural landscapes would be a heterogeneous cultivated landscape with a mosaic of different habitats, managed without, or at least with fewer, pesticides. Green corridors between populations and sub-habitats would allow migrating individuals to avoid agricultural and thus pesticide-contaminated areas. This would reduce the risk of pesticide exposure for amphibians while preventing landscape fragmentation and thus the isolation of populations.
This thesis was motivated by the need to advance the knowledge on the variability and dynamics of energy fluxes in lakes and reservoirs, as well as about the physical processes that regulate the fluxes at both the air and water side of the air-water interface.
In the first part, I re-examine how mechanical energy, resulting from its major source, the vertical wind energy flux, distributes into the various types of water motion, including turbulent flows and surface and internal waves. Although only a small fraction of the wind energy flux from the atmosphere is transferred to the water, it is crucial for physical, biogeochemical and ecological processes in lentic ecosystems. Based on extensive air- and water-side measurements collected at two small water bodies (< 10 km2), we estimated the energy fluxes and the energy content in surface and internal waves. Overall, the estimated energy fluxes and energy content agree well with results reported for larger water bodies, suggesting that the energetics driving water motions in enclosed basins are similar, independent of basin size. Our findings highlight the importance of the surface waves, which receive the largest fraction of the wind energy flux, a fraction that increases strongly and nonlinearly for wind speeds exceeding 3 m s-1. We found that the existing parameterization of wave height as a function of wind speed and fetch length did not reproduce the measured wave amplitude in lakes. On average, the highest energy content was observed in basin-scale internal waves and in high-frequency internal waves; it exhibited seasonal variability and varied among the aquatic systems. During our analysis, we discovered diurnal variability of the energy dissipation rates in the studied lake, suggesting biogenic turbulence generation, which appears to be a widespread phenomenon in lakes and reservoirs.
In the second part of the thesis, I addressed current knowledge gaps related to the bulk transfer coefficients (also known as the drag coefficient, the Stanton number and the Dalton number), which are of particular importance for the bulk estimation of the surface turbulent fluxes of momentum, sensible heat and latent heat in the atmospheric boundary layer. Their inaccurate representation may lead to significant errors in flux estimates, affecting, for example, weather and climate predictions or estimates of near-surface current velocities in lake hydrodynamic models. Although the bulk transfer coefficients have been extensively studied over the past several decades (mainly in marine and large-lake environments), there has been no systematic analysis of measurements obtained at lakes of different sizes. A significant increase of the transfer coefficients at low wind speeds (< 3 m s-1) has been observed in several studies but has so far remained unexplained. We evaluated
the bulk transfer coefficients using flux measurements from 31 lakes and reservoirs. The estimates were generally within the range reported in previous studies for large lakes and oceans. All transfer coefficients increased substantially at low wind speeds, which was found to be associated with the presence of gusts and capillary waves (except the Dalton number). We found that the Stanton number is systematically higher than the Dalton number. This challenges the assumption made in the Bowen-ratio method, which is widely used for estimating evaporation rates from micrometeorological measurements. We found that the variability of the transfer coefficients among the lakes could be associated with lake surface area. In flux parameterizations at lake surfaces, it is recommended to consider variations in the drag coefficient and the Stanton number due to wind gustiness and capillary wave roughness while the Dalton number could be considered as constant at all wind speeds.
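The role of the three transfer coefficients can be made concrete with the standard bulk formulas for the surface turbulent fluxes. The sketch below uses textbook forms; the coefficient values and input numbers are assumptions for illustration, not the estimates obtained in the thesis.

```python
# Standard bulk formulas for surface turbulent fluxes (textbook forms).
# Coefficient values below are generic assumptions, not the thesis's estimates.

RHO_A = 1.2      # air density, kg m^-3
CP = 1005.0      # specific heat of air at constant pressure, J kg^-1 K^-1
LV = 2.5e6       # latent heat of vaporisation of water, J kg^-1

def bulk_fluxes(u, d_temp, d_q, cd=1.3e-3, ch=1.1e-3, ce=1.1e-3):
    """Momentum, sensible-heat and latent-heat fluxes from bulk formulas.

    u      : wind speed (m s^-1)
    d_temp : surface-air temperature difference (K)
    d_q    : surface-air specific humidity difference (kg kg^-1)
    cd     : drag coefficient, ch: Stanton number, ce: Dalton number
    """
    tau = RHO_A * cd * u ** 2            # momentum flux (N m^-2)
    h = RHO_A * CP * ch * u * d_temp     # sensible heat flux (W m^-2)
    le = RHO_A * LV * ce * u * d_q       # latent heat flux (W m^-2)
    return tau, h, le
```

Note that the Bowen-ratio method effectively assumes the Stanton and Dalton numbers are equal (ch == ce above); the thesis's finding that the Stanton number is systematically higher than the Dalton number is what challenges that assumption.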
In the third part of the thesis, I address the key drivers of the near-surface turbulence that controls gas exchange in a large regulated river. Like all inland waters, rivers are an important natural source of greenhouse gases. The effects on gas exchange of the widespread alteration and regulation of river flow for human demands are largely unknown. In particular, near-surface turbulence in regulated rivers has rarely been measured and its drivers have not been identified. While in lakes and reservoirs near-surface turbulence is mainly related to atmospheric forcing, in shallow rivers and streams it is generated by bottom friction of the gravity-forced flow. The studied large regulated river represents a transition between these two cases. Atmospheric forcing and gravity were the dominant drivers of the near-surface turbulence for similar fractions of the measurement period. Based on validated scalings, we derived a simple model for estimating the relative contributions of wind and bottom friction to near-surface turbulence in lotic ecosystems with different flow depths. Large diel variability in the near-surface energy dissipation rates due to flow regulation leads to the same variability in gas exchange. This suggests that estimates of gas fluxes from rivers are biased by measurements performed predominantly during daytime.
In addition, a novelty in all the analyses described above is the use of turbulent surface fluxes measured directly by the eddy-covariance technique, at the time of writing the most advanced method available. Overall, this thesis is of potential interest for a broad range of scientific disciplines, including limnology, micrometeorology and open-channel hydraulics.
Ponds in agricultural landscapes are often used by amphibians as breeding habitat. However, the characteristics of agricultural ponds, and especially of the surrounding area, are usually considered suboptimal for many amphibian species. Using suboptimal habitats might allow a species' survival and reproduction but can have negative consequences at the individual and population level. In the present study, we investigated Palmate Newt (Lissotriton helveticus) populations from an intensive wine-growing region in southern Germany and compared them with populations located in a nearby forested area in terms of biometric traits, age and genetic structure. By analyzing over 900 adult newts from 11 ponds, we could show that newts reproducing in forest ponds were larger than newts reproducing in agricultural ponds. We did not find differences in newt age or growth rate between habitat types. Therefore, the differences in body size might have already existed in larvae and/or juveniles, which might be related to a lower habitat quality for larvae and/or juveniles in the agricultural landscape. Body mass, body condition and sexually dimorphic traits (length of the caudal filament and maximum height of the tail) correlated with body size, but no additional effect of habitat type was found. The analysis of microsatellites revealed a higher genetic diversity in forest ponds. However, no clear sign of inbreeding was observed in any agricultural population, suggesting some degree of gene flow between them. We conclude that agricultural ponds can be suitable habitats for the Palmate Newt and that conservation efforts should aim to preserve them. The observed effects on body size indicate the need to increase the quality of the aquatic and terrestrial habitat for early life stages of this newt species in agricultural landscapes.
This thesis investigates the possibilities of real-time visualization of OpenVDB files. The fundamentals of OpenVDB and its capabilities are explored, as well as NanoVDB, its GPU interface. A system is developed that uses PNanoVDB, the graphics-API port of OpenVDB. In addition, techniques for improving and accelerating a single-ray approach to ray tracing are tested and adapted. To achieve real-time capability, two single-scattering approaches are implemented, one of which is selected, further investigated and optimized. This allows potential users to receive direct feedback on their adjustments, as well as the ability to change all parameters to ensure a free design process. Besides the visual rendering, corresponding benchmarks are collected to compare different improvement approaches and demonstrate their relevance. To achieve optimal usage, attention is paid to rendering times and memory consumption on the GPU. Particular attention is paid to the integrability and extensibility of the program, to enable easy integration into an existing real-time renderer such as U-Render.
This dissertation is dedicated to a new concept for capturing renunciation-oriented attitudes and beliefs: sufficiency orientation. Sufficiency originates in the interdisciplinary sustainability debate. In contrast to efficiency and consistency, sufficiency considers human behaviour as the cause of socio-ecological crises and strives for a reduction in consumption that respects the planetary boundaries. The present work places sufficiency in a psychological research context and explores it qualitatively and quantitatively. On the basis of five manuscripts, the overarching question pursued is to what extent sufficiency orientation contributes to socio-ecological transformation. Based on one qualitative study and five further quantitative studies, sufficiency orientation is investigated in different behavioural contexts that are of particular importance with regard to CO2 emissions. In addition, sufficiency orientation is linked to a wider range of psychologically relevant theories that help gain an overview of correlates and possible causes for the development of a sufficiency orientation.
Manuscript 1 uses expert interviews (N = 21) to develop a heuristic framework for a transformation towards societal sufficiency orientation, including barriers and enablers as well as ambiguities surrounding such a change. The derived elements are interpreted in the light of the leverage points approach. This framework can serve as a heuristic for future research and for developing measures concerning sufficiency orientation.
As part of an online study (N = 648), Manuscript 2 examines the extent to which sufficiency orientation can be embedded in classic models for explaining pro-environmental intentions and behaviour (Theory of Planned Behaviour, Norm Activation Model), and shows that it makes a significant contribution to explaining intentions and behaviour in the field of plastic consumption.
Manuscript 3 reports two framing experiments (Study 1, N = 123; Study 2, N = 330) that investigate how pro-social justice sensitivity contributes to making sufficiency orientation more salient and to promoting it. While sufficiency orientation and pro-social facets of justice sensitivity were positively related to each other, there was no effect of the framing intervention in the hypothesised direction. The results indicate that justice-related information, at least in the form presented, is more likely to generate reactance.
Manuscript 4 presents an online study (N = 317) and targets the importance of sufficiency orientation for predicting actual greenhouse gas emissions in relation to flight behaviour and policy support for the decarbonisation of mobility. In addition, the connection between sufficiency orientation and global identity is examined. It turns out that sufficiency orientation is superior to global identity in predicting actual emissions and support for decarbonisation policies. Contrary to expectations, sufficiency orientation and the form of global identity operationalised in the presented study show a positive correlation and are compatible.
Manuscript 5 reports a reflective diary intervention (N = 252) intended to lead to a short- and long-term increase in sufficiency orientation by satisfying basic psychological needs through induced self-reflection. In both groups, with and without the intervention, sufficiency orientation increased slightly but significantly. Although no specific effect of the manipulation was found, basic psychological need satisfaction turned out to be the largest predictor of sufficiency orientation. Subjective well-being is positively associated with sufficiency orientation, while time affluence shows no clear associations in the study.
Overall, the results highlight the relevance of sufficiency orientation for socio-ecological transformation and actual behavioural change. Sufficiency orientation is related to low-emission behaviour and to support for political measures to decarbonise infrastructures. These results contribute to the discussion of the intention-behaviour gap with regard to impact-relevant behaviour, i.e. behaviour producing high emissions. The present findings suggest that sufficiency orientation could be related to strong intention-behaviour consistency. However, further research is needed to validate these results and to improve the measurement of sufficiency orientation. Furthermore, the studies provided insights into correlates of sufficiency orientation: justice sensitivity, global identity, subjective well-being and left-wing liberal political ideologies were all found to be positively related to sufficiency orientation. Moreover, basic psychological need satisfaction was identified as a potential mechanism that can support the emergence of sufficiency orientation; however, causality remains unclear. From these findings, the work derives practical implications for how sufficiency orientation might be strengthened at the micro, meso and macro levels of society.
Taken together, the dissertation provides important insights into a new and still developing concept and shows its connectivity to psychological theories. However, future research is required in order to grasp the complexity of sufficiency orientation more precisely and to understand its origins and predictors. This work contributes to the interdisciplinary debate on socio-ecological transformation and points out that sufficiency orientation, being related to reduced consumption, can contribute to a future worth living.
Social networks are ubiquitous structures that we generate and enrich every day as we connect with people through social media platforms, e-mail and every other kind of interaction. While these structures are intangible to us, they are very important carriers of information. For example, the political leaning of our friends can be a proxy for identifying our own political preferences. Similarly,
the creditworthiness of our friends can be decisive in the granting or denial of our own loans. This explanatory power is exploited in legislation, in corporate decision-making and in research, as it helps machine learning techniques make accurate predictions. However, these generalizations often benefit only the majority of people, who shape the general structure of the network, and disadvantage underrepresented groups by limiting their resources and opportunities. It is therefore important to first understand how social networks form, and then to examine the extent to which their mechanisms of edge formation contribute to reinforcing social inequalities in machine learning algorithms.
To this end, in the first part of this thesis I propose HopRank and Janus, two methods for characterizing the mechanisms of edge formation in real undirected social networks. HopRank is a model of information foraging in networks. Its key concept is a biased random walker based on transition probabilities between k-hop neighborhoods. Janus is a Bayesian framework that allows us to identify and rank plausible hypotheses of edge formation in cases where nodes carry additional data. In the second part of this thesis, I investigate the impact of these mechanisms, which explain edge formation in social networks, on machine learning. In particular, I study the influence of homophily, preferential attachment, edge density, the fraction of minorities and the directionality of links on both the performance and the systematic bias of collective classification, and on the visibility of minorities in top-k rankings. My results show a strong correlation between network structure and machine learning outcomes. This suggests that the systematic discrimination of certain individuals can be (i) anticipated from the network type and (ii) prevented by strategic linking in the network.
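The core idea of a walker biased by k-hop neighborhoods can be illustrated with a minimal sketch. Note that this is not the dissertation's actual HopRank model: the exponential weighting `beta ** (k - 1)` over hop distances and the function names are illustrative assumptions only.

```python
import random
from collections import deque

def khop_distances(adj, source):
    """Hop distance (BFS level) from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def biased_khop_step(adj, current, beta, rng):
    """One step of a walker biased by k-hop neighborhood: a node at hop
    distance k is chosen with weight beta ** (k - 1), so for beta < 1
    nearby neighborhoods are strongly preferred."""
    dist = khop_distances(adj, current)
    candidates = [v for v in dist if v != current]
    weights = [beta ** (dist[v] - 1) for v in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

# Example: a short walk on a small undirected path graph 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rng = random.Random(0)
walk = [0]
for _ in range(5):
    walk.append(biased_khop_step(adj, walk[-1], beta=0.5, rng=rng))
```

Aggregating visit counts over many such steps would yield a stationary distribution over nodes, analogous to how PageRank aggregates an unbiased walk.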
This study examines the contribution of saving and credit cooperatives (SACCOS) to the improvement of members' socio-economic development in Rwanda, its opportunities and challenges, with evidence from Umwalimu SACCO in Huye District. Saving and credit cooperatives, or credit unions, have come to be seen as a remedy for social ills rooted in poverty because of their efficiency in dispensing loans, enhancing social equality and reducing poverty among low-income earners. Millions of poor and unbanked people in developing countries have thus been given access to formal financial services through saving and credit cooperative programmes.
The target population of the study was 1,940 members of USACCO, from which a sample of 92 respondents was purposively selected. The study adopted a combination of correlational and descriptive research designs and employed both quantitative and qualitative approaches, using both primary and secondary data. Primary data were collected through questionnaires and interviews, while secondary data were collected through document analysis, consulting USACCO's manual of procedures, credit policies and financial reports. The data were analysed with SPSS version 21 and presented in tables, charts and graphs. The respondents' characteristics showed that the majority were women (55.4%), that most respondents were aged between 26 and 45 years, and that 77.20% were married. All respondents had attended school, the majority having completed secondary school with an A2 certificate.
The study revealed that the services Umwalimu SACCO offers its members have a positive effect on members' welfare. USACCO services were found to have slightly improved members' income levels, assets acquired, access to education and medical care, as well as the small income-generating activities established by members in Huye District. The analysis also identified several variables affecting USACCO members' socio-economic status: a member's educational background, number of dependents, occupation, number of loans obtained from USACCO, government programmes concerning teachers' welfare, and membership duration all played a very important role in improving teachers' standard of living. All these variables were found to have positive effects on teachers' socio-economic status, with the exception of respondents' family size.
In addition, the findings showed that the majority of respondents had not found opportunities to save with other financial institutions, and that other respondents had no access to loans from other financial institutions because of complicated loan requirements. After joining USACCO, their previously deplorable situation changed somewhat, both socially and economically, which contributed to the improvement of their welfare. The study therefore found that the welfare of USACCO members, in terms of assets acquired and increased income, improved compared with their situation before joining. The study concludes that the improvement in teachers' living conditions depends largely on the level of loans granted by USACCO and on membership duration: if teachers' loans and savings increase, teachers' well-being will also improve. USACCO's financial services are thus a genuine instrument for improving the economic and social conditions of teachers. The study recommends that USACCO provide frequent and regular training in business management to its members; this could help members manage their loans well and reduce the cases of loan default observed at USACCO. The challenges observed were the lack of physical collateral required by USACCO, complicated loan terms and conditions, and insufficient training in business management.
Instructor feedback on written assignments is one of the most important elements in the writing process, especially for students writing in English as a foreign language. However, students are often critical of both the amount and quality of the feedback they receive. In order to better understand what makes feedback effective, this study explored the nature of students’ assessments of the educational alliance, and how their receptivity to, perceptions of, and decisions about using their instructors’ feedback differed depending on how strong they believed the educational alliance to be. This exploratory case study found that students not only assessed the quality of the educational alliance based on goal compatibility, task relevance, and teacher effectiveness, but that there was also a reciprocal relationship between these elements. Furthermore, students’ perceptions of the educational alliance directly influenced how they perceived the feedback, which made the instructor’s choice of feedback method largely irrelevant. Stronger educational alliances resulted in higher instances of critical engagement, intrinsic motivation, and feelings of self-efficacy. The multidirectional influence of goal, task, and bond means that instructors who want to maximize their feedback efforts need to attend to all three.
The growing numbers of breeding rooks (Corvus frugilegus) in the city of Landau (Rhineland-Palatinate, Germany) increase the potential for conflict between rooks and humans, which is mainly associated with noise and faeces. The aim of this work is therefore a better understanding of the breeding tree selection of the rook, in order to develop options for action and management in the future.
Part I of this thesis provides general background information on the rook and includes mapping of the rookeries in the Anterior Palatinate and South Palatinate including Landau in the year 2020. That mapping revealed that the number of rural colonies has decreased, while the number of urban colonies has increased in the study area in the last few years. In line with current literature, tree species and tree size were important criteria for breeding tree selection. However, the mapping showed that additional factors must be important as well.
Therefore, as rooks seem to often breed along traffic axes, Part II of this thesis examines how temperature, artificial light and noise, which are all linked to traffic axes, affect the breeding tree selection of the rook in the city of Landau. The following three hypotheses are developed: (1) manually selected breeding trees (Bm) have a warmer microclimate than manually selected non-breeding trees (Nm) or randomly selected non-breeding trees (Nr), (2) Bm are exposed to a higher light level than Nm or Nr and (3) Bm are exposed to a higher noise level than Nm or Nr. To test these hypotheses, 15 Bm, 13 Nm and 16 Nr are investigated.
The results show that Bm were exposed to more noise than both types of non-breeding trees (mean noise levels: Bm 36.52 dB, Nm 31.27 dB, Nr 29.17 dB), with the difference between Bm and Nr being significant. In addition, there was a tendency for Bm to be exposed to less light (0.36 lx) than Nm (0.41 lx), and to significantly less light than Nr (2.00 lx), while temperature did not differ between the groups (Bm 16.91 °C, Nm 16.93 °C, Nr 17.29 °C).
This study shows for the first time that rooks prefer trees that are exposed to low light levels and high noise levels, i.e. more intense traffic noise, for breeding. It can only be speculated that the cause of this is lower predation pressure at such sites. The fact that temperature does not seem to influence breeding tree selection may be due to only small temperature differences at nest height, which might be compensated for by breeding behaviour. Consequently, in the long term one management approach could be to divert traffic from inner-city areas, especially around schools and hospitals, to bypasses. If tree genera suitable for rooks, such as plane trees, are planted along the bypasses, those sites could provide suitable alternative habitats to inner-city breeding locations, which would become less attractive for breeding due to the reduction in noise. In the short term, in addition to locally implemented repellent measures, the most effective approach is to strengthen acceptance of rooks among the population. However, further research is needed to verify the results of this thesis and to gain further insights into rook breeding site selection in order to develop effective management measures.