Coat color and pattern are distinguishing features in mammalian carnivores, shaped by climatic cycles and habitat type. They can be expressed in various ways, such as gradients, polymorphisms, and rare color variants. Although natural selection explains much of the phenotypic variation found in the wild, genetic drift and heterozygote deficiency, which are prominent in small and fragmented populations, may also affect phenotypic variability through the fixation of recessive alleles. The aim of this study was to test whether rare color variants in the wild could relate to a deficiency of heterozygotes resulting from habitat fragmentation and small population size. We present an overview of all rare color variants in the order Carnivora and compiled demographic and genetic data of the populations where they did and did not occur, to test for significant correlations. We also tested how phylogeny and body weight influence the presence of color variants with phylogenetic generalized linear mixed models (PGLMMs). We found 40 color-variable species and 59 rare color variants. In the 17 phenotypically variable populations for which genetic diversity data were available, the averages were AR = 4.18, HO = 0.59, HE = 0.66, and FIS = 0.086. We found that variable populations displayed a significant reduction in heterozygosity and allelic richness compared to non-variable populations across species. We also found a significant negative correlation between population size and inbreeding coefficients. Therefore, it is possible that small effective size had phenotypic consequences for the extant populations. The high frequency of the rare color variants (averaging 20%) also implies that genetic drift is locally overruling natural selection in small effective populations. As such, rare color variants could be added to the list of phenotypic consequences of inbreeding in the wild.
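The relationship between observed and expected heterozygosity reported above can be made concrete with the standard population-genetics definition of the inbreeding coefficient, FIS = 1 − HO/HE. The following minimal sketch is illustrative only; the study's reported FIS of 0.086 is an average of per-population values, not the result of plugging the averaged HO and HE into this formula.

```python
# Inbreeding coefficient F_IS from observed (H_O) and expected (H_E)
# heterozygosity: F_IS = 1 - H_O / H_E (standard population-genetics form).

def inbreeding_coefficient(h_obs: float, h_exp: float) -> float:
    """Return F_IS; positive values indicate a heterozygote deficiency."""
    if h_exp == 0:
        raise ValueError("expected heterozygosity must be non-zero")
    return 1.0 - h_obs / h_exp

# With the averaged values quoted in the abstract (H_O = 0.59, H_E = 0.66):
fis = inbreeding_coefficient(0.59, 0.66)
print(round(fis, 3))  # 0.106
```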
The production of isolated metallic nanoparticles with multifunctionalized properties, such as controlled size and shape, is crucial for biomedical, photocatalytic, and energy storage or remediation applications. This study investigates the initial particle formation of gold nanoparticles (AuNPs) bioproduced in the cyanobacterium Anabaena sp., using high-resolution transmission electron microscopy images for digital image analysis. The developed method enabled the discovery of cerium nanoparticles (CeNPs), which were biosynthesized in the cyanobacterium Calothrix desertica. The particle size distributions (PSDs) for AuNPs and CeNPs were analyzed. After 10 h, the average equivalent circular diameter for AuNPs was 4.8 nm, while for CeNPs it was approximately 5.2 nm after 25 h. The initial shape of AuNPs was sub-round to round, while the shape of CeNPs was rounder due to their amorphous structure and their formation being restricted to heterocysts. The local PSDs indicate that the maturation of AuNPs begins in the middle of vegetative cells and near the cell membrane, compared to the other regions of the cell.
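The equivalent circular diameter used above is the diameter of a circle with the same projected area as the measured particle, ECD = 2·√(A/π). A minimal sketch of that conversion (the function name and toy area are illustrative, not taken from the study):

```python
import math

def equivalent_circular_diameter(area_nm2: float) -> float:
    """Diameter (nm) of a circle with the same area as the particle outline."""
    return 2.0 * math.sqrt(area_nm2 / math.pi)

# A projected area of ~18.1 nm² corresponds to an ECD of ~4.8 nm,
# matching the average AuNP size reported after 10 h.
area = math.pi * (4.8 / 2) ** 2  # area of a 4.8 nm circle, for illustration
print(round(equivalent_circular_diameter(area), 1))  # 4.8
```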
In contemporary decision-making systems, the integration of machine learning (ML) models such as CatBoost, Random Forest, and decision trees is ubiquitous and exerts considerable influence on societal dynamics. This widespread adoption underscores the critical need for effective fairness interventions to mitigate inherent biases and discrimination. However, prevailing approaches predominantly address binary classification and often rely on limited, region-specific datasets, which restricts their relevance and applicability. To remedy these shortcomings, we propose an extension of the fairness projection model that uses ensemble-learning-based classifiers as the base classification model. The proposed model is called Fairness Projection with Ensemble Trees (FPET), an innovative post-processing intervention designed specifically for multi-class classification tasks. FPET is uniquely designed to account for multiple and overlapping protected groups, making it versatile and inclusive. A standout feature of FPET is its model agnosticism and scalability to large datasets, facilitated by an information-theoretic framework based on information projection. This approach provides robust theoretical guarantees regarding convergence and sample complexity, ensuring its practical feasibility. Moreover, FPET's design is reinforced by support for parallel processing, further enhancing its suitability for large-scale applications.
Comprehensive evaluations on diverse datasets, including the ENEM exam dataset from Brazil, HSLS, and COMPAS, demonstrate the superior performance of our proposed model, Fairness Projection with Ensemble Trees (FPET), which uses the CatBoost classifier for both binary and multi-class classification tasks. CatBoost delivered outstanding performance across all datasets. Our fairness method also outperformed other benchmark models such as Equality of Odds (EqOdds), Level Equal Opportunity (LevEqOpp), the reduction method, and the rejection approach. The results were compared using two metrics: Mean Equal Opportunity and Statistical Parity. These findings underscore the effectiveness of FPET across diverse contexts and introduce a novel approach to fairness in machine learning that ensures equitable and inclusive decision-making.
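Statistical parity, one of the two evaluation metrics named above, compares positive-prediction rates across protected groups. The sketch below is a generic illustration of that metric, not the evaluation code used in the thesis:

```python
from collections import defaultdict

def statistical_parity_gap(predictions, groups, positive=1):
    """Largest difference in positive-prediction rate across protected groups.

    A gap of 0 means every group receives positive predictions at the same
    rate (perfect statistical parity).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for y_hat, g in zip(predictions, groups):
        counts[g][0] += int(y_hat == positive)
        counts[g][1] += 1
    rates = [p / n for p, n in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" gets positives at rate 0.75, group "b" at 0.25.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_gap(preds, groups))  # 0.5
```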
X-ray computed tomography (XRT) is a three-dimensional (3D), non-destructive, and reproducible investigation method capable of visualizing and examining internal and external structures of components independent of the material and geometry. In this work, XRT with its unique abilities complements conventionally utilized examination methods for the investigation of microstructure weakening induced by hydrogen corrosion and furthermore provides a new approach to corrosion research. The motivation for this is the current inevitable transformation to hydrogen-based steel production. Refractories of the system Al2O3-SiO2 are significant as lining materials. Two exemplary material types A and B, which differ mainly in their Al2O3:SiO2 ratio, are examined here using XRT. Identical samples of the two materials are measured, analyzed, and then compared before and after hydrogen attack. In this context, hydrogen corrosion-induced porosity and its spatial distribution and morphology are investigated. The results show that sample B has an higher resistance to hydrogen-induced attack than sample A. Furthermore, the 3D-representation revealed a differential porosity increase within the microstructure.
This thesis examines the effectiveness of traditional machine learning (ML), neural network (NN), and state-of-the-art deep learning (DL) models for identifying mental distress indicators in social media discourse on Reddit and Twitter, platforms heavily used by teenagers. Different NLP vectorization techniques such as TF-IDF, Word2Vec, GloVe, and BERT embeddings are employed with ML models such as Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM), followed by NN models such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM), to methodically analyse their impact as feature representations. DL models such as BERT, DistilBERT, MentalRoBERTa, and MentalBERT are fine-tuned end-to-end for the classification task. The thesis also compares text preprocessing techniques such as tokenization, stopword removal, and lemmatization to assess their impact on model performance. Systematic experiments with different configurations of vectorization and preprocessing techniques across model types and categories were conducted to find the most effective configurations and to gauge the models' strengths, limitations, and capability to detect and interpret mental distress indicators in text. The analysis reveals that the MentalBERT DL model significantly outperformed all other models: its domain-specific pretraining on mental health data, combined with rigorous end-to-end fine-tuning, gave it an edge in detecting nuanced linguistic indicators of mental distress in complex contextual text. These insights underline the high potential of ML and NLP technologies for developing sophisticated AI systems for intervention in the domain of mental health analysis.
This thesis lays the foundation for and directs future work, demonstrating the need for a collaborative approach among experts from different domains, as well as for exploring next-generation large language models, to develop robust and clinically approved mental health AI systems.
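TF-IDF, the first vectorization technique named above, weights a term by its frequency in a document against its rarity across the corpus. A minimal standard-library sketch of the idea (tokenization and the smoothing form are illustrative choices, not the thesis's exact pipeline):

```python
import math
from collections import Counter

def tfidf(docs):
    """Return per-document TF-IDF vectors as {term: weight} dicts."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents does each term appear?
    df = Counter(term for toks in tokenized for term in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * idf[t] for t in tf})
    return vectors

docs = ["i feel hopeless and tired", "i feel fine today"]
vecs = tfidf(docs)
# "hopeless" occurs in only one document, so it outweighs the common "feel":
print(vecs[0]["hopeless"] > vecs[0]["feel"])  # True
```

In practice, thesis-scale experiments would use a library implementation (e.g. scikit-learn's `TfidfVectorizer`), which also handles n-grams, normalization, and vocabulary pruning.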
This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this work proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which categorizes important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the work demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in fields such as autonomous driving, robotics, and environmental monitoring.
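The root mean square error used to judge calibration success above can be computed directly from corresponding point pairs after registration. This is a generic sketch of the metric only; the actual pipeline (FPFH feature extraction and Fast Global Registration, available for instance in Open3D) is not reproduced here.

```python
import math

def rmse(source_points, target_points):
    """Root mean square error between corresponding 3D points (in metres)."""
    assert len(source_points) == len(target_points), "need paired points"
    sq_dists = [
        sum((s - t) ** 2 for s, t in zip(p, q))
        for p, q in zip(source_points, target_points)
    ]
    return math.sqrt(sum(sq_dists) / len(sq_dists))

# Toy point sets whose correspondences are offset by 7 cm along x,
# mirroring the 7 cm minimum error reported for the indoor scenario.
src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tgt = [(0.07, 0.0, 0.0), (1.07, 0.0, 0.0), (0.07, 1.0, 0.0)]
print(round(rmse(src, tgt), 2))  # 0.07
```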
Focusing on the triangulation of detective fiction, masculinity studies, and disability studies, "Investigating the Disabled Detective – Disabled Masculinity and Masculine Disability in Contemporary Detective Fiction" shows that disability challenges common ideals of (hegemonic) masculinity as represented in detective fiction. After a theoretical introduction to the relevant focal points of the three research fields, the dissertation demonstrates that even the archetypal detectives Dupin and Holmes undermine certain nineteenth-century masculine ideals with their peculiarities. Shifting to contemporary detective fiction and adopting a literary disability studies perspective, the dissertation investigates how male detectives with a form of neurodiversity or a physical impairment negotiate their masculine identity in light of their disability in private and professional contexts. It argues that the occupation as a detective helps the disabled investigator achieve ‘masculine disability’. Inverting the term ‘disabled masculinity’, predominantly used in research, ‘masculine disability’ introduces a decisively gendered reading of neurodiversity and (acquired) physical impairment in contemporary detective fiction. The term implies that the disabled detective (re)negotiates his masculine identity by implementing the disability in his professional investigations and accepting it as an important, yet not defining, characteristic of his (gender) identity. By applying this approach to five novels from contemporary British and American detective fiction, the dissertation demonstrates that masculinity and disability do not negate each other, as commonly assumed. Instead, it emphasises that disability allows the detective, as much as the reader, to rethink masculinity.
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data are used either to answer questions that guide decision-making in software development or to provide tools that help with practical aspects of development. Such studies fall into the fields of Empirical Software Engineering (ESE) and Mining Software Repositories (MSR). Studies working with repository data frequently focus on their results: statements derived from the data, or tools that assist software development. This dissertation, by contrast, focuses on the methods and higher-order methods used to obtain such results. In particular, we focus on incremental methods for scaling the processing of repositories, on declarative methods for performing heterogeneous analyses, and on higher-order methods used to operationalize threats to methods that operate on repositories. We summarize these as technical and methodological improvements for producing future empirical results more effectively. We contribute the following improvements. We propose a method to improve, in a theoretically well-founded way, the scalability of functions that abstract over repositories with a high number of revisions. We draw on insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method through benchmarks, comparing a prototype against MSR/ESE competitors.
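The incremental approach described above rests on the algebraic observation that per-revision results which combine associatively (a monoid) can be folded into a running summary without reprocessing the whole history. The following sketch illustrates that idea under that assumption; all names and the toy measurement are hypothetical, not taken from the dissertation.

```python
# Incremental fold over repository revisions: if measure() maps each revision
# into a monoid (associative combine + identity), new revisions can be merged
# into an existing summary without re-reading old ones.

def measure(revision):
    """Per-revision measurement (toy stand-in: count lines changed)."""
    return {"revisions": 1, "lines_changed": revision["lines_changed"]}

def combine(a, b):
    """Associative combination with IDENTITY as neutral element."""
    return {
        "revisions": a["revisions"] + b["revisions"],
        "lines_changed": a["lines_changed"] + b["lines_changed"],
    }

IDENTITY = {"revisions": 0, "lines_changed": 0}

def incremental_summary(summary, new_revisions):
    """Fold only the *new* revisions into an existing summary."""
    for rev in new_revisions:
        summary = combine(summary, measure(rev))
    return summary

s = incremental_summary(IDENTITY, [{"lines_changed": 10}, {"lines_changed": 5}])
s = incremental_summary(s, [{"lines_changed": 3}])  # later revisions arrive
print(s)  # {'revisions': 3, 'lines_changed': 18}
```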
We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, using concepts from declarative logic programming combined with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with Datalog-like languages, originating from architecture recovery, source-code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of such a method in a case study. We propose a higher-order method to improve the disambiguation of threats to MSR/ESE methods. We focus on better disambiguating threats through simulations, by operationalizing reasoning about threats and making the impact on a valid data analysis methodology explicit. We encourage researchers to create 'faked' simulations of their MSR/ESE scenarios to operationalize relevant insights about alternative plausible results, negative results, potential threats, and the data analysis methods used. We demonstrate that this kind of simulation-based testing contributes to disambiguating threats in published MSR/ESE research.
In recent years, public interest in epidemiology and mathematical modelling of disease spread has increased, mainly driven by the COVID-19 pandemic, which has emphasized the urgent need for accurate and timely modelling of disease transmission. However, even prior to that, mathematical modelling was used to describe the dynamics and spread of infectious diseases, which is vital for developing effective interventions and controls, e.g., for vaccination campaigns and social restrictions such as lockdowns. The forecasts and evaluations provided by these models influence political actions and shape the measures implemented to contain a virus.
This research contributes to the understanding and control of disease spread, specifically for Dengue fever and COVID-19, making use of mathematical models and various data analysis techniques. The mathematical foundations of epidemiological modelling are presented, along with several concepts for spatio-temporal diffusion, such as ordinary differential equation (ODE) models, an original human-vector model for Dengue fever, and the standard SEIR model (with the optional inclusion of an equation for deceased persons), which is suited to the description of COVID-19. Additionally, multi-compartment models, fractional diffusion models, partial differential equation (PDE) models, and integro-differential models are used to describe the spatial propagation of the diseases.
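The SEIR model mentioned above moves individuals between susceptible, exposed, infectious, and removed compartments. A minimal forward-Euler sketch of its dynamics follows; the parameter values are illustrative placeholders, not estimates from the thesis.

```python
# Minimal SEIR model integrated with forward Euler.
# dS/dt = -beta*S*I/N, dE/dt = beta*S*I/N - sigma*E,
# dI/dt = sigma*E - gamma*I, dR/dt = gamma*I.

def seir_step(s, e, i, r, beta, sigma, gamma, dt, n):
    new_exposed    = beta * s * i / n * dt  # infections
    new_infectious = sigma * e * dt         # end of latency
    new_removed    = gamma * i * dt         # recovery/removal
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

def simulate(days=100, dt=0.1, beta=0.3, sigma=0.2, gamma=0.1):
    n = 1_000_000.0
    s, e, i, r = n - 100.0, 0.0, 100.0, 0.0  # start with 100 infectious
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt, n)
    return s, e, i, r

s, e, i, r = simulate()
print(round(s + e + i + r))  # 1000000: the total population is conserved
```

In practice, one would use an adaptive ODE solver (e.g. `scipy.integrate.solve_ivp`) rather than fixed-step Euler; the sketch only shows the compartment flows.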
We make use of different optimization techniques to fit the models to medical data and estimate the relevant parameters, or to find optimal control strategies for containing diseases, using both Metropolis and Lagrangian methods. Reasonable estimates for the unknown parameters are found, especially in the initial stages of pandemics, when little to no information is available and the majority of the population has not come into contact with the disease. The longer a disease is present, the more complex the modelling becomes: additional factors (vaccination, different variants, etc.) come into play and reduce the estimation and prediction quality of the mathematical models.
While it is possible to create highly complex models with numerous equations and parameters, such an approach presents several challenges, including difficulties in comparing and evaluating data, an increased risk of overfitting, and reduced generalizability. Therefore, we also consider criteria for model selection based on fit and complexity, as well as the sensitivity of the model with respect to specific parameters. This also provides valuable information on which political interventions should be emphasized under possible variations of parameter values.
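Model-selection criteria that trade fit against complexity can be illustrated with the Akaike Information Criterion, AIC = 2k − 2 ln L. The abstract does not name a specific criterion, so AIC is used here purely as a representative example with invented numbers.

```python
# AIC = 2k - 2*ln(L): lower is better. A model with more parameters must
# improve the log-likelihood enough to offset its complexity penalty.

def aic(log_likelihood: float, n_params: int) -> float:
    return 2 * n_params - 2 * log_likelihood

# Hypothetical comparison: the richer model fits slightly better but pays
# for five extra parameters, so the simpler model is preferred.
simple_model  = aic(log_likelihood=-120.0, n_params=4)  # 248.0
complex_model = aic(log_likelihood=-118.5, n_params=9)  # 255.0
print(simple_model < complex_model)  # True
```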
Furthermore, the presented models, particularly the optimization using the Metropolis algorithm for parameter estimation, are compared with other established methods. The quality of the model calculations, as well as computational effort and applicability, play a role in this comparison. Additionally, the spatial integro-differential model is compared with an established agent-based model. Since the macroscopic results align very well, the computationally faster integro-differential model can be used as a proxy for the slower agent-based model, which is not amenable to traditional optimization, e.g., in order to find an apt control strategy.
Artificial neural networks are a popular research area of artificial intelligence. The increasing size and complexity of these huge models brings certain problems with it. The lack of transparency of the inner workings of a neural network makes it difficult to select efficient architectures for different tasks. Solving these problems proves challenging, and the lack of insightful representations of neural networks entrenches this state of affairs. Against the backdrop of these difficulties, a novel 3D visualization technique is presented. Properties of trained neural networks are computed using established methods from the field of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the influence of the components of a neural network. Combining these influence measures with methods such as edge bundling, ray tracing, 3D impostors, and a special transparency technique yields a 3D model that represents a neural network. The validity of the determined influence values is demonstrated and the potential of the developed visualization is examined.