The 10 most recently published documents
Focusing on the triangulation of detective fiction, masculinity studies and disability studies, "Investigating the Disabled Detective – Disabled Masculinity and Masculine Disability in Contemporary Detective Fiction" shows that disability challenges common ideals of (hegemonic) masculinity as represented in detective fiction. After a theoretical introduction to the relevant focal points of the three research fields, the dissertation demonstrates that even the archetypal detectives Dupin and Holmes undermine certain nineteenth-century masculine ideals with their peculiarities. Shifting to contemporary detective fiction and adopting a literary disability studies perspective, the dissertation investigates how male detectives with a form of neurodiversity or a physical impairment negotiate their masculine identity in light of their disability in private and professional contexts. It argues that the occupation as a detective helps the disabled investigator achieve ‘masculine disability’. Inverting the term ‘disabled masculinity’, predominantly used in research, ‘masculine disability’ introduces a decidedly gendered reading of neurodiversity and (acquired) physical impairment in contemporary detective fiction. The term implies that the disabled detective (re)negotiates his masculine identity by implementing the disability in his professional investigations and accepting it as an important, yet not defining, characteristic of his (gender) identity. By applying this approach to five novels from contemporary British and American detective fiction, the dissertation demonstrates that masculinity and disability do not negate each other, as commonly assumed. Instead, it emphasises that disability allows the detective, as much as the reader, to rethink masculinity.
The Master's thesis "Analyse des Managements invasiver gebietsfremder Arten am Beispiel des Roten Amerikanischen Sumpfkrebses (Procambarus clarkii) während und im Anschluss an notwendige Sanierungsarbeiten am Hochwasserrückhaltebecken Breitenauer See östlich von Heilbronn" (analysis of the management of invasive alien species, using the example of the red swamp crayfish (Procambarus clarkii) during and after necessary remediation works at the Breitenauer See flood retention basin east of Heilbronn) comprehensively maps the occurrence of the invasive red swamp crayfish at the Breitenauer See. The nearby river Sulm, with its known population of the signal crayfish, and the Nonnenbach stream system, with its known population of the stone crayfish, were also surveyed. The focus lay on answering three core questions. First, it was examined whether and how a permanent IAS (invasive alien species) management of the red swamp crayfish at the Breitenauer See can be carried out sustainably in order to avoid unacceptable ecological effects. The second question concerned the effectiveness of the risk management measures taken during the draining of the Breitenauer See. Finally, the question was how the red swamp crayfish behaves when the water body it has colonized falls dry.
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is used either to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers’ everyday work. Such studies fall into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results: statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose heterogeneous analyses, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements, contributed in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision counts in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method in benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture.
We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on better disambiguating threats, operationalizing reasoning about them, and making their implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to complement their work with ‘fake’ simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats and the data analysis methodologies used. We show that this kind of simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
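The scalability idea described above — abstracting over a repository with many revisions via incrementalization grounded in abstract algebra — can be illustrated with a toy sketch. All names here are illustrative, not the thesis's actual interface: the point is that when per-revision deltas combine with an associative operation, adding a new revision costs one merge instead of a full recomputation over the whole history.

```python
from dataclasses import dataclass, field

@dataclass
class LocCount:
    """Toy abstraction: net lines changed per file, merged monoidally."""
    per_file: dict = field(default_factory=dict)

    def combine(self, other: "LocCount") -> "LocCount":
        # Associative merge: (a.combine(b)).combine(c) == a.combine(b.combine(c))
        merged = dict(self.per_file)
        for path, n in other.per_file.items():
            merged[path] = merged.get(path, 0) + n
        return LocCount(merged)

def abstract_revision(diff: dict) -> LocCount:
    """Map a single revision's diff to the abstraction (the 'delta')."""
    return LocCount(dict(diff))

def fold_history(diffs: list) -> LocCount:
    """Incremental fold: one combine per new revision, never a full rescan."""
    acc = LocCount()
    for d in diffs:
        acc = acc.combine(abstract_revision(d))
    return acc

history = [{"a.py": 10}, {"a.py": 3, "b.py": 7}, {"b.py": -2}]
print(fold_history(history).per_file)  # {'a.py': 13, 'b.py': 5}
```

Because `combine` is associative, the same fold can also be parallelized over chunks of the history and merged at the end, which is what makes this style of abstraction scale to high revision counts.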
When people in Germany become dependent on care, the definition of care dependency laid down in § 14 SGB XI regulates access to benefits from the long-term care insurance. This definition of care dependency is set normatively and has so far not been based on empirical studies from nursing and nursing science. Through its legal codification, it steers the conditions and structures under which professional nurses provide care services in Germany. Furthermore, it can be assumed that, through their professional socialization, nurses develop a professional perspective on the construct of care dependency that differs from the legal definition and does not structurally feed into the assessment of benefits. This gives rise to aspects of under- and over-provision of care.
This Ph.D. thesis aims to reveal the challenges of the German definition of care dependency by empirically capturing the aspects of care dependency perceived by nurses in the outpatient setting with regard to their interaction with care-dependent people, and by elaborating them into a theoretical concept. Methodologically, problem-centered interviews are conducted with outpatient nurses; with reference to Herbert Blumer's symbolic interactionism, the data are collected and analyzed by means of grounded theory following Kathy Charmaz as well as Juliet Corbin and Anselm Strauss. As a consequence of the author's epistemological-methodological grounding, a reflexive-constructivist mode of research and writing is applied.
The resulting theory describes the challenges of care dependency from the perspective of the interviewed nurses. The core category describes negotiation processes in the areas of closeness and distance, advocacy and the ceding of responsibility, and ethos and technocracy. All of these aspects show how the legal definition of care dependency leads to challenges within nursing work. With its findings, the Ph.D. thesis contributes to situating the relevance of relational care work with regard to the prevailing framework conditions of care dependency, and shows how the actors' interaction and communication and the demands of individualized care within the German outpatient care system condition each other. It thus offers a professionally and empirically grounded approach for assessing and addressing care dependency as experienced by nursing professionals.
In recent years, public interest in epidemiology and mathematical modelling of disease spread has increased, mainly driven by the COVID-19 pandemic, which has emphasized the urgent need for accurate and timely modelling of disease transmission. Even prior to that, however, mathematical modelling was used to describe the dynamics and spread of infectious diseases, which is vital for developing effective interventions and controls, e.g., vaccination campaigns and social restrictions such as lockdowns. The forecasts and evaluations provided by these models influence political action and shape the measures implemented to contain a virus.
This research contributes to the understanding and control of disease spread, specifically for Dengue fever and COVID-19, making use of mathematical models and various data analysis techniques. It presents the mathematical foundations of epidemiological modelling and several concepts for spatio-temporal diffusion, such as ordinary differential equation (ODE) models, along with an original human-vector model for Dengue fever and the standard SEIR model (optionally extended by an equation for deceased persons), which is suited to describing COVID-19. Additionally, multi-compartment models, fractional diffusion models, partial differential equation (PDE) models, and integro-differential models are used to describe the spatial propagation of the diseases.
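The SEIR dynamics mentioned above can be sketched in a few lines. This is a minimal illustration with invented parameter values, not the thesis's calibrated model: susceptible individuals become exposed at rate proportional to contacts with the infectious, progress to infectious after a latency period, and then recover.

```python
# Minimal SEIR sketch via forward Euler integration.
# dS = -beta*S*I/N, dE = beta*S*I/N - sigma*E, dI = sigma*E - gamma*I, dR = gamma*I
def seir_step(state, beta, sigma, gamma, n, dt):
    s, e, i, r = state
    new_inf = beta * s * i / n          # new exposures per unit time
    ds = -new_inf
    de = new_inf - sigma * e            # sigma: 1 / latency period
    di = sigma * e - gamma * i          # gamma: 1 / infectious period
    dr = gamma * i
    return (s + ds * dt, e + de * dt, i + di * dt, r + dr * dt)

def simulate(days, state=(999.0, 0.0, 1.0, 0.0),
             beta=0.5, sigma=1 / 5.2, gamma=1 / 10, dt=0.1):
    """Run the SEIR model for `days` days; illustrative parameters only."""
    n = sum(state)                      # total population is conserved
    for _ in range(int(days / dt)):
        state = seir_step(state, beta, sigma, gamma, n, dt)
    return state

s, e, i, r = simulate(120)
print(f"susceptible remaining after 120 days: {s:.0f} of 1000")
```

An extension for deceased persons, as mentioned in the abstract, would simply split the outflow `gamma * i` into a recovered and a fatal fraction.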
We make use of different optimization techniques to fit the models to medical data, estimate the relevant parameters, and find optimal control strategies for containing diseases, using both Metropolis and Lagrangian methods. Reasonable estimates for the unknown parameters are found, especially in the initial stages of a pandemic, when little to no information is available and the majority of the population has not yet come into contact with the disease. The longer a disease is present, the more complex the modelling becomes: additional factors (vaccination, different variants, etc.) appear and reduce the estimation and prediction quality of the mathematical models.
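The Metropolis-based parameter estimation mentioned above can be illustrated with a toy example. The likelihood, proposal width, and growth model below are assumptions for illustration only (the thesis fits full epidemic models to medical data); the sketch shows the core mechanism: propose a perturbed parameter, and accept it with probability min(1, exp(Δ log-likelihood)).

```python
import math
import random

random.seed(0)

def log_likelihood(beta, data, model):
    """Gaussian log-likelihood up to a constant (unit noise variance assumed)."""
    return sum(-0.5 * (y - model(t, beta)) ** 2 for t, y in data)

def metropolis(data, model, beta0=0.3, step=0.05, iters=2000):
    """Random-walk Metropolis sampling of a single positive parameter."""
    beta = beta0
    ll = log_likelihood(beta, data, model)
    samples = []
    for _ in range(iters):
        cand = beta + random.gauss(0.0, step)
        if cand > 0:                      # keep the rate positive
            cand_ll = log_likelihood(cand, data, model)
            # accept with probability min(1, exp(cand_ll - ll))
            if math.log(random.random()) < cand_ll - ll:
                beta, ll = cand, cand_ll
        samples.append(beta)
    return samples

# Synthetic 'observations': exponential early-phase growth with true rate 0.5
model = lambda t, b: math.exp(b * t)
data = [(t, math.exp(0.5 * t)) for t in range(6)]

samples = metropolis(data, model)
est = sum(samples[500:]) / len(samples[500:])   # discard burn-in
print(f"posterior mean estimate: {est:.3f} (true rate 0.5)")
```

Beyond a point estimate, the retained samples also give a spread, which is exactly what makes such methods attractive in the data-poor early stage of a pandemic described above.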
While it is possible to create highly complex models with numerous equations and parameters, such an approach presents several challenges, including difficulties in comparing and evaluating data, an increased risk of overfitting, and reduced generalizability. Therefore, we also consider criteria for model selection based on fit and complexity, as well as the sensitivity of the model with respect to specific parameters. This also yields valuable information on which political interventions should be emphasized under possible variations of the parameter values.
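A standard criterion that trades off fit against complexity is the Akaike information criterion (AIC); the abstract does not name the thesis's exact criteria, so AIC here is an assumed stand-in for illustration. Extra parameters must "pay for themselves" with a sufficiently better likelihood.

```python
# AIC = 2k - 2 ln(L_max): lower is better; k penalizes model complexity.
def aic(k, max_log_likelihood):
    return 2 * k - 2 * max_log_likelihood

# Hypothetical numbers: a richer model that fits only marginally better loses.
simple_aic = aic(k=3, max_log_likelihood=-120.0)   # basic compartment model
rich_aic = aic(k=9, max_log_likelihood=-118.5)     # many extra parameters
print(simple_aic, rich_aic)  # 246.0 255.0 -> prefer the simpler model
```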
Furthermore, the presented models, particularly the optimization using the Metropolis algorithm for parameter estimation, are compared with other established methods. The quality of the model fit, as well as computational effort and applicability, play a role in this comparison. Additionally, the spatial integro-differential model is compared with an established agent-based model. Since the macroscopic results align very well, the computationally faster integro-differential model can be used as a proxy for the slower agent-based model, which is not amenable to traditional optimization, e.g., in order to find an apt control strategy.
Artificial neural networks are a popular field of research in artificial intelligence. The increasing size and complexity of huge models entail certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to choose efficient architectures for different tasks. It proves to be challenging to solve these problems, and with a lack of insightful representations of neural networks, this state of affairs becomes entrenched. With these difficulties in mind, a novel 3D visualization technique is introduced. Attributes of trained neural networks are estimated by utilizing established methods from the area of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the importance of different parts of the neural network. A combination of the importance values with various methods like edge bundling, ray tracing, 3D impostors and a special transparency technique results in a 3D model representing a neural network. The validity of the extracted importance estimations is demonstrated and the potential of the developed visualization is explored.
Leichte Sprache (LS, easy-to-read German) is a simplified variety of German. It is used to provide barrier-free texts for a broad spectrum of people, including low-literate individuals with learning difficulties, intellectual or developmental disabilities (IDD) and/or complex communication needs (CCN). In general, LS authors are proficient in standard German and do not belong to the aforementioned group of people. Our goal is to empower the latter to participate in written discourse themselves. This requires a special writing system whose linguistic support and ergonomic software design meet the target group’s specific needs. We present EasyTalk, a system based on natural language processing (NLP) for assistive writing in an extended variant of LS (ELS). EasyTalk provides users with a personal vocabulary underpinned with customizable communication symbols and supports writing at their individual level of proficiency through interactive user guidance. The system minimizes the grammatical knowledge needed to produce correct and coherent complex content by letting users take linguistic decisions intuitively. It provides easy dialogs for selecting options from a natural-language paraphrase generator, which offers context-sensitive suggestions for sentence components and correctly inflected word forms. In addition, EasyTalk reminds users to add text elements that enhance text comprehensibility in terms of audience design (e.g., time and place of an event) and improve text coherence (e.g., explicit connectors to express discourse relations). To tailor the system to the needs of the target group, the development of EasyTalk followed the principles of human-centered design (HCD). Accordingly, we matured the system in iterative development cycles, combined with purposeful evaluations of specific aspects conducted with expert groups from the fields of CCN, LS, and IT, as well as L2 learners of the German language.
In a final case study, members of the target audience tested the system in free writing sessions. The study confirmed that adults with IDD and/or CCN who have low reading, writing, and computer skills can write their own personal texts in ELS using EasyTalk. The positive feedback from all tests inspires future long-term studies with EasyTalk and further development of this prototypical system, such as the implementation of a so-called Schreibwerkstatt (writing workshop).
In a world where language defines the boundaries of one's understanding, the words of Austrian philosopher Ludwig Wittgenstein resonate profoundly. Wittgenstein's assertion that "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt" ("the limits of my language mean the limits of my world") (Wittgenstein 2016: v. 5.6) underscores the vital role of language in shaping our perceptions. Today, in a globalized and interconnected society, fluency in foreign languages is indispensable for individual success. Education must break down these linguistic barriers, and one promising approach is the integration of foreign languages into content subjects.
Teaching content subjects in a foreign language, a practice known as Content and Language Integrated Learning (CLIL), not only enhances language skills but also cultivates cognitive abilities and intercultural competence. This approach expands horizons and aligns with the core principles of European education (Leaton Gray, Scott & Mehisto 2018: 50). The Kultusministerkonferenz (KMK) recognizes the benefits of CLIL and encourages its implementation in German schools (cf. KMK 2013a).
With the rising popularity of CLIL, textbooks in foreign languages have become widely available, simplifying teaching. However, the appropriateness of the language used in these materials remains an unanswered question. If textbooks impose excessive linguistic demands, they may inadvertently limit students' development and contradict the goal of CLIL.
This thesis focuses on addressing this issue by systematically analyzing language requirements in CLIL teaching materials, emphasizing receptive and productive skills in various subjects based on the Common European Framework of Reference for Languages (CEFR). The aim is to identify a sequence of subjects that facilitates students' language skill development throughout their school years. Such a sequence would enable teachers to harness the full potential of CLIL, fostering a bidirectional approach where content subjects facilitate language learning.
While research on CLIL is extensive, studies on language requirements for bilingual students are limited. This thesis seeks to bridge this gap by presenting findings for History, Geography, Biology, and Mathematics, allowing for a comprehensive understanding of language demands. This research endeavors to enrich the field of bilingual education and CLIL, ultimately benefiting the academic success of students in an interconnected world.
The trends of Industry 4.0 and the further development toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With this greater need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in this setting is the demand for wireless Ultra-Reliable Low-Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Working in this setting, this thesis provides insights regarding the whole network stack, with a constant focus on industrial applications. Starting on the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed, and based on that data an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks are highly dependent on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. From an application perspective, latency and reliability are key performance parameters. Thus, measurements of these two parameters in factory environments are performed using Wireless Local Area Network (WLAN) (IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals from auto-correlated data containing rare events is developed. Subsequently, four performance improvements for wireless networks on the factory floor are proposed. Of these optimizations, three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and one optimizes the usage of information within cellular networks.
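Why ordinary confidence intervals fail here: auto-correlated latency samples carry less information than independent ones, so naive intervals are too narrow. The abstract does not spell out the thesis's construction, so the sketch below shows a related standard technique under that caveat: a moving-block bootstrap, which preserves short-range correlation by resampling contiguous blocks rather than individual samples.

```python
import random

random.seed(1)

def block_bootstrap_ci(series, block_len=50, reps=1000, alpha=0.05):
    """Percentile CI for the mean via the moving-block bootstrap."""
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    means = []
    for _ in range(reps):
        resample = []
        while len(resample) < n:
            resample.extend(random.choice(blocks))  # whole blocks keep correlation
        means.append(sum(resample[:n]) / n)
    means.sort()
    lo = means[int((alpha / 2) * reps)]
    hi = means[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

# Toy auto-correlated latency trace (AR(1)-like around 5 ms) for illustration
lat, x = [], 5.0
for _ in range(2000):
    x = 0.9 * x + 0.1 * 5.0 + random.gauss(0, 0.2)
    lat.append(x)

lo, hi = block_bootstrap_ci(lat)
print(f"95% CI for mean latency: [{lo:.2f}, {hi:.2f}] ms")
```

The rare-event aspect mentioned in the abstract is the harder part: blocks must be long enough to contain the rare spikes with realistic frequency, which is presumably where the thesis's dedicated method improves on this textbook baseline.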
Finally, this thesis concludes with an outlook on open research questions. This includes questions remaining in the context of Industry 4.0 as well as those around 6G. Among the research topics of 6G, the two most relevant concern the idea of a network of networks and overcoming best-effort IP.