The 10 most recently published documents
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model whose ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT’s ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models such as BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e., different ways of instructing ChatGPT, this work also analyzes how well the model understands complex linguistic nuances in the given texts, evaluated with qualitative metrics. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence rather than reading it only left to right; combined with fine-tuning, this helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its understanding of context. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the remaining models; the fine-tuned models excelled in the five-label case. Moreover, ChatGPT showed impressive knowledge of the language. Chain-of-Thought (CoT), which prompts with step-by-step instructions, was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT’s limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
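As a concrete illustration of the quantitative side of such a comparison, the sketch below builds a Chain-of-Thought-style prompt and computes accuracy and macro-F1 over the three-label scheme. The prompt wording, toy gold labels, and predictions are illustrative assumptions, not the thesis's actual prompts or data.

```python
LABELS3 = ["Negative", "Neutral", "Positive"]

def cot_prompt(review: str) -> str:
    """Build a Chain-of-Thought style prompt (wording is illustrative)."""
    return (
        "Classify the sentiment of the student course review below as "
        "Negative, Neutral, or Positive.\n"
        "Think step by step: list the aspects mentioned, judge the tone "
        "of each, then give the final label.\n"
        f"Review: {review}\nFinal label:"
    )

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1 over a fixed label set."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy gold labels vs. hypothetical model predictions (not the thesis data).
gold = ["Positive", "Negative", "Neutral", "Positive"]
pred = ["Positive", "Negative", "Positive", "Positive"]
acc = sum(t == p for t, p in zip(gold, pred)) / len(gold)
print(f"accuracy={acc:.2f}, macro-F1={macro_f1(gold, pred, LABELS3):.2f}")
# → accuracy=0.75, macro-F1=0.60
```

Macro averaging weights each class equally, which matters here because Neutral reviews are typically rarer than Positive or Negative ones.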
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including education. AI applications are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding ChatGPT integration into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, incorporating contextual information, and handling complex language expressions in university review data. The methodology involves several steps. Data are collected from various educational websites, blogs, and news platforms and preprocessed to handle emoticons, URLs, and tags; sarcastic text is then detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embeddings are generated with the Word2Vec model to capture semantic relationships between words.
The preprocessed text and extracted features are fed into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated using various metrics, such as accuracy, precision, recall, F-measure, and specificity, and indicate that the proposed framework outperforms conventional sentiment analysis models: the HR-RAN technique achieved a precision of 98.98%, recall of 99.23%, F-measure of 99.10%, accuracy of 98.88%, and specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
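A minimal sketch of the preprocessing step described above (handling URLs, tags, and emoticons); the regular expressions and the tiny emoticon lexicon are illustrative assumptions, not the framework's actual XLHN/HR-RAN pipeline.

```python
import re

URL_RE = re.compile(r"https?://\S+")
TAG_RE = re.compile(r"<[^>]+>|[@#]\w+")   # HTML tags, @mentions, #hashtags
# Tiny illustrative emoticon lexicon (the framework's actual one is not given):
EMOTICONS = {":)": " positive_emoticon ", ":(": " negative_emoticon "}

def preprocess(text: str) -> str:
    """Strip URLs and tags, map emoticons to sentiment tokens, normalize."""
    text = URL_RE.sub(" ", text)
    text = TAG_RE.sub(" ", text)
    for emo, token in EMOTICONS.items():
        text = text.replace(emo, token)
    return " ".join(text.split()).lower()

print(preprocess("ChatGPT helps a lot :) see https://example.edu #AI"))
# → chatgpt helps a lot positive_emoticon see
```

Mapping emoticons to pseudo-words rather than deleting them preserves their sentiment signal for the downstream classifier.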
Investigating Generation Z's expectations of and demands on leaders is highly relevant both practically and scientifically. From a practical perspective, the shortage of skilled workers in Germany and Generation Z's changed expectations require a rethinking among leaders. Scientifically, a research gap exists, since traditional leadership theories are often no longer up to date and specific studies on Generation Z's preferences are lacking.
The aim of this master's thesis is to close this gap through qualitative expert interviews and to gain new insights into the views of Generation Z. In doing so, companies are to be supported in attracting young professionals in the long term.
The method comprises a qualitative study with 14 interviewees, including Generation Z employees and leaders. The semi-structured interviews were transcribed and evaluated using qualitative content analysis according to Mayring and the software MAXQDA. Categories were formed inductively from the material.
The results show that Generation Z expects authentic, empathetic leadership and clear prospects. Main categories such as organization, social competence, teamwork, and future-oriented paths were identified. Both groups emphasize the importance of empowerment, regular communication, fairness, and transparency.
In practical terms, the results imply that companies must take concrete measures and pursue further development to meet the needs of the young generation. For research, the study provides a basis for follow-up quantitative investigations and a conceptual framework depicting the most important categories and their interrelations.
Recent studies show that biofilm substances in contact with nanoplastics play an important role in the aggregation and sedimentation of nanoplastics. Consequences of these processes are changes in biofilm formation and stability as well as in the transport and fate of pollutants in the environment. A deeper understanding of the nanoplastics–biofilm interaction would help to evaluate the risks posed by uncontrolled nanoplastic pollution. These interactions are affected by environmental changes due to climate change, such as the acidification of surface waters. We apply fluorescence correlation spectroscopy (FCS) to investigate the pH-dependent aggregation tendency of non-functionalized polystyrene (PS) nanoparticles (NPs) arising from intermolecular forces with model extracellular biofilm substances. Our biofilm model consists of bovine serum albumin (BSA), which serves as a representative of globular proteins, and the polysaccharide alginate, a main component of many biofilms, in solutions containing Na+ at an ionic strength realistic for freshwater conditions. Biomolecule concentrations ranging from 0.5 g/L up to a maximum of 21 g/L are considered. We use non-functionalized PS NPs as a representative of mostly negatively charged nanoplastics. BSA promotes NP aggregation through adsorption onto the NPs and BSA-mediated bridging. In BSA–alginate mixtures, the alginate hampers this interaction, most likely due to alginate–BSA complex formation. In most BSA–alginate mixtures, as in alginate alone, NP aggregation is predominantly driven by weaker, pH-independent depletion forces. The stabilizing effect of alginate is weakened only at high BSA contents, when the electrostatic BSA–BSA attraction is not sufficiently screened by the alginate. This study clearly shows that it is crucial to consider correlative effects between multiple biofilm components to better understand NP aggregation in the presence of complex biofilm substances. Single-component biofilm model systems chosen, as is common, by matching the total organic carbon (TOC) content of the extracellular biofilm substances would have led to a misjudgment of the stability towards aggregation.
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e. it holds in the initial state of the system and is preserved under updates of the system. Sometimes the property itself is not inductive, but a stronger condition is. For such cases, this thesis proposes a method for goal-directed invariant strengthening.
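The inductive-invariant idea can be sketched on a tiny finite-state example; the counter system, the bounded domain, and the predicates below are illustrative assumptions, not taken from the thesis.

```python
def is_inductive(inv, init, trans, states):
    """Check inductiveness over a finite state space:
    (1) inv holds in every initial state,
    (2) inv is preserved by every transition staying in the space."""
    state_set = set(states)
    base = all(inv(s) for s in state_set if init(s))
    step = all(inv(t) for s in state_set if inv(s)
                      for t in trans(s) if t in state_set)
    return base and step

states = range(-10, 11)             # bounded integer domain, for illustration
init = lambda x: x == 0             # system starts at 0
trans = lambda x: [x + 2]           # and repeatedly adds 2
safety = lambda x: x != 1           # desired safety property
strengthened = lambda x: x % 2 == 0 and x != 1  # strengthened candidate

print(is_inductive(safety, init, trans, states))        # False: -1 satisfies
                                                        # safety, but -1+2=1
print(is_inductive(strengthened, init, trans, states))  # True
```

The safety property alone is not inductive because the induction step may start from unreachable states; strengthening it with "x is even" rules those states out, exactly the situation goal-directed invariant strengthening addresses.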
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices depends on properties of their distances to other nodes. One way to prove properties of those graphs, or to analyze relations between the graph classes, is to prove or disprove that one graph class is contained in another. In this thesis we propose a method for checking inclusions between geometric graph classes.
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For such an exchange, the parties expect a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable cost and a foreseeable benefit. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of this data are maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data are ensured as previously agreed. However, in order to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as the trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed ledger-based fair exchange protocol. However, when assessing the fairness of such an exchange, state-of-the-art approaches neglect the costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable cost, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable cost and performance. As a theoretical foundation, we introduce the concept of cost fairness, which incorporates cost into the fairness assessment by requiring that a party following the fair exchange protocol never suffers any unilateral disadvantage. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a grieving attack.
To allow fair data exchanges to be conducted with reasonable cost and performance, we introduce FairSCE, a distributed ledger-based fair exchange protocol that uses distributed ledger state channels and incorporates a mechanism to protect against grieving attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case cost of a data exchange, even in the presence of malicious parties, is known, which allows an estimate of the possible benefit and thus a preliminary estimate of the economic utility. Furthermore, to allow an unambiguous assessment of the correct data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for hashing hierarchically structured data, which can be used to ensure the integrity and authenticity of the transferred data.
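One way such hierarchical hashing with masking can work is a Merkle-style scheme in which a masked subtree is replaced by its own hash, leaving the root hash verifiable. The encoding and the field names below are illustrative assumptions, not FairSCE's actual construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(node):
    """Merkle-style hash of a nested dict: an inner node hashes its
    children's (key, hash) pairs in key order; a bytes value is taken
    to be an already-masked subtree and stands in for its own hash."""
    if isinstance(node, dict):
        parts = b"".join(k.encode() + tree_hash(v)
                         for k, v in sorted(node.items()))
        return h(b"node:" + parts)
    if isinstance(node, bytes):
        return node
    return h(b"leaf:" + repr(node).encode())

def mask(node, key):
    """Replace the subtree at `key` by its hash, hiding its content."""
    redacted = dict(node)
    redacted[key] = tree_hash(node[key])
    return redacted

# Hypothetical railway-data document (field names are illustrative):
doc = {"train": {"route": "A-B", "delay_min": 12}, "price": 4000}
masked = mask(doc, "price")                  # the sensitive price is hidden
print(tree_hash(masked) == tree_hash(doc))   # → True: root hash still verifies
```

Because each subtree's hash commits to its content, a receiver can verify the unmasked parts against the agreed root hash without ever seeing the masked fields.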
Well-being is essential for all people. Therefore, important factors influencing people’s well-being must be investigated. Well-being is multifaceted and defined as, for example, psychological, emotional, mental, physical, or social well-being. Here, we focus on psychological well-being. The study aimed to analyze different aspects of connectedness as potential predictors of psychological well-being. For this purpose, we conducted a study examining the psychological well-being of 184 participants (130 women, 54 men; age: M = 31.39, SD = 15.24) as well as their connectedness with oneself (self-love), with others (prosocialness), with nature (nature connectedness), and with the transcendent (spirituality). First, significant positive correlations appeared between psychological well-being and self-love, nature connectedness, and spirituality. Furthermore, the correlations between the four aspects of connectedness were significant, except for the relationship between self-love and prosocialness. A regression analysis revealed that self-love and nature connectedness positively predicted participants’ psychological well-being, while spirituality and prosocialness did not explain any incremental variance. The strong relationship between self-love and well-being was partly mediated by nature connectedness. Hence, self-love, understood as a positive attitude of self-kindness, should be considered in more detail to enhance psychological well-being. Beyond this, a stronger connectedness to the surrounding nature could benefit people’s well-being.
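The reported regression and mediation pattern can be sketched on synthetic data. The data below are generated purely for illustration to mimic partial mediation (self-love acting on well-being partly via nature connectedness); the coefficients are assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic variables mimicking partial mediation (illustrative only):
sl = rng.normal(size=n)                        # self-love
nc = 0.5 * sl + rng.normal(scale=0.8, size=n)  # nature connectedness
wb = 0.3 * sl + 0.4 * nc + rng.normal(scale=0.8, size=n)  # well-being

def ols(y, *xs):
    """Least-squares regression coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

c_total = ols(wb, sl)[1]           # total effect of self-love on well-being
a = ols(nc, sl)[1]                 # self-love -> mediator path
_, c_direct, b = ols(wb, sl, nc)   # direct effect and mediator -> outcome path
print(f"total={c_total:.2f}, direct={c_direct:.2f}, indirect={a * b:.2f}")
```

Partial mediation shows up as a direct effect that is smaller than the total effect while the indirect path a·b remains positive, the same pattern the study reports for nature connectedness.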
Examining the role of post-event processing in test anxiety—Pilot testing in three student samples
(2024)
This work investigates the occurrence of post-event processing (PEP) in the context of test anxiety; PEP involves rumination and self-critical thinking following an event and is commonly observed in social anxiety. Three short-term longitudinal studies in student samples examined whether PEP occurs after exams and how it is associated with test anxiety. University students (N = 35 in Study 1, N = 146 in Study 2, and N = 37 in Study 3) completed measures of trait and state test anxiety before an actual exam; PEP related to the exam was assessed at various time points afterward. Results revealed that PEP occurred to a meaningful extent after exam situations. Overall, it was positively associated with trait and state test anxiety, although some variation in these relations was found across the three studies. These findings underscore the relevance of PEP in the context of test anxiety, as PEP might contribute to maintaining test anxiety in the long term. Implications for future studies are discussed.
Degenerative changes in the spine, as well as back pain, can be considered a common ailment. Incorrect loading of the lumbar spine structures is often considered one of the factors that can accelerate degenerative processes leading to back pain. One such degenerative change is the occurrence of spinal stenosis following spondylolisthesis. Surgical treatment of spinal stenosis mainly focuses on decompressing the spinal canal, with or without additional fusion through dorsal spondylodesis. Opinions differ on whether fusion together with decompression provides potential benefits to patients or represents overtreatment. Both conventional therapies and surgical methods aim to restore a “healthy” (or at least pain-free) distribution of load. Surprisingly little is known about the interindividual variability of load distribution in “healthy” lumbar spines. Since medical imaging does not provide information on internal forces, computer simulation of individual patients could provide a new set of decision criteria for these cases. The advantage lies in calculating the internal load distribution, which is not feasible in in-vivo studies, as measurements of internal forces in living subjects are ethically, and in part technically, infeasible. In the present research, a forward-dynamic approach is used to calculate the load distribution in multi-body models of individual lumbar spines. The work is structured into three parts: (I) load distribution is quantified depending on the individual curvature of the lumbar spine; (II) confidence intervals of the instantaneous center of rotation over time are determined, with which the motion behavior of healthy lumbar spines can be described; (III) lastly, the effects of decompression surgeries on the load distribution of lumbar spines are determined.
As part of this thesis, the biodegradable polymers polylactic acid (PLA) and polyhydroxybutyrate (PHB), produced from renewable raw materials, were coated with hydrogenated amorphous carbon (a-C:H) layers of various thicknesses at different deposition angles. Like conventional polymers, biopolymers often have surface properties unsuitable for industrial purposes, e.g. low hardness. For some applications it is therefore necessary and advantageous to modify the surface properties of biopolymers while retaining the bulk properties of the substrate material. A suitable surface modification is the deposition of thin a-C:H layers, whose properties depend essentially on the ratio of sp²- to sp³-hybridized carbon atoms and on the hydrogen content. In the present work, the sp²/sp³ ratio was to be controlled by varying the coating geometry: since coatings deposited at 0°, directly in front of the plasma source, contain a higher percentage of sp³ bonds and indirectly coated samples (180°) a higher amount of sp² bonds, it is shown that the ratio can indeed be controlled in this way. For this purpose, the samples were placed in front of the plasma source at angles of 0, 30, 60, 90, 120, 150, and 180° and coated for 2.5, 5.0, 7.5, and 10.0 minutes; at 0°, these times corresponded to layer thicknesses of 25, 50, 75, and 100 nm. The a-C:H layers were all deposited by radio-frequency plasma-enhanced chemical vapor deposition with acetylene as the C and H source, after a 10-minute oxygen plasma pretreatment. Following the O₂ treatment and the a-C:H deposition, the surfaces were examined using macroscopic and microscopic measurement methods and the data analyzed. The surface morphology was recorded using scanning electron microscopy and atomic force microscopy, which additionally provided data on the stability of the layer and the surface roughness.
Contact angle (CA) measurements were used to determine not only the wettability but also the contact angle hysteresis, by pumping the drop volume up and down. By measuring the CA with different liquids and comparing the results, the surface free energy (SFE) and its polar and disperse components were determined. Changes in barrier properties were verified by water vapor transmission rate (WVTR) tests. The chemical analysis of the surface was carried out on the one hand by Fourier transform infrared spectroscopy with specular reflection and on the other hand by synchrotron-based techniques such as near-edge X-ray absorption fine structure and X-ray photoelectron spectroscopy. Analysis of the surfaces after the O₂ treatment, which was initially assumed to serve only to clean and activate the surface for the a-C:H coating, revealed changes more drastic than originally assumed. For example, if PLA is treated at 0° for 10 minutes, the roughness increases fivefold; as the angle increases, it decreases again until it returns to the initial value at 180°. A similar but weaker effect is observed for PHB at 30°. For both polymers, the polar fraction of the SFE increases. In the WVTR, a decrease in permeability is observed for PLA and an increase over the initial value for PHB. The chemical surface analysis shows that the O₂ treatment has little effect on the surface bonds. Overall, this work shows that the O₂ treatment affects the surface properties and cannot be regarded exclusively as a cleaning and activation step. With direct a-C:H coating (at 0°), layer failure due to internal stress is observed for both PLA and PHB; this also occurs with PHB at 30°, but to a lesser extent. The permeability of the polymers is reduced by 47% with a five-minute coating, and the 10.0-minute layer retains this effect despite the appearance of cracks.
The a-C:H layers show a dominance of sp³ bonds for both polymer types under direct coating. This dominance decreases with increasing angle, and sp² bonds become dominant for indirect coatings. This result is similar for all coating thicknesses; only the angle at which the dominant bond type changes differs. It is thus shown that an angle-dependent coating makes it possible to control the sp²/sp³ ratio and thereby the surface properties.