Refine
Year of publication
Document Type
- Part of Periodical (38)
- Doctoral Thesis (35)
- Diploma Thesis (24)
- Study Thesis (19)
- Master's Thesis (16)
- Bachelor Thesis (14)
- Report (1)
Keywords
- Routing (5)
- Bluetooth (4)
- Knowledge Compilation (4)
- Netzwerk (4)
- Semantic Web (4)
- Software Engineering (4)
- VNUML (4)
- E-KRHyper (3)
- Netzwerksimulation (3)
- RIP-MTI (3)
- XTPeer (3)
- Campus Information System (2)
- Drahtloses Sensorsystem (2)
- Equality (2)
- Information Retrieval (2)
- Konvergenz (2)
- Modellgetriebene Entwicklung (2)
- Network (2)
- Ontology (2)
- Petri-Netze (2)
- Prädikatenlogik (2)
- RIP (2)
- Rechnernetz (2)
- Theorem Proving (2)
- UML (2)
- University (2)
- constraint logic programming (2)
- hybrid automata (2)
- probability propagation nets (2)
- API (1)
- API analysis (1)
- API-Analyse (1)
- AUTOSAR (1)
- Abduktion <Logik> (1)
- Ad-hoc-Netz (1)
- Adaptation (1)
- Administration (1)
- Algorithm Engineering (1)
- Algorithmische Geometrie (1)
- Anpassung (1)
- Architektur <Informatik> (1)
- Auditing (1)
- Augmented Reality (1)
- Aussagenlogik (1)
- Authentisierung (1)
- Automated Theorem Proving (1)
- Automated Theorem Proving Systems (1)
- Automatisches Beweisverfahren (1)
- Automotive Systems (1)
- Backtrack (1)
- Bayes Procedures (1)
- Beaconless (1)
- Befahrbarkeit (1)
- Betriebssystem (1)
- Border Gateway Protocol (1)
- Border Gateway Protocol 4 (1)
- C-Socketbibliothek (1)
- CMS (1)
- CSCA (1)
- CTI (1)
- CTI -Problem (1)
- Calculus (1)
- Cheops (1)
- Cicero (1)
- Circumscription (1)
- Cisco Catalyst 3500XL (1)
- Communication Networks (1)
- Computational logic (1)
- Computer Security (1)
- Computer Supported Cooperative Work (1)
- Computersicherheit (1)
- Computersimulation (1)
- Conference (1)
- Connected Vehicles (1)
- Context-aware processes (1)
- Controlling (1)
- Counting-to-Infinity (1)
- DHCPv6 (1)
- DNNF (1)
- DPLL (1)
- DPLL procedure (1)
- Data Mining (1)
- Datenschutz (1)
- Deduktion (1)
- Description Logics (1)
- Destiny (1)
- Developer profiling (1)
- DiaLex (1)
- Dialekt (1)
- Dijkstras Algorithmus (1)
- Dimension 3 (1)
- Diskrete Simulation (1)
- Distanz-Vektor (1)
- Distributed Algorithm (1)
- Distributed Environments (1)
- Distributed process execution (1)
- Dokumentation (1)
- Drahtloses vermachtes Netz (1)
- Driver Assistance Systems (1)
- E-Participation (1)
- Ebener Graph (1)
- Echtzeit-Interaktion (1)
- Echtzeitsysteme (1)
- Empirical Studies (1)
- Emulation (1)
- Enhanced Reality (1)
- Enterprise Architecture Framework (1)
- Entscheidungsprozess (1)
- Entwickler Profil (1)
- Fahrerassistenzsysteme (1)
- Fahrzeug (1)
- Fließgewässer (1)
- Flowing Inland Waters (1)
- Formal Methods (1)
- Formale Methoden (1)
- Formale Ontologie (1)
- Fragebeantwortung (1)
- GPS (1)
- GReTL (1)
- Gelände (1)
- Geographic routing (1)
- Geometric spanner (1)
- Graph (1)
- Grounded Theory (1)
- HNF-Algorithmus (1)
- Hermite-Normalform (1)
- Heuristiken (1)
- Hindernis (1)
- Horn Clauses (1)
- Human-Computer Interaction (1)
- Hyper Tableau Calculus (1)
- IASON (1)
- IPv6 (1)
- IT Guru (1)
- IT-Security (1)
- Information (1)
- Information Centric Networking (1)
- Information Visualization (1)
- Informationsverlinkung (1)
- Informationsvisualisierung (1)
- InstaHub (1)
- Intelligent Information Network (1)
- Interactive Video Retrieval (1)
- Interactive Visualizations (1)
- Interaktive Visualisierungen (1)
- Interior Gateway Protokoll (1)
- JGraLab (1)
- JML (1)
- Java (1)
- Java Modeling Language (1)
- Java. Programmiersprache (1)
- KRHyper (1)
- Klassifikation (1)
- Klausellogik (1)
- Knowledge (1)
- Knowledge Engineering (1)
- Komplexität / Algorithmus (1)
- Laser (1)
- Linking of Information (1)
- Linux (1)
- Local algorithm (1)
- Logischer Schluss (1)
- Lookup-Algorithmen (1)
- MIA (1)
- MPEG-7 (1)
- MSR (1)
- Mail-Filter (1)
- Mehrprozessorsystem (1)
- Mensch-Maschine-Interaktion (1)
- Metamodel (1)
- Meteor (1)
- Metriken (1)
- Mikrocontroller (1)
- Mining (1)
- Mining Software Repositories (1)
- Mobile Information Systems (1)
- Mobiltelefon (1)
- Modelltransformation (1)
- Monitoring (1)
- Multi-Agenten-Simulation (1)
- Multi-robot System (1)
- Multiagent System (1)
- Multiagentensysteme (1)
- Nachbarschaftsgraph (1)
- Nagios (1)
- Named Function Networking (1)
- Network Monitoring (1)
- Network Simulation (1)
- Netzwerk Protokollen (1)
- Netzwerkanalyse (1)
- Netzwerkchip (1)
- Netzwerkmanagment (1)
- Netzwerktopologie (1)
- Netzwerküberwachung (1)
- Neuronales Netz (1)
- Nutzerzufriedenheit (1)
- OPNET (1)
- OWL <Informatik> (1)
- Ontologie <Wissensverarbeitung> (1)
- OpenWrt (1)
- Oracle Generation (1)
- Oraklegenerierung (1)
- Orientierungsstufe (1)
- PDA (1)
- Personal Key Infrastruktur (1)
- Personalised Information Systems (1)
- Petri Nets (1)
- Petri net (1)
- Petrinetz (1)
- Placement Strategies (1)
- Planar graphs (1)
- Policy Language (1)
- Proactive Caching (1)
- Probability (1)
- Probability propagation nets (1)
- Process (1)
- Process tracing (1)
- Projekt iCity (1)
- Propagation (1)
- Propositional Logic (1)
- Protocol (1)
- Prototyp (1)
- Prototype (1)
- Provenance (1)
- Quasi unit disk graph (1)
- Query Expansion (1)
- RDF Graphs (1)
- RDF modeling (1)
- RIP-MTI Algorithmus (1)
- RMTI (1)
- Reactive algorithm (1)
- Rechnernetze (1)
- Reference Model (1)
- Relationale Datenbank (1)
- Relevance Feedback (1)
- Requirement-Engineering (1)
- Resource Description Framework (RDF) (1)
- Robocup 2008 (1)
- Roboter (1)
- Router (1)
- Routing Information Protocol (1)
- Routing Information Protocol (RIP) (1)
- Routing Loops (1)
- Routing information protocol (1)
- Routing with Metric based Topology Investigation (RMTI) (1)
- Routing- und Forwarding Tabellen (1)
- Routing-Loop (1)
- SAC (1)
- SNMP (1)
- SOA (1)
- SPARQL (1)
- STP (1)
- Schleifenerkennung (1)
- Schlussfolgern (1)
- Security Requirements (1)
- Semantic Web Data (1)
- Semantics (1)
- Semantisches Wiki (1)
- Sentiment Analysis, ChatGPT, Students sentiments (1)
- Serdar Ayalp (1)
- Serviceorientierte Architektur (1)
- Software Language (1)
- Software Repositories (1)
- Software Technology (1)
- Softwarearchitektur (1)
- Softwaretest (1)
- Softwaretesting (1)
- Soziale Netzwerke (1)
- Spam-Mail (1)
- Spannender Baum (1)
- Spanning Tree (1)
- Specification (1)
- Spezifikation (1)
- Statistische Analyse (1)
- Stochastic Logic (1)
- Straßenzustand (1)
- Suat Algin (1)
- TAP (1)
- TGraph (1)
- TRECVID (1)
- Tableau <Logik> (1)
- Tableau Calculus (1)
- Technologischer Raum (1)
- Technology Acceptance Model (1)
- Temperaturmessung (1)
- Test Generation (1)
- Testgenerierung (1)
- Theorem prover (1)
- Theorembeweiser (1)
- Tokens (1)
- Transaction concept (1)
- Transaktion (1)
- Transaktionskonzept (1)
- Ultraschall (1)
- Unified Modeling Language (UML ) (1)
- Unit disk graph (1)
- VDE (1)
- VDE-Switch (1)
- VLAN (1)
- VTP (1)
- Vegetation distribution (1)
- Verifikation (1)
- Verortung (1)
- Verteilter Algorithmus (1)
- Virtual Square (1)
- Virtual network user mode Linux (1)
- Virtualisation (1)
- Virtualisierung (1)
- Visualisierung (1)
- Wahrscheinlichkeitsrechnung (1)
- Wasseroberfläche (1)
- Water Surface Temperature (1)
- Web Ontology Language (OWL) (1)
- Werbung (1)
- Wiki (1)
- Wireless Sensor Networks (1)
- Wireless sensor network (1)
- Wireshark (1)
- Wissenskompilation (1)
- Wissensmanagement (1)
- World Wide Web (1)
- World Wide Web 2.0 (1)
- XML (1)
- XML-Dokument (1)
- Zertifikat (1)
- Zirkumskription (1)
- Zusammenhängender Graph (1)
- andLinux (1)
- andLinux. (1)
- automated theorem prover (1)
- backpropagation (1)
- bias (1)
- classification (1)
- concurrency (1)
- convergence analysis (1)
- data mining (1)
- deductive (1)
- description logic (1)
- e-learning (1)
- e-service quality (1)
- ediv (1)
- event model (1)
- event-based systems (1)
- first-order logic (1)
- folksonomies (1)
- gaze information (1)
- grassland (1)
- hybrid systems (1)
- hybride Automaten (1)
- iCity project (1)
- image semantics (1)
- information retrieval (1)
- knowledge management system (1)
- landmarks (1)
- living book (1)
- model generation (1)
- multi-agent systems (1)
- multiagent systems (1)
- multiprocessing (1)
- networks (1)
- neural (1)
- ontology (1)
- parallel computing (1)
- personal information management (1)
- persönliches Informationsmanagement (1)
- privacy protection (1)
- public key infrastructure (1)
- question answering (1)
- real-time computing (1)
- reasoning (1)
- recurrent (1)
- rip (1)
- rmti (1)
- semantic desktop (1)
- semantics (1)
- semantischer Desktop (1)
- sequent calculi (1)
- simulation (1)
- summative evaluation (1)
- tag recommendation (1)
- tagging (1)
- vegetation modeling (1)
- verification (1)
- virtual network user mode linux (1)
- vnuml (1)
- web 2.0 (1)
- xtpeer (1)
Institute
- Institut für Informatik (147)
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability on natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT's ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models such as BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e. different ways of instructing ChatGPT, this work also analyzes with qualitative metrics how well the model understands complex linguistic nuances in the given texts. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence, not just left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its understanding of context. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the other models; the fine-tuned models excelled in the five-label case. Moreover, ChatGPT showed impressive knowledge of the language. Chain-of-Thought (CoT), which prompts with step-by-step instructions, was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT's limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
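One of the comparison models named above, facebook/bart-large-mnli, can be run for the three-label setting with the Hugging Face transformers zero-shot pipeline roughly as follows; the review text is an invented example, and the thesis's actual prompts and data are not reproduced here.

    # A minimal zero-shot three-label sentiment probe with bart-large-mnli
    # (one of the comparison models named above); the review text is made up.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    review = "The lectures were engaging, but the workload felt overwhelming."
    result = classifier(review, candidate_labels=["Negative", "Neutral", "Positive"])
    # labels come back sorted by score, highest first
    print(result["labels"][0], round(result["scores"][0], 3))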
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including the field of education. Applications of AI are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding the integration of ChatGPT into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags, and sarcastic text is then detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embedding uses the Word2Vec model to capture semantic relationships between words. The preprocessed text and extracted features are input to the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. The results indicate that the proposed framework outperforms conventional sentiment analysis models: the HR-RAN technique achieved a precision of 98.98%, recall of 99.23%, F-measure of 99.10%, accuracy of 98.88%, and specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
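The reported metrics can be computed for any three-way sentiment classifier along the following lines; the labels are toy examples, and specificity, which scikit-learn does not provide directly, is derived per class from the confusion matrix.

    # Toy labels, not the thesis data: accuracy, precision, recall, F-measure
    # and per-class specificity for a three-way sentiment task.
    import numpy as np
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, confusion_matrix)

    labels = ["negative", "neutral", "positive"]
    y_true = ["positive", "neutral", "negative", "positive", "neutral", "negative"]
    y_pred = ["positive", "neutral", "neutral", "positive", "negative", "negative"]

    print("accuracy ", accuracy_score(y_true, y_pred))
    print("precision", precision_score(y_true, y_pred, average="macro"))
    print("recall   ", recall_score(y_true, y_pred, average="macro"))
    print("F-measure", f1_score(y_true, y_pred, average="macro"))

    # specificity per class: TN / (TN + FP), averaged over classes
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    specificities = []
    for i in range(len(labels)):
        tn = cm.sum() - cm[i, :].sum() - cm[:, i].sum() + cm[i, i]
        fp = cm[:, i].sum() - cm[i, i]
        specificities.append(tn / (tn + fp))
    print("specificity", np.mean(specificities))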
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e. it holds in the initial state of the system and is invariant under updates of the system. Sometimes this is not the case for the condition itself, but for a stronger condition it is. In this thesis we propose a method for goal-directed invariant strengthening.
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices depends on properties of their distances to other nodes. One possibility to prove properties of such graphs, or to analyze relations between the graph classes, is to prove or disprove that one graph class is contained in the other. In this thesis we propose a method for checking inclusions between geometric graph classes.
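Two of the graph classes that appear in the keyword list above, unit disk graphs and quasi unit disk graphs, illustrate how such distance-based edge conditions look. The following sketch uses made-up node positions and radii and only performs a numeric sanity check of one well-known inclusion; it is not the symbolic inclusion checking the thesis proposes.

    # Distance-defined graph classes on random points; parameters are invented.
    import math, itertools, random

    random.seed(1)
    nodes = {i: (random.random(), random.random()) for i in range(20)}

    def dist(u, v):
        (x1, y1), (x2, y2) = nodes[u], nodes[v]
        return math.hypot(x1 - x2, y1 - y2)

    def unit_disk_edges(radius=0.3):
        # edge iff the distance is at most the radius
        return {(u, v) for u, v in itertools.combinations(nodes, 2) if dist(u, v) <= radius}

    def quasi_unit_disk_edges(r=0.15, R=0.3, present=None):
        # edges below r are mandatory, edges above R are forbidden,
        # edges in between may or may not exist (here: taken from `present`)
        present = present or set()
        edges = set()
        for u, v in itertools.combinations(nodes, 2):
            d = dist(u, v)
            if d <= r or (d <= R and (u, v) in present):
                edges.add((u, v))
        return edges

    # every quasi unit disk graph with parameters (r, R) is a subgraph of the
    # unit disk graph with radius R -- a numeric check of one inclusion
    assert quasi_unit_disk_edges() <= unit_disk_edges()
    print(len(unit_disk_edges()), "UDG edges")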
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For the exchange of such data, the parties expect a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable costs and a foreseeable benefit. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of this data is maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data is ensured as previously agreed. However, in order to be able to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as a trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed ledger based fair exchange protocol. However, for assessing the fairness of such an exchange, state-of-the-art approaches neglect the costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable costs, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis, we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable costs and performance. As a theoretical foundation, we introduce the concept of cost fairness, which takes costs into account in the fairness assessment by requiring that a party following the fair exchange protocol never suffers any unilateral disadvantage. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a grieving attack.
To allow fair data exchanges to be conducted with reasonable costs and performance, we introduce FairSCE, a distributed ledger-based fair exchange protocol using distributed ledger state channels and incorporating a mechanism to protect against grieving attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case costs for data exchange, even in the presence of malicious parties, are known, which allows an estimate of the possible benefit and, thus, a preliminary estimate of the economic utility. Furthermore, to allow for an unambiguous assessment of the correct data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for hashing hierarchically structured data, which can be used to ensure the integrity and authenticity of the data being transferred.
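The hashing of hierarchically structured data mentioned above lends itself to a Merkle-style construction. The following sketch is only a generic illustration of that idea, not the scheme defined in the thesis, and the exchanged record is invented.

    # Generic bottom-up hashing of nested data; NOT the thesis's scheme.
    import hashlib, json

    def node_hash(value):
        """Hash a nested structure: inner nodes hash their children's hashes."""
        if isinstance(value, dict):
            payload = "dict:" + ",".join(k + "=" + node_hash(value[k]) for k in sorted(value))
        elif isinstance(value, list):
            payload = "list:" + ",".join(node_hash(v) for v in value)
        else:
            payload = "leaf:" + json.dumps(value, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    # hypothetical exchanged record
    record = {"train": "ICE 123", "cargo": [{"id": 1, "weight": 700}], "price": 42000}
    print(node_hash(record))

Because each inner hash depends only on the hashes of its children, a sender could reveal the hash of the price subtree instead of the value itself, and a verifier could still recompute the root, which is the intuition behind masking sensitive parts while keeping integrity verifiable.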
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers' everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results. Results are statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose a heterogeneous analysis, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision counts in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method with benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of such a method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them, and making the implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to complement their work by implementing 'fake' simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats and the data analysis methodologies used. We demonstrate that this form of simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
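As a rough illustration of the incrementalization idea (not the thesis's actual interface), the following sketch expresses a repository abstraction as a fold of an associative combine over per-revision summaries, so that a new revision costs one extra combine instead of a recomputation over the whole history; the revision data is invented.

    # Incremental repository abstraction as a fold over per-revision summaries.
    from dataclasses import dataclass
    from collections import Counter
    from functools import reduce

    @dataclass
    class Revision:
        author: str
        changed_files: list

    def summarize(rev: Revision) -> Counter:
        # per-revision summary: how often each file was touched
        return Counter(rev.changed_files)

    def combine(a: Counter, b: Counter) -> Counter:
        # associative combine with Counter() as the neutral element
        return a + b

    history = [Revision("alice", ["core.py"]), Revision("bob", ["core.py", "util.py"])]
    state = reduce(combine, map(summarize, history), Counter())

    new_rev = Revision("alice", ["util.py"])
    state = combine(state, summarize(new_rev))   # incremental update, one combine only
    print(state.most_common())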
The trends of Industry 4.0 and further enhancements toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With that higher need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in this setting is the demand for wireless Ultra-Reliable and Low Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Working in this setting, this thesis provides insights regarding the whole network stack, with the focus always on industrial applications. Starting at the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed, and based on that data an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks are highly dependent on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. Performance in terms of latency and reliability provides key parameters from an application perspective. Thus, measurements of those two parameters in factory environments are performed using Wireless Local Area Network (WLAN) (IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals based on auto-correlated data containing rare events is developed. Subsequently, four performance improvements for wireless networks on the factory floor are proposed. Of those optimizations, three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and one optimizes the usage of information within cellular networks.
Finally, this thesis concludes with an outlook toward open research questions. This includes open questions remaining in the context of Industry 4.0 and, further, those around 6G. Among the research topics of 6G, the two most relevant here concern the idea of a network of networks and overcoming best-effort IP.
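The thesis develops its own way of constructing confidence intervals for auto-correlated data with rare events; purely as a generic point of reference, the following sketch applies a standard moving-block bootstrap to a synthetic autocorrelated latency trace.

    # Moving-block bootstrap CI for the mean of an autocorrelated series
    # (synthetic AR(1)-like "latency" data, not measurement data from the thesis).
    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.normal(0, 0.2, 2000)
    x = np.zeros(2000)
    for t in range(1, 2000):
        x[t] = 0.9 * x[t - 1] + noise[t]
    latency = 5.0 + np.abs(x)           # synthetic autocorrelated latency trace (ms)

    def block_bootstrap_ci(data, block_len=50, n_boot=2000, alpha=0.05):
        n = len(data)
        starts = np.arange(n - block_len + 1)
        means = []
        for _ in range(n_boot):
            chosen = rng.choice(starts, size=n // block_len, replace=True)
            sample = np.concatenate([data[s:s + block_len] for s in chosen])
            means.append(sample.mean())
        return np.quantile(means, [alpha / 2, 1 - alpha / 2])

    print("95% CI for mean latency:", block_bootstrap_ci(latency))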
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and a significant driver of employment. Yet the literature suggests that creative workers are one of the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success is believed to be the byproduct of individual ability and commitment, rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods. By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequality among cohorts of scientists that started their careers at different points in time. Furthermore, structural inequality in the music industry is assessed by analyzing the social and cultural relations that emerge from live performances and music releases. The findings hint at the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality, such as the Matthew Effect and the Hipster Paradox, in creative careers. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
Currently, there are a variety of digital tools in the humanities, such as annotation, visualization, or analysis software, which support researchers in their work and offer them new opportunities to address different research questions. However, the use of these tools falls far short of expectations. In this thesis, twelve improvement measures are developed within the framework of a design science theory to counteract the lack of usage acceptance. By implementing the developed design science theory, software developers can increase the acceptance of their digital tools in the humanities context.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions, which we later refine: What knowledge resources can be systematically reused for recovering structured knowledge, and how? What vocabulary already exists in the literature that is used to express conceptual knowledge? How can we reuse the online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying literature, ii.) mining Wikipedia, and iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of vocabulary for describing software technology usage, with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns, and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
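As a small illustration of what mining Wikipedia for articles on software languages can start from (not the thesis's actual pipeline), the following sketch fetches plain-text extracts through the public MediaWiki API; the chosen article titles are arbitrary examples.

    # Fetch plain-text article extracts from the public MediaWiki API
    # (requires network access; titles are arbitrary examples).
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def fetch_extract(title: str) -> str:
        params = {"action": "query", "prop": "extracts", "explaintext": 1,
                  "titles": title, "format": "json", "redirects": 1}
        pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

    for lang in ["Java (programming language)", "Haskell"]:
        text = fetch_extract(lang)
        print(lang, "->", len(text), "characters")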
Social media provides a powerful way for people to share opinions and sentiments about a specific topic, allowing others to benefit from these thoughts and feelings. This procedure generates a huge amount of unstructured data, such as texts, images, and references, which constantly grows through daily comments on related discussions. However, the vast amount of unstructured data poses risks to the information-extraction process, and decision making therefore becomes highly challenging, because data overload may cause the loss of useful data due to its inappropriate presentation and its accumulation. This thesis contributes to the field of analyzing and detecting sentiment in images and texts by extracting the feelings and opinions hidden in large collections of image data and texts on social networks. These feelings are then classified as positive, negative, or neutral, according to the features of the classified data. The process of extracting these feelings greatly helps decision-making processes on various topics, as explained in the first chapter of the thesis. A system has been built that can classify the feelings inherent in images and texts on social media sites, such as people's opinions about products and companies, personal posts, and general messages. This thesis begins by introducing a new method of reducing the dimensionality of text data based on data-mining approaches and then examines sentiment with neural and deep neural network classification algorithms. Subsequently, in contrast to sentiment analysis research on text datasets, we examine sentiment expression and polarity classification within and across image datasets by building deep neural networks based on the attention mechanism.
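The general pattern described for the text part, i.e. vectorize, reduce dimensionality, then classify with a neural network, can be sketched as follows; this is a generic scikit-learn pipeline with toy data, not the dimensionality-reduction method developed in the thesis.

    # Vectorize -> reduce dimensionality -> neural classifier, on toy texts.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.neural_network import MLPClassifier

    texts = ["I love this product", "Worst service ever", "It is okay, nothing special",
             "Absolutely fantastic support", "Terrible quality", "Average experience"]
    labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

    model = make_pipeline(TfidfVectorizer(),
                          TruncatedSVD(n_components=5, random_state=0),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    model.fit(texts, labels)
    print(model.predict(["The support team was fantastic"]))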
Connected vehicles will have a tremendous impact on tomorrow's mobility solutions. Such systems will rely heavily on timely information delivery to ensure functional reliability, security and safety. However, the host-centric communication model of today's networks calls efficient data dissemination at scale into question, especially in networks characterized by a high degree of mobility. The Information-Centric Networking (ICN) paradigm has evolved as a promising candidate for the next generation of network architectures. Based on a loosely coupled communication model, the in-network processing and caching capabilities of ICNs are promising for solving the challenges posed by connected vehicular systems. In such networks, a special class of caching strategies, which take action by actively placing a consumer's anticipated content at the right network nodes in time, is promising for reducing the data delivery time. This thesis contributes to the research on active placement strategies in information-centric and computation-centric vehicle networks for providing dynamic access to content and computation results. By analyzing different vehicular applications and their requirements, novel caching strategies are developed in order to reduce the time of content retrieval. The caching strategies are compared and evaluated against the state of the art in both extensive simulations and real-world deployments. The results show performance improvements in content retrieval (the availability of specific data increased by up to 35% compared to state-of-the-art caching strategies) and in delivery times (roughly twice the number of data retrievals from neighboring nodes). However, storing content actively in connected vehicle networks raises questions regarding security and privacy. In the second part of the thesis, an access control framework for information-centric connected vehicles is presented. Finally, open security issues and research directions in executing computations at the edge of connected vehicle networks are presented.
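As a toy illustration of the basic idea behind active placement (not one of the caching strategies developed in the thesis), the following sketch prefetches content an approaching vehicle is expected to request into an edge node's content store; the content names and the predictor are hypothetical.

    # Toy proactive placement: prefetch anticipated content into an edge cache.
    class ContentStore:
        """A tiny content store with a fixed capacity and FIFO eviction."""
        def __init__(self, capacity=3):
            self.capacity = capacity
            self.items = {}

        def put(self, name, data):
            if len(self.items) >= self.capacity:
                self.items.pop(next(iter(self.items)))  # naive eviction: drop oldest
            self.items[name] = data

        def get(self, name):
            return self.items.get(name)

    # hypothetical content provider and route-based predictor
    PROVIDER = {f"/maps/tile/{s}": f"<tile {s}>" for s in ["a1", "a2", "a3", "a4"]}

    def anticipate(route):
        # predict the next content names a vehicle on this route will request
        return [f"/maps/tile/{segment}" for segment in route[:2]]

    edge_node = ContentStore()
    for name in anticipate(["a1", "a2", "a3"]):
        edge_node.put(name, PROVIDER[name])       # placed before the request arrives

    request = "/maps/tile/a2"
    print("served from edge cache" if edge_node.get(request) else "forwarded upstream")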
Virtual reality is a growing field of interest as it provides a particularly intuitive way of user interaction. However, there are still open technical issues regarding latency (the delay between interaction and display reaction) and the trade-off between visual quality and frame rate of real-time graphics, especially when visual effects like specular and semi-transparent surfaces and volumes are taken into account. One solution, a distributed rendering setup, is presented in this thesis, in which the image synthesis is divided into an accurate but costly physically based rendering thread with a low refresh rate and a fast reprojection thread that maintains responsive interactivity at a high frame rate. Two novel reprojection techniques are proposed that cover reflections and refractions produced by surface ray-tracing as well as volumetric light transport generated by volume ray-marching. The introduced setup can enhance the VR experience within several domains. In this thesis, three innovative training applications have been realized to investigate the added value of virtual reality for the three learning stages of observation, interaction and collaboration. For each stage, an interdisciplinary curriculum, currently taught with traditional media, was transferred to a VR setting in order to investigate how virtual reality is capable of providing a natural, flexible and efficient learning environment.
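The reprojection principle can be illustrated with a standard depth-based warp of a single pixel between two camera poses; the novel techniques of the thesis additionally handle reflections, refractions and volumetric effects, which this sketch does not. The intrinsics and poses below are invented.

    # Depth-based reprojection of one pixel from an old camera pose to a new one.
    import numpy as np

    def reproject(pixel, depth, K, old_pose, new_pose):
        """pixel: (u, v); depth: metric depth along the camera z axis;
        K: 3x3 intrinsics; poses: 4x4 camera-to-world matrices."""
        u, v = pixel
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        p_cam_old = ray * depth                          # point in old camera space
        p_world = old_pose @ np.append(p_cam_old, 1.0)   # lift to world space
        p_cam_new = np.linalg.inv(new_pose) @ p_world    # into new camera space
        uvw = K @ p_cam_new[:3]
        return uvw[:2] / uvw[2]                          # pixel position in the new view

    K = np.array([[800, 0, 512], [0, 800, 512], [0, 0, 1]], dtype=float)
    old_pose = np.eye(4)
    new_pose = np.eye(4); new_pose[0, 3] = 0.05          # head moved 5 cm to the right
    print(reproject((512, 512), depth=2.0, K=K, old_pose=old_pose, new_pose=new_pose))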
The initial goal of this dissertation was the determination of image-based biomarkers sensitive to neurodegenerative processes in the human brain. One such process is the demyelination of neural cells, characteristic of multiple sclerosis (MS), the most common neurological disease in young adults, for which there is no cure yet. Conventional MRI techniques are very effective in localizing areas of brain tissue damage and are thus a reliable tool for the initial MS diagnosis. However, a mismatch between the clinical findings and the visualized areas of damage is observed, which makes standard MRI difficult to use for objective disease monitoring and therapy evaluation. To address this problem, a novel algorithm for the fast mapping of myelin water content using standard multi-echo gradient echo acquisitions of the human brain is developed in the current work. The method extends a previously published approach for the simultaneous measurement of brain T1, T2* and total water content. Employing the multi-exponential T2* decay signal of myelinated tissue, myelin water content is measured based on the quantification of two water pools (myelin water and the rest) with different relaxation times. Whole-brain in vivo myelin water content maps are acquired in 10 healthy controls and one subject with MS. The in vivo results obtained are consistent with previous reports. The acquired quantitative data have high potential in the context of MS. However, the parameters estimated in a multiparametric acquisition are correlated and therefore constitute an ill-posed, nontrivial data analysis problem. Motivated by this specific problem, a new data clustering approach called Nuclear Potential Clustering (NPC) is developed. It is suitable for the explorative analysis of arbitrary-dimensional and possibly correlated data without a priori assumptions about its structure. The developed algorithm is based on a concept adapted from nuclear physics. To partition the data, the dynamic behavior of like-charged nucleons interacting in a d-dimensional feature space is modeled. An adaptive nuclear potential, comprised of a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb potential), is assigned to each data point. Thus, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single-point clusters are repelled (noise). The algorithm is optimized and tested in an extensive study with a series of synthetic datasets as well as the Iris data. The results show that it can robustly identify clusters even when complex configurations and noise are present. Finally, to address the initial goal, quantitative MRI data of 42 patients are analyzed employing NPC. A series of experiments with different sets of image-based features shows a consistent grouping tendency: younger patients with a low disease grade are recognized as cohesive clusters, while those of higher age and impairment are recognized as outliers. This allows for the definition of a reference region in a feature space associated with phenotypic data. Tracking an individual's position therein can disclose patients at risk and be employed for therapy evaluation.
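A much simplified version of the two-pool signal model can be written down as a bi-exponential T2* fit; the thesis's method additionally estimates T1 and total water content and operates on whole-brain data. The echo times and tissue parameters in the sketch are invented.

    # Bi-exponential T2* fit of a synthetic two-pool decay; report the myelin
    # water fraction as the relative amplitude of the short-T2* pool.
    import numpy as np
    from scipy.optimize import curve_fit

    def two_pool(te, a_my, t2s_my, a_rest, t2s_rest):
        return a_my * np.exp(-te / t2s_my) + a_rest * np.exp(-te / t2s_rest)

    te = np.linspace(2, 50, 24)                    # echo times in ms (invented)
    true = (0.12, 10.0, 0.88, 60.0)                # short-T2* myelin pool + long-T2* pool
    rng = np.random.default_rng(0)
    signal = two_pool(te, *true) + rng.normal(0, 0.003, te.size)

    popt, _ = curve_fit(two_pool, te, signal, p0=(0.2, 15, 0.8, 70),
                        bounds=([0, 1, 0, 20], [2, 40, 2, 200]))
    a_my, _, a_rest, _ = popt
    print("estimated myelin water fraction:", a_my / (a_my + a_rest))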
Modern software projects are composed of several software languages, software technologies and different kinds of artifacts. Therefore, understanding the software project at hand, including the semantic links between its different parts, becomes a difficult challenge for a developer. One approach to attack this issue is to document the software project with the help of a linguistic architecture. This kind of architecture can be described with the help of the MegaL ontology. A remaining challenge is its creation, since this requires different kinds of skills. Therefore, this work proposes an approach for the automatic extraction of a linguistic architecture. The open source framework Apache Jena, which focuses on semantic web technologies like RDF and OWL, is used to define custom rules that are capable of inferring new knowledge based on the defined or already extracted RDF triples. The complete approach is tested in a case study on ten different open source projects. The aim of the case study is to extract a linguistic architecture that describes the use of Hibernate in the selected projects. In the end, the result is evaluated with the help of different metrics, using both an internal and an external approach.
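The thesis defines its rules with Apache Jena's Java rule engine; as a tiny Python analogue of the same pattern, the following rdflib sketch forward-chains one hand-written rule over extracted triples, using a hypothetical vocabulary and project.

    # Forward-chain one rule over RDF triples: if a project contains a file that
    # imports org.hibernate, conclude that the project uses the Hibernate technology.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/megal/")    # hypothetical vocabulary
    g = Graph()
    g.add((EX.projectA, EX.containsFile, EX.DaoImpl_java))
    g.add((EX.DaoImpl_java, EX.importsPackage, Literal("org.hibernate")))

    # rule: (?p containsFile ?f), (?f importsPackage "org.hibernate") -> (?p uses Hibernate)
    for project, f in g.subject_objects(EX.containsFile):
        if (f, EX.importsPackage, Literal("org.hibernate")) in g:
            g.add((project, EX.uses, EX.Hibernate))

    print(list(g.triples((None, EX.uses, None))))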
The term “software chrestomathy” is defined as a collection of software systems meant to be useful in learning about or gaining insight into software languages, software technologies, software concepts, programming, and software engineering. The 101companies software chrestomathy is a community project with the attributes of a Research 2.0 infrastructure for various stakeholders in software language and technology communities. The core of 101companies combines a semantic wiki and confederated open source repositories. We designed and developed an integrated ontology-based knowledge base about software languages and technologies. The knowledge is created by the community of contributors and supported with a running example and structured documentation. The complete ecosystem is exposed by using Linked Data principles and equipped with additional metadata about individual artifacts. Within the context of the software chrestomathy we explored a new type of software architecture – linguistic architecture – which targets the language and technology relationships within a software product and is based on megamodels. Our approach to documentation of the software systems is highly structured and makes use of the concepts of the newly developed megamodeling language MegaL. We “connect” an emerging ontology with the megamodeling artifacts to raise the cognitive value of the linguistic architecture.
Research on model-based object recognition and object localization has a promising future; building recognition in particular offers a wide range of applications. Determining the position and orientation of the observer relative to a building is a central part of building recognition.
The core of this work is the development of a system for model-based pose estimation that operates independently of the application domain. Model-based pose estimation on building images is chosen as the application domain. As a preparation for pose estimation on building images, model-based recognition of dominoes and poker cards is realized. An application-independent control strategy interprets application-specific models in order to localize them in the image and to determine the pose with the help of these models. Explicitly represented model knowledge is used, so that image features can be assigned to model components. These correspondences make it possible to recover the camera pose from a single monocular image. The procedure is independent of the application case and can also handle models of other rigid objects, provided they conform to the defined model representation. Determining the pose of a model from a single image, which may contain noise and occlusions, requires a systematic comparison of the model with the image data. Quantitative and qualitative evaluations confirm the accuracy of the estimated building poses.
This work additionally presents a semi-automatic procedure for generating a building model. The building model used, which contains both semantic and geometric knowledge, satisfies the requirements of object recognition and pose estimation while still following existing standards, and is a prerequisite for the pose estimation procedure. The guiding principle of the model's representation is that it remains interpretable by humans. A semi-automatic approach was chosen because a fully automatic realization of this procedure can hardly achieve the necessary precision. The developed procedure achieves the precision required for pose estimation while reducing user interaction to a minimum. A qualitative evaluation confirms the precision achieved in generating the building model.
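The final step of recovering the camera pose from model-to-image correspondences can be sketched with OpenCV's solvePnP; the building points, intrinsics and ground-truth pose below are invented, and the thesis obtains its correspondences from the explicit building model rather than from synthetic projections.

    # Recover a camera pose from 2D-3D correspondences with solvePnP.
    import numpy as np
    import cv2

    object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 6, 0], [0, 6, 0],
                           [0, 0, -4], [10, 0, -4]], dtype=np.float64)  # facade corners (m)
    K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float64)

    rvec_gt = np.array([[0.1], [0.2], [0.0]])        # invented ground-truth pose
    tvec_gt = np.array([[-5.0], [-3.0], [25.0]])
    image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    print(ok, tvec.ravel())    # should be close to the ground-truth translation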
Scalability and guaranteed delivery are essential properties of any routing algorithm. In wireless ad-hoc networks, the combination of greedy and face routing offers both, provided a planar graph is available. However, error-free planarization is difficult to achieve in realistic networks. This thesis therefore addresses the question of which error rate results when the graph is only partially planarized using local methods. For this purpose, a simulation environment was created to generate random connectivity graphs using the log-normal shadowing model. These graphs were then partially planarized by two different local strategies. Nine different settings were defined, resulting from three different graph densities and three different values for the sigma parameter of the log-normal shadowing model. For each setting, the behavior of greedy, face, and combined greedy-face routing was examined and evaluated in 2000 simulation runs. Finally, the results of this simulation were assessed and discussed.
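The two local planarization strategies evaluated in the thesis are not named in this abstract; as a classic example of such a local rule, the following sketch applies the Gabriel graph test to a randomly generated connectivity graph (here a plain unit disk graph rather than a log-normal shadowing graph).

    # Gabriel graph test: keep edge (u, v) only if no other node lies inside
    # the disk whose diameter is the segment u-v. Positions are random.
    import math, random, itertools

    random.seed(2)
    nodes = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(40)}

    def dist(a, b):
        return math.dist(nodes[a], nodes[b])

    # connectivity graph: here a plain unit disk graph with range 25
    edges = {(u, v) for u, v in itertools.combinations(nodes, 2) if dist(u, v) <= 25}

    def gabriel_keep(u, v):
        mx = ((nodes[u][0] + nodes[v][0]) / 2, (nodes[u][1] + nodes[v][1]) / 2)
        radius = dist(u, v) / 2
        return all(math.dist(nodes[w], mx) > radius for w in nodes if w not in (u, v))

    planarized = {(u, v) for (u, v) in edges if gabriel_keep(u, v)}
    print(len(edges), "edges before,", len(planarized), "after the Gabriel test")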
E-learning applications are currently in vogue, but that is not the only reason they are of great interest; their didactic concept is in the foreground. Today it is technically possible to involve participants interactively in an e-learning course, and the learning steps can be controlled by the application. In contrast to a lecture, theory can be linked directly to practice. The goal of this work is to create a concept and develop a prototype of an interactive Java development environment for programming beginners. Current e-learning platforms for Java were analyzed, and from the insights gained a concept for an e-learning platform was developed. This concept was successfully implemented as a prototype, using established technologies such as the Django framework and the JavaScript framework Vue.js. The course was tested by nine participants, and a survey about the course application was conducted. The test result confirms that the participants passed the course.
Using semantic data from general-purpose programming languages does not provide the unified experience one would want for such an application: static error checking is lacking, especially with regard to static typing of the data. Based on the previous work of λ-DL, which integrates semantic queries and concepts as types into a typed λ-calculus, this work takes these ideas a step further and melds them into a real-world programming language. This thesis explores how λ-DL's features can be extended and integrated into an existing language, researches an appropriate extension mechanism and produces Semantics4J, a JastAdd-based semantic-data language extension of Java for type-safe OWL programming, together with examples of its usage.