Refine
Year of publication
Document Type
- Part of Periodical (38)
- Doctoral Thesis (35)
- Diploma Thesis (24)
- Study Thesis (19)
- Master's Thesis (16)
- Bachelor Thesis (14)
- Report (1)
Keywords
- Routing (5)
- Bluetooth (4)
- Knowledge Compilation (4)
- Netzwerk (4)
- Semantic Web (4)
- Software Engineering (4)
- VNUML (4)
- E-KRHyper (3)
- Netzwerksimulation (3)
- RIP-MTI (3)
- XTPeer (3)
- Campus Information System (2)
- Drahtloses Sensorsystem (2)
- Equality (2)
- Information Retrieval (2)
- Konvergenz (2)
- Modellgetriebene Entwicklung (2)
- Network (2)
- Ontology (2)
- Petri-Netze (2)
- Prädikatenlogik (2)
- RIP (2)
- Rechnernetz (2)
- Theorem Proving (2)
- UML (2)
- University (2)
- constraint logic programming (2)
- hybrid automata (2)
- probability propagation nets (2)
- API (1)
- API analysis (1)
- API-Analyse (1)
- AUTOSAR (1)
- Abduktion <Logik> (1)
- Ad-hoc-Netz (1)
- Adaptation (1)
- Administration (1)
- Algorithm Engineering (1)
- Algorithmische Geometrie (1)
- Anpassung (1)
- Architektur <Informatik> (1)
- Auditing (1)
- Augmented Reality (1)
- Aussagenlogik (1)
- Authentisierung (1)
- Automated Theorem Proving (1)
- Automated Theorem Proving Systems (1)
- Automatisches Beweisverfahren (1)
- Automotive Systems (1)
- Backtrack (1)
- Bayes Procedures (1)
- Beaconless (1)
- Befahrbarkeit (1)
- Betriebssystem (1)
- Border Gateway Protocol (1)
- Border Gateway Protocol 4 (1)
- C-Socketbibliothek (1)
- CMS (1)
- CSCA (1)
- CTI (1)
- CTI -Problem (1)
- Calculus (1)
- Cheops (1)
- Cicero (1)
- Circumscription (1)
- Cisco Catalyst 3500XL (1)
- Communication Networks (1)
- Computational logic (1)
- Computer Security (1)
- Computer Supported Cooperative Work (1)
- Computersicherheit (1)
- Computersimulation (1)
- Conference (1)
- Connected Vehicles (1)
- Context-aware processes (1)
- Controlling (1)
- Counting-to-Infinity (1)
- DHCPv6 (1)
- DNNF (1)
- DPLL (1)
- DPLL procedure (1)
- Data Mining (1)
- Datenschutz (1)
- Deduktion (1)
- Description Logics (1)
- Destiny (1)
- Developer profiling (1)
- DiaLex (1)
- Dialekt (1)
- Dijkstras Algorithmus (1)
- Dimension 3 (1)
- Diskrete Simulation (1)
- Distanz-Vektor (1)
- Distributed Algorithm (1)
- Distributed Environments (1)
- Distributed process execution (1)
- Dokumentation (1)
- Drahtloses vermachtes Netz (1)
- Driver Assistance Systems (1)
- E-Participation (1)
- Ebener Graph (1)
- Echtzeit-Interaktion (1)
- Echtzeitsysteme (1)
- Empirical Studies (1)
- Emulation (1)
- Enhanced Reality (1)
- Enterprise Architecture Framework (1)
- Entscheidungsprozess (1)
- Entwickler Profil (1)
- Fahrerassistenzsysteme (1)
- Fahrzeug (1)
- Fließgewässer (1)
- Flowing Inland Waters (1)
- Formal Methods (1)
- Formale Methoden (1)
- Formale Ontologie (1)
- Fragebeantwortung (1)
- GPS (1)
- GReTL (1)
- Gelände (1)
- Geographic routing (1)
- Geometric spanner (1)
- Graph (1)
- Grounded Theory (1)
- HNF-Algorithmus (1)
- Hermite-Normalform (1)
- Heuristiken (1)
- Hindernis (1)
- Horn Clauses (1)
- Human-Computer Interaction (1)
- Hyper Tableau Calculus (1)
- IASON (1)
- IPv6 (1)
- IT Guru (1)
- IT-Security (1)
- Information (1)
- Information Centric Networking (1)
- Information Visualization (1)
- Informationsverlinkung (1)
- Informationsvisualisierung (1)
- InstaHub (1)
- Intelligent Information Network (1)
- Interactive Video Retrieval (1)
- Interactive Visualizations (1)
- Interaktive Visualisierungen (1)
- Interior Gateway Protokoll (1)
- JGraLab (1)
- JML (1)
- Java (1)
- Java Modeling Language (1)
- Java. Programmiersprache (1)
- KRHyper (1)
- Klassifikation (1)
- Klausellogik (1)
- Knowledge (1)
- Knowledge Engineering (1)
- Komplexität / Algorithmus (1)
- Laser (1)
- Linking of Information (1)
- Linux (1)
- Local algorithm (1)
- Logischer Schluss (1)
- Lookup-Algorithmen (1)
- MIA (1)
- MPEG-7 (1)
- MSR (1)
- Mail-Filter (1)
- Mehrprozessorsystem (1)
- Mensch-Maschine-Interaktion (1)
- Metamodel (1)
- Meteor (1)
- Metriken (1)
- Mikrocontroller (1)
- Mining (1)
- Mining Software Repositories (1)
- Mobile Information Systems (1)
- Mobiltelefon (1)
- Modelltransformation (1)
- Monitoring (1)
- Multi-Agenten-Simulation (1)
- Multi-robot System (1)
- Multiagent System (1)
- Multiagentensysteme (1)
- Nachbarschaftsgraph (1)
- Nagios (1)
- Named Function Networking (1)
- Network Monitoring (1)
- Network Simulation (1)
- Netzwerk Protokollen (1)
- Netzwerkanalyse (1)
- Netzwerkchip (1)
- Netzwerkmanagment (1)
- Netzwerktopologie (1)
- Netzwerküberwachung (1)
- Neuronales Netz (1)
- Nutzerzufriedenheit (1)
- OPNET (1)
- OWL <Informatik> (1)
- Ontologie <Wissensverarbeitung> (1)
- OpenWrt (1)
- Oracle Generation (1)
- Oraklegenerierung (1)
- Orientierungsstufe (1)
- PDA (1)
- Personal Key Infrastruktur (1)
- Personalised Information Systems (1)
- Petri Nets (1)
- Petri net (1)
- Petrinetz (1)
- Placement Strategies (1)
- Planar graphs (1)
- Policy Language (1)
- Proactive Caching (1)
- Probability (1)
- Probability propagation nets (1)
- Process (1)
- Process tracing (1)
- Projekt iCity (1)
- Propagation (1)
- Propositional Logic (1)
- Protocol (1)
- Prototyp (1)
- Prototype (1)
- Provenance (1)
- Quasi unit disk graph (1)
- Query Expansion (1)
- RDF Graphs (1)
- RDF modeling (1)
- RIP-MTI Algorithmus (1)
- RMTI (1)
- Reactive algorithm (1)
- Rechnernetze (1)
- Reference Model (1)
- Relationale Datenbank (1)
- Relevance Feedback (1)
- Requirement-Engineering (1)
- Resource Description Framework (RDF) (1)
- Robocup 2008 (1)
- Roboter (1)
- Router (1)
- Routing Information Protocol (1)
- Routing Information Protocol (RIP) (1)
- Routing Loops (1)
- Routing information protocol (1)
- Routing with Metric based Topology Investigation (RMTI) (1)
- Routing- und Forwarding Tabellen (1)
- Routing-Loop (1)
- SAC (1)
- SNMP (1)
- SOA (1)
- SPARQL (1)
- STP (1)
- Schleifenerkennung (1)
- Schlussfolgern (1)
- Security Requirements (1)
- Semantic Web Data (1)
- Semantics (1)
- Semantisches Wiki (1)
- Sentiment Analysis, ChatGPT, Students sentiments (1)
- Serdar Ayalp (1)
- Serviceorientierte Architektur (1)
- Software Language (1)
- Software Repositories (1)
- Software Technology (1)
- Softwarearchitektur (1)
- Softwaretest (1)
- Softwaretesting (1)
- Soziale Netzwerke (1)
- Spam-Mail (1)
- Spannender Baum (1)
- Spanning Tree (1)
- Specification (1)
- Spezifikation (1)
- Statistische Analyse (1)
- Stochastic Logic (1)
- Straßenzustand (1)
- Suat Algin (1)
- TAP (1)
- TGraph (1)
- TRECVID (1)
- Tableau <Logik> (1)
- Tableau Calculus (1)
- Technologischer Raum (1)
- Technology Acceptance Model (1)
- Temperaturmessung (1)
- Test Generation (1)
- Testgenerierung (1)
- Theorem prover (1)
- Theorembeweiser (1)
- Tokens (1)
- Transaction concept (1)
- Transaktion (1)
- Transaktionskonzept (1)
- Ultraschall (1)
- Unified Modeling Language (UML ) (1)
- Unit disk graph (1)
- VDE (1)
- VDE-Switch (1)
- VLAN (1)
- VTP (1)
- Vegetation distribution (1)
- Verifikation (1)
- Verortung (1)
- Verteilter Algorithmus (1)
- Virtual Square (1)
- Virtual network user mode Linux (1)
- Virtualisation (1)
- Virtualisierung (1)
- Visualisierung (1)
- Wahrscheinlichkeitsrechnung (1)
- Wasseroberfläche (1)
- Water Surface Temperature (1)
- Web Ontology Language (OWL) (1)
- Werbung (1)
- Wiki (1)
- Wireless Sensor Networks (1)
- Wireless sensor network (1)
- Wireshark (1)
- Wissenskompilation (1)
- Wissensmanagement (1)
- World Wide Web (1)
- World Wide Web 2.0 (1)
- XML (1)
- XML-Dokument (1)
- Zertifikat (1)
- Zirkumskription (1)
- Zusammenhängender Graph (1)
- andLinux (1)
- andLinux. (1)
- automated theorem prover (1)
- backpropagation (1)
- bias (1)
- classification (1)
- concurrency (1)
- convergence analysis (1)
- data mining (1)
- deductive (1)
- description logic (1)
- e-learning (1)
- e-service quality (1)
- ediv (1)
- event model (1)
- event-based systems (1)
- first-order logic (1)
- folksonomies (1)
- gaze information (1)
- grassland (1)
- hybrid systems (1)
- hybride Automaten (1)
- iCity project (1)
- image semantics (1)
- information retrieval (1)
- knowledge management system (1)
- landmarks (1)
- living book (1)
- model generation (1)
- multi-agent systems (1)
- multiagent systems (1)
- multiprocessing (1)
- networks (1)
- neural (1)
- ontology (1)
- parallel computing (1)
- personal information management (1)
- persönliches Informationsmanagement (1)
- privacy protection (1)
- public key infrastructure (1)
- question answering (1)
- real-time computing (1)
- reasoning (1)
- recurrent (1)
- rip (1)
- rmti (1)
- semantic desktop (1)
- semantics (1)
- semantischer Desktop (1)
- sequent calculi (1)
- simulation (1)
- summative evaluation (1)
- tag recommendation (1)
- tagging (1)
- vegetation modeling (1)
- verification (1)
- virtual network user mode linux (1)
- vnuml (1)
- web 2.0 (1)
- xtpeer (1)
Institute
- Institut für Informatik (147)
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT's ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models like BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e., ways of instructing ChatGPT, this work also analyzes how well it understands complex linguistic nuances in the given texts using qualitative metrics. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence rather than reading it only left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its context understanding. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the other models. Fine-tuned models excelled in the five-label case. Moreover, ChatGPT has shown impressive knowledge of the language. Chain-of-Thought (CoT) prompting with step-by-step instructions was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT's limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
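As an illustration of what a Chain-of-Thought prompt for three-label sentiment classification might look like, here is a small sketch; `call_llm` is a placeholder for whatever chat-completion client is used, and the prompt wording and label-parsing convention are assumptions, not the prompts evaluated in the thesis.

```python
# Illustrative sketch only: `call_llm` is a placeholder for whichever chat-completion
# client is used; the prompt wording is an assumption, not the thesis's exact prompt.
LABELS_3 = ["Negative", "Neutral", "Positive"]

def build_cot_prompt(review: str, labels=LABELS_3) -> str:
    """Chain-of-Thought style prompt: ask for step-by-step reasoning, then a final label."""
    return (
        "You are a sentiment classifier for student course reviews.\n"
        f"Allowed labels: {', '.join(labels)}.\n"
        "Think step by step: (1) identify the aspects mentioned, "
        "(2) judge the tone of each aspect, (3) combine them into one overall label.\n"
        "Finish with a line of the form 'Label: <label>'.\n\n"
        f"Review: {review}"
    )

def parse_label(answer: str, labels=LABELS_3) -> str:
    """Extract the final label; fall back to Neutral if the reply is malformed."""
    for line in reversed(answer.splitlines()):
        if line.lower().startswith("label:"):
            candidate = line.split(":", 1)[1].strip()
            if candidate in labels:
                return candidate
    return "Neutral"

def classify(review: str, call_llm) -> str:
    # call_llm: a function str -> str wrapping the actual ChatGPT API call.
    return parse_label(call_llm(build_cot_prompt(review)))
```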
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including education. Applications of AI are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding ChatGPT integration into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags, and sarcastic text is detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embedding uses the Word2Vec model to capture semantic relationships between words. The preprocessed text and extracted features are fed into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. The results indicate that the proposed framework outperforms conventional sentiment analysis models: the HR-RAN technique achieved a precision of 98.98%, recall of 99.23%, F-measure of 99.10%, accuracy of 98.88%, and specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
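Most components named above (HR-RAN, XLHN, NTDMF) are not off-the-shelf libraries; the sketch below only illustrates two generic ingredients of such a pipeline, simple preprocessing and Word2Vec sentence embeddings, on an invented mini-corpus.

```python
# Generic pipeline sketch (assumption: it only mirrors two broad steps named in the
# abstract; HR-RAN, XLHN and NTDMF themselves are not publicly available components).
import re
import numpy as np
from gensim.models import Word2Vec

def preprocess(text: str) -> list[str]:
    """Strip URLs and tags, lower-case, tokenize."""
    text = re.sub(r"https?://\S+|[@#]\w+", " ", text)
    return re.findall(r"[a-z]+", text.lower())

reviews = [
    "ChatGPT support in the course was surprisingly helpful",
    "the tool gave wrong answers and wasted my time",
    "no strong opinion about the chatbot either way",
]
tokens = [preprocess(r) for r in reviews]

# Word2Vec embeddings over the (tiny, illustrative) corpus.
w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, epochs=50)

def embed(sentence_tokens: list[str]) -> np.ndarray:
    """Average the word vectors of a sentence (simple stand-in for the HR-RAN input)."""
    vecs = [w2v.wv[t] for t in sentence_tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

features = np.stack([embed(t) for t in tokens])
print(features.shape)  # (3, 50)
```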
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e. it holds in the initial state of the system and is preserved by updates of the system. Sometimes the property itself is not inductive, but a stronger condition is. In this thesis we propose a method for goal-directed invariant strengthening.
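The inductive-invariant check behind this idea can be made concrete with an SMT solver. The sketch below uses z3 on a toy counter system: the safety property alone is not inductive, but a strengthened invariant is. The system and the candidate invariants are invented for illustration; the hierarchical-reasoning machinery of the thesis is not reproduced here.

```python
# Toy illustration of invariant strengthening with an SMT solver (z3).
# The counter system and the candidate invariants are assumptions made up for this sketch.
from z3 import Int, Solver, Implies, And, Not, unsat

x, xp = Int("x"), Int("xp")           # current and next state
init = x == 0
trans = xp == x + 2                   # transition relation
safety = x != 1                       # desired safety property

def is_inductive(inv, inv_next):
    """inv is inductive iff init => inv and inv /\\ trans => inv' are both valid."""
    for claim in (Implies(init, inv), Implies(And(inv, trans), inv_next)):
        s = Solver()
        s.add(Not(claim))
        if s.check() != unsat:        # a counterexample to validity exists
            return False
    return True

# The safety property alone is not inductive (x = -1 is a counterexample to the step case)...
print(is_inductive(safety, xp != 1))                           # False
# ...but the strengthened invariant "x is even and non-negative" is, and it implies safety.
print(is_inductive(And(x % 2 == 0, x >= 0),
                   And(xp % 2 == 0, xp >= 0)))                 # True
```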
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices in a graph relies on properties on their distances to other nodes. One possibility to prove properties of those graphs or to analyze relations between the graph classes is to prove or disprove that one graph class is contained in the other. In this thesis we propose a method for checking inclusions between geometric graph classes.
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For the exchange of such data, a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable cost and a foreseeable benefit, is expected. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of this data is maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data is ensured as previously agreed. However, in order to be able to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as a trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed ledger based fair exchange protocol. However, for assessing the fairness of such an exchange, state-of-the-art approaches neglect costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable cost, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis, we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable cost and performance. As a theoretical foundation, we introduce the concept of cost fairness, which considers cost in the fairness assessment by requesting that a party following the fair exchange protocol never suffers any unilateral disadvantages. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a griefing attack.
To allow fair data exchanges to be conducted with reasonable cost and performance, we introduce FairSCE, a distributed ledger-based fair exchange protocol using distributed ledger state channels and incorporating a mechanism to protect against griefing attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case cost for data exchange, even in the presence of malicious parties, is known, which allows an estimate of the possible benefit and, thus, a preliminary estimate of economic utility. Furthermore, to allow for an unambiguous assessment of the correct data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for hashing hierarchically structured data, which can be used to ensure the integrity and authenticity of the data being transferred.
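The hashing of hierarchically structured data mentioned above can be illustrated with a Merkle-style hash tree, where a masked subtree is replaced by its digest so the overall hash can still be verified. This is a generic sketch of the idea, not the construction defined in the thesis; the record contents are invented.

```python
# Minimal Merkle-style hash over hierarchically structured data (a nested dict).
# Assumption: this only illustrates the general idea of hashing a hierarchy so that
# subtrees can be masked; it is not the scheme defined in the thesis.
import hashlib, json

def node_hash(value) -> str:
    if isinstance(value, dict):
        # Hash of an inner node = hash over the sorted (key, child-hash) pairs.
        children = {k: node_hash(v) for k, v in sorted(value.items())}
        payload = json.dumps(children, sort_keys=True)
    else:
        payload = json.dumps(value)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

record = {
    "waybill": {"origin": "A", "destination": "B"},
    "pricing": {"amount": 1200, "currency": "EUR"},   # sensitive part
}

full_digest = node_hash(record)

# To mask the sensitive subtree, the sender reveals only its digest; the receiver can
# still recompute and verify the overall hash without seeing the pricing details.
masked = {"waybill": record["waybill"], "pricing_hash": node_hash(record["pricing"])}
recomputed = hashlib.sha256(
    json.dumps({"pricing": masked["pricing_hash"],
                "waybill": node_hash(masked["waybill"])}, sort_keys=True).encode("utf-8")
).hexdigest()
assert recomputed == full_digest
print(full_digest)
```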
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers' everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results: statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose a heterogeneous analysis, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision counts in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method with benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them, and making the implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to accompany their work with 'fake' simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats and the used data analysis methodologies. We prove that such simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
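The incremental, algebra-inspired interface can be illustrated with a tiny monoid-style fold over revisions: per-revision summaries are merged with an associative operation, so cached summaries of old revision ranges can be reused when new revisions arrive. The function names and the "commits per author" abstraction are illustrative assumptions, not the thesis's actual interface.

```python
# Sketch of an incrementally computable abstraction over revisions, in the spirit of the
# monoid-style interface described above (names and the "commits per author" abstraction
# are illustrative assumptions).
from collections import Counter

def measure(revision: dict) -> Counter:
    """Map a single revision to a partial summary."""
    return Counter({revision["author"]: 1})

def combine(left: Counter, right: Counter) -> Counter:
    """Associative merge of two partial summaries; Counter() is the neutral element."""
    return left + right

revisions = [
    {"id": "a1", "author": "alice"},
    {"id": "b2", "author": "bob"},
    {"id": "a3", "author": "alice"},
]

# Because `combine` is associative, summaries of old revision ranges can be cached and
# only the summaries of new revisions need to be computed and merged in.
cached = combine(measure(revisions[0]), measure(revisions[1]))
updated = combine(cached, measure(revisions[2]))
print(updated)  # Counter({'alice': 2, 'bob': 1})
```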
The trends of Industry 4.0 and the further evolution toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With that higher need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in that setting is the demand for wireless Ultra-Reliability and Low Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Working in that setting, this thesis provides insights regarding the whole network stack, with the focus always on industrial applications. Starting at the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed. Based on that data, an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks are highly dependent on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. Latency and reliability are key performance parameters from an application perspective. Thus, measurements of those two parameters in factory environments are performed using Wireless Local Area Network (WLAN) (IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals based on auto-correlated data containing rare events is developed. Subsequently, four performance improvements for wireless networks on the factory floor are proposed. Of those optimizations, three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and lastly one optimizes the usage of information within cellular networks.
Finally, this thesis concludes with an outlook toward open research questions. This includes open questions remaining in the context of Industry 4.0 as well as those around 6G. Along the research topics of 6G, the two most relevant topics concern the idea of a network of networks and overcoming best-effort IP.
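One standard way to obtain confidence intervals from auto-correlated measurements is a moving-block bootstrap, sketched below on synthetic latency data. This is shown for illustration only; it is not the rare-event-aware method developed in the thesis, and all numbers are invented.

```python
# Moving-block bootstrap confidence interval for the mean of autocorrelated latency samples.
# Assumption: a standard technique shown for illustration, not the thesis's own method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic autocorrelated latencies (AR(1)-like), in milliseconds.
lat = np.empty(2000)
lat[0] = 5.0
for i in range(1, lat.size):
    lat[i] = 0.8 * lat[i - 1] + rng.normal(1.0, 0.5)

def block_bootstrap_ci(x, block_len=50, n_boot=2000, alpha=0.05):
    """Resample contiguous blocks (preserving short-range correlation) and take quantiles."""
    n = x.size
    starts = np.arange(n - block_len + 1)
    means = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n // block_len, replace=True)
        sample = np.concatenate([x[s:s + block_len] for s in chosen])
        means[b] = sample.mean()
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

print(block_bootstrap_ci(lat))  # lower and upper bound of the 95% interval
```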
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and a significant driver of employment. Yet the literature suggests that creative workers are among the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success is believed to be the byproduct of individual ability and commitment, rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods. By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequality among cohorts of scientists that started their careers at different points in time. Furthermore, structural inequality in the music industry is assessed by analyzing the social and cultural relations that emerge from live performances and music releases. The findings hint at the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality, such as the Matthew Effect and the Hipster Paradox, in creative careers. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
Currently, there are a variety of digital tools in the humanities, such as annotation, visualization, or analysis software, which support researchers in their work and offer them new opportunities to address different research questions. However, the use of these tools falls far short of expectations. In this thesis, twelve improvement measures are developed within the framework of a design science theory to counteract the lack of usage acceptance. By implementing the developed design science theory, software developers can increase the acceptance of their digital tools in the humanities context.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions that we later refine: What knowledge resources can be systematically reused for recovering structured knowledge, and how? What vocabulary already exists in literature that is used to express conceptual knowledge? How can we reuse the online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying literature, ii.) mining Wikipedia, and iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of vocabulary for describing software technology usage with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns, and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
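As a small illustration of the "mining Wikipedia" step, the sketch below queries the public MediaWiki API for an article's categories and applies a naive heuristic to decide whether the article describes a software language. The heuristic and the chosen titles are assumptions for illustration, not the corpus-construction rules used in the thesis.

```python
# Tiny illustration of mining Wikipedia for software-language articles via the public
# MediaWiki API. The category-based heuristic is an assumption for this sketch.
import requests

API = "https://en.wikipedia.org/w/api.php"

def categories(title: str) -> list[str]:
    params = {
        "action": "query",
        "prop": "categories",
        "titles": title,
        "cllimit": "max",
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=10).json()
    page = next(iter(data["query"]["pages"].values()))
    return [c["title"] for c in page.get("categories", [])]

def looks_like_software_language(title: str) -> bool:
    return any("programming languages" in c.lower() for c in categories(title))

print(looks_like_software_language("Haskell"))                 # likely True
print(looks_like_software_language("Hibernate (framework)"))   # likely False
```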
Social media provides a powerful way for people to share opinions and sentiments about a specific topic, allowing others to benefit from these thoughts and feelings. This generates a huge amount of unstructured data, such as texts, images, and references, that is constantly increasing through daily comments in related discussions. However, the vast amount of unstructured data presents risks to the information-extraction process, and so decision-making becomes highly challenging. This is because data overload may cause the loss of useful data due to its inappropriate presentation and its accumulation. This thesis contributes to the field of analyzing and detecting feelings in images and texts by extracting the feelings and opinions hidden in huge collections of image data and texts on social networks. These feelings are then classified as positive, negative, or neutral, according to the features of the classified data. The process of extracting these feelings greatly helps decision-making processes on various topics, as explained in the first chapter of the thesis. A system has been built that can classify the feelings inherent in the images and texts on social media sites, such as people's opinions about products and companies, personal posts, and general messages. This thesis begins by introducing a new method of reducing the dimension of text data based on data-mining approaches and then examines the sentiment using neural and deep neural network classification algorithms. Subsequently, in contrast to sentiment analysis research on text datasets, we examine sentiment expression and polarity classification within and across image datasets by building deep neural networks based on the attention mechanism.
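To make the two stages named above (dimensionality reduction of text data followed by a neural classifier) concrete, here is a small baseline sketch; TF-IDF with truncated SVD and a small MLP are generic stand-ins, not the methods developed in the thesis, and the tiny corpus is invented.

```python
# Baseline sketch: reduce the dimension of text features, then classify with a small
# neural network. Stand-in components only; not the thesis's method or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "love this product, works great",
    "terrible support, very disappointed",
    "it is okay, nothing special",
    "fantastic quality and fast delivery",
    "broken on arrival, waste of money",
    "average experience overall",
]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=3, random_state=0),   # reduce the sparse TF-IDF space
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["really happy with it", "what a disappointment"]))
```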
Connected vehicles will have a tremendous impact on tomorrow's mobility solutions. Such systems will heavily rely on timely information delivery to ensure functional reliability, security and safety. However, the host-centric communication model of today's networks calls efficient data dissemination at scale into question, especially in networks characterized by a high degree of mobility. The Information-Centric Networking (ICN) paradigm has evolved as a promising candidate for the next generation of network architectures. Based on a loosely coupled communication model, the in-network processing and caching capabilities of ICNs are promising for solving the challenges posed by connected vehicular systems. In such networks, a special class of caching strategies, which take action by actively placing a consumer's anticipated content at the right network nodes in time, is promising for reducing the data delivery time. This thesis contributes to the research on active placement strategies in information-centric and computation-centric vehicle networks for providing dynamic access to content and computation results. By analyzing different vehicular applications and their requirements, novel caching strategies are developed in order to reduce the time of content retrieval. The caching strategies are compared and evaluated against the state of the art in both extensive simulations and real-world deployments. The results show performance improvements in content retrieval (the availability of specific data increased by up to 35% compared to state-of-the-art caching strategies) and in delivery times (roughly double the number of data retrievals from neighboring nodes). However, storing content actively in connected vehicle networks raises questions regarding security and privacy. In the second part of the thesis, an access control framework for information-centric connected vehicles is presented. Finally, open security issues and research directions in executing computations at the edge of connected vehicle networks are presented.
Virtual reality is a growing field of interest as it provides a particularly intuitive way of user interaction. However, there are still open technical issues regarding latency (the delay between interaction and display reaction) and the trade-off between visual quality and frame-rate of real-time graphics, especially when taking visual effects like specular and semi-transparent surfaces and volumes into account. One solution, a distributed rendering setup, is presented in this thesis, in which the image synthesis is divided into an accurate but costly physically based rendering thread with a low refresh rate and a fast reprojection thread to maintain responsive interactivity at a high frame-rate. Two novel reprojection techniques are proposed that cover reflections and refractions produced by surface ray-tracing as well as volumetric light transport generated by volume ray-marching. The introduced setup can enhance the VR experience within several domains. In this thesis, three innovative training applications have been realized to investigate the added value of virtual reality for the three learning stages of observation, interaction and collaboration. For each stage, an interdisciplinary curriculum, currently taught with traditional media, was transferred to a VR setting in order to investigate how virtual reality is capable of providing a natural, flexible and efficient learning environment.
Initial goal of the current dissertation was the determination of image-based biomarkers sensitive to neurodegenerative processes in the human brain. One such process is the demyelination of neural cells characteristic of Multiple Sclerosis (MS), the most common neurological disease in young adults, for which there is no cure yet. Conventional MRI techniques are very effective in localizing areas of brain tissue damage and are thus a reliable tool for the initial MS diagnosis. However, a mismatch between the clinical findings and the visualized areas of damage is observed, which makes the use of standard MRI difficult for objective disease monitoring and therapy evaluation. To address this problem, a novel algorithm for the fast mapping of myelin water content using standard multi-echo gradient echo acquisitions of the human brain is developed in the current work. The method extends a previously published approach for the simultaneous measurement of brain T1, T2*, and total water content. Employing the multiexponential T2* decay signal of myelinated tissue, myelin water content is measured based on the quantification of two water pools (myelin water and rest) with different relaxation times. Whole-brain in vivo myelin water content maps are acquired in 10 healthy controls and one subject with MS. The in vivo results obtained are consistent with previous reports. The acquired quantitative data have a high potential in the context of MS. However, the parameters estimated in a multiparametric acquisition are correlated and therefore constitute an ill-posed, nontrivial data analysis problem. Motivated by this specific problem, a new data clustering approach is developed, called Nuclear Potential Clustering (NPC). It is suitable for the explorative analysis of arbitrary-dimensional and possibly correlated data without a priori assumptions about its structure. The developed algorithm is based on a concept adapted from nuclear physics. To partition the data, the dynamic behavior of equally charged nucleons interacting in a d-dimensional feature space is modeled. An adaptive nuclear potential, comprised of a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb potential), is assigned to each data point. Thus, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single-point clusters are repelled (noise). The algorithm is optimized and tested in an extensive study with a series of synthetic datasets as well as the Iris data. The results show that it can robustly identify clusters even when complex configurations and noise are present. Finally, to address the initial goal, quantitative MRI data of 42 patients are analyzed employing NPC. A series of experiments with different sets of image-based features shows a consistent grouping tendency: younger patients with a low disease grade are recognized as cohesive clusters, while those of higher age and impairment are recognized as outliers. This allows for the definition of a reference region in a feature space associated with phenotypic data. Tracking individuals' positions therein can disclose patients at risk and be employed for therapy evaluation.
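The two-pool quantification described above can be illustrated with a least-squares fit of a biexponential decay model, from which the myelin water fraction is the relative amplitude of the fast-decaying pool. The echo times, noise level and parameter values below are invented for illustration and do not correspond to the acquisitions or fitting procedure in the thesis.

```python
# Two-pool T2* decay fit: signal = fast-decaying myelin-water pool + slower "rest" pool.
# All numbers are illustrative assumptions, not data from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def two_pool(te, a_my, t2s_my, a_rest, t2s_rest):
    return a_my * np.exp(-te / t2s_my) + a_rest * np.exp(-te / t2s_rest)

te = np.linspace(2e-3, 40e-3, 16)                      # echo times in seconds
truth = (0.12, 10e-3, 0.88, 60e-3)                     # 12% myelin water, T2* = 10 ms / 60 ms
rng = np.random.default_rng(1)
signal = two_pool(te, *truth) + rng.normal(0, 0.005, te.size)

p0 = (0.2, 15e-3, 0.8, 50e-3)
popt, _ = curve_fit(two_pool, te, signal, p0=p0,
                    bounds=([0, 1e-3, 0, 20e-3], [1, 20e-3, 1, 200e-3]))
a_my, _, a_rest, _ = popt
print("myelin water fraction ~", a_my / (a_my + a_rest))
```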
Modern software projects are composed of several software languages, software technologies and different kinds of artifacts. Therefore, understanding the software project at hand, including the semantic links between its different parts, becomes a difficult challenge for a developer. One approach to attack this issue is to document the software project with the help of a linguistic architecture. This kind of architecture can be described with the help of the MegaL ontology. A remaining challenge is its creation, since it requires different kinds of skills. Therefore, this paper proposes an approach for the automatic extraction of a linguistic architecture. The open source framework Apache Jena, which focuses on semantic web technologies like RDF and OWL, is used to define custom rules that are capable of inferring new knowledge based on the defined or already extracted RDF triples. The complete approach is tested in a case study on ten different open source projects. The aim of the case study is to extract a linguistic architecture that describes the use of Hibernate in the selected projects. In the end, the result is evaluated with the help of different metrics. The evaluation is performed with the help of an internal and an external approach.
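The approach described above implements its inference rules with Apache Jena in Java; as a language-consistent stand-in, the sketch below hand-rolls a single forward-chaining rule over an rdflib graph in Python. The namespace, property names and the rule itself are invented for illustration only.

```python
# Stand-in for Jena-style inference rules: one hand-rolled forward-chaining rule over an
# rdflib graph ("whatever imports a Hibernate package is a Hibernate client").
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/megal/")
g = Graph()
g.add((EX.OrderDao, EX.importsPackage, Literal("org.hibernate")))
g.add((EX.ReportJob, EX.importsPackage, Literal("java.util")))

def apply_rule(graph: Graph) -> int:
    """If ?x importsPackage "org.hibernate..." then assert ?x rdf:type ex:HibernateClient."""
    added = 0
    for subj, _, obj in graph.triples((None, EX.importsPackage, None)):
        if str(obj).startswith("org.hibernate"):
            triple = (subj, RDF.type, EX.HibernateClient)
            if triple not in graph:
                graph.add(triple)
                added += 1
    return added

apply_rule(g)
print(list(g.subjects(RDF.type, EX.HibernateClient)))  # only OrderDao is inferred
```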
The term “Software Chrestomathy” is defined as a collection of software systems meant to be useful in learning about or gaining insight into software languages, software technologies, software concepts, programming, and software engineering. The 101companies software chrestomathy is a community project with the attributes of a Research 2.0 infrastructure for various stakeholders in software language and technology communities. The core of 101companies combines a semantic wiki and confederated open source repositories. We designed and developed an integrated ontology-based knowledge base about software languages and technologies. The knowledge is created by the community of contributors and supported with a running example and structured documentation. The complete ecosystem is exposed by using Linked Data principles and equipped with additional metadata about individual artifacts. Within the context of the software chrestomathy we explored a new type of software architecture, the linguistic architecture, which targets the language and technology relationships within a software product and is based on megamodels. Our approach to the documentation of software systems is highly structured and makes use of the concepts of the newly developed megamodeling language MegaL. We “connect” an emerging ontology with the megamodeling artifacts to raise the cognitive value of the linguistic architecture.
Research on model-based object recognition and object localization has a promising future; building recognition in particular offers a wide range of applications. Determining the position and orientation of the observer relative to a building is a central component of building recognition.
The core of this work is the development of a system for model-based pose estimation that operates independently of the application domain. Model-based pose estimation on building images is chosen as the application domain. In preparation for pose estimation on building images, model-based recognition of dominoes and poker cards is realized. An application-independent control strategy interprets application-specific models in order to both localize them in the image and determine the pose with the help of these models. Explicitly represented model knowledge is used so that image features can be assigned to model components. These correspondences make it possible to recover the camera pose from a single monocular image. The method is independent of the use case and can also handle models of other rigid objects, provided they conform to the defined model representation. Determining the pose of a model from a single image, which may contain noise and occlusions, requires a systematic comparison of the model with the image data. Quantitative and qualitative evaluations confirm the accuracy of the estimated building poses.
This work also presents a semi-automatic method for generating a building model. The building model used, which contains both semantic and geometric knowledge, satisfies the requirements of object recognition and pose estimation while still being oriented toward existing standards, and is a prerequisite for the pose estimation method. The guiding principle of the model's representation is that it remains interpretable by humans. A semi-automatic approach was chosen because a fully automatic implementation of this method can hardly achieve the necessary precision. The developed method achieves the precision required for pose estimation while reducing user interactions to a minimum. A qualitative evaluation confirms the precision achieved in generating the building model.
Scalability and guaranteed delivery are essential properties of any routing algorithm. In wireless ad-hoc networks, both are provided by the combination of greedy and face routing, provided a planar graph is available. However, error-free planarization is particularly difficult in realistic networks. This thesis therefore addresses the question of what error rate results if the graph is only partially planarized using local methods. To this end, a simulation environment was created to generate random connectivity graphs using the log-normal shadowing model. These graphs were then partially planarized by two different local strategies. Nine different settings were defined, resulting from three different graph densities and three different values for the sigma parameter of the log-normal shadowing model. For each setting, the behavior of greedy, face, and combined greedy-face routing was examined and evaluated in 2000 simulation runs. Finally, the results of this simulation were assessed and discussed.
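One widely used local planarization rule, which serves here as an example of the kind of local strategy such a simulation compares, is the Gabriel graph test: an edge (u, v) is kept only if no other neighbor lies strictly inside the circle whose diameter is u-v. The sketch below is a generic illustration of that rule; the two strategies actually evaluated in the thesis are not named in the abstract and may differ.

```python
# Gabriel graph test as a local (partial) planarization rule; a generic illustration only.

def gabriel_keep(u, v, neighbors):
    """Keep edge (u, v) iff no witness w lies strictly inside the circle with diameter uv."""
    mx, my = (u[0] + v[0]) / 2, (u[1] + v[1]) / 2          # circle center
    r2 = ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) / 4     # squared radius
    for w in neighbors:
        if w == u or w == v:
            continue
        if (w[0] - mx) ** 2 + (w[1] - my) ** 2 < r2:
            return False
    return True

u, v = (0.0, 0.0), (2.0, 0.0)
print(gabriel_keep(u, v, [(1.0, 0.3)]))   # False: witness inside the diametral circle
print(gabriel_keep(u, v, [(1.0, 1.5)]))   # True: witness outside, edge is kept
```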
E-learning applications are currently trending, but that is not the only reason they are of great interest; their didactic concept is in the foreground. Today it is technically possible to involve participants interactively in an e-learning course, and the learning steps can be controlled by the application. In contrast to a lecture, theory can be linked directly with practice. The goal of this work is the creation of a concept and the development of a prototype of an interactive Java development environment for programming beginners. In this work, current e-learning platforms for Java were analyzed, and from the insights gained a concept for an e-learning platform was developed. This concept was successfully implemented as a prototype. Established technologies, such as the Django framework and the JavaScript framework Vue.js, were used for the implementation. The course was tested by nine participants, and a survey about the course application was also conducted. The test results confirm that the participants passed the course.
Using semantic data in general-purpose programming languages does not provide the unified experience one would want for such an application. Static error checking is lacking, especially with regard to the static typing of the data. Based on the previous work of λ-DL, which integrates semantic queries and concepts as types into a typed λ-calculus, this work takes these ideas a step further to meld them into a real-world programming language. This thesis explores how λ-DL's features can be extended and integrated into an existing language, researches an appropriate extension mechanism, and produces Semantics4J, a JastAdd-based semantic-data language extension of Java for type-safe OWL programming, together with examples of its usage.
Motion capture refers to the process of capturing, processing and translating real motions onto a 3D model. Not only in the movie and gaming industries does motion capture create an indispensable realism of human and animal movement; it is also used extensively in the context of robotics, medical movement therapy, and AR and VR. In addition to the well-established optical processes, especially in the last three areas, alternative systems based on inertial navigation (IMUs) are being used increasingly, because they do not rely on external cameras and thus limit the area of movement considerably less.
Fast-evolving technical progress in the manufacturing of such IMUs allows building small sensors, wearable on the body, which can transfer movements to a computer. The development of applying inertial systems to a motion capture context, however, is still at an early stage. Problems like drift can currently only be minimized by adding additional hardware for correcting the read data.
In the following master's thesis, an IMU-based motion capture system is designed and constructed. This comprises the assembly of the hardware components as well as the processing of the received movement data on the software side and their application to a 3D model.
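The drift problem mentioned above is often addressed by sensor fusion, for example a complementary filter that blends integrated gyroscope rates with accelerometer-derived angles. This is a standard textbook technique shown purely for illustration; it is not the correction approach of the thesis, and the sample values are invented.

```python
# Complementary filter: trust the gyro for short-term changes, the accelerometer for
# long-term stability. Illustration only; values and parameters are made up.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: iterable of (gyro_rate_deg_s, accel_x_g, accel_z_g); returns pitch angles."""
    angle = 0.0
    angles = []
    for gyro_rate, ax, az in samples:
        accel_angle = math.degrees(math.atan2(ax, az))   # gravity-based absolute estimate
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
        angles.append(angle)
    return angles

# A sensor held still at ~10 degrees pitch but with a small gyro bias (a drift source):
samples = [(0.5, math.sin(math.radians(10)), math.cos(math.radians(10)))] * 500
print(round(complementary_filter(samples)[-1], 1))  # settles near 10 instead of drifting away
```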
This thesis proposes the use of MSR (Mining Software Repositories) techniques to identify software developers with exclusive expertise about specific APIs and programming domains in software repositories. A pilot Tool for finding such “Islands of Knowledge” in Node.js projects is presented and applied in a case study to the 180 most popular npm packages. It is found that on average each package has 2.3 Islands of Knowledge, which is possibly explained by the finding that npm packages tend to have only one main contributor. In a survey, the maintainers of 50 packages are contacted and asked for opinions on the results produced by the Tool. Together with their responses, this thesis reports on experiences made with the pilot Tool and how future iterations could produce even more accurate statements about programming expertise distribution in developer teams.
This work analyzes whether a hardware prototype for an ad-hoc network can be created on an Arduino basis. The objective of the prototype development is the creation of a sensor node with a modular design, in which components can easily be exchanged. Furthermore, the application area imposes requirements that the node must fulfill. These requirements are derived from the Tmote Sky sensor node, so the newly created sensor node must be a possible alternative to it and fulfill the same functions. For that purpose, this study reviews some available Arduino microprocessors regarding their suitability for a sensor node. Later in the work, the composition of the sensor node is documented, illustrating the hardware and its costs. Through easily exchanged radio modules, the created hardware prototype covers the 433 MHz, 866 MHz and 2.40 GHz radio frequencies. At the end of the work, the sensor node prototype is used in an experiment to check its suitability for water monitoring. For this, an experiment was performed on land and on water and the results were evaluated. In the end, the prototype fulfilled most of the requirements, but the cost was a little too high.
The provision of electronic participation services (e-participation) is a complex socio-technical undertaking that needs comprehensive design and implementation strategies. E-participation service providers, in most cases administrations and governments, struggle with changing requirements that demand more transparency, better connectivity and increased collaboration among different actors. At the same time, fewer staff are available. As a result, recent research assesses only a minority of e-participation services as successful. The challenge is that the e-participation domain lacks comprehensive approaches to design and implement (e-)participation services. Enterprise Architecture (EA) frameworks have evolved in information systems research as an approach to guide the development of complex socio-technical systems. This approach can guide the design and implementation of such services if the collection of organisations with the commonly held goal of providing participation services is understood as an E-Participation Enterprise (EE). However, research and practice in the e-participation domain have not yet exploited EA frameworks. Consequently, the problem scope that motivates this dissertation is the existing gap in research on deploying EA frameworks in e-participation design and implementation. The research question that drives this research is: What methodical and technical guidance do architecture frameworks provide that can be used to design and implement better and successful e-participation?
This dissertation presents a literature study showing that existing approaches have not yet covered the challenges of comprehensive e-participation design and implementation. Accordingly, the research moves on to investigate established EA frameworks, such as the Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate, for their use. While the application of these frameworks in e-participation design and implementation is possible, an integrated approach has been lacking so far. The synthesis of the literature review and practical insights into the design and implementation of e-participation services from four projects shows the challenges of adapting architecture frameworks for this domain. However, the research also shows the potential of a combination of the different approaches. Consequently, the research moves on to develop the E-Participation Architecture Framework (EPART-Framework). To this end, the dissertation applies design science research, including literature review and action research. Two independent settings test an initial EPART-Framework version. The results yield the EPART-Framework presented in this dissertation.
The EPART-Framework comprises the EPART-Metamodel with six EPART-Viewpoints, which frame the stakeholder concerns: the Participation Scope, the Participant Viewpoint, the Participation Viewpoint, the Data & Information Viewpoint, the E-Participation Viewpoint, and the Implementation & Governance Viewpoint. The EPART-Method supports the stakeholders in designing the EE and implementing e-participation, and it stores its output in an architecture description and a solution repository. It consists of five consecutive phases accompanied by requirements management: Initiation, Design, Implementation and Preparation, Participation, and Evaluation. The EPART-Framework fills the gap between the e-participation domain and the enterprise architecture framework domain. The evaluation gives reasonable evidence that the framework is a valuable addition both in academia and in practice to improve e-participation design and implementation. At the same time, it shows opportunities for future research to extend and advance the framework.
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. The Semantic Web shares these characteristics, as it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore also important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism that implements these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each of the requirements and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows to search in the resulting ciphertext data without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
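As a minimal illustration of the signature idea described above (canonicalize, hash, sign), the sketch below signs a tiny graph with Ed25519 from the `cryptography` package. The naive sorted serialization stands in for a proper canonicalization; the configurable framework of the thesis, including blank-node handling and the choice of algorithms, is not reproduced here.

```python
# Minimal "sign a graph" sketch: deterministic serialization, hash, Ed25519 signature.
# The naive canonicalization is an assumption for illustration only.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

triples = {
    ("<http://example.org/alice>", "<http://xmlns.com/foaf/0.1/knows>", "<http://example.org/bob>"),
    ("<http://example.org/bob>", "<http://xmlns.com/foaf/0.1/name>", '"Bob"'),
}

def graph_digest(ts) -> bytes:
    """Canonicalize naively by sorting serialized triples, then hash."""
    canonical = "\n".join(" ".join(t) + " ." for t in sorted(ts))
    return hashlib.sha256(canonical.encode("utf-8")).digest()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(graph_digest(triples))

# Verification succeeds as long as the graph (and hence its canonical form) is unchanged.
private_key.public_key().verify(signature, graph_digest(triples))  # raises if tampered
print("signature valid")
```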
This thesis analyzes the online attention towards scientists and their research topics. The studies compare the attention dynamics towards the winners of important scientific prizes with scientists who did not receive a prize. Web signals such as Wikipedia page views, Wikipedia edits, and Google Trends were used as a proxy for online attention. One study focused on the time between the creation of the article about a scientist and the articles about their research topics. It was discovered that articles about research topics were created closer in time to the articles of prize winners than to those of scientists who did not receive a prize. One possible explanation could be that the research topics are more closely related to the scientist who received an award, which supports the idea that scientists who received the prize introduced the topics to the public. Another study considered the public attention trends towards the related research topics before and after a page about a scientist was created. It was observed that after a page about a scientist was created, research topics of prize winners received more attention than the topics of scientists who did not receive a prize. Furthermore, it was demonstrated that Nobel Prize winners get less attention before receiving the prize than the potential nominees from the list of Citation Laureates of Thomson Reuters, and their popularity declines faster after receiving it. It was also shown that it is difficult to predict the prize winners based on the attention dynamics towards them.
The aim of this work was to model accidents and traffic in Oman (Muscat City) under theoretical assumptions by means of agent-based simulation. The TRASS framework was to serve as the basis for the simulation environment. In addition, possible countermeasures regarding the high accident rate were to be proposed, derived from the different simulation variants.
Whether the theoretical assumptions made for the simulation should actually be applied and implemented in reality was not part of this work.
It was also necessary to check whether TRASS could represent three different simulation variants distinctly. The three variants were: a simulation run with traffic light control, one without traffic light control, and finally one with traffic light control that is, however, ignored with a probability of 70%. All three variants produced different results. It was shown that each variant has advantages and disadvantages with respect to traffic flow and accident rate.
It should not be neglected that this work was modeled, analyzed, and evaluated exclusively with the TRASS framework. All of its potentials and deficits thus flow into the overall results of this work. After critical reflection, the statements are therefore made from a single perspective only: the theoretical assumptions about Oman and the modeling limits of TRASS. In order to make a reliable statement about the actual implementation of the proposals, further comprehensive analyses and simulations over a much wider range of traffic variants for Oman are necessary.
The aim of this thesis was to develop and evaluate a method which enables the use of traditional dialog marketing tools on the web. For this purpose, a prototype of a website with "extended real-time interaction (eEI)" capabilities was implemented and tested. The prototype was evaluated with a methodology based on the five-dimensional "e-service quality" measure by Gwo-Guang Lee and Hsiu-Fen Lin. The foundation of the "e-service quality" measure is the SERVQUAL model. A statistical analysis of the user study results showed a significant correlation between eEI and user satisfaction. Before the actual realization of eEI, the "Technology Acceptance Model" by Fred D. Davis was used to investigate currently used real-time interaction systems.
Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
This thesis contributes to reactive algorithm research in two ways, on an abstract level, as well as by the introduction of novel algorithms:
For the very first time, reactive algorithms are considered as a whole and as an individual research area. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based, topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems has to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, such as automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the problems which have to be overcome to reach the aforementioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive, but it has to be performed only once, and as soon as its result is used to answer many queries, the expensive compilation step becomes worthwhile. In the thesis at hand, I develop a normal form, called linkless normal form, suitable for knowledge compilation of description logic knowledge bases. From a computational point of view, the linkless normal form has very nice properties, which are introduced in this thesis.
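As a rough illustration of the compile-once/query-many idea (for propositional logic and by naive model enumeration, not the linkless normal form itself), consider the following sketch:

# Toy illustration of the compile-once / query-many idea from propositional
# knowledge compilation: the expensive step enumerates all models of the
# knowledge base once; entailment queries then reduce to a cheap check.
from itertools import product

def compile_models(kb, variables):
    """kb: callable taking an assignment dict, True iff the KB is satisfied."""
    return [dict(zip(variables, bits))
            for bits in product([False, True], repeat=len(variables))
            if kb(dict(zip(variables, bits)))]

def entails(models, query):
    """The KB entails the query iff the query holds in every model of the KB."""
    return all(query(m) for m in models)

# Example: KB = (a -> b) and a ; query = b
variables = ["a", "b"]
models = compile_models(lambda m: (not m["a"] or m["b"]) and m["a"], variables)
print(entails(models, lambda m: m["b"]))   # True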
For the second problem, I focus on changes occurring on the instance level of description logic knowledge bases. I introduce three change operators interesting for these knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that in all three cases, the resulting knowledge base is ensured to be consistent and changes performed to the knowledge base are minimal. This allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues, I suggest adapting techniques successfully used in other logics to obtain promising methods for description logic knowledge bases.
The publication of open source software aims to support the reuse, the distribution, and the general utilization of software. This can only be enabled by the correct usage of open source software licenses. Therefore, associations provide a multitude of open source software licenses with different features from which a developer can choose in order to regulate the interaction with his software. Those licenses are the core theme of this thesis.
After an extensive literature review, two general research questions are elaborated in detail. First, an analysis of license usage in the open source sector is performed to identify current trends and statistics. This includes questions concerning the distribution of licenses, the consistency in their usage, their association over a period of time, and their publication.
Afterwards, the recommendation of licenses for specific projects is investigated. To this end, a recommendation logic is presented which takes several influences on a suitable license choice into account in order to generate the most applicable recommendation. Besides the exact license features from which a user can choose, different methods of ranking the recommendation results are proposed. This is based on an examination of the current situation of open source licensing and license suggestion. Finally, the logic is evaluated on the exemplary use case of the 101companies project.
The following thesis presents a way of localizing an object by means of ultrasound. Three to five sensors distributed in the room are used to determine the position of an object relative to the sensor positions based on distance information. A special feature is that the sensors can be distributed almost arbitrarily in the plane. Their arrangement is determined by the system during the calibration phase with the support of the user. Depending on the number of sensors, an equilateral triangle, a square, or a pentagram should yield the best results. To convert the relative movement into an absolute one, a conversion into meters is performed based on the clock rate of the microcontrollers, the prescaler of the timer used, and the speed of sound.
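A minimal sketch of the two steps described above, assuming illustrative values for the clock rate (16 MHz), the timer prescaler (8), and the speed of sound (343 m/s), and using a standard linearized least-squares trilateration that is not necessarily the exact method of the thesis:

# Minimal sketch: convert timer ticks to a distance and estimate a 2D position
# from distances to known sensor positions via linearized least squares.
# Clock rate, prescaler, and speed of sound are assumed example values.
import numpy as np

F_CPU, PRESCALER, V_SOUND = 16_000_000, 8, 343.0   # Hz, -, m/s (assumptions)

def ticks_to_meters(ticks: int) -> float:
    return ticks * PRESCALER / F_CPU * V_SOUND

def trilaterate(sensors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    # Subtracting the first sphere equation from the others yields a linear system.
    s0, d0 = sensors[0], dists[0]
    A = 2 * (sensors[1:] - s0)
    b = d0**2 - dists[1:]**2 + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

With three or more non-collinear sensors the least-squares solution gives the object position in the sensors' coordinate frame.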
The present thesis deals with the realization of a stepper motor driver on an 8-bit microcontroller by the company Atmel. The focus is on the development of a current control, which allows microstepping in addition to the basic modes of operation like full- and half-step. For this purpose, a PI controller is derived using physical and control engineering principles, which is implemented on the microcontroller. In this context, essential knowledge for the practical implementation will be discussed. In addition, the development of the hardware is documented, which is of great significance for the current measurement.
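The following is a minimal sketch of a discrete PI current controller of the kind described above; the gains, sample time, and output limit are assumed example values, not those derived in the thesis:

# Minimal sketch of a discrete PI current controller as it could run in the
# microcontroller's control loop; gains, limits, and sample time are assumed
# example values.
class PIController:
    def __init__(self, kp: float, ki: float, dt: float, out_max: float):
        self.kp, self.ki, self.dt, self.out_max = kp, ki, dt, out_max
        self.integral = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        # Clamp the output (e.g., a PWM duty cycle) and apply simple anti-windup.
        if abs(out) > self.out_max:
            self.integral -= error * self.dt
            out = max(-self.out_max, min(self.out_max, out))
        return out

ctrl = PIController(kp=0.8, ki=120.0, dt=1e-4, out_max=1.0)
duty = ctrl.update(setpoint=0.5, measured=0.42)   # coil current in A -> duty cycle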
The identification of experts for a specific technology or framework produces a large benefit for collaborative software projects. It also reduces the communication overhead that is required to identify an expert on the fly. This thesis therefore describes a tool and approach that can be used to identify an expert with a specific skill set. It mainly focuses on the skills and expertise of developers that use the Django framework. By adding more rules to our framework, the approach could easily be extended to different technologies or frameworks. The thesis closes with a case study on an open source project.
For several years, the real-time systems working group at the University of Koblenz has been concerned with the topic of autonomous and assisted driving. In this context, multi-articulated vehicles pose a major challenge, as steering them during reverse driving is very demanding for the driver. To enable precise maneuvers, electronic driver assistance systems can be employed. Several prototypes have already been created in previous work, but none of them represents a suitable solution for modern two-axle trailers. Within this thesis, a prototypical driver assistance system was developed; however, further research and development work is required to make the system roadworthy.
Traditional Driver Assistance Systems (DAS), such as Lane Departure Warning Systems or the well-known Electronic Stability Program, have in common that their system and software architecture is static. This means that neither the number and topology of Electronic Control Units (ECUs) nor the presence and functionality of software modules changes after the vehicles leave the factory.
However, some future DAS do face changes at runtime. This is true, for example, for truck and trailer DAS, as their hardware components and software entities are spread over both parts of the combination. These new requirements cannot be met by state-of-the-art approaches of automotive software systems. Instead, a different technique of designing such Distributed Driver Assistance Systems (DDAS) needs to be developed. The main contribution of this thesis is the development of a novel software and system architecture for dynamically changing DAS, using the example of driving assistance for truck and trailer. This architecture has to be able to autonomously detect and handle changes within the topology. In order to do so, the system decides which degree of assistance and which types of HMI can be offered every time a trailer is connected or disconnected. To this end, an analysis of the available software and hardware components, a determination of possible assistance functionality, and a re-configuration of the system take place. Such adaptation can be granted by the principles of Service-oriented Architecture (SOA). In this architectural style all functionality is encapsulated in self-contained units, so-called Services. These Services offer their functionality through well-defined interfaces whose behavior is described in contracts. Using these Services, large-scale applications can be built and adapted at runtime. This thesis describes the research conducted in achieving these goals by introducing Service-oriented Architectures into the automotive domain. SOA deals with the high degree of distribution, the demand for re-usability, and the heterogeneity of the needed components.
It also applies automatic re-configuration in the event of a system change. Instead of adapting one of the available frameworks to this scenario, the main principles of Service-orientation are picked up and tailored. This leads to the development of the Service-oriented Driver Assistance (SODA) framework, which implements the benefits of Service-orientation while ensuring compatibility and compliance with automotive requirements, best practices, and standards. Within this thesis, several state-of-the-art Service-oriented frameworks are analyzed and compared. Furthermore, the SODA framework as well as all its different aspects regarding the automotive software domain are described in detail. These aspects include a well-defined reference model that introduces and relates terms and concepts and defines an architectural blueprint. Furthermore, some of the modules of this blueprint, such as the re-configuration module and the Communication Model, are presented in full detail. In order to prove the compliance of the framework with state-of-the-art automotive software systems, a development process respecting today's best practices in automotive design procedures as well as the integration of SODA into the AUTOSAR standard are discussed. Finally, the SODA framework is used to build a full-scale demonstrator in order to evaluate its performance and efficiency.
One task of executives and project managers in IT companies or departments is to hire suitable developers and to assign them to suitable problems. In this paper, we propose a new technique that directly leverages previous work experience of developers in a systematic manner. Existing evidence for developer expertise based on the version history of existing projects is analyzed. More specifically, we analyze the commits to a repository in terms of affected API usage. On these grounds, we associate APIs with developers and thus we assess API experience of developers. In transitive closure, we also assess programming domain experience.
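A rough sketch of this idea, attributing added Java import statements in a repository's history to commit authors; the regular expression and the attribution rule are simplifications and not the exact analysis of the paper:

# Rough sketch of associating commit authors with the APIs (here: Java imports)
# their commits touch; the parsing and attribution rules are simplifications.
import re, subprocess
from collections import Counter, defaultdict

def api_experience(repo_path: str) -> dict:
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--pretty=format:AUTHOR:%ae"],
        capture_output=True, text=True, check=True).stdout
    experience, author = defaultdict(Counter), None
    for line in log.splitlines():
        if line.startswith("AUTHOR:"):
            author = line[len("AUTHOR:"):]
        elif author and (m := re.match(r"\+\s*import\s+([\w.]+)\.\w+\s*;", line)):
            experience[author][m.group(1)] += 1     # count added imports per package
    return experience

Aggregating these per-package counts over all commits yields a simple API experience profile per developer, which could then be lifted to programming domains.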
Geographic cluster-based routing in ad hoc wireless sensor networks is a current field of research. Various algorithms to route in wireless ad hoc networks based on position information already exist. Among them are algorithms that use the traditional beaconing approach as well as algorithms that work beaconlessly (requiring no information about the environment besides the node's own position and the destination). Geographic cluster-based routing with guaranteed message delivery can be carried out on overlay graphs as well. Until now, however, the required planar overlay graphs have not been constructed reactively.
This thesis proposes a reactive algorithm, the Beaconless Cluster Based Planarization (BCBP) algorithm, which constructs a planar overlay graph and noticeably reduces the number of messages required for that. Based on an algorithm for cluster-based planarization, it beaconlessly constructs a planar overlay graph in a unit disk graph (UDG). A UDG is a model for a wireless network in which every participant has the same sending radius. Evaluation of the algorithm shows it to be more efficient than the non-beaconless variant. Another result of this thesis is the Beaconless LLRAP (BLLRAP) algorithm, for which planarity but not continued connectivity could be proven.
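For illustration, a minimal sketch of the unit disk graph model mentioned above, in which two nodes are neighbors exactly if their Euclidean distance does not exceed the common sending radius:

# Minimal sketch of the unit disk graph (UDG) model: every node has the same
# sending radius r, and two nodes are neighbors iff their distance is <= r.
import math

def unit_disk_graph(positions: dict, r: float) -> dict:
    edges = {v: set() for v in positions}
    for u, (ux, uy) in positions.items():
        for v, (vx, vy) in positions.items():
            if u != v and math.hypot(ux - vx, uy - vy) <= r:
                edges[u].add(v)
    return edges

nodes = {"a": (0, 0), "b": (0.8, 0), "c": (2.0, 0)}
print(unit_disk_graph(nodes, r=1.0))   # a-b connected, c isolated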
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines. Both approaches have advantages and disadvantages. In previous work, we and our collaborators proposed an idea which combines both approaches to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating essential metadata to enable a propagate operator, which implements a form of change propagation. For this we have developed a system to annotate code similarities which were extracted throughout the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or have to be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
The publication of freely available and machine-readable information has increased significantly in the last years. Especially the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF), and anybody can simply publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected with links. Linked Data entities are identified by URIs which are dereferenceable to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints it is possible to create a federation of distributed RDF data sources which acts like one big data store.
In contrast to the federation of classical relational database systems, there are some differences for federated RDF data. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory which can be used to discover RDF data sources or ask for sources which offer specific data. The federation of distributed and linked RDF data sources has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on query details and the information that is available about the data sources. Furthermore, the minimization of query execution time requires optimization techniques that take into account the execution cost for query operators and the network communication overhead for contacting individual data sources. In this thesis, solutions for these problems are discussed. Moreover, SPLENDID is presented, a new federation infrastructure for distributed RDF data sources which uses optimization techniques based on statistical information.
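For illustration, a manually federated query can be expressed with the standard SPARQL 1.1 SERVICE keyword, as in the following sketch; the endpoint URLs are placeholders, and a federation infrastructure such as the one presented here would instead select the sources and split the query automatically:

# Sketch of a hand-written federated query using the SPARQL 1.1 SERVICE keyword;
# endpoint URLs are placeholders, not real data sources.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?drug ?label WHERE {
  SERVICE <http://example.org/sparql/drugbank> { ?drug a <http://example.org/Drug> . }
  SERVICE <http://example.org/sparql/labels>   { ?drug <http://www.w3.org/2000/01/rdf-schema#label> ?label . }
}
"""

endpoint = SPARQLWrapper("http://example.org/sparql")  # placeholder SPARQL 1.1 endpoint
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()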
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information - where other traffic participants move and are going to move to - is used to perform inference in regions where the terrain surface is partially or completely invisible for the sensors. Algorithms are tested and validated on different autonomous robot platforms and the evaluation is carried out with human-annotated ground truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, if combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
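As a toy illustration of a Gibbs-sampling sweep over a grid of terrain labels, the following sketch uses per-cell evidence and a simple Potts smoothness term; the energies and the weight beta are illustrative only and do not reproduce the features or model of the thesis:

# Toy sketch of one Gibbs-sampling sweep over a grid of terrain labels
# (e.g., 0 = passable, 1 = obstacle) with per-cell evidence and a Potts
# smoothness term; energies and the weight beta are illustrative only.
import numpy as np

def gibbs_sweep(labels: np.ndarray, unary: np.ndarray, beta: float, rng) -> None:
    h, w, k = unary.shape
    for y in range(h):
        for x in range(w):
            energies = unary[y, x].copy()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighborhood
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    energies += beta * (np.arange(k) != labels[ny, nx])
            p = np.exp(-energies); p /= p.sum()
            labels[y, x] = rng.choice(k, p=p)

rng = np.random.default_rng(0)
unary = rng.random((8, 8, 2))            # stand-in for per-cell feature evidence
labels = unary.argmin(axis=2)
for _ in range(10):
    gibbs_sweep(labels, unary, beta=1.5, rng=rng)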
Within this thesis, a methodology is to be developed which translates English, keyword-based queries into SPARQL and rates them. From all generated SPARQL queries, the most relevant ones are to be identified and a favorite determined. The result is to be assessed in a user evaluation.
In recent years, Software Engineering research has shown a rising interest in empirical studies. Such studies are often based on empirical evidence derived from corpora - collections of software artifacts. While there are established forms of carrying out empirical research (experiments, case studies, surveys, etc.), the common task of preparing the underlying collection of software artifacts is typically addressed in an ad hoc manner.
In this thesis, by means of a literature survey, we show how frequently software engineering research employs software corpora, and, using a developed classification scheme, we discuss their characteristics. Addressing the lack of methodology, we suggest a method of corpus (re-)engineering and apply it to an existing collection of Java projects.
We report two extensive empirical studies, where we perform a broad and diverse range of analyses on the language for privacy preferences (P3P) and on object-oriented application programming interfaces (APIs). In both cases, we are driven by the data at hand, by the corpus itself, discovering the actual usage of the languages.
A systematic literature survey of beaconless algorithms for wireless ad hoc and sensor networks
(2014)
Wireless sensor networks (WSN) are computer networks consisting of sensor nodes. In position-based protocols, nodes periodically broadcast messages containing their position information. These messages are called beacons. The use of beacons, however, has downsides, which led to the emergence of beaconless algorithms. Despite their importance in the research area of computer networks, beaconless algorithms have, to the best of our knowledge, not been systematically surveyed.
The aim of this work is a comprehensive, systematic review of beaconless algorithms published since the turn of the millennium. Relevant articles are categorized based on their similarities, and their differences are compared. Thus, this work serves as a reference to the current state of research on beaconless algorithms and as a basis for identifying research gaps.
Over the past years, various projects for controlling a computer's peripheral devices have been realized under the supervision of Dr. Merten Joost, based on microcontrollers of the ATMega series. These include, among others, polling a keyboard, the output of audio and video signals, and a programming environment with its own programming language. The goal of this thesis is to combine these projects in order to obtain a standalone "home computer" that can be controlled via keyboard and provides audio and video output. An SD card serves as the storage medium, which can be managed via keyboard input through a kind of shell.
This report investigates the use of wireless sensor networks for temperature measurement in flowing inland waters. It is shown to what extent such networks can serve as a link between remote sensing and stationary sensors. The requirements for sensor networks in the application of water monitoring are determined, and a prototypical realization of sensor nodes for such a network is presented. As a result of this work, the accuracy of temperature measurements with these sensor nodes is compared against a temperature logger as reference system. The measurements show that a comparatively good measuring accuracy can be achieved at low cost. By further developing the prototype presented here, a promising and inexpensive new measuring instrument becomes available for temperature monitoring in bodies of water. On the one hand, it can measure water temperatures in deeper regions than remote sensing allows; on the other hand, it achieves a higher spatial resolution than stationary measuring stations. In addition, the literature review and the formulation of the criteria serve to narrow down the field of application for future work.
The Apple ][ was one of the first three complete computer systems on the market. From April 1977 onwards, it sold several million units over a period of about 16 years. This 8-bit home computer was developed by Steve Wozniak and Steve Jobs. With it, they paved the way for the Macintosh and for the company Apple that is well known today.
This thesis describes the implementation of a software emulator for the complete Apple ][ computer system on a single Atmel AVR microcontroller. The biggest challenge is that the microcontroller's clock rate is only slightly higher than that of the hardware to be emulated. This requires an efficient emulation of the CPU and the memory management, which is presented below together with the runtime environment for the emulation. Furthermore, the hardware realization of the emulator, including display and keyboard, is explained in more detail.
With this thesis, the successful development of a portable Apple ][ emulator, from the software and the hardware up to a working prototype, is presented.
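To illustrate the kind of dispatch such a CPU emulation has to perform efficiently, the following toy fetch-decode-execute loop covers a handful of 6502 opcodes; it is merely an illustration, not the AVR implementation described above:

# Toy fetch-decode-execute loop for a few 6502 opcodes (LDA #imm, STA abs,
# JMP abs, NOP), illustrating the dispatch an emulator must perform.
def run(memory: bytearray, pc: int, max_steps: int = 1000) -> int:
    a = 0                                            # accumulator
    for _ in range(max_steps):
        op = memory[pc]
        if op == 0xA9:                               # LDA #imm
            a, pc = memory[pc + 1], pc + 2
        elif op == 0x8D:                             # STA abs (little-endian address)
            addr = memory[pc + 1] | (memory[pc + 2] << 8)
            memory[addr], pc = a, pc + 3
        elif op == 0x4C:                             # JMP abs
            pc = memory[pc + 1] | (memory[pc + 2] << 8)
        elif op == 0xEA:                             # NOP
            pc += 1
        else:                                        # unimplemented opcode: stop
            break
    return a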
The increased networking of devices has established the usage of distributed applications as a common method in software development. Despite the demand for anonymity in distributed applications, software frameworks still lack appropriate support for developing them. Building anonymously communicating applications therefore often results in expensive individual solutions. This work integrates an approach for anonymous communication using remote procedure calls into a software framework for building distributed applications. This also includes the design, development, and prototypical implementation of a binding for the MAppLab Remote Procedure Call Framework on the basis of the mPartAPI Framework. Furthermore, the resulting prototype is tested in an exemplary scenario.
This bachelor's thesis deals with the development and implementation of a face recognition software that is able to detect and log streams of people. Starting from the special requirements of image processing, the resulting software architecture and its implementation are presented. In addition to the implementation, a web interface is developed which is intended to simplify the management of the data. Subsequently, further face recognition methods are presented and compared with the method employed here. Finally, the implemented software is evaluated.
Community platforms on the Internet use code-based governance to manage their large numbers of user contributions. This includes all kinds of functionality with which the community can directly or indirectly evaluate user contributions in some form. This thesis first explains the meaning of code-based governance and the different functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined with respect to code-based governance. The result shows the relations between the structure of a platform, the nature of its user contributions, and the code-based governance that can be exercised on them.