Cicero is an asynchronous discussion platform developed within the Informationssysteme und Semantic Web (ISWeb) working group at the University of Koblenz-Landau. The web-based application follows the idea of a semantic wiki and is intended to be used in particular in the workflow of design processes. Cicero employs a restrictive argumentation model that, on the one hand, fosters structured discussions of difficult processes and, on the other hand, supports decision making. The focus of this thesis is the evaluation of Cicero: the preceding theoretical part presents the background and functionality of the system, and the subsequent practical part evaluates the application by means of a case study. The study was conducted as part of the exercise course accompanying the "Grundlagen der Datenbanken" (database fundamentals) lecture at the University of Koblenz in the winter semester 2008/2009, where students were given the task of discussing a design process with the help of Cicero. An acceptance test is carried out on the data collected in the case study. It examines whether users accept Cicero positively and apply the methodology correctly, since the prescribed argumentation model requires users to change their communication behaviour and adapt their usual discussion style to the application. The goal of the evaluation is to identify critical success factors in working with Cicero. Based on the identified weaknesses, targeted measures are finally proposed that could increase user acceptance of Cicero.
Ontologies are valuable tools for knowledge representation and important building blocks of the Semantic Web. They are not static and can change over time. Changing an ontology can be necessary for various reasons: the domain that is represented by an ontology can change, or an ontology is reused and must be adapted to the new context. In addition, modeling errors may have been introduced into the ontology which must be found and removed. The non-triviality of the change process has led to the emergence of ontology change as a field of research in its own right. The removal of knowledge from ontologies is an important aspect of this change process, because even the addition of new knowledge to an ontology potentially requires the removal of older, conflicting knowledge. Such a removal must be performed in a thought-out way. A naïve change of concepts within the ontology can easily remove other, unrelated knowledge or alter the semantics of concepts in an unintended way [2]. For these reasons, this thesis introduces a formal operator for the fine-grained retraction of knowledge from EL concepts which is partially based on the postulates for belief set contraction and belief base contraction [3, 4, 5] and the work of Suchanek et al. [6]. To this end, a short introduction to ontologies and OWL 2 is given and the problem of ontology change is explained. It is then argued why a formal operator can support this process and why the Description Logic EL provides a good starting point for the development of such an operator. After this, a general introduction to Description Logic is given, including its history, an overview of its applications, and common reasoning tasks in this logic. Following this, the logic EL is defined. In a next step, related work is examined and it is shown why the recovery postulate and the relevance postulate cannot be naïvely employed in the development of an operator that removes knowledge from EL concepts. Following this, the requirements for the operator are formulated and properties are given which are mainly based on the postulates for belief set and belief base contraction. Additional properties are developed which make up for the non-applicability of the recovery and relevance postulates. After this, a formal definition of the operator is given and it is shown that the operator is applicable to the task of a fine-grained removal of knowledge from EL concepts. In a next step, it is proven that the operator fulfills all the previously defined properties. It is then demonstrated how the operator can be combined with laconic justifications [7] to assist a human ontology editor by automatically removing unwanted consequences from an ontology. Building on this, a plugin for the ontology editor Protégé is introduced that is based on algorithms derived from the formal definition of the operator. The content of this work is then summarized and a final conclusion is drawn. The thesis closes with an outlook on possible future work.
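As background for this summary, the following is a brief reminder of the standard EL concept syntax and model-theoretic semantics (textbook material only; the thesis's own operator and notation are not reproduced here):

```latex
% EL concept descriptions over concept names A and role names r:
\[
  C, D \;::=\; \top \;\mid\; A \;\mid\; C \sqcap D \;\mid\; \exists r.C
\]
% Semantics with respect to an interpretation \mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}}):
\begin{align*}
  \top^{\mathcal{I}} &= \Delta^{\mathcal{I}} \\
  (C \sqcap D)^{\mathcal{I}} &= C^{\mathcal{I}} \cap D^{\mathcal{I}} \\
  (\exists r.C)^{\mathcal{I}} &= \{\, x \in \Delta^{\mathcal{I}} \mid \exists y\colon (x, y) \in r^{\mathcal{I}} \text{ and } y \in C^{\mathcal{I}} \,\}
\end{align*}
```

The absence of negation and disjunction is what keeps reasoning in EL tractable and makes it a convenient setting for defining a fine-grained retraction operator.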
Commonsense reasoning can be seen as a process of identifying dependencies amongst events and actions. Understanding the circumstances surrounding these events requires background knowledge with sufficient breadth to cover a wide variety of domains. In recent decades there has been a great deal of work on extracting commonsense knowledge, and a number of these projects provide their collected data as semantic networks, such as ConceptNet and CausalNet. In this thesis, we attempt the Choice Of Plausible Alternatives (COPA) challenge, a problem set with 1000 questions written in multiple-choice format, each consisting of a premise and two alternative choices. Our approach differs from previous work in that it uses shortest paths between concepts in a causal graph, with edge weights serving as a causality metric. We use CausalNet as the primary network, implement a few design choices to explore the strengths and drawbacks of this approach, and propose an extension that leverages the commonsense knowledge base of ConceptNet.
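A minimal sketch of the shortest-path scoring idea, assuming the causal network is already loaded as a weighted networkx graph whose edge weights have been derived from causality scores (lower accumulated weight = stronger causal link); the names and the scoring rule are illustrative, not the thesis's exact method:

```python
import networkx as nx

def causal_score(graph: nx.Graph, premise_concepts, alternative_concepts):
    """Score an alternative by the cheapest weighted path from any premise
    concept to any concept of the alternative (hypothetical scoring rule)."""
    best = float("inf")
    for p in premise_concepts:
        for a in alternative_concepts:
            if p in graph and a in graph:
                try:
                    # Edge weights are assumed to be pre-derived from
                    # causality scores so that smaller = stronger link.
                    d = nx.shortest_path_length(graph, p, a, weight="weight")
                    best = min(best, d)
                except nx.NetworkXNoPath:
                    pass
    return best

def answer_copa(graph, premise_concepts, alt1_concepts, alt2_concepts):
    # Pick the alternative with the stronger (cheaper) causal connection.
    s1 = causal_score(graph, premise_concepts, alt1_concepts)
    s2 = causal_score(graph, premise_concepts, alt2_concepts)
    return 1 if s1 <= s2 else 2
```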
Current political issues are often reflected in social media discussions, bringing politicians and voters together on common platforms. As these debates can affect the public perception of politics, their inner dynamics and backgrounds are of great scientific interest. This thesis treats user-generated messages from a current dataset of considerable relevance as time series and applies a topic-based analysis of inspiration and agenda setting to them. The Institute for Web Science and Technologies of the University of Koblenz-Landau has collected Twitter data generated by candidates of the 2019 European Parliament Election. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and media on online debates. An algorithm for clustering tweets into topical threads is introduced. Subsequently, Sequential Association Rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
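A small illustrative sketch of mining such sequential rules over topical threads; the data layout, counting scheme, and thresholds are assumptions for illustration, not the thesis's exact procedure:

```python
from collections import Counter

def mine_sequential_rules(threads, min_support=3, min_confidence=0.5):
    """Illustrative sketch: each thread is a chronologically ordered list of
    items, e.g. (actor, topic) pairs. A rule X -> Y means 'X occurs before Y
    in the same thread' (based on first occurrences). Thresholds are
    placeholders."""
    antecedent_support = Counter()   # number of threads containing X
    rule_support = Counter()         # number of threads where X precedes Y
    for thread in threads:
        firsts = {}                  # first position of each item in the thread
        for pos, item in enumerate(thread):
            firsts.setdefault(item, pos)
        for x, px in firsts.items():
            antecedent_support[x] += 1
            for y, py in firsts.items():
                if x != y and px < py:
                    rule_support[(x, y)] += 1
    rules = []
    for (x, y), sup in rule_support.items():
        conf = sup / antecedent_support[x]
        if sup >= min_support and conf >= min_confidence:
            rules.append((x, y, sup, conf))
    return sorted(rules, key=lambda r: (-r[3], -r[2]))
```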
“Did I say something wrong?” A word-level analysis of Wikipedia articles for deletion discussions
(2016)
This thesis focuses on gaining linguistic insights into textual discussions at the word level. Of special interest was distinguishing messages that constructively contribute to a discussion from those that are detrimental to it. In doing so, we wanted to determine whether "I"- and "You"-messages are indicators for either of the two discussion styles. These messages are nowadays often used in guidelines for successful communication. Although their effects have been evaluated multiple times, a large-scale analysis has never been conducted. We therefore used Wikipedia Articles for Deletion (short: AfD) discussions together with the records of blocked users and developed a fully automated procedure for creating an annotated data set in which messages are labelled as either constructive or disruptive. We applied binary classifiers to the data to determine characteristic words for both discussion styles, and also investigated whether function words such as pronouns and conjunctions play an important role in distinguishing the two. We found that "You"-messages were a strong indicator for disruptive messages, which matches their attributed effects on communication. However, we also found "I"-messages to be indicative of disruptive messages, which is contrary to their attributed effects. The importance of function words could neither be confirmed nor refuted. Other characteristic words for either communication style were not found. Nevertheless, the results suggest that a different model might better represent disruptive and constructive messages in textual discussions.
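One plausible realisation of the classification step, sketched with scikit-learn under the assumption that messages and binary labels are already available; the thesis's actual features and model may differ:

```python
# Minimal sketch: find characteristic words via a linear classifier's weights.
# Labels: 1 = disruptive, 0 = constructive (assumed encoding).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def characteristic_words(messages, labels, top_k=20):
    # Do NOT strip stop words: function words such as "I" and "you" are
    # exactly what we want to examine.
    vectorizer = CountVectorizer(lowercase=True, stop_words=None)
    X = vectorizer.fit_transform(messages)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    vocab = vectorizer.get_feature_names_out()
    ranked = sorted(zip(vocab, clf.coef_[0]), key=lambda t: t[1])
    constructive = ranked[:top_k]          # most negative weights
    disruptive = ranked[-top_k:][::-1]     # most positive weights
    return constructive, disruptive
```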
The output of eye tracking Web usability studies can be visualized for analysts as screenshots of the Web pages overlaid with gaze data. However, these screenshot visualizations become corrupted whenever fixations on fixed Web page elements are recorded at different scroll positions: the gaze data are not gathered on the fixed elements they belong to, but are scattered across the recorded scroll positions. This problem motivated us to find an approach that links gaze data to their intended fixed elements and gathers them in one position on the screenshot. The approach builds on the idea of creating the screenshot during the recording session: images of the viewport are captured at the visited scroll positions and finally stitched into one Web page screenshot. Additionally, the fixed elements in the Web page are identified and linked to their fixations. For the evaluation, we compared the interpretation of our enhanced screenshot against the video visualization, which does not suffer from this problem. The results revealed that both visualizations deliver equally accurate interpretations. However, interpreting eye tracking Web usability studies using the enhanced screenshots outperforms the video visualizations in terms of speed and places lower temporal demands on the interpreters.
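A simplified sketch of the stitching and fixation-remapping idea, assuming viewport captures of equal width taken at known vertical scroll offsets; the names and data layout are illustrative, not taken from the thesis:

```python
from PIL import Image

def stitch_screenshot(captures):
    """captures: list of (scroll_y, PIL.Image) for the visited scroll positions."""
    width = captures[0][1].width
    height = max(y + img.height for y, img in captures)
    page = Image.new("RGB", (width, height))
    for scroll_y, img in sorted(captures, key=lambda c: c[0]):
        page.paste(img, (0, scroll_y))
    return page

def remap_fixation(fix_x, fix_y, scroll_y, fixed_element=None):
    """Map a viewport fixation to coordinates on the stitched page. Fixations
    on a fixed (non-scrolling) element are anchored at the single position the
    element occupies in the stitched screenshot, regardless of scroll_y."""
    if fixed_element is not None:
        vx, vy, px, py = fixed_element    # element origin in viewport / on page
        return px + (fix_x - vx), py + (fix_y - vy)
    return fix_x, fix_y + scroll_y
```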
Knowledge-based authentication methods are vulnerable to the shoulder-surfing phenomenon.
The widespread use of these methods, combined with their unaddressed limitations, can result in users' information being compromised. A user authentication method ought to be effortless to use and efficient, yet secure.
The problem we face concerning the security of PIN (Personal Identification Number) or password entry is shoulder surfing, in which a direct or indirect malicious observer can identify the user's sensitive information. To tackle this issue we present TouchGaze, which combines gaze signals and touch capabilities as an input method for entering a user's credentials. Gaze signals are primarily used for targeting, and touch for selection. In this work, we have designed three different PIN entry methods which all share similar interfaces. For the evaluation, these methods were compared with respect to efficiency, accuracy, and usability. The results revealed that although gaze-based methods require extra time for users to become familiar with them, they are considered more secure. In terms of efficiency, they have an error margin similar to that of traditional PIN entry methods.
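A minimal sketch of the underlying "gaze targets, touch confirms" interaction, with a hypothetical keypad layout; the three entry methods designed in the thesis are more elaborate than this:

```python
def key_under_gaze(gaze_x, gaze_y, keypad):
    """keypad: dict mapping digit -> (x, y, w, h) screen rectangle (assumed layout)."""
    for digit, (x, y, w, h) in keypad.items():
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return digit
    return None

def on_touch(gaze_x, gaze_y, keypad, entered_pin):
    # The touch event only confirms; the target comes from the gaze position,
    # so an observer watching the hand learns little about the entered PIN.
    digit = key_under_gaze(gaze_x, gaze_y, keypad)
    if digit is not None:
        entered_pin.append(digit)
    return entered_pin
```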
Einfluss eines Ausrichtungswerkzeugs auf die Bedienbarkeit in unbeaufsichtigten Eyetrackingsystemen (Influence of an adjustment tool on usability in unsupervised eye tracking systems)
(2015)
Eye gaze trackers are devices that estimate the direction of a person's gaze. Beyond usability testing, eye tracking also allows persons with decreased limb mobility to control and interact with a computer. The quality and availability of eye tracking equipment have been increasing while costs have been decreasing. This development opens up new markets in which eye tracking serves as an additional input dimension for a variety of applications. Up to now, eye tracking has been supervised by qualified experts, who ensured that important conditions such as the position in front of the eye tracking device, the calibration, and the lighting were maintained during use.
This thesis examines an adjustment tool that helps the user to position themselves in front of the eye tracker and to keep this position during the experiment. Furthermore, the accuracy under head movement is analysed. In this experiment, a remote eye gaze tracker was used to control a game character in the video game 'Schau Genau!'. The goal was to determine whether the game is playable without the barrier of adjustment and calibration. The results show that positioning oneself in front of an eye tracker is not a problem; keeping this position is. Small changes in head position after the calibration process lead to a loss of accuracy. Skipping one's own calibration and using someone else's shows far larger deviations. Additional head movement increases the error rate and makes control more difficult.