In this article we analyze the privacy aspects of a mobile sensor application used for recording urban travel patterns as part of a travel-survey service. This service has been developed and field-tested within the Live+Gov EU Project. The privacy analysis follows a structured approach established in prior work. Eight privacy recommendations are derived and have already led to corresponding enhancements of the travel-survey service.
Graphs are known to be a good representation of structured data. TGraphs, which are typed, attributed, ordered, and directed graphs, are a very general kind of graph that can be used for many domains. The Java Graph Laboratory (JGraLab) provides an efficient implementation of TGraphs with all their properties. JGraLab ships with many features, including a query language (GReQL2) for extracting data from a graph. However, it lacks a generic library for important common graph algorithms. This study thesis extends JGraLab with a generic algorithm library called Algolib, which provides a generic and extensible implementation of several important common graph algorithms. The major aspects of this work are the generic nature of Algolib, its extensibility, and the methods of software engineering used to achieve both. Algolib is designed to be extensible in two ways: existing algorithms can be extended for solving specialized problems, and further algorithms can easily be added to the library.
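Not part of the original abstract: a minimal Python sketch of the visitor-based extensibility idea described above. It is a language-agnostic illustration only; JGraLab and Algolib are Java libraries, and all class and method names below are invented for this example.

```python
# Illustrative sketch only (not JGraLab's Java API): a depth-first search whose
# behaviour is extended through visitor hooks, mirroring the kind of
# extensibility described for Algolib.
from collections import defaultdict

class DFSVisitor:
    """Base visitor; subclass and override hooks to specialise the search."""
    def visit_vertex(self, v): pass
    def visit_tree_edge(self, u, v): pass
    def leave_vertex(self, v): pass

class TopologicalOrderVisitor(DFSVisitor):
    """Example extension: records vertices in finishing order."""
    def __init__(self):
        self.finished = []
    def leave_vertex(self, v):
        self.finished.append(v)

def depth_first_search(adjacency, visitor):
    visited = set()
    def dfs(u):
        visited.add(u)
        visitor.visit_vertex(u)
        for v in adjacency[u]:
            if v not in visited:
                visitor.visit_tree_edge(u, v)
                dfs(v)
        visitor.leave_vertex(u)
    for u in list(adjacency):
        if u not in visited:
            dfs(u)

graph = defaultdict(list, {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []})
topo = TopologicalOrderVisitor()
depth_first_search(graph, topo)
print(list(reversed(topo.finished)))  # one topological order of the DAG
```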
The purpose of this thesis is to explore the sentiment distributions of Wikipedia concepts.
We analyse the sentiment of the entire English Wikipedia corpus, which includes 5,669,867 articles and 1,906,375 talks, by using a lexicon-based method with four different lexicons.
Also, we explore the sentiment distributions from a time perspective using the sentiment scores obtained from our selected corpus. The results obtained have been compared not only between articles and talks but also among four lexicons: OL, MPQA, LIWC, and ANEW.
Our findings show that among the four lexicons, MPQA has the highest sensitivity and ANEW has the lowest sensitivity to emotional expressions. Wikipedia articles show more sentiment than talks according to OL, MPQA, and LIWC, whereas Wikipedia talks show more sentiment than articles according to ANEW. Moreover, sentiment exhibits a trend over time, and each lexicon has its own bias with regard to texts describing different things.
In addition, our research provides three interactive widgets for visualising the sentiment distributions of Wikipedia concepts with respect to their time and geolocation attributes.
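As an editorial illustration of the lexicon-based method mentioned in the abstract, the following Python sketch scores a text with a toy lexicon. The lexicon and tokenizer are placeholders, not the OL, MPQA, LIWC, or ANEW resources actually used in the thesis.

```python
# Minimal sketch of lexicon-based sentiment scoring (toy lexicon only).
import re

TOY_LEXICON = {"good": 1.0, "great": 1.0, "helpful": 0.5,
               "bad": -1.0, "poor": -0.5, "terrible": -1.0}

def sentiment_score(text, lexicon=TOY_LEXICON):
    """Average polarity of the lexicon words found in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

article = "The article gives a good, helpful overview, but the sources are poor."
print(sentiment_score(article))  # (1.0 + 0.5 - 0.5) / 3 = 0.333...
```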
The Semantic Desktop is based on the same concepts as the Semantic Web, with the difference that they are applied to the applications and data of an average desktop. In particular, the goal is to integrate the different views on data produced by different applications by means of a cross-context description. For example, two programs may differ in that one displays an e-mail address as an active link through which an e-mail can be sent directly, while the other shows it merely as a character string. In the envisaged ideal world of the Semantic Desktop, this address would be recognized and treated as such independently of its display context. The goal of this thesis is the development of an integration mechanism that allows applications to exchange information about their contents. To this end, the requirements for the communication component to be developed are listed first, and the technical environment is derived from them. The focus is on the technologies to be used, and the applied concepts are explained. The development and description of a test application conclude the technical part. Finally, the entire development is discussed critically and an outlook on possible follow-up developments is given.
This thesis focuses on approximate inference in assumption-based argumentation frameworks. Argumentation is a central idea in the computerization of theoretical and practical reasoning in AI, where arguments are exchanged to perform scientific reasoning. The fundamental approach in this field is the abstract argumentation framework developed by Dung. Assumption-based argumentation can be regarded as an instance of abstract argumentation with structured arguments. When facing large amounts of data, a challenge of reasoning in assumption-based argumentation is how to construct arguments and resolve attacks over a given claim with minimal computational cost and acceptable accuracy at the same time. This thesis proposes and investigates approximate methods that randomly select and construct samples of frameworks based on graphical dispute derivations to solve this problem. The presented approach aims to improve reasoning performance and to achieve an acceptable trade-off between computational time and accuracy. The evaluation shows that, for reasoning in assumption-based argumentation, the running time is in general reduced, at the cost of slightly lower accuracy, by randomly sampling and constructing inference rules for potential arguments over a query.
The next generation of the World Wide Web, the Semantic Web, allows users to publish and exchange vast amounts of information across the boundaries of websites and applications. The Linked Data principles describe conventions for publishing this information in a machine-readable way. Although most of it is currently Linked Open Data, whose distribution is not restricted but explicitly desired, there are many use cases in which access to Linked Data in Resource Description Framework (RDF) repositories needs to be controllable. So far, only partial approaches to this problem exist, which is why publishing confidential content as Linked Data has not been possible.
Currently, sensitive information can only be published in a controlled way with the help of an external operator, with all data stored and managed on the operator's system. Effective protection additionally requires access policies, user authentication, and secure data storage.
Examples of such a scenario can be found in social networks such as Facebook or StudiVZ. All users are authenticated via a central website. Afterwards, access to information can be defined for specific user groups, for example via an administration page. Despite these protection mechanisms, the operator itself always has access to the data and content of all users.
This situation is unsatisfactory.
The idea of the Semantic Web offers an alternative approach. Users store their data in a place they control, for example on their own private server. In contrast to the scenario described above, each user is thus responsible for control mechanisms such as authentication and access policies.
In this thesis, a framework is designed that allows access to RDF repositories to be restricted by means of rules. Chapter 2 first presents the existing approaches to access control for confidential data in the Semantic Web. Chapter 3 then explains the basic mechanisms and techniques used in this work. Chapter 4 concretizes the problem and analyzes it using an example scenario.
After the requirements have been elicited, Chapter 6 presents different solution approaches, a first implementation, and a prototype. Finally, Chapter 7 summarizes the results of the work and the resulting outlook.
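As an editorial illustration of the general idea of rule-restricted access to RDF data (not the framework designed in the thesis, which targets Sesame-style repositories), the following Python sketch uses rdflib to expose only those triples that a simple allow-list rule permits.

```python
# Minimal sketch (not the thesis framework): expose only those triples of an
# RDF graph that an access rule permits, then answer SPARQL queries against
# the restricted view. Requires: pip install rdflib
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF

data = Graph()
alice = URIRef("http://example.org/alice")
data.add((alice, FOAF.name, Literal("Alice")))
data.add((alice, FOAF.mbox, URIRef("mailto:alice@example.org")))

# A simple rule: anonymous users may only see foaf:name statements.
ALLOWED_PREDICATES = {FOAF.name}

def restricted_view(graph, allowed_predicates):
    """Copy only permitted triples into a separate view graph."""
    view = Graph()
    for s, p, o in graph:
        if p in allowed_predicates:
            view.add((s, p, o))
    return view

view = restricted_view(data, ALLOWED_PREDICATES)
for row in view.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o }"):
    print(row)  # only the foaf:name triple is visible
```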
Navigation is a natural way to explore and discover content in a digital environment. Hence, providers of online information systems such as Wikipedia---a free online encyclopedia---are interested in providing navigational support to their users. To this end, an essential task approached in this thesis is the analysis and modeling of navigational user behavior in information networks with the goal of paving the way for the improvement and maintenance of web-based systems. Using large-scale log data from Wikipedia, this thesis first studies information access by contrasting search and navigation as the two main information access paradigms on the Web. Second, this thesis validates and builds upon existing navigational hypotheses to introduce an adaptation of the well-known PageRank algorithm. This adaptation is an improvement of the standard PageRank random surfer navigation model that results in a more "reasonable surfer" by accounting for the visual position of links, the information network regions they lead to, and the textual similarity between the link source and target articles. Finally, using agent-based simulations, this thesis compares user models that have a different knowledge of the network topology in order to investigate the amount and type of network topological information needed for efficient navigation. An evaluation of agents' success on four different networks reveals that in order to navigate efficiently, users require only a small amount of high-quality knowledge of the network topology. Aside from the direct benefits to content ranking provided by the "reasonable surfer" version of PageRank, the empirical insights presented in this thesis may also have an impact on system design decisions and Wikipedia editor guidelines, i.e., for link placement and webpage layout.
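As an editorial illustration of the "reasonable surfer" idea, the following Python sketch runs a weighted PageRank with networkx, where edge weights stand in for features such as link position and source-target text similarity. The graph, the weights, and the way they are combined are invented for this example and are not the model from the thesis.

```python
# Sketch of a weighted-surfer PageRank with networkx: edge weights stand in
# for link-position and text-similarity features. Requires: pip install networkx
import networkx as nx

G = nx.DiGraph()
# (source, target, weight) -- higher weight = link more likely to be followed
G.add_weighted_edges_from([
    ("Philosophy", "Logic", 0.9),
    ("Philosophy", "List_of_philosophers", 0.2),
    ("Logic", "Mathematics", 0.7),
    ("Mathematics", "Philosophy", 0.4),
    ("List_of_philosophers", "Philosophy", 0.3),
])

uniform = nx.pagerank(G, alpha=0.85, weight=None)        # standard random surfer
weighted = nx.pagerank(G, alpha=0.85, weight="weight")   # weighted surfer
print(sorted(weighted, key=weighted.get, reverse=True))
```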
Ontologies play an important role in knowledge representation for sharing information and collaboratively developing knowledge bases. They are changed, adapted, and reused in different applications and domains, resulting in multiple versions of an ontology. The comparison of different versions and the analysis of changes at a higher level of abstraction may be insightful for understanding the changes that were applied to an ontology. While there is existing work on detecting (syntactical) differences and changes in ontologies, there is still a need for analyzing ontology changes at a higher level of abstraction, such as ontology evolution or refactoring patterns. Our approach starts from a classification of model refactoring patterns found in software engineering and uses DL reasoning to recognize such refactoring patterns in OWL ontologies.
The novel mobile application csxPOI (short for: collaborative, semantic, and context-aware points-of-interest) enables its users to collaboratively create, share, and modify semantic points of interest (POI). Semantic POIs describe geographic places with explicit semantic properties of a collaboratively created ontology. As the ontology includes multiple subclassifications and instantiations and as it links to DBpedia, the richness of annotation goes far beyond mere textual annotations such as tags. With the intuitive interface of csxPOI, users can easily create, delete, and modify their POIs and those shared by others. Thereby, the users adapt the structure of the ontology underlying the semantic annotations of the POIs. Data mining techniques are employed to cluster and thus improve the quality of the collaboratively created POIs. The semantic POIs and collaborative POI ontology are published as Linked Open Data.
Commonsense reasoning can be seen as a process of identifying dependencies amongst events and actions. Understanding the circumstances surrounding these events requires background knowledge with sufficient breadth to cover a wide variety of domains. In recent decades, there has been a lot of work on extracting commonsense knowledge, and a number of these projects provide their collected data as semantic networks such as ConceptNet and CausalNet. In this thesis, we undertake the Choice Of Plausible Alternatives (COPA) challenge, a problem set of 1000 questions written in multiple-choice format with a premise and two alternative choices for each question. Our approach differs from previous work by using shortest paths between concepts in a causal graph, with the edge weight serving as a causality metric. We use CausalNet as the primary network, implement a few design choices to explore the strengths and drawbacks of this approach, and propose an extension that leverages the commonsense knowledge base of ConceptNet.
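The following Python sketch illustrates the shortest-path idea on a toy causal graph; the nodes, causal strengths, and cost transformation are invented for this example and do not come from CausalNet or ConceptNet.

```python
# Sketch of the shortest-path idea for COPA on a toy causal graph: stronger
# causal links get lower path cost, and the alternative with the cheaper path
# from the premise is chosen. Requires: pip install networkx
import math
import networkx as nx

causal = nx.DiGraph()
# (cause, effect, causal strength in (0, 1])
for cause, effect, strength in [
    ("rain", "wet_street", 0.9),
    ("wet_street", "slip", 0.6),
    ("rain", "umbrella", 0.8),
    ("sunshine", "picnic", 0.7),
]:
    causal.add_edge(cause, effect, cost=-math.log(strength))

def path_cost(graph, source, target):
    try:
        return nx.dijkstra_path_length(graph, source, target, weight="cost")
    except nx.NetworkXNoPath:
        return math.inf

premise = "rain"
alternatives = ["slip", "picnic"]
best = min(alternatives, key=lambda alt: path_cost(causal, premise, alt))
print(best)  # "slip": rain -> wet_street -> slip has the cheaper causal path
```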
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
As a multilingual system, Wikipedia provides many challenges for academics and engineers alike. One such challenge is the cultural contextualisation of Wikipedia content, and the lack of approaches to effectively quantify it. Additionally, there seems to be little effort towards establishing sound computational practices and frameworks for measuring cultural variations in the data. Current approaches seem to be dictated mostly by data availability, which makes it difficult to apply them in other contexts. Another common drawback is that they rarely scale due to a significant qualitative or translation effort. To address these limitations, this thesis develops and tests two modular quantitative approaches. They are aimed at quantifying culture-related phenomena in systems which rely on multilingual user-generated content. In particular, they make it possible to: (1) operationalise a custom concept of culture in a system; (2) quantify and compare culture-specific content or coverage biases in such a system; and (3) map a large-scale landscape of shared cultural interests and focal points. Empirical validation of these approaches is split into two parts. First, an approach to mapping Wikipedia communities of shared co-editing interests is validated on two large Wikipedia datasets comprising multilateral geopolitical and linguistic editor communities. Both datasets reveal measurable clusters of consistent co-editing interest, and computationally confirm that these clusters correspond to existing colonial, religious, socio-economic, and geographical ties. Second, an approach to quantifying content differences is validated on a multilingual Wikipedia dataset and a multi-platform (Wikipedia and Encyclopedia Britannica) dataset. Both are limited to a selected knowledge domain of national history. This analysis allows, for the first time on a large scale, to quantify and visualise the distribution of historical focal points in articles on national histories. All results are cross-validated either by domain experts or by external datasets.
Main thesis contributions. This thesis: (1) presents an effort to formalise the process of measuring cultural variations in user-generated data; (2) introduces and tests two novel approaches to quantifying cultural contextualisation in multilingual data; (3) synthesises a valuable overview of the literature on defining and quantifying culture; (4) provides important empirical insights on the effect of culture on Wikipedia content and coverage, demonstrating that Wikipedia is not context-free and that these differences should not be treated as noise but rather as an important feature of the data; and (5) makes practical service contributions through sharing data and visualisations.
Wikipedia is the biggest free online encyclopaedia and can be expanded by anyone. The users who create content on a specific Wikipedia language edition form a social network. In this social network, users are categorised into different roles: normal users, administrators, and functional bots. Within the network, a user can post reviews and suggestions or send simple messages to the "talk page" of another user. Each language edition of Wikipedia has this type of social network.
In this thesis, characteristics of the three different roles are analysed in order to learn how they function in one Wikipedia language network and to apply them to another Wikipedia network in order to identify bots. Timestamps of created posts are analysed, and noticeable characteristics relating to continuous messages, message rates, and irregular user behaviour are discovered. Through this process we show that differences between the roles exist for the mentioned characteristics.
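As an editorial illustration of timestamp-based features of the kind the thesis analyses, the following Python sketch computes a message rate and the regularity of gaps between posts for each user; the users and timestamps are made up.

```python
# Sketch of simple per-user timing features (message rate and regularity of
# gaps between posts); very regular gaps and high rates hint at bot-like
# behaviour. Toy data only.
from datetime import datetime
from statistics import pstdev

posts = {  # user -> timestamps of talk-page messages
    "HelpfulBot": ["2021-01-01 00:00", "2021-01-01 00:10", "2021-01-01 00:20"],
    "Alice":      ["2021-01-01 09:03", "2021-01-02 17:41", "2021-01-05 11:02"],
}

def timing_features(timestamps):
    ts = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    span_hours = (ts[-1] - ts[0]).total_seconds() / 3600 or 1.0
    return {
        "messages_per_hour": len(ts) / span_hours,
        "gap_stddev_seconds": pstdev(gaps) if len(gaps) > 1 else 0.0,
    }

for user, stamps in posts.items():
    print(user, timing_features(stamps))
```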
Development of a Generic Sesame Sail for Mapping SPARQL Queries to Web Services
(2010)
This thesis shows a way to evaluate, building on the Sesame framework, the data of non-semantic web services in the sense of the Semantic Web. Concretely, a Sail (the Webservice-Sail) is developed that can query such a web service like an RDF source by translating SPARQL expressions into method calls of the service and evaluating and returning their results accordingly. In order to cover as large a number of web services as possible, the solution has to be kept correspondingly generic. In particular, this means that the Sail must be adaptable to the modalities of concrete services. A suitable configuration representation therefore also has to be found in order to ensure the best possible support of a given web service by the Webservice-Sail. The development of such a representation is thus also part of this thesis.
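As an editorial illustration of the mapping idea (the thesis itself implements this as a Java Sesame Sail), the following Python sketch answers a single triple pattern by calling a web service; the endpoint, its parameters, and the configuration format are hypothetical.

```python
# Conceptual sketch only: map a single SPARQL-like triple pattern onto a call
# to a non-semantic web service and return the response values as variable
# bindings. The endpoint and parameters below are hypothetical.
import requests

# Hypothetical configuration: which predicate is answered by which service call.
PREDICATE_TO_SERVICE = {
    "ex:temperature": {
        "url": "https://weather.example.org/api/current",  # hypothetical
        "subject_param": "city",
        "result_field": "temperature",
    },
}

def evaluate_pattern(subject, predicate, variable):
    """Answer the pattern (subject, predicate, ?variable) via a web service."""
    service = PREDICATE_TO_SERVICE[predicate]
    response = requests.get(service["url"],
                            params={service["subject_param"]: subject},
                            timeout=10)
    response.raise_for_status()
    value = response.json()[service["result_field"]]
    return [{variable: value}]  # one SPARQL-style binding per result

# bindings = evaluate_pattern("Koblenz", "ex:temperature", "temp")
```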
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be defined explicitly by attaching RDF types to the resources, or it can be provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
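As an editorial illustration of the entropy and mutual-information measures mentioned above, the following Python sketch computes them from toy co-occurrence counts of types and properties; the numbers have nothing to do with the datasets analysed in the paper.

```python
# Sketch of the entropy / mutual-information computation on toy counts of
# resources described by an RDF type and a property (toy data only).
import math
from collections import Counter

# (rdf:type of resource, property used on resource) observed together
observations = [
    ("foaf:Person", "foaf:name"), ("foaf:Person", "foaf:knows"),
    ("foaf:Person", "foaf:name"), ("foaf:Document", "dc:title"),
    ("foaf:Document", "dc:creator"), ("foaf:Document", "dc:title"),
]

joint = Counter(observations)
types = Counter(t for t, _ in observations)
props = Counter(p for _, p in observations)
n = len(observations)

def entropy(counter):
    return -sum((c / n) * math.log2(c / n) for c in counter.values())

h_types, h_props = entropy(types), entropy(props)
mutual_information = sum(
    (c / n) * math.log2((c / n) / ((types[t] / n) * (props[p] / n)))
    for (t, p), c in joint.items()
)
print(h_types, h_props, mutual_information)
```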
The availability of digital cameras and the possibility to take photos at no cost lead to an increasing amount of digital photos online and on private computers. The sheer amount of data makes approaches that support users in the administration of their photos necessary. As the automatic understanding of photo content is still an unsolved task, metadata is needed for supporting administrative tasks like search or photo work such as the generation of photo books. Such meta-information textually describes the depicted scene or captures how good or interesting a photo is.
In this thesis, an approach for creating meta-information without additional effort for the user is investigated. Eye tracking data is used to measure the human visual attention. This attention is analyzed with the objective of information creation in the form of metadata. The gaze paths of users working with photos are recorded, for example, while they are searching for photos or while they are just viewing photo collections.
Eye tracking hardware has been developing fast in recent years. Falling prices for sensor hardware such as cameras and growing competition on the eye tracker market are making devices cheaper and easier to use. It can be assumed that eye tracking technology will soon be usable in everyday devices such as laptops or mobile phones. The exploitation of data recorded in the background while the user performs daily tasks with photos has great potential to generate information without additional effort for the users.
The first part of this work deals with the labeling of image regions by means of gaze data in order to describe the depicted scenes in detail. Labeling takes place by assigning object names to specific photo regions. In total, three experiments were conducted to investigate the quality of these assignments in different contexts. In the first experiment, users decided by pressing a button whether a given object could be seen in a photo. In the second study, participants searched for specific photos in an image search application. In the third experiment, gaze data was collected from users playing a game whose task was to classify photos with respect to given categories. The results of the experiments showed that gaze-based region labeling outperforms baseline approaches in various contexts. In the second part, the most important photos in a collection are identified by means of visual attention for the creation of individual photo selections. Users freely viewed photos of a collection without any specific instruction on what to fixate, while their gaze paths were recorded. By comparing gaze-based and baseline photo selections to manually created selections, the value of eye tracking data for identifying important photos is shown. In the analysis, the characteristics of gaze data, for example inaccurate and ambiguous data, have to be considered. The aggregation of gaze data collected from several users is one suggested approach for dealing with this kind of data.
The results of the performed experiments show the value of gaze data as a source of information. It allows one to benefit from human abilities in areas where algorithms still struggle to perform satisfactorily.
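As an editorial illustration of gaze-based region labeling, the following Python sketch assigns the label of a search task to the photo region with the longest total fixation time; the fixation data is made up.

```python
# Sketch of gaze-based region labeling on toy data: the label of the search
# task is assigned to the photo region that received the most fixation time.
from collections import defaultdict

# Fixations recorded while a user searched for "dog" in one photo:
# (region_id, fixation duration in ms)
fixations = [("sky", 120), ("dog", 430), ("dog", 380), ("tree", 150)]

def label_region(fixations, label):
    """Return (region, label) for the region with the longest total fixation."""
    totals = defaultdict(int)
    for region, duration in fixations:
        totals[region] += duration
    best_region = max(totals, key=totals.get)
    return best_region, label

print(label_region(fixations, "dog"))  # ('dog', 'dog')
```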
We examine the systematic underrecognition of female scientists (the Matilda effect) by exploring the citation network of papers published in the American Physical Society (APS) journals. Our analysis shows that articles written by men (first author, last author, and dominant gender of authors) receive more citations than similar articles written by women (first author, last author, and dominant gender of authors) after controlling for the journal of publication, year of publication, and content of the publication. The statistical significance of the overlap between the lists of references was used as the measure of similarity between articles in our analysis. In addition, we found that men are less likely to cite articles written by women and women are less likely to cite articles written by men. This pattern leads to articles written by men receiving more citations than similar articles written by women, because the majority of authors who publish in APS journals are male (85%). We also observed that the Matilda effect is reduced when articles are published in the journals with the highest impact factors; in other words, people's evaluation of articles published in these journals is not significantly affected by the gender of the authors. Finally, we suggest a method that can be applied by editors of academic journals to reduce this evaluation bias to some extent: editors can identify missing citations using our proposed method to complete bibliographies. This policy can reduce the evaluation bias because we observed that papers written by female scholars (first author, last author, dominant gender of authors) miss more citations than articles written by male scholars (first author, last author, dominant gender of authors).
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there has been no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies, to minimizing the number of vocabularies, to staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used in a data source is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies by which data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
This thesis presents the MobileFacets system, which enables convenient faceted browsing and searching of semantic data on a mobile device. Depending on their local spatial context, users are presented with extensive information such as places, persons, organizations, or events. Based on the theory of facets, faceted browsing for exploring structured datasets is realized in a client application. The application relies on a local server that is responsible for handling client requests, connecting to external data sources, and preparing the structured data.
Expert-driven business process management is an established means for improving the efficiency of organizational knowledge work. Implicit procedural knowledge in the organization is made explicit by defining processes. This approach is not applicable to individual knowledge work due to its high complexity and variability. However, without explicitly described processes there is no analysis and efficient communication of best practices of individual knowledge work within the organization. In addition, the activities of individual knowledge work cannot be synchronized with the activities in the organizational knowledge work.
The solution to this problem is the semantic integration of individual knowledge work and organizational knowledge work by means of the pattern-based core ontology strukt. The ontology allows for defining and managing the dynamic tasks of individual knowledge work in a formal way and for synchronizing them with organizational business processes. Using the strukt ontology, we have implemented a prototype application for knowledge workers and have evaluated it in the use case of an architectural firm conducting construction projects.