In the master's thesis by Benjamin Waldmann entitled „Flusskrebse in Deutschland – Aktueller Stand der Verbreitung heimischer und invasiver gebietsfremder Flusskrebse in Deutschland; Überblick über die erfolgten Schutzmaßnahmen und den damit verbundenen Erfahrungen; Vernetzung der Akteure im Flusskrebsschutz", distribution maps for Germany were presented for the first time for all native and non-native crayfish species (decapod crustaceans). The thesis is based on extensive research and surveys on the distribution of the species in the federal states, addressed to the responsible authorities, institutions, species experts, and private individuals. The raw data were quality-assured, processed, and visualized in a geographic information system, so that nationwide distribution maps could be produced for each species on a ten-kilometre grid (UTM grid in the ETRS89 reference system). In addition, likewise based on extensive research and surveys, the various options for protective measures for native crayfish populations were presented and evaluated, and recommendations were derived from them. Particular attention was paid to the management of invasive non-native crayfish species and to dealing with the crayfish plague (Aphanomyces astaci). Finally, recommendations for networking the actors involved in crayfish conservation were given, and the contact persons in the individual federal states were listed.
Rudolf Virchow's conception of a humane, civic nursing care in the era of the Prussian Kulturkampf
On the disputes in the Prussian House of Representatives between Rudolf Virchow, the Ministry of Culture, and the Centre Party over the Catholic nursing orders, by Hilmar Conrad
The physician and liberal member of the Prussian House of Representatives Rudolf Virchow is known to many for the sentence "Omnis cellula e cellula"1 from his work 'Die Zellularpathologie'. With this doctrine and the underlying research he demonstrated that cells, in morphology and function, are the smallest unit of the human body.2 The later professor of medicine was born on 13 October 1821 in Schivelbein and died on 5 September 1902 in Berlin. The year 1848 must be regarded as a fateful year for Virchow in two respects. He was a military physician at the time and was sent by the Prussian government to Upper Silesia in the winter of 1847/48, where a typhus epidemic was raging. His task was the scientific analysis of the events. On his 80th birthday he said of this journey: "Those 16 days in Upper Silesia, 1848, were the decisive event of my life."3 Revolution had broken out in Berlin, and once back there he took part in the fighting for democracy.4 In the report on his journey he concluded that preventing such an epidemic in Upper Silesia would require democratization and education for the population; this was the beginning of social medicine. From 1859 he was a Berlin city councillor, and from 1861 a member of the Prussian House of Representatives for the left-liberal Deutsche Fortschrittspartei, which he had founded together with fellow campaigners. From 1880 to 1893 he was a member of the German Reichstag.5 For him, "[m]edicine […] is a social science, and politics is nothing but medicine on a large scale."6
What makes this person so interesting, then, is that he belonged to two influential groups that spoke about nursing care: politics and the medical profession.
On 9 November 1869, at the congress of the German women's associations in Berlin, Rudolf Virchow called for a form of nursing care that should be organized outside the churches:
"Let us organize entirely outside the church organization, let us organize entirely within civil society as such, according to purely human tasks, without any further secondary purpose."7
What Virchow demanded here is a turn towards the 'humane'8, as he called it and as he also explained in several debates in the House of Representatives. This is not surprising, since he was substantially involved in the Kulturkampf in parliament and a vehement advocate of secularization.9 The scholarly problem addressed here is to present Virchow's central statements on nursing care in the context of the debates conducted in the Prussian House of Representatives. The scholarly literature deals only very superficially with what was said before and after his contributions, even though these were debates. This is where the relevance of the thesis lies: the interest is, of course, in tracing Virchow's ideas about nursing care on the basis of his contributions in the Prussian House of Representatives, but also in reconstructing how the debates on nursing care in which Virchow took part unfolded as a whole.
Accordingly, the aim of this thesis is to work out which views Rudolf Virchow held about the nursing care of his time, how he defended them in the Prussian House of Representatives, how the views of his political opponents and of the Prussian government related to them, and, following from this, how the debates on nursing care in the Prussian House of Representatives proceeded.
1 Virchow, quoted in Schipperges (1994): Rudolf Virchow, p. 58.
2 Schipperges (2008): Virchow, Rudolf (Ludwig Carl), p. 257.
3 Andree (2006): Rudolf Virchow (1821–1902) im Spannungsfeld von Glauben, Kirche und Staat, p. 99.
4 Cf. ibid., pp. 99–100.
5 Cf. Schipperges (2008): Virchow, Rudolf (Ludwig Carl), pp. 257–258.
6 Schipperges (1994): Rudolf Virchow, p. 113. Italics in the original.
7 Virchow (1879): Die berufsmässige Ausbildung zur Krankenpflege, auch ausserhalb der kirchlichen Organisationen, p. 49. Letterspaced in the original.
8 Cf. Virchow (1875): Rede im Preußischen Abgeordnetenhaus, Berlin. 8. Mai 1875, p. 207.
9 Cf. Virchow (1873): Rede im Preußischen Abgeordnetenhaus, Berlin. 17. Januar 1873, p. 359.
Advanced Auditing of Inconsistencies in Declarative Process Models using Clustering Algorithms
(2021)
For an organization's business process to be compliant, it is essential to ensure that the process is consistent. Whether a process is consistent depends on the business rules it must follow: if the process adheres to these rules, it is compliant and efficient. For large processes, checking this is quite a challenge, and an inconsistency can quickly render a process non-functional, which is a severe problem for organizations. This thesis presents a novel auditing approach for handling inconsistencies from a post-execution perspective. The tool identifies run-time inconsistencies and visualizes them in heatmaps. These plots help modelers observe the most problematic constraints and make the right remodeling decisions. Many variables can be set in the tool to obtain different heatmap views and thus grasp all perspectives of the problem. The heatmap sorts and shows the run-time inconsistency patterns so that modelers can decide which constraints are highly problematic and should be remodeled. The tool can be applied to real-life data sets within a reasonable run-time.
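A minimal sketch of the kind of aggregation such a tool might perform (hypothetical traces and constraint names, not the thesis implementation): count how often each declarative constraint is violated per trace and render the matrix as a heatmap, sorted so the most problematic constraints appear first.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical run-time violations: (trace id, violated constraint) pairs.
violations = [
    ("trace_01", "Response(A,B)"), ("trace_01", "Precedence(C,D)"),
    ("trace_02", "Response(A,B)"), ("trace_03", "NotCoexistence(B,E)"),
    ("trace_03", "Response(A,B)"), ("trace_03", "Precedence(C,D)"),
]

traces = sorted({t for t, _ in violations})
constraints = sorted({c for _, c in violations})

# Build a traces x constraints violation-count matrix.
counts = np.zeros((len(traces), len(constraints)), dtype=int)
for t, c in violations:
    counts[traces.index(t), constraints.index(c)] += 1

# Sort constraints by total violations so the most problematic come first.
order = np.argsort(-counts.sum(axis=0))
counts, constraints = counts[:, order], [constraints[i] for i in order]

fig, ax = plt.subplots()
ax.imshow(counts, cmap="Reds")   # heatmap of violation counts
ax.set_xticks(range(len(constraints)), constraints, rotation=45, ha="right")
ax.set_yticks(range(len(traces)), traces)
fig.tight_layout()
plt.show()
```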
Point Rendering
(2021)
In this thesis, different methods for rendering point data are presented and compared with each other. The methods fall into two categories. On the one hand, visual methods are introduced that deal strictly with displaying point primitives. The main problem here lies in the depiction of surfaces, since point data, unlike traditional triangle meshes, does not contain any connectivity information. On the other hand, data structures are shown that enable real-time rendering of large point clouds. Point clouds often contain large amounts of data, since they are mostly generated by 3D scanning processes such as laser scanning and photogrammetry.
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually. The approach taken for optimization is then generalized into a recipe. Programmers can use it as a guide when approaching large and complex GPU programs. In addition, the per-warp job queue, a design pattern used for load balancing among threads in a CUDA thread block, is discussed in detail.
Social networks play a decisive role in the everyday life of pupils. In this master's thesis, a concept for displaying profile suggestions within the social network "InstaHub", a tool programmed specifically for computer science lessons on the topic of databases, was developed. One hurdle was that the established social networks disclose little or no information about how they compute profile or friendship suggestions. Therefore, the nature of relationships between people in non-internet-based and internet-based social networks, as well as the reasons for relationships between people in these networks, was first laid out. Based on observations of suggestions in other social networks and on the user data stored in InstaHub, an algorithm for profile suggestions in InstaHub was designed and implemented together with a suitable visualization. The second part of the thesis is a teaching unit for upper secondary school (Sekundarstufe II) on the dangers of generating and processing personal data. In this teaching unit, the profile suggestions in InstaHub, which are built on the data InstaHub collects about its users, serve as an entry point into the topic. The focus is then widened from social networks to other online services and to the processing and passing on of such data.
This thesis focuses on approximate inference in assumption-based argumentation frameworks. Argumentation is an important idea in the computerization of theoretical and practical reasoning in AI, where constructing and exchanging arguments is a way of performing scientific reasoning. The fundamental approach in this field is the abstract argumentation framework developed by Dung. Assumption-based argumentation can be regarded as an instance of abstract argumentation with structured arguments. When facing large amounts of data, a challenge of reasoning in assumption-based argumentation is how to construct arguments and resolve attacks over a given claim with minimal computational cost while maintaining acceptable accuracy. This thesis proposes and investigates approximate methods that randomly select and construct samples of frameworks based on graphical dispute derivations to solve this problem. The presented approach aims to improve reasoning performance and to reach an acceptable trade-off between computation time and accuracy. The evaluation shows that for reasoning in assumption-based argumentation, the running time is generally reduced, at the cost of slightly lower accuracy, by randomly sampling and constructing inference rules for potential arguments over a query.
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics that is able to model materials that were previously very challenging to animate [1, 2]. Apart from simulating single materials, the simulation of multiple materials that interact with each other introduces new challenges. This is the focus of this thesis. It will be shown that the self-collision capabilities of the MPM can naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It will furthermore be shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be achieved if multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward and simple way and takes advantage of the rasterization pipeline to resolve write conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
„La liaison est un phénomène complexe dont la phénoménologie est encore aujourd’hui sujette à recherches et à débats. Dans la littérature classique, orthoépique ou descriptive, comme dans les recherches les plus actuelles, la liaison est considérée comme un phénomène multi-paramétrique et tous les niveaux linguistiques sont convoqués : phonologie, prosodie et syllabation, morphologie, syntaxe, lexique et sémantique, diachronie, orthographe et différentiation des styles [...] toutes les dimensions de la variation externe : variation dans le temps, dans l’espace géographique et dans l’espace social, variation dans l’espace stylistique des genres de discours“
(Eychenne/Laks 2017:1).
Dieses Zitat beschreibt die Liaison als ein sehr komplexes, von vielen Parametern beeinflusstes Phänomen. Wie gehen Lernende 1 mit einem solchen Phänomen um? Welche Liaison realisie-ren sie wie häufig? Welche Fehler treten auf? Welche Gründe gibt es für diese Fehler? Welche Auswirkungen hat ein längerer Auslandsaufenthalt des Lernenden in einem französischsprachi-gen Land auf die Produktion von Liaisons? Gibt es Unterschiede zwischen dem Erwerb der Liaison bei Kindern mit Französisch als Erstsprache (L1) und Lernenden des Französischen als Fremdsprache (L2)?
Auf all diese Fragen möchte ich im Laufe der vorliegenden Arbeit eingehen. Nach dem Zusam-mentragen einiger grundlegender Fakten über die Liaison soll daher ein Korpus mit französi-schen Sprachaufnahmen von deutschen Studierenden ausgewertet werden. Die Ergebnisse wer-den im Anschluss präsentiert und zunächst mit Resultaten von Kindern mit Französisch als L1 sowie anschließend mit Ergebnissen anderer Studien über Französischlernende verglichen.
The industry standard Decision Model and Notation (DMN) has enabled a new way of formalizing business rules since 2015. Rules are modeled in so-called decision tables, which are defined by input columns and output columns. Furthermore, decisions are arranged in a graph-like structure (the DRD level), which creates dependencies between them. Given an input, the decisions can be requested by appropriate systems, and the activated rules produce output for further use. However, modeling mistakes produce erroneous models, and such errors can occur in the decision tables as well as at the DRD level. Following the Design Science Research Methodology, this thesis introduces the implementation of a verification prototype for the detection and resolution of these errors during the modeling phase. The basics presented provide the theoretical foundation needed for the development of the tool. The thesis further presents the architecture of the tool and the implemented verification capabilities. Finally, the created prototype is evaluated.
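To illustrate one typical decision table error such a verification step can look for, here is a small, hypothetical sketch (toy rules, not the prototype's code) that reports pairs of rules whose input conditions overlap, i.e. some input can trigger both rules:

```python
from itertools import combinations

# A toy decision table: each rule maps input conditions to an output.
# Conditions are either a (low, high) range or a set of allowed values.
rules = [
    {"age": (18, 30), "status": {"gold", "silver"}, "output": "discount 10%"},
    {"age": (25, 40), "status": {"gold"},           "output": "discount 20%"},
    {"age": (41, 99), "status": {"bronze"},         "output": "no discount"},
]

def conditions_overlap(a, b):
    """Two conditions overlap if some input value satisfies both."""
    if isinstance(a, tuple) and isinstance(b, tuple):   # numeric ranges
        return a[0] <= b[1] and b[0] <= a[1]
    return bool(set(a) & set(b))                        # enumerations

def rules_overlap(r1, r2, inputs):
    return all(conditions_overlap(r1[i], r2[i]) for i in inputs)

inputs = ["age", "status"]
for (i, r1), (j, r2) in combinations(enumerate(rules, start=1), 2):
    if rules_overlap(r1, r2, inputs):
        note = " with different outputs!" if r1["output"] != r2["output"] else ""
        print(f"Rules {i} and {j} overlap{note}")
```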
On-screen interactive presentations have gained immense popularity in the domain of attentive interfaces recently. These attentive screens adapt their behavior according to the user's visual attention. This thesis aims to introduce an application that enables such attentive interfaces to change their behavior not only according to gaze data but also according to facial features and expressions. The modern era requires new ways of communication and publication for advertisement, and ads need to be targeted more specifically to people's interests, age, and gender. When advertising, it is important to get a reaction from the user, but not every user is interested in providing feedback. In such a context, more advanced techniques are required that collect the user's feedback effortlessly. The main problem this thesis intends to solve is to apply advanced gaze and face recognition techniques to collect data about users' reactions to different ads being played on interactive screens. We aim to create an application that enables attentive screens to detect a person's facial features, expressions, and eye gaze. With eye gaze data we can determine interests, and from facial features, age and gender can be estimated. All this information helps in optimizing the advertisements.
Blockchain in Healthcare
(2020)
The underlying characteristics of blockchain can facilitate data provenance, data integrity, data security, and data management, and it has the potential to transform the healthcare sector. Since the introduction of Bitcoin in the fintech industry, blockchain technology has been gaining a lot of traction, and its purpose is not limited to finance. This thesis highlights the inner workings of blockchain technology and its application areas together with possible existing solutions. Blockchain could lay the path for a new revolution in conventional healthcare systems. We present how individual sectors within the healthcare industry could use blockchain and which solutions exist. We also present our own concept, based on the Hyperledger framework, to improve the existing paper-based prescription management system. The results of this work suggest that healthcare can benefit from blockchain technology and from the new ways in which patients can be treated.
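To make the "inner workings" concrete, here is a minimal, hypothetical sketch (plain Python, not the Hyperledger-based concept from the thesis) of how blocks are chained by hashes so that tampering with a stored record, for example a prescription, invalidates the chain:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (excluding its own hash) deterministically.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False        # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False        # link to the previous block is broken
    return True

chain = []
add_block(chain, {"patient": "anon-42", "prescription": "drug A, 20 mg"})
add_block(chain, {"patient": "anon-42", "prescription": "drug B, 5 mg"})
print(is_valid(chain))                                  # True
chain[0]["data"]["prescription"] = "drug A, 200 mg"     # tamper with a record
print(is_valid(chain))                                  # False
```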
Since the invention of the U-Net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. Such architectures have also been applied successfully in other domains, winning numerous competitions in recent years. The transfer learning technique has additionally created an opportunity to push state-of-the-art benchmarks to a higher level. Using this approach is beneficial for the medical domain, as collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with Deep Learning and make three main contributions and release experimental results that have practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on the dataset of the cervical spine MRI scans. Second, we use transfer learning from models trained on the Imagenet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on the bias field corrected and raw MRI data. All code to reproduce results is publicly available online.
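As an illustration of the transfer-learning comparison described above, the following sketch (hypothetical, using torchvision's ResNet-18 as a stand-in encoder rather than the exact networks from the thesis) builds one ImageNet-pretrained and one randomly initialized backbone:

```python
import torch
from torchvision import models

def make_backbone(pretrained: bool) -> torch.nn.Module:
    # ImageNet-pretrained weights vs. random initialization.
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    net = models.resnet18(weights=weights)
    net.fc = torch.nn.Identity()   # drop the classification head, keep features
    return net

pretrained_encoder = make_backbone(pretrained=True)
random_encoder = make_backbone(pretrained=False)

x = torch.randn(2, 3, 224, 224)   # dummy batch, e.g. MRI slices mapped to 3 channels
with torch.no_grad():
    print(pretrained_encoder(x).shape)   # torch.Size([2, 512])
    print(random_encoder(x).shape)       # torch.Size([2, 512])
```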
Constituent parsing attempts to extract syntactic structure from a sentence. Such parsers are helpful in many NLP applications, for example grammar checking, question answering, and information extraction. This thesis is about implementing a constituent parser for the German language using neural networks. In the past, recurrent neural networks have been used for building parsers as well as for many other NLP applications. In this work, self-attention modules are used intensively to model sentences effectively. With multi-layered self-attention networks, constituent parsing achieves a 93.68% F1 score. This is improved even further by using both character and word embeddings as the input representation: an F1 score of 94.10% was the best achieved by the constituent parser using only the provided dataset. With the help of external datasets such as the German Wikipedia, pre-trained ELMo models are used along with self-attention networks, achieving a 95.87% F1 score.
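As a rough sketch of the kind of self-attention encoder block such a parser stacks (a generic PyTorch example, not the thesis code), one layer could look like this:

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """One Transformer-style encoder block: self-attention plus feed-forward."""
    def __init__(self, d_model=128, n_heads=8, d_ff=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, pad_mask=None):
        # Each token attends to every other token in the sentence.
        a, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + a)
        return self.norm2(x + self.ff(x))

tokens = torch.randn(4, 20, 128)        # batch of 4 sentences, 20 word embeddings each
encoded = SelfAttentionBlock()(tokens)  # contextualized representations for span scoring
print(encoded.shape)                    # torch.Size([4, 20, 128])
```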
This thesis is devoted to the challenges of and solutions for human resources management (HRM) in international organizations. The aim is to investigate methodological approaches to assessing HRM challenges and solutions, to apply them in practice, and to develop ways of improving the HRM of a particular enterprise. The practical research question investigated is: "Is the Ongoing Professional Development – Strategic HRM (OPD-SHRM) model a better solution for the HRM system of PrJSC "Philip Morris Ukraine"?"
To achieve the aim of this work and to answer the research question, we have studied theoretical approaches to explaining and assessing HRM in section 1, analyzed the HRM system of an international enterprise in section 2, and then synthesized theory and practice to find intersection points in section 3.
Research findings indicate that the main challenge of HRM is to balance individual and organizational interests. Implementation of OPD-SHRM is one of the solutions. Switching the focus from satisfaction towards success will bring both tangible and intangible benefits for individuals and the organization. In the case of PrJSC "Philip Morris Ukraine", the maximum forecasted increase is 330% in net profit, 350% in labor productivity, and 26% in the Employee Development and Engagement Index.
Current political issues are often reflected in social media discussions, gathering politicians and voters on common platforms. As these discussions can affect the public perception of politics, the inner dynamics and backgrounds of such debates are of great scientific interest. This thesis takes user-generated messages from an up-to-date dataset of considerable relevance, treats them as time series, and applies a topic-based analysis of inspiration and agenda setting to them. The Institute for Web Science and Technologies of the University of Koblenz-Landau has collected Twitter data generated by candidates of the European Parliament Election 2019. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and the media on online debates. An algorithm to cluster tweets into topical threads is introduced. Subsequently, sequential association rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
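A minimal sketch of one common way to group short texts into topical clusters (TF-IDF plus k-means via scikit-learn; the thesis introduces its own clustering algorithm, so this is only an illustration with made-up tweets):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "Climate policy must be the priority of the next parliament",
    "We need stronger EU climate targets now",
    "Digital single market rules should protect small businesses",
    "New copyright rules threaten the open internet",
    "Carbon pricing is the fairest climate instrument",
    "Platform regulation and copyright reform go hand in hand",
]

# Represent each tweet as a TF-IDF vector and group similar tweets.
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, tweet in sorted(zip(labels, tweets)):
    print(label, tweet)
```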
Software architectures that have grown in an uncontrolled way are typically characterized by missing or poorly comprehensible structures. Possible reasons for this are, for example, inadequate definitions or slow erosion. This is also known as a "Big Ball of Mud". In the long term, such architectural deficiencies not only increase development costs but can ultimately prevent changes entirely.
The software architecture therefore requires continuous further development in order to counteract such effects. A good software architecture supports software development and increases productivity. At the level of source code, established procedures for the controlled improvement of quality already exist. In contrast, there are no generally applicable procedures for improving a software architecture that can be applied independently of the use case. This is where the present thesis comes in.
Previous work, on the one hand, addresses only parts of the problem. On the other hand, procedures for making architectural decisions do exist, but they operate on a highly abstract level without practical examples. This thesis presents a lightweight procedure for the targeted improvement of a software architecture. The procedure is based on a generic problem-solving process. On this basis, a process for solving problems of a software architecture has been developed. To limit the scope, the thesis focuses on architectural problems arising from required variability and from external dependencies.
The scientific methodology underlying this thesis operates within the framework of Design Science Research (DSR). Over several iterations, a procedure was developed that is aimed at software developers with two to three years of experience and knowledge of the basics of software development and software architecture. Five steps, including references to relevant literature, then guide users through the process of improving a software architecture in a targeted way.
Belief revision is the subarea of knowledge representation which studies the dynamics of the epistemic states of an agent. In the classical AGM approach, contraction, as part of belief revision, deals with the removal of beliefs from knowledge bases. This master's thesis presents the study and implementation of concept contraction in the Description Logic EL. Concept contraction deals with the following situation: given two concepts C and D, where C is subsumed by D, how can C be changed so that it is no longer subsumed by D, while remaining as similar as possible to the original C? This approach to belief change differs from other related work because it deals with contraction at the level of concepts rather than TBoxes and ABoxes in general. The main contribution of the thesis is the implementation of concept contraction. The implementation provides insight into the complexity of contraction in EL, which is tractable since the main inference task in EL is also tractable. The implementation consists of the design of five algorithms that are necessary for concept contraction. The algorithms are described, illustrated with examples, and analyzed in terms of time complexity. Furthermore, we propose a new approach for a selection function adapted to concept contraction. The selection function uses metadata about the concepts in order to select the best candidates from an input set. The metadata is modeled in a framework that we have designed, based on standard metadata frameworks. As an important part of concept contraction, the selection function is responsible for selecting the best concepts that are as similar as possible to the concept C. Lastly, we have successfully implemented concept contraction in Python, and the results are promising.
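A toy illustration of the idea (deliberately much simpler than the EL algorithms from the thesis): if concepts are modeled as plain sets of atomic conjuncts, then C is subsumed by D exactly when C contains every conjunct of D, and a minimal contraction drops just enough conjuncts to break that subsumption while staying close to C.

```python
from itertools import combinations

def subsumed_by(c: frozenset, d: frozenset) -> bool:
    # In this toy model, C is subsumed by D iff C contains every conjunct of D.
    return d <= c

def contract(c: frozenset, d: frozenset) -> frozenset:
    """Remove as few conjuncts as possible so that C is no longer subsumed by D."""
    if not subsumed_by(c, d):
        return c
    for k in range(1, len(c) + 1):                 # try the smallest removals first
        for removed in combinations(c & d, k):
            candidate = c - set(removed)
            if not subsumed_by(candidate, d):
                return candidate
    return frozenset()

C = frozenset({"Person", "Employed", "HasDegree"})
D = frozenset({"Person", "Employed"})
print(contract(C, D))   # e.g. frozenset({'Employed', 'HasDegree'}): one conjunct of D dropped
```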
Constructing a business process model manually is a highly complex and error-prone task which takes a lot of time and requires deep insight into the organizational structure, its operations, and its business rules. To improve the output of business analysts dealing with this task, researchers have introduced different techniques that support them during construction with helpful recommendations. These recommendation systems vary in what they recommend in the first place as well as in the calculations taking place under the hood to recommend the most fitting element to the user. After a broad introduction to the field of business process modeling and its basic recommendation structures, this work takes a closer look at diverse proposals and descriptions published in the current literature regarding implementation strategies for effectively and efficiently assisting modelers during business process model creation. A critical analysis of the selected literature points out the strengths and weaknesses of the approaches, studies, and their descriptions. As a result, the final concept matrix in this work gives a precise and helpful overview of the key features and recommendation methods used and implemented in previous research, pinpointing an entry into future work without the downsides already spotted by fellow researchers.
Commonsense reasoning can be seen as a process of identifying dependencies amongst events and actions. Understanding the circumstances surrounding these events requires background knowledge with sufficient breadth to cover a wide variety of domains. In recent decades, there has been a lot of work on extracting commonsense knowledge, and a number of these projects provide their collected data as semantic networks such as ConceptNet and CausalNet. In this thesis, we attempt to tackle the Choice Of Plausible Alternatives (COPA) challenge, a problem set with 1000 questions written in multiple-choice format with a premise and two alternative choices for each question. Our approach differs from previous work by using shortest paths between concepts in a causal graph with the edge weight as a causality metric. We use CausalNet as the primary network and implement a few design choices to explore the strengths and drawbacks of this approach, and we propose an extension using ConceptNet by leveraging its commonsense knowledge base.
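A small sketch of the underlying idea (with a made-up toy graph and weights, not the actual CausalNet data): score each answer candidate by the weighted shortest-path distance from the premise concept and pick the closer one.

```python
import networkx as nx

# Toy causal graph: lower edge weight = stronger assumed causal link.
G = nx.Graph()
G.add_weighted_edges_from([
    ("rain", "wet street", 0.2), ("wet street", "slip", 0.4),
    ("rain", "umbrella", 0.3), ("umbrella", "stay dry", 0.3),
    ("sunshine", "ice cream", 0.5),
])

def score(premise: str, alternative: str) -> float:
    try:
        return nx.shortest_path_length(G, premise, alternative, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")

premise = "rain"
alternatives = ["slip", "ice cream"]
best = min(alternatives, key=lambda a: score(premise, a))
print(best)   # 'slip': the causally closer alternative is chosen
```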
Implementation of Agile Software Development Methodology in a Company – Why? Challenges? Benefits?
(2019)
The software development industry is evolving day by day, and the introduction of agile software development methodologies has been a tremendous structural change for companies. Agile transformation provides many opportunities and benefits to existing companies as well as to newly founded ones. Along with the benefits, an agile conversion also brings many unforeseen challenges. New entrants have the advantage of being flexible and able to cope with environmental, consumer, and cultural changes, whereas existing companies are often bound to rigid structures.
The goal of this research is to gain deep insight into agile software development methodology, the agile manifesto, and the principles behind it; the prerequisites a company must know before implementing agile software development; the benefits a company can achieve by implementing it; and the significant challenges a company can face during an agile implementation.
The research objectives of this study help to generate strong, well-motivated research questions. These research questions cover the cultural aspects of company agility, the values and principles of agile, and the benefits and challenges of agile implementation. The project management triangle shows how benefits in cost, time, and quality can be achieved by implementing agile methodologies. Six significant areas have been explored, which show the different challenges a company can face while implementing an agile software development methodology. In the end, after an in-depth systematic literature review, a conclusion is drawn, followed by open topics for future work and recommendations on the implementation of agile software development methodology in a company.
Business Process Querying (BPQ) is a discipline in the field of Business Process Management which helps experts to understand existing process models and accelerates the development of new ones. Its queries can fetch and merge these models, answer questions regarding the underlying process, and conduct compliance checking. Many languages have been deployed in this discipline, but two language types are dominant: logic-based languages use temporal logic to verify models as finite state machines, whereas graph-based languages use pattern matching to retrieve subgraphs of model graphs directly. This thesis aims to map the features of each language type to features of the other in order to identify strengths and weaknesses. As an example, the features of Computation Tree Logic (CTL) and the Diagramed Modeling Language (DMQL) are mapped to one another. CTL explores the valid state space and is thus better suited for behavioral querying. Lacking certain structural features and counting mechanisms, it is not appropriate for querying structural properties. In contrast, DMQL issues structural queries, and its patterns can reconstruct any CTL formula. However, they do not always achieve exactly the same semantics: patterns treat conditional flow as sequential flow by ignoring its conditions. As a result, retrieved mappings are invalid process execution sequences, i.e. false positives, in certain scenarios. DMQL can be used for behavioral querying if these are absent or acceptable. In conclusion, both language types have strengths and are specialized for different BPQ use cases, but in certain scenarios graph-based languages can be applied to both. Integrating the evaluation of conditions would remove the need for logic-based languages in BPQ completely.
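To make the contrast concrete, here is a toy sketch of the logic-based side (a hypothetical model and a hand-written check, not DMQL or a real model checker): checking the CTL property EF goal, i.e. whether some state satisfying "goal" is reachable in a process modeled as a finite state machine.

```python
from collections import deque

# A tiny process model as a finite state machine: state -> successor states.
transitions = {
    "start":       ["check_order"],
    "check_order": ["reject", "approve"],
    "approve":     ["ship"],
    "ship":        ["archive"],
    "reject":      ["archive"],
    "archive":     [],
}
labels = {"ship": {"goal"}}   # states labelled with atomic propositions

def ef(initial: str, prop: str) -> bool:
    """CTL 'EF prop': does some path from the initial state reach a state labelled prop?"""
    seen, queue = set(), deque([initial])
    while queue:
        state = queue.popleft()
        if prop in labels.get(state, set()):
            return True
        if state not in seen:
            seen.add(state)
            queue.extend(transitions.get(state, []))
    return False

print(ef("start", "goal"))   # True: the 'ship' state is reachable
```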
Data visualization is an effective way to explore data. It helps people to gain valuable insight into the data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job. Users have to manually explore all possible visualizations and decide on those that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and appealing appearance. To address these challenges we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a dataset, creates a list of combinations of data attributes for scoring, and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected by the first module is evaluated on these criteria. Only those visualizations that receive a sufficient score are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results are presented.
Clubs such as the Scouts rely on the work of their volunteer members, who have a variety of tasks to accomplish. There are often sudden changes in their organizing teams and offices, whereby planning steps are lost and inexperience in planning arises. Since these special requirements are not covered by existing tools, ScOuT, a planning tool for organization administration, is designed and developed in this work to support clubs with regard to the problems mentioned. The focus was on identifying and using various suitable guidelines and heuristic methods to create a usable interface. The developed product was evaluated empirically through a user survey in terms of usability.
The result of this study shows that the desired goal could already be reached to a high degree by applying the guidelines and methods. From this it can be concluded that, with the help of user-specific concept ideas and the application of suitable guidelines and methods, a suitable basis for a usable application to support clubs can be created.
The goal of simulations in computer graphics is the simulation of realistic material phenomena. To this end, internal and external acting forces are accumulated in each timestep. From these, new velocities are calculated that ultimately change the positions of geometry or particles. Position Based Dynamics omits the velocity layer and works directly on the positions. Constraints are a set of rules defining the simulated material, and these rules must not be violated throughout the simulation. If this happens, the violating positions are changed so that the constraints are fulfilled once again. In this work a PBD framework is implemented that allows simulations of solids and fluids. Constraints are solved using GPU implementations of Gauss-Seidel and Gauss-Jacobi solvers. The results are physically plausible simulations that are real-time capable.
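As a rough CPU-side sketch of the constraint projection such a framework performs (a single distance constraint solved Gauss-Seidel style on two particles; the thesis uses GPU solvers and further constraint types):

```python
import numpy as np

def project_distance(p1, p2, w1, w2, rest_length):
    """Move two particles so their distance matches rest_length (PBD projection)."""
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist == 0.0 or w1 + w2 == 0.0:
        return p1, p2
    correction = (dist - rest_length) * delta / dist / (w1 + w2)
    return p1 + w1 * correction, p2 - w2 * correction

# Two particles with inverse masses, constrained to stay 1.0 apart.
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
w1, w2 = 1.0, 1.0

for _ in range(10):   # Gauss-Seidel style: reuse updated positions immediately
    p1, p2 = project_distance(p1, p2, w1, w2, rest_length=1.0)

print(p1, p2, np.linalg.norm(p2 - p1))   # distance converges to 1.0
```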
The erosion of the closed innovation paradigm in conjunction with increasing competitive pressure has boosted the interest of both researchers and organizations in open innovation. Despite such rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for such reservation are the pertinent complexity of transitioning toward open innovation and a lack of understanding of the procedures required for such endeavors. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving, and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. As a result of this structured depiction of the transition process, a guideline is derived. This guideline includes the commonly employed actions of successful practitioners of open innovation, which may serve as a baseline for interested parties of the paradigm. With the derivation of the guideline and concise depiction of the individual transitional phases, this thesis consequently reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
With the appearance of modern virtual reality (VR) headsets on the consumer market, there has been the biggest boom in the history of VR technology. Naturally, this was accompanied by an increasing focus on the problems of current VR hardware. Especially the control in VR has always been a complex topic.
One possible solution is the Leap Motion, a hand tracking device that was initially developed for desktop use, but with the last major software update it can be attached to standard VR headsets. This device allows very precise tracking of the user’s hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion to provide a natural method of interaction between the user and the VR environment. After that, subject tests are performed to evaluate their performance and compare them to traditional VR controllers.
Data flow models in the literature are often very fine-grained, which carries over to the data flow analyses performed on them and thus decreases the analyses' understandability. Since a data flow model that abstracts from the majority of the modeled program's implementation details allows for potentially easier-to-understand data flow analyses, this master's thesis deals with the specification and construction of a highly abstracted data flow model and the application of data flow analyses to this model. The model and the analyses performed on it have been developed in a test-driven manner, so that a wide range of possible data flow scenarios could be covered. As a concrete data flow analysis, a static security check in the form of a detection of insufficient user input sanitization has been performed. To date, there is no data flow model on a similarly high level of abstraction. The proposed solution is therefore unique and enables developers without expertise in data flow analysis to perform such analyses.
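A minimal sketch of the kind of check described (a toy taint propagation over a hypothetical abstract data flow graph, not the thesis model): values coming from user input are marked tainted, sanitizers clear the mark, and tainted values reaching a sink are reported.

```python
# Toy abstract data flow graph: node -> nodes its data flows into.
edges = {
    "user_input": ["concat"], "constant": ["concat"],
    "concat": ["sql_query"],                      # reaches a sink without sanitization
    "user_input2": ["escape"], "escape": ["sql_query2"],
}
sources = {"user_input", "user_input2"}
sanitizers = {"escape"}
sinks = {"sql_query", "sql_query2"}

def tainted_sinks():
    """Propagate taint from sources along edges; sanitizers stop the propagation."""
    tainted, work = set(sources), list(sources)
    while work:
        node = work.pop()
        for succ in edges.get(node, []):
            if succ in sanitizers or succ in tainted:
                continue
            tainted.add(succ)
            work.append(succ)
    return tainted & sinks

print(tainted_sinks())   # {'sql_query'}: reached by unsanitized user input
```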
Despite the inception of new technologies at a breakneck pace, many analytics projects fail mainly due to the use of incompatible development methodologies. As big data analytics projects are different from software development projects, the methodologies used in software development projects could not be applied in the same fashion to analytics projects. The traditional agile project management approaches to the projects do not consider the complexities involved in the analytics. In this thesis, the challenges involved in generalizing the application of agile methodologies will be evaluated, and some suitable agile frameworks which are more compatible with the analytics project will be explored and recommended. The standard practices and approaches which are currently applied in the industry for analytics projects will be discussed concerning enablers and success factors for agile adaption. In the end, after the comprehensive discussion and analysis of the problem and complexities, a framework will be recommended that copes best with the discussed challenges and complexities and is generally well suited for the most data-intensive analytics projects.
This thesis describes a teaching sequence composed of the three areas "mathematical relations", "databases in social networks", and "data protection". A separate lesson plan is presented for each area.
In addition, as part of this thesis a program for visualizing the relations of the social network InstaHub was designed, which is presented after the description of the teaching sequence.
This thesis aims to adapt the simulation game "Datenschutz 2.0" developed by Dietz and Oppermann to the everyday life of today's pupils, to make it usable in upper secondary school (Sekundarstufe II), and to resolve the technical and legal problems of the game. The topic of data protection addressed by the game is anchored in the Rhineland-Palatinate computer science curriculum for upper secondary school. There, the term data protection appears in the unit "Datenerhebung unter dem Aspekt Datenschutz beurteilen" (assessing data collection from the perspective of data protection). However, the game does not collect data; instead, the data traces the pupils themselves leave behind are examined. In the basic course, this form of data protection is found in the proposed unit "Datensicherheit unter der Berücksichtigung kryptologischer Verfahren erklären und beachten" (explaining and observing data security with consideration of cryptological methods) under the topic of communication in computer networks. In the advanced course, data security appears in the unit and topic of the same name, and additionally in the unit "Datenerhebung unter dem Aspekt Datenschutz beurteilen" under the topic of the interaction between computer systems, the individual, and society.
The Internet of Things (IoT) is a fast-growing technological concept which aims to integrate various physical and virtual objects into a global network to enable interaction and communication between those objects (Atzori, Iera and Morabito, 2010). The application possibilities are manifold and may transform society and the economy in a way similar to the internet itself (Chase, 2013). Furthermore, the Internet of Things occupies a central role in the realisation of visionary future concepts such as the Smart City or Smart Healthcare. In addition, the utilisation of this technology promises opportunities for the enhancement of various sustainability aspects, and thus for the transformation towards a smarter, more efficient, and more conscious use of natural resources (Maksimovic, 2017). The action principle of sustainability is gaining increasing attention in societal and academic discourse. This is motivated by the partly harmful consumption and production patterns of the last century (Mcwilliams et al., 2016). With regard to sustainability, the advancing application of IoT technology also poses risks. Following the precautionary principle, these risks should be considered early (Harremoës et al., 2001). Risks of IoT for sustainability include the massive amounts of energy and raw materials required for the manufacturing and operation of IoT objects, and furthermore the disposal of those objects (Birkel et al., 2019). The exact relations in the context of IoT and sustainability are insufficiently explored to this point and do not constitute a central element in the discussion of this technology (Behrendt, 2019). Therefore, this thesis aims to develop a comprehensive overview of the relations between IoT and sustainability.
To achieve this aim, this thesis utilises the methodology of Grounded Theory in combination with a comprehensive literature review. The analysed literature primarily consists of research contributions in the field of Information Technology (IT). Based on this literature, aspects, solution approaches, effects, and challenges in the context of IoT and sustainability were elaborated. The analysis revealed two central perspectives in this context. IoT for Sustainability (IoT4Sus) describes the utilisation of IoT-generated information to enhance sustainability aspects. In contrast, Sustainability for IoT (Sus4IoT) focuses on sustainability aspects of the applied technology and highlights methods to reduce the negative impacts associated with the manufacturing and operation of IoT. The elaborated aspects and relations are illustrated in the comprehensive CCIS Framework. This framework represents a tool for capturing relevant aspects and relations in this context and thus supports awareness of the link between IoT and sustainability. Furthermore, the framework suggests an action principle for optimising the performance of IoT systems with regard to sustainability.
The central contribution of this thesis is the provision of the CCIS Framework and the contained information regarding the aspects and relations of IoT and sustainability.
Stylized image triangulation
(2019)
Stylized image triangulation is a popular tool in image processing. Results can be found on magazine covers or are bought as pieces of art. Common use cases are filters in mobile apps or programs dedicated to automated triangulation. This thesis is based upon a paper that achieves new results by formulating adaptive dynamic triangulation as an optimization problem. With this approach, new results concerning visual and technical quality are accomplished. One aim of this thesis is to make the approach accessible to as many users as possible. To reach users, a mobile app called Mesh is designed and implemented. A client-host system is presented which relieves the app of computing the result, which requires a lot of resources. Therefore, transferring the approach to a CPU-based solution is part of the thesis. Also, a webserver is implemented that handles the communication between app and algorithm. Mesh enables the user to send an arbitrary image to the server, from which the result can then be downloaded.
Part of the research deals with optimizing the method. As the main step, the gradient descent method, which minimizes an approximation error, is examined with three different approaches that re-define the movement of a point: limiting the directions of movement in a meaningful manner, using diagonal directions, and dynamically repositioning points. The results show no improvement in visual quality when using diagonal instead of horizontal and vertical steps. When a point is disallowed from returning to its last position, the limitation of step opportunities results in a loss of visual quality but reaches the intended global error earlier. The dynamic repositioning rests upon a vector-based solution that weights the directions and applies a factor to each of them. This results in a longer computation time but also in higher visual quality.
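A rough sketch of the kind of point update examined here (with a hypothetical error function and step set; the thesis minimizes the triangulation's image approximation error): each point tries a small set of discrete moves and keeps the one that reduces the error most.

```python
import numpy as np

def approximation_error(points: np.ndarray) -> float:
    # Hypothetical stand-in for the triangulation's image approximation error.
    target = np.array([[10.0, 10.0], [30.0, 20.0], [20.0, 40.0]])
    return float(np.sum((points - target) ** 2))

# Candidate moves: horizontal and vertical unit steps (the restricted direction set).
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

points = np.zeros((3, 2))
for _ in range(200):                      # descent over discrete step choices
    for i in range(len(points)):
        current = points[i].copy()
        best, best_err = current, approximation_error(points)
        for s in steps:
            points[i] = current + s
            err = approximation_error(points)
            if err < best_err:
                best, best_err = current + s, err
        points[i] = best                  # keep the best move (or stay in place)

print(points, approximation_error(points))
```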
Inspired by the work of Josh Bryan, another part of the research aims at imitating an artist's style. With the use of pseudo-random events combined with a geometry shader, a more natural look is achieved. This method illustrates a way of adding minor details to a rendering. To imitate an artist's work, a more complex and more precise triangulation is needed. As the last aspect, a render style is presented. The style uses a center for its effect, moving the triangles of a triangulation apart. The free choice of where to place the center enables the render style to be used in animations.
In order to plan the interior of a room, various programs for computers, smartphones, or head-mounted displays are available. The transfer to the real environment is a difficult task. Therefore, an augmented reality approach is developed to illustrate the planning in the real room. If several people want to contribute their ideas, conventional systems require them to work together on one device. The aim of this master's thesis is to design and develop a collaborative spatial planning application in augmented reality. The application is developed in Unity with ARCore and C#.
Willingness to pay and willingness to accept on a two-sided platform - The use case of DoBeeDo
(2019)
It is widely known that especially for technology-based start-ups, entrepreneurs need to set up the boundaries of the business and define the product/service to offer in order to minimize the risk of failure. The goal of this thesis is to not only emphasize the importance of the business model development and evaluation but also show an example customer validation process for an emerging start-up named DoBeeDo, which is a mobile app operating on a two-sided market. During the process of customer validation a survey has been conducted to evaluate the interest of the target groups as well as the fit of their expectations using the Willingness to Pay and Willingness to Accept measures. The paper includes an analysis and evaluation of the gathered results and assesses whether the execution of the Customer Development Model can be continued.
The status of Business Process Management (BPM) recommender systems is, as research shows, not quite clear. Recommenders became widely known during the rise of technology over the past decade, and since then several BPM recommender systems have emerged. However, not much research has been conducted in this field: it is not well known which technologies are used and how they are used. This master's thesis therefore aims at surveying the existing BPM recommender systems. The recommendations come in different shapes. They can be position-based, where an element is to be placed in front of or after another element, or where a missing link is auto-completed; alternatively, recommendations can be textual, filling in the labels of elements. The literature review for BPM recommender systems was carried out following a literature review framework which suggests five consecutive stages. The first stage is defining a scope for the research. The second is conceptualizing the topic by choosing key terms for the literature search. The third is the literature search itself. The fourth stage suggests choosing analysis features over which the literature is synthesized and compared. Finally, the framework recommends defining a research agenda that describes the reason for the literature review. By applying this methodology, this master's thesis surveyed 18 BPM recommender systems. The survey found that there are not many different technologies for implementing the recommenders. It was also found that the majority of the recommenders suggest nodes that are yet to come in the model, which is called forward recommending. Another result of the survey was the scarce use of textual recommendations for BPM labels. Finally, 18 recommenders are fewer than expected for a developing field, so the survey also found a shortage in the number of BPM recommender systems. The results indicate shortages in several aspects of the field of BPM recommender systems, and on this basis this master's thesis makes recommendations for future work.
As a result of technical progress, processes have to be adjusted. On the one hand, the digital transformation is absolutely necessary for every organization in order to operate efficiently and sustainably; on the other hand, accomplishing it is a tremendous challenge. The huge amount of personal data which accrues in this context is an additional difficulty.
Against the background of the General Data Protection Regulation (GDPR), this thesis focuses on process management and ways of optimizing processes in a human resources department. Besides the analysis of already existing structures and workflows, data management and especially the handling of personal data in an application process are examined. Both topics, process management and data protection, are vitally important in their own right, but it is necessary to implement the requirements of data protection at the appropriate position of the corresponding process. In this regard, the thesis deals with the research question of which barriers may arise from a sustainable process integration and to what extent the GDPR prevents an unobstructed workflow within the human resources department of the Handwerkskammer Koblenz. Additionally, answering the question of which subprocesses are suitable for process automation is highly significant.
Within the scope of these questions, business process management is the chosen approach. By means of the graphical representation standard Business Process Model and Notation, a process model with the relevant activities, documents, and responsibilities of the recruitment process is designed. Based on a target-actual comparison it becomes apparent that standardized process steps with few exceptions and a large amount of information are basically suitable for automation or partial automation. After the different phases of the recruitment process have been documented in detail, a workflow management system can export the transformed models, so the involved employees only have to work through a task list with assigned tasks. Against the background of the data protection regulations, access rights and due dates can be determined, so that only authorized employees have access to the personal data of applicants. Because violations of the GDPR can result in sanctions, implementing the relevant legal foundations within the recruitment process is necessary and appropriate. With regard to the defined research questions, it appears that in principle not every activity is appropriate for process automation. Subprocesses that are unpredictable or depend on a wide range of factors are especially unsuitable. Additionally, media discontinuities and redundant data input are obstacles to an enduring process integration. Nevertheless, a coherent consideration of business process management and the data protection regulations is required.
In this master's thesis, the principle of hybrid ray tracing, i.e. a rasterization pipeline that includes ray tracing techniques for certain effects, is explained. The implementation of an application is documented and explained which uses a hybrid approach in which ray tracing is used to calculate shadows, ambient occlusion, and reflections and combines these with direct lighting. Hybrid ray tracing is based on the idea of combining the performance and flexibility of rasterization-based approaches with ray tracing in order to overcome the limitation of not being able to access the complete surrounding geometry at any point in the scene.
While describing the implementation of said application, the RTX API used for ray tracing is explained, as well as Vulkan, the graphics API used.
Based on the results and the insights gained while using the RTX API, it is assessed with regard to its usage scenarios and technical sophistication.
A gonioreflectometer is a device for measuring the reflection properties of arbitrary materials. In this work, such an apparatus is built from easily obtainable parts. Three stepper motors and 809 light-emitting diodes are controlled by an Arduino microcontroller. RGB images, which serve as reflection data, are captured with an industrial camera. Furthermore, a control software with several capture programs and a renderer for displaying the measured materials are implemented. These allow capturing and rendering entire bidirectional reflectance distribution functions (BRDFs), by which even complex anisotropic material properties can be represented. Although the results contain some artifacts due to shadows cast by the camera, these artifacts can be largely removed by algorithms such as inpainting. In addition, the gonioreflectometer is applied to other use cases: 3D scans, light field capturing, and light staging can be performed without altering the construction. The quality of these processes also meets expectations. Thus, the gonioreflectometer built in this work can be seen as a widely applicable and economical alternative to the devices of other publications.
This thesis deals with the question to what extent increasing digitization has an impact on work-life balance. Answering this question is the main goal of this study.
To reach this goal, a literature review is conducted, which establishes a direct link between the subjectively perceived work-life balance and the perceived stress level. With the help of Antonovsky's salutogenesis model (1997) from stress research, factors are identified that determine the perceived stress level and, linked to it, the perceived work-life balance. These stress-influencing factors are examined through a qualitative content analysis according to Mayring (2014) on the basis of a problem-centered interview.
The results suggest that digitization has an impact on all of the identified factors and, through them, on work-life balance. This shows that digitization influences us in almost every aspect of work and private life. Whether this impact on work-life balance is positive or negative depends on the individual who assesses the respective factor. Clear distinctions can be made between people working in IT-based jobs and those who do not. In comparison, people with IT-based jobs perceive the impact of digitization on their work-life balance as substantially more positive.
Tracking is an integral part of many modern applications, especially in areas like autonomous systems and Augmented Reality. There is a wide array of approaches for performing tracking; one that has only recently become a subject of research is the use of neural networks. In the scope of this master thesis, an application is developed which uses such a neural network for the tracking process. This also requires the creation of training data as well as the design and training of the neural network. Subsequently, the use of neural networks for tracking is analyzed and evaluated with respect to several aspects: the tracking quality for different degrees of freedom, the impact of the neural network on the application's performance, the amount of training data required, the influence of the network architecture, and the importance of providing depth data as part of the network's input. This should provide insight into how relevant this approach could be for adoption in future products.
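As a hedged illustration of the general idea (the architecture, input resolution and the six output degrees of freedom are assumptions, not the network used in the thesis), a minimal pose-regression CNN on RGB-D input could be sketched in Keras as follows:

```python
from tensorflow.keras import layers, models

def build_pose_net(input_shape=(128, 128, 4)):
    """Small CNN regressing a 6-DoF pose from an RGB-D patch (4 channels)."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(6),   # x, y, z translation and three rotation angles
    ])

model = build_pose_net()
model.compile(optimizer="adam", loss="mse")
# model.fit(rgbd_patches, poses, epochs=10)  # training data from a renderer
```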
Various sources (print media, television reports, etc.) repeatedly report that data protection competence among children and adolescents is in poor shape. The topic of data protection therefore deserves particular attention in computer science education.
As part of Mr. Hug's dissertation, a data protection competence model [source INFOS17] is being developed that can be used to measure the data protection competence of pupils aged 10 to 13.
In this master's thesis, existing teaching materials on the topic of data protection are collected and a teaching unit is developed from them. Own approaches are also presented in order to design a coherent, self-contained project that highlights current risks for pupils. The goal is to enable pupils to better assess their behaviour regarding data protection and to handle their personal data more responsibly. The teaching unit was tested in a field trial in a 6th-grade class at a Gymnasium.
The analysis of dietary transitions to a vegan diet carried out in this master's thesis, presented in the form of four portraits, explores the sociocultural aspects of these transition processes from an interdisciplinary perspective combining gastrosophy, ethnology and the philosophy of the lived body. These aspects include the culturally learned handling of food, the enactment and reflection of sensory perceptions in the process of eating, and the shaping of relations between the eater and that which is to be eaten. The analysis is guided by the research question: "Which bodily and sensory perceptions trigger a change of diet, and how do they accompany the process of transition?" The question thus places the focus on the (bodily) perceptive capacity of the human being and deliberately does not reduce nutrition to its physiological function of providing energy. In doing so, it ties into the discourse on the cultural turn in nutrition science and moves away from a purely metabolic measurement of food towards an understanding of nutrition as the practice of incorporating the other: eating as the enactment of relationships. Eating thus becomes visible as a socially shaped and bodily mediated process of experience that is constituted not only by the experience of taste but also by experiences of being affected. By bringing together philosophical perspectives, such as Harald Lemke's gastrosophy, which also conceives of eating as self-determination, and sociological theories, such as Pierre Bourdieu's concept of habitus, it becomes clear how the perceptive practice and capacity of the human being, in its bodily-sensory aspect, influences behaviour in dealing with food and its sources. The four portraits emerged from six months of ethnographic fieldwork and present, in anonymized form, the motivation, the course and the central conflict of each dietary transition. The portraits are highly individual and show how and why, for example, concern for one's own health or empathetic compassion for farm animals triggers changes or taboos in eating behaviour.
Recently, workflow control and compliance analysis for Enterprise Resource Planning systems have been in high demand. In this direction, this thesis presents the potential of developing a Workflow Management System on top of a large Enterprise Resource Planning system, covering business rule extraction, business process discovery, process design, integration and compliance analysis of the system. The usability, limitations and challenges of every applied approach are explained in depth for the case of an existing system named SHD ECORO.
Business rules have become an important tool for ensuring compliance of business processes. However, a collection of business rules can contain conflicting elements, which can lead to a violation of the compliance that is to be achieved. Such conflicting elements constitute a kind of inconsistency, or quasi-inconsistency, in the business rule base. The goal of this thesis is to investigate how such quasi-inconsistencies in business rules can be detected and analyzed. To this end, we develop a comprehensive library that allows results from the scientific field of inconsistency measurement to be applied to business rule formalisms that are actually used in practice.
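To illustrate the general idea of inconsistency measurement on a rule base (a toy propositional example with hypothetical rules, not the formalisms or the library developed in the thesis), one can count minimal inconsistent subsets by brute force:

```python
from itertools import combinations, product

# Toy business rules as propositional constraints over boolean variables.
VARS = ["premium_customer", "discount", "manual_review"]

rules = {
    "r1": lambda a: (not a["premium_customer"]) or a["discount"],             # premium -> discount
    "r2": lambda a: (not a["discount"]) or a["manual_review"],                # discount -> review
    "r3": lambda a: (not a["premium_customer"]) or (not a["manual_review"]),  # premium -> no review
    "r4": lambda a: a["premium_customer"],                                    # fact: premium
}

def satisfiable(subset):
    """Check by enumeration whether some assignment satisfies all rules."""
    for values in product([False, True], repeat=len(VARS)):
        assignment = dict(zip(VARS, values))
        if all(rules[r](assignment) for r in subset):
            return True
    return False

def minimal_inconsistent_subsets(rule_ids):
    mis = []
    for k in range(1, len(rule_ids) + 1):
        for subset in combinations(rule_ids, k):
            if not satisfiable(subset) and not any(set(m) <= set(subset) for m in mis):
                mis.append(subset)
    return mis

mis = minimal_inconsistent_subsets(list(rules))
print("minimal inconsistent subsets:", mis)
print("MI inconsistency measure:", len(mis))   # number of such subsets
```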
Most social media platforms allow users to freely express their opinions, feelings, and beliefs. However, in recent years the growing propagation of hate speech, offensive language, racism and sexism on social media outlets has drawn attention from individuals, companies, and researchers. Today, sexism both online and offline, in different forms including blatant, covert, and subtle language, is a common phenomenon in society. A notable amount of work has been done on identifying sexist content and computationally detecting sexism online. Although previous efforts have mostly used people's activities on social media platforms such as Twitter as a public and helpful source for collecting data, they neglect the fact that the method of gathering sexist tweets could be biased towards the initial search terms. Moreover, some forms of sexism could be missed, since some tweets containing offensive language could be misclassified as hate speech. Further, in existing hate speech corpora, sexist tweets mostly express hostile sexism, and the other forms of sexism which also appear online were to some degree disregarded. Besides, the manual creation of labeled datasets, relying on users to report offensive comments and on a tremendous effort by human annotators, is not only a costly and time-consuming process, but also raises the risk of discrimination under biased judgment.
This thesis generates a novel sexist and non-sexist dataset constructed via "UnSexistifyIt", an online web-based game that incentivizes players to make minimal modifications to a sexist statement with the goal of turning it into a non-sexist statement and convincing other players that the modified statement is non-sexist. The game applies the methodology of a "Game With A Purpose" to generate data as a side effect of playing and employs gamification and crowdsourcing techniques to enhance non-game contexts. When voluntary participants play the game, they help to produce non-sexist statements, which can reduce the cost of generating a new corpus. This work explores how diverse individual beliefs concerning sexism are. Further, the results highlight the impact of various linguistic features and content attributes on sexist language detection. Finally, this thesis can help to expand our understanding of the syntactic and semantic structure of sexist and non-sexist content, provides insights for building a probabilistic classifier that assigns single sentences to sexist or non-sexist classes, and lastly identifies a potential ground truth for such a classifier.
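A minimal sketch of such a probabilistic sentence classifier, assuming a toy corpus and a simple TF-IDF plus logistic regression pipeline rather than the model actually evaluated in the thesis, could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus; real training data would come from the game-generated
# sexist / non-sexist statement pairs.
texts = [
    "women belong in the kitchen",          # sexist
    "everyone should share the housework",  # non-sexist
    "girls are bad at math",                # sexist
    "some students struggle with math",     # non-sexist
]
labels = [1, 0, 1, 0]  # 1 = sexist, 0 = non-sexist

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word uni- and bigram features
    LogisticRegression(),                 # outputs class probabilities
)
clf.fit(texts, labels)

print(clf.predict_proba(["women are bad at science"])[0])
```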
Our work identifies fine-grained edits in the context of their neighbouring tokens in Wikipedia articles and clusters those edits according to similar neighbouring contexts. The neighbouring context is encoded into a vector space using word vectors. We evaluate the clusters returned by our algorithm on extrinsic and intrinsic metrics and compare them with previous work, and we analyse the relation between extrinsic and intrinsic measurements of fine-grained edit tokens.
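As an illustrative sketch of this pipeline (window size, vector dimension and the use of k-means are assumptions; `word_vectors` stands in for any pre-trained embedding lookup), encoding the neighbouring context and clustering the edits might look as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

def context_vector(tokens, index, word_vectors, window=2, dim=100):
    """Average the word vectors of tokens around an edited token."""
    left = tokens[max(0, index - window):index]
    right = tokens[index + 1:index + 1 + window]
    vectors = [word_vectors[t] for t in left + right if t in word_vectors]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def cluster_edits(edits, word_vectors, n_clusters=10, dim=100):
    """`edits` is a list of (token list, edited position) pairs; the cluster
    label of each edit is derived from its averaged context vector."""
    features = np.array([context_vector(toks, i, word_vectors, dim=dim)
                         for toks, i in edits])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
```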
The goal of this master thesis was to develop a CRM system for the Assist team of CompuGroup Medical that aids in integrating open innovation into the development of the Minerva 2.0 software. To achieve this, CRM methodology has been combined with Social Networking Systems, following the research of Lin and Chen (2010, pp. 11 – 30). To achieve the predefined goals, the literature on how to successfully implement a CRM system as well as an online community has been analyzed. Subsequently, the results have been applied to the development of the Minerva Community according to the guidelines of Design Science suggested by Hevner et al. (2004, pp. 75 – 104). The finished product is designed based on customer and management requirements and evaluated from a customer and company perspective.
The purpose of this thesis is to explore the sentiment distributions of Wikipedia concepts.
We analyse the sentiment of the entire English Wikipedia corpus, which includes 5,669,867 articles and 1,906,375 talks, by using a lexicon-based method with four different lexicons.
Also, we explore the sentiment distributions from a time perspective using the sentiment scores obtained from our selected corpus. The results obtained have been compared not only between articles and talks but also among four lexicons: OL, MPQA, LIWC, and ANEW.
Our findings show that among the four lexicons, MPQA has the highest and ANEW the lowest sensitivity to emotional expressions. Wikipedia articles show more sentiment than talks according to OL, MPQA, and LIWC, whereas Wikipedia talks show more sentiment than articles according to ANEW. Furthermore, sentiment shows a trend over time, and each lexicon has its own bias towards texts describing different things.
Moreover, our research provides three interactive widgets for visualising sentiment distributions for Wikipedia concepts regarding the time and geolocation attributes of concepts.
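For illustration, a lexicon-based scoring step of the kind used here can be sketched with a tiny stand-in lexicon (the actual lexicons are OL, MPQA, LIWC and ANEW; the normalisation below is an assumption):

```python
# Minimal lexicon-based scorer: the score of a text is the number of
# positive minus negative lexicon hits, normalised by text length.
POSITIVE = {"good", "great", "helpful", "excellent"}
NEGATIVE = {"bad", "poor", "wrong", "terrible"}

def sentiment_score(text):
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

print(sentiment_score("The article gives a good and helpful overview"))  # > 0
print(sentiment_score("This edit is plain wrong"))                       # < 0
```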
Innovationen prägen die Gesellschaft. Daher ist das Innovationsmanagement eine zentrale Aufgabe in Unternehmen. Erfolgreiche Innovationen können die Effizienz und Effektivität eines Unternehmens steigern, folglich ist das Ziel eines Innovationsmanagements, erfolgreiche Innovationen hervorzubringen. Innerhalb des Managements von Innovationen entstehen Herausforderungen, die auf Innovationsbarrieren zurückzuführen sind. Diese können zum Scheitern von Innovation führen. Um diese Art von Misserfolg zu vermeiden, kann ein Vorgehensmodell zur Identifizierung und zum Umgang mit Innovationsbarrieren einen erfolgsorientierten Innovationsprozess begünstigen.
Die Eisenbahnindustrie ist durch ihre Infrastruktur, ihre politische Abhängigkeit und Sicherheitsanforderungen gekennzeichnet. Sicherheit, als eines der Hauptthemen der Branche ist einerseits durch das Schutz- und Kontrollsystem der Politik geprägt und andererseits gestaltet und beeinflusst durch Forschung und Entwicklung von Unternehmen, Universitäten oder externen Entwicklern. Die Unfallvermeidung ist ein bedeutendes Thema in der Literatur zur Eisenbahnsicherheit, hierbei ist die Instandhaltung eines der Instrumente, die der Vorbeugung dienen. Die Prävention von Unfällen treibt den Bedarf von Innovatio-nen für die Eisenbahnsicherheit an. Dementsprechend ist die Thematik der Innovationsbarrieren auch für die Bahnindustrie von Bedeutung. Der Umgang mit diesen in dem konkreten Zusammenhang der Eisenbahnsicherheit soll anhand eines Vorgehensmodells systematisiert werden. Um das Modell insbesondere auf die Branche und den Schwerpunkt der Sicherheit abzustimmen, wird das Grundgerüst des Modells aus der Literatur erarbeitet und mit den recherchierten Kenntnissen der Branche erweitert. Basierend auf den Kenntnissen über Innovationsbarrieren im Schienenverkehr werden Schnittstellenprobleme als Ursache von den spezifischen Barrieren identifiziert. Dazu gehören beispielsweise die Interoperabilität und technische Harmonisierung zwischen Ländern wie der EU sowie die Zusammenarbeit zwischen Industrie und Wissenschaft oder externen Entwicklern. In dem Bezug zu der Branche konnte für das Modell eine Erweiterung erarbeitet werden. Es wurde herausgearbeitet, dass die Eisenbahnbranche weniger innovationsaktiv ist, als z. B. die Technologiebranche. Dies ist auf einen Mangel an Motivation zur Innovation zurückzuführen, der in der Branchenkultur aufgrund von der vergangenen Monopolstellung und dem hohen Sicherheitsanspruch zu einer risikoaversen Haltung führt. Daher muss die fortschreitende Liberalisierung des Eisenbahnmarktes abgeschlossen und die Sicherheitskonzepte neu strukturiert werden, um ein innovationsfreundliches Umfeld zu schaffen.
Letztendlich sollte das Vorgehensmodell in einer realen Umgebung getestet werden. Es wäre interessant zu erfahren, ob das entwickelte Modell zu den in dieser Arbeit entwickelten Ergebnissen hinsichtlich der mangelnden Motivation für Innovationen passt. Darüber hinaus scheint eine Validierung von Experten angebracht zu sein, da die Entwicklung des Modells von der Literaturrecherche abhängt
We examine the systematic underrecognition of female scientists (the Matilda effect) by exploring the citation network of papers published in the American Physical Society (APS) journals. Our analysis shows that articles written by men (first author, last author and dominant gender of authors) receive more citations than similar articles written by women (first author, last author and dominant gender of authors), after controlling for the journal of publication, year of publication and content of the publication. The statistical significance of the overlap between the lists of references was used as the measure of similarity between articles in our analysis. In addition, we found that men are less likely to cite articles written by women and women are less likely to cite articles written by men. This pattern leads to articles written by men receiving more citations than similar articles written by women, because the majority of authors who published in APS journals are male (85%). We also observed that the Matilda effect is reduced for articles published in the journals with the highest impact factors; in other words, people's evaluation of articles published in these journals is not significantly affected by the gender of the authors. Finally, we suggest a method that can be applied by editors of academic journals to reduce this evaluation bias to some extent: editors can identify missing citations using our proposed method in order to complete bibliographies. This policy can reduce the evaluation bias because we observed that papers written by female scholars (first author, last author, dominant gender of authors) miss more citations than articles written by male scholars (first author, last author, dominant gender of authors).
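The similarity criterion based on reference-list overlap can be illustrated with a small sketch that computes the significance of the overlap with a hypergeometric test (the population size and the exact statistical test used in the thesis are assumptions here):

```python
from scipy.stats import hypergeom

def reference_overlap_pvalue(total_papers, refs_a, refs_b):
    """Probability of observing at least the given reference overlap by chance.

    total_papers: size of the citable population (assumed)
    refs_a, refs_b: sets of reference identifiers of two articles
    """
    overlap = len(refs_a & refs_b)
    # P(X >= overlap) when drawing |refs_b| papers out of total_papers,
    # of which |refs_a| count as "successes".
    return hypergeom.sf(overlap - 1, total_papers, len(refs_a), len(refs_b))

a = {"p1", "p2", "p3", "p4", "p5"}
b = {"p3", "p4", "p5", "p6", "p7"}
print(reference_overlap_pvalue(total_papers=10000, refs_a=a, refs_b=b))
```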
Ontologies are valuable tools for knowledge representation and important building blocks of the Semantic Web. They are not static and can change over time. Changing an ontology can be necessary for various reasons: the domain that is represented by an ontology can change, or an ontology is reused and must be adapted to the new context. In addition, modeling errors could have been introduced into the ontology, which must be found and removed. The non-triviality of the change process has led to the emergence of ontology change as a field of research in its own right. The removal of knowledge from ontologies is an important aspect of this change process, because even the addition of new knowledge to an ontology potentially requires the removal of older, conflicting knowledge. Such a removal must be performed in a thought-out way, since a naïve change of concepts within the ontology can easily remove other, unrelated knowledge or alter the semantics of concepts in an unintended way [2]. For these reasons, this thesis introduces a formal operator for the fine-grained retraction of knowledge from EL concepts, which is partially based on the postulates for belief set contraction and belief base contraction [3, 4, 5] and the work of Suchanek et al. [6]. For this, a short introduction to ontologies and OWL 2 is given and the problem of ontology change is explained. It is then argued why a formal operator can support this process and why the Description Logic EL provides a good starting point for the development of such an operator. After this, a general introduction to Description Logic is given, including its history, an overview of its applications, and common reasoning tasks in this logic. Following this, the logic EL is defined. In the next step, related work is examined and it is shown why the recovery postulate and the relevance postulate cannot be naïvely employed in the development of an operator that removes knowledge from EL concepts. Following this, the requirements for the operator are formulated and properties are given which are mainly based on the postulates for belief set and belief base contraction. Additional properties are developed which make up for the non-applicability of the recovery and relevance postulates. After this, a formal definition of the operator is given and it is shown that the operator is applicable to the task of fine-grained removal of knowledge from EL concepts. In the next step, it is proven that the operator fulfills all the previously defined properties. It is then demonstrated how the operator can be combined with laconic justifications [7] to assist a human ontology editor by automatically removing unwanted consequences from an ontology. Building on this, a plugin for the ontology editor Protégé is introduced that is based on algorithms derived from the formal definition of the operator. The content of this work is then summarized and a final conclusion is drawn. The thesis closes with an outlook on possible future work.
Knowledge-based authentication methods are vulnerable to the shoulder-surfing phenomenon.
The widespread use of these methods, without addressing their limitations, could result in users' information being compromised. A user authentication method ought to be effortless to use and efficient, yet secure.
The problem we face concerning the security of PIN (Personal Identification Number) or password entry is shoulder surfing, in which a direct or indirect malicious observer can identify sensitive user information. To tackle this issue we present TouchGaze, which combines gaze signals and touch capabilities as an input method for entering a user's credentials. Gaze signals are primarily used for targeting and touch for selecting. In this work, we have designed three different PIN entry methods, all of which have similar interfaces. For the evaluation, these methods were compared with respect to efficiency, accuracy, and usability. The results revealed that although gaze-based methods require extra time for the user to get familiar with them, they are considered more secure. In terms of efficiency, they have an error margin similar to that of traditional PIN entry methods.
The objective of this master's thesis was to make instrumental water analysis in the field accessible to school students.
As the central measure of the thesis, a learning unit on water analysis with division of labour among the students was designed and evaluated for out-of-school chemistry teaching at upper secondary level (Sekundarstufe II). The design was based on the curricula of the subjects biology, geography, physics and chemistry of the state of Rhineland-Palatinate. To assess water quality, the digital measuring system Einstein Labmate was used for numerous physico-chemical parameters. In addition, the online portal FLUVIDAT Saar was used for a rudimentary identification of indicator organisms and for classifying the water body structure.
The accompanying evaluation addresses the research question of how far the designed unit is practical and appropriate for its target group in upper secondary chemistry teaching. This is examined by means of the constructs "current interest" and "cognitive load" using a questionnaire, a video analysis with a literature-based coding guide, and a readability index of the underlying experiment instructions.
The data analysis showed a high level of current interest among the students as well as an appropriate cognitive load. The assessment of germane cognitive load indicated that the comparatively low difficulty of the materials freed up cognitive capacity for schema construction.
This master's thesis analyses competence-based searching for further training and illustrates it using the example of the further training comparison portal EDUfittery.
The goal of this thesis was the implementation and integration of a competence assessment procedure into a comparison portal for further training. Furthermore, a so-called "matching" between competences and further training offers was to be developed on the basis of the competence assessment: the competences entered by the customer are linked with suitable further training suggestions, thus providing an individual customer experience in the search for further training. To make this possible, a theoretical competence schema is analysed at the beginning of the thesis, so that statements about assessment procedures and evaluation strategies can be made and critically examined, in particular with regard to a web-based measuring instrument. Building on an analysis of various competence assessment procedures in the online segment, it can be shown that although competence diagnostics offers a number of procedures, only a few are practicable and able to assess job-related action competences. Consequently, self-assessment of competences was chosen as the most suitable procedure for EDUfittery.
In the course of the thesis, two online surveys and one expert interview were conducted; both employees and HR staff were surveyed with regard to further training. Mr. Korn from the IHK Akademie Koblenz could be won as the expert. Based on the survey results and further research, a comprehensive business case for EDUfittery was prepared and illustrated with process diagrams. It becomes apparent that competence analyses have hardly been used in companies so far. A closer look at the current market situation also makes it clear that there is currently no platform that successfully links competences with further training in order to offer customers individual further training solutions. As a result of the thesis, it can therefore be stated that integrating competence analyses into the search for further training makes good sense. If implemented successfully, EDUfittery could establish itself as a helpful tool in the area of further training and competences and thus fundamentally support companies in mapping, analysing and developing competences.
This master thesis is an exploratory study to determine whether it is feasible to construct a subjectivity lexicon using Wikipedia. The key hypothesis is that all quotes in Wikipedia are subjective and all regular text is objective. The degree of subjectivity of a word, its "Quote Score", is determined based on the ratio of the word's frequency in quotations to its frequency outside quotations. The proportion of words in the English Wikipedia which are within quotations is found to be much smaller compared to those which are not in quotes, resulting in a right-skewed distribution and a low mean value of Quote Scores.
The methodology used to generate the subjectivity lexicon from the English Wikipedia text corpus is designed in such a way that it can be scaled and reused to produce similar subjectivity lexica for other languages. This is achieved by abstaining from domain- and language-specific methods, apart from using readily available English dictionary packages to detect and exclude stopwords and non-English words in the Wikipedia text corpus.
The subjectivity lexicon generated from the English Wikipedia is compared against other lexica, namely MPQA and SentiWordNet. It is found that words which are strongly subjective tend to have high Quote Scores in the generated lexicon. There is a large observable difference between the distribution of Quote Scores for words classified as strongly subjective and the distribution for words classified as weakly subjective or objective. However, weakly subjective and objective words cannot be differentiated clearly based on the Quote Score. In addition, a questionnaire is commissioned as an exploratory approach to investigate whether the subjectivity lexicon generated from Wikipedia could be used to extend the coverage of existing lexica.
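A minimal sketch of the Quote Score computation (the add-one smoothing and the use of relative frequencies are assumptions; the thesis' exact counting may differ) could look like this:

```python
from collections import Counter

def quote_scores(quoted_tokens, regular_tokens):
    """Ratio of a word's relative frequency inside quotations to its relative
    frequency outside quotations (add-one smoothing is an assumption here)."""
    in_q, out_q = Counter(quoted_tokens), Counter(regular_tokens)
    n_q, n_r = sum(in_q.values()), sum(out_q.values())
    scores = {}
    for word in set(in_q) | set(out_q):
        freq_in = (in_q[word] + 1) / (n_q + 1)
        freq_out = (out_q[word] + 1) / (n_r + 1)
        scores[word] = freq_in / freq_out
    return scores

quoted = "I absolutely love this wonderful idea".lower().split()
regular = "the article describes the idea in three sections".lower().split()
print(sorted(quote_scores(quoted, regular).items(), key=lambda kv: -kv[1])[:3])
```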
An augmented reality approach for exploring modular OSGi software systems is presented. The prototype is implemented using the Microsoft HoloLens. Modules, such as components and packages, are represented in a virtual city. This approach allows the user to explore the software architecture via intuitive navigation: voice input, gaze and gesture control. A multifunctional user interface is introduced that can be adapted to different target groups; many published visualizations lack a clear definition of their target group. The concept can easily be transferred to other forms of representation, such as the island metaphor. First results of an evaluation, obtained through short structured interviews, are presented. The participants had to solve four program comprehension tasks and to estimate their effort as well as their workload. The results form a good basis for further research in the field of software visualization in augmented reality.
Background
The new model of the knee orthosis Condyla 4 is intended to restrict the freedom of movement of the knee only slightly, so that many athletic movements remain possible. Nevertheless, the orthosis stabilizes the joint to such an extent that shearing movements are avoided and, during the rehabilitation phase after knee injuries and instabilities, sport can be resumed after only a short time.
Aim of the thesis
The aim of the master's thesis was to use motion analysis to examine the influence of the knee orthosis on the movement quality of the handstand. For this 2D analysis, the movements were filmed from two planes with several digital high-speed cameras (OptiTrack Flex 3). Using markers attached to anatomical landmarks, the recordings were processed on a computer with the MyoVideo software by means of automatic marker tracking. This made it possible to display the temporal courses of marker coordinates and joint angles.
Results
The results of this study confirmed that the examined knee orthosis Condyla 4 is suitable for sport and has no negative influence on the movement quality of the handstand. The use of the knee orthosis can therefore also be transferred to other elements from the field of gymnastics in which the lower extremities are exposed to similar loads.
A service excellence culture is determined by various factors: some support the realization of a service excellence culture, while others tend to inhibit excellent service. The internal environment of the organisation is the foundation of excellent service. While the employees are known to be the key to excellent service, it is up to the management to support their employees accordingly; especially empowerment and encouragement are known as supportive elements with regard to service-oriented behaviour of employees. The term internal excellence describes the optimal internal situation of the organisation for introducing a service excellence culture. Given internal excellence, the front-line employees create satisfaction and delight among the customers by working through the customers' problems with standardized as well as innovative processes. An inhibiting factor is the complexity of the customers' problems: with rising complexity, it becomes more difficult to solve the problem and to create customer satisfaction and customer delight. In the IT service industry, complexity has a special impact. Due to the characteristics of the industry, the problems encountered are often complicated and technically sophisticated. These characteristics lead to an increased complexity that the front-line employees have to deal with. To manage this challenge, several measures have to be adopted. The service employee can be supported by a clearly defined guideline that includes all relevant steps of the service process. Additionally, a learning culture can be fostered by creating a knowledge database in which service cases are documented. The technical support of the service employee by special software tools is another measure that supports the service excellence culture of the organisation.
The content aggregator platform Reddit has established itself as one of the most popular websites in the world. However, scientific research on Reddit is hindered because Reddit allows (and even encourages) user anonymity, i.e., user profiles do not contain personal information such as gender. Inferring the gender of users at large scale could enable the analysis of gender-specific areas of interest, reactions to events, and behavioral patterns. In this direction, this thesis proposes a machine learning approach for estimating the gender of Reddit users. By exploiting specific conventions in parts of the website, we obtain a ground truth for more than 190 million comments of labeled users. This data is then used to train machine learning classifiers and to gain insights about the gender balance of particular subreddits and the platform in general. By comparing a variety of classification approaches, we find that a character-level convolutional neural network achieves an F1 score of 82.3% on the task of predicting the gender of a user based on their comments. The score surpasses the 85% mark for frequent users with more than 50 comments. Furthermore, we find that female users are less active on the Reddit platform: on average they write fewer comments and post in fewer subreddits than male users.
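As a hedged sketch of the character-level CNN approach (vocabulary, sequence length and layer sizes are assumptions, not the classifier trained in the thesis), a minimal Keras model might be set up as follows:

```python
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN = 500   # characters per comment (assumption)
VOCAB = 128     # plain ASCII codes as the character vocabulary

def encode(comment):
    """Map a comment to a fixed-length sequence of character codes."""
    codes = [min(ord(c), VOCAB - 1) for c in comment[:MAX_LEN]]
    return np.array(codes + [0] * (MAX_LEN - len(codes)))

def build_char_cnn():
    return models.Sequential([
        layers.Embedding(VOCAB, 16, input_length=MAX_LEN),
        layers.Conv1D(128, 7, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(author is female)
    ])

model = build_char_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(np.stack([encode(c) for c in comments]), labels, epochs=3)
```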
The goal of this master's thesis is to design and prototype an introductory course to Computervisualistik (computer visualistics) with a focus on computer graphics. The course is intended to teach the fundamentals of computer graphics and to establish connections to other courses of the degree programme in order to create motivation and an understanding of the complex interrelations of the study contents in Computervisualistik; the current curriculum shows a noticeable deficit in this respect. For the introductory course, prototypical learning units were developed on the basis of the didactic method of moderation and using Unity. Concept and prototypes were evaluated with test persons without a background in information technology. The results showed that Unity offers a suitable environment for conveying the information: it was well accepted and easily accessible to the participants, even though the learning units themselves had minor weaknesses. Following the first evaluation phase, a qualitative survey was conducted with alumni of the Computervisualistik programme. The results confirmed the need for an introductory course providing orientation and fostering motivation and understanding of the broad range of topics in Computervisualistik.
The Microsoft Kinect made it possible for the first time to record synchronized colour and depth data (RGB-D) without large financial expenditure, opening up new possibilities for research. With advancing technology, mobile devices are also able to do more and more: Lenovo and Asus offer the first commercially available devices with RGB-D perception. With integrated functions for localization, environment recognition and depth perception through Google's Tango platform, first tests have already been performed in various areas of computer vision, e.g. mapping. This thesis examines to what extent a Tango device is suitable for object recognition. RGB-D data are extracted from the raw output of the Tango device and processed for object recognition. An overview of the current state of research and of some basics of the Tango platform is given, and existing approaches and methods for object recognition on mobile devices are examined. The implementation of the recognition is trained and tested on a self-created database of RGB-D images. In addition to presenting the results, improvements and extensions of the recognition are proposed.
This thesis deals with Responsible Research and Innovation (RRI). The objectives of the work are to clarify the term RRI and to identify the current state of research in the field. RRI is a concept significantly shaped by the European Commission (EC). It is composed of responsibility, research and innovation, which are considered individually to begin with. According to the definition by Clausen (2009) used here, responsibility is made up of a subject ("Who takes responsibility"), an object ("for what"), one or several addressees ("to whom") and authorities ("and why?"). In the conceptual chapter of this thesis, the role of organisations as subjects of responsibility is grounded in the fact that they work toward a certain purpose. More importantly, though, society, as an important addressee and authority of responsibility, increasingly calls for organisations to take responsibility for the consequences and wider impacts of their actions. The field of sustainability, with its pillars of economic, ecological and social issues, is considered a key object of responsibility. Innovation, and research as an integral part of it, are understood as complex processes aiming at reaching corporate goals in novel ways. By means of a literature analysis according to Webster & Watson (2002), the current state of research on RRI is explored. Most of the literature used is identified as conceptual in nature. Despite the great uncertainty that still prevails, four dimensions of RRI can be identified: Deliberation, Anticipation, Reflexivity, and Responsiveness. Taking into account the results of the literature review, the dimensions offered by the EC are classified as important activities rather than conceptual dimensions. Research and innovation in alignment with RRI involve stakeholders, particularly society, in the process right from the beginning. They also examine their potential impacts and continually reflect on their motivations and underlying values. The most important aspect is the adaptability of the process in response to the results from the other dimensions. All activities are based on responsibility and sustainability and are integrated through appropriate management and governance. Beyond conceptual work, little research has been done, especially with regard to the operationalisation and implementation of RRI. For the latter, embedding it in a stage-gate process according to Cooper (2010) is suggested here. However, the actual design of the conceptual dimensions still needs to be specified. In addition, the overall awareness of RRI needs to be sharpened and a commitment to RRI made.
Modern software projects are composed of several software languages, software technologies and different kinds of artifacts. Therefore, understanding the software project at hand, including the semantic links between its different parts, becomes a difficult challenge for a developer. One approach to address this issue is to document the software project with the help of a linguistic architecture. This kind of architecture can be described with the MegaL ontology. A remaining challenge is its creation, since this requires different kinds of skills. Therefore, this thesis proposes an approach for the automatic extraction of a linguistic architecture. The open source framework Apache Jena, which focuses on semantic web technologies like RDF and OWL, is used to define custom rules that are capable of inferring new knowledge based on the defined or already extracted RDF triples. The complete approach is tested in a case study on ten different open source projects. The aim of the case study is to extract a linguistic architecture describing the use of Hibernate in the selected projects. In the end, the result is evaluated with the help of different metrics, using an internal and an external approach.
This master's thesis addresses the evaluation of a voice-controlled solution with multimodal input in a production environment. The usability and user-friendliness of a selected spoken dialogue system were assessed. The assessment was carried out by means of user tests and a modified SASSI questionnaire; in addition, technical criteria such as the word error rate and the out-of-grammar rate were taken into account. For the experiment, two different scenarios from a real production environment were defined, in which the participants had to complete various tasks with the help of the test system. The interaction with the spoken dialogue system took place via spoken commands defined by a grammar; the voice commands were transmitted to the speech system using a WLAN headset. During the experiment, the participants' statements were recorded and the technical criteria noted.
The result of the evaluation shows that the spoken dialogue system exhibits a high quality in terms of usability and user-friendliness. The word error rate and the out-of-grammar rate were very low, and the system was rated clearly positively by the users. Nevertheless, some points of criticism were raised that can contribute to an improvement of the system.
Topic models are a popular tool for extracting concepts from large text corpora. These corpora tend to contain hidden meta groups, and the size relation of these groups is frequently imbalanced. Their presence is often ignored when applying a topic model. Therefore, this thesis explores the influence of such imbalanced corpora on topic models.
The influence is tested by training LDA on samples with varying size relations. The samples are generated from data sets containing large group differences (i.e., language) and small group differences (i.e., political orientation). The predictive performance on these imbalanced corpora is judged using perplexity.
The experiments show that the presence of groups in training corpora can influence the prediction performance of LDA. The impact varies due to several factors, including language-specific perplexity scores. The group-related prediction performance changes when the relative group sizes are varied, and the actual change varies between data sets.
LDA is able to distinguish between different latent groups in document corpora if the differences between the groups are large enough, e.g. for groups with different languages. The proportion of group-specific topics is under-proportional to the share of the group in the corpus and relatively smaller for minorities.
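The experimental setup of training LDA on an imbalanced corpus and judging groups by perplexity can be illustrated with a toy example (gensim is used here for illustration; corpus, group sizes and hyperparameters are assumptions):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Two toy "groups" of documents with an imbalanced size relation (9:1).
majority = [["election", "party", "vote", "government"]] * 9
minority = [["wahl", "partei", "stimme", "regierung"]] * 1
documents = majority + minority

dictionary = Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

# log_perplexity returns the per-word likelihood bound; perplexity is
# 2 ** (-bound), so a higher bound means better prediction for that group.
for name, docs in [("majority", majority), ("minority", minority)]:
    bow = [dictionary.doc2bow(d) for d in docs]
    print(name, "per-word bound:", lda.log_perplexity(bow))
```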
The output of eye tracking Web usability studies can be visualized for analysts as screenshots of the Web pages overlaid with the recorded gaze data. However, these screenshot visualizations are corrupted whenever fixations were recorded on fixed Web page elements at different scroll positions: the gaze data are not gathered on the fixed elements they refer to, but scattered across the scroll positions at which they were recorded. This problem drew our attention to finding an approach that links gaze data to their intended fixed elements and gathers them in one position on the screenshot. The approach builds upon the idea of creating the screenshot during the recording session, where images of the viewport are captured at the visited scroll positions and finally stitched into one Web page screenshot. Additionally, the fixed elements in the Web page are identified and linked to their fixations. For the evaluation, we compared the interpretation of our enhanced screenshot against the video visualization, which does not suffer from the problem. The results revealed that both visualizations deliver equally accurate interpretations. However, interpreting eye tracking Web usability studies using the enhanced screenshots outperforms the video visualizations in terms of speed and places lower temporal demands on the interpreters.
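The core mapping step, gathering fixations on fixed elements while shifting regular fixations by the scroll offset, can be sketched as follows (coordinate conventions and the fixed-element representation are assumptions for illustration):

```python
def fixation_to_screenshot(fx, fy, scroll_y, fixed_elements):
    """Map a fixation recorded in viewport coordinates onto the stitched
    full-page screenshot.

    fx, fy:         fixation position in the viewport
    scroll_y:       vertical scroll offset at recording time
    fixed_elements: list of (x, y, w, h) viewport rectangles of elements
                    with fixed CSS position (e.g. a sticky header)
    """
    for (ex, ey, ew, eh) in fixed_elements:
        if ex <= fx <= ex + ew and ey <= fy <= ey + eh:
            # Fixed elements keep their viewport position regardless of
            # scrolling, so their fixations are gathered at that position.
            return fx, fy
    # Regular content moves with the page: add the scroll offset.
    return fx, fy + scroll_y

print(fixation_to_screenshot(100, 30, 800, fixed_elements=[(0, 0, 1280, 60)]))
print(fixation_to_screenshot(100, 300, 800, fixed_elements=[(0, 0, 1280, 60)]))
```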
Mapping ORM to TGraph
(2017)
Object Role Modeling (ORM) is a semantic modeling language used to describe objects and their relations amongst each other. Both objects and relations may be subject to rules or ORM constraints.
TGraphs are ordered, attributed, typed and directed graphs. The type of a TGraph and its components, the edges and vertices, is defined using the schema language graph UML (grUML), a profiled version of UML class diagrams. The goal of this thesis is to map ORM schemas to grUML schemas in order to be able to represent ORM schema instances as TGraphs.
Up to this point, the preferred representation for ORM schema instances has been relational tables. Though mappings from ORM schemas to relational schemas exist, those publicly available do not support most of the constraints ORM has to offer.
Constraints can be added to grUML schemas using the TGraph query language GReQL, which can efficiently check whether a TGraph validates the constraint or not. The graph library JGraLab provides efficient implementations of TGraphs and their query language GReQL and supports the generation of grUML schemas.
The first goal of this work is to perform a complete mapping from ORM schemas to grUML schemas, using GReQL to specify constraints. The second goal is to represent ORM instances in the form of TGraphs.
This work gives an overview of ORM, TGraphs, grUML and GReQL and of the theoretical mapping from ORM schemas to grUML schemas. It also describes the implementation of this mapping, deals with the representation of ORM schema instances as TGraphs, and addresses the question of how grUML constraints can be validated.
The Internet of Things (IoT) has recently developed from the far-away vision of ubiquitous computing into very tangible endeavors in politics and economy, implemented in expensive preparedness programs. Experts predict considerable changes in business models that organizations need to address in order to respond to competition. Although there is a need to develop strategies for the upcoming transformations, the literature on organizational change has not yet turned to the specific changes related to this new technology. This work aims at investigating IoT-related organizational change by identifying and classifying different change types. It therefore combines the methodological approach of grounded theory with a discussion and classification of the identified changes, informed by a structured literature review of the organizational change literature. This includes a meta-analysis of case studies using a qualitative, exploratory coding approach to identify categories of organizational change related to the introduction of IoT. Furthermore, a comparison of the identified categories to former technology-related change is provided, using the examples of Electronic Business (e-business), Enterprise Resource Planning (ERP) systems, and Customer Relationship Management (CRM) systems. As its main result, this work develops a comprehensive model of IoT-related business change. The model presents two main themes of change, indicating that personal smart things will transform businesses as more personal devices are used that suggest and schedule actions of their users and try to avoid hazards. At the same time, the availability of information in organizations will further increase to a state where information is available ubiquitously. This will ultimately enable access to real-time information about objects and persons anytime and from any place. As a secondary result, this work gives an overview of concepts of technology-related organizational change in the academic literature.
The market for virtual reality is rapidly evolving with regard to its hardware components, and further applications result from this progress. In addition to the gaming market, virtual reality offers further possibilities and advantages in research; for instance, the technology enables the investigation of perceptual phenomena. The present thesis therefore aimed to examine the induced Roelofs effect in a virtual environment and in reality in front of a computer screen. The Roelofs effect describes a misperception that occurs when a person's midline and a frame surrounding a target are offset, which results in a false localisation of the target object. The present study thus provides insight into the influence of a computer screen on this effect; moreover, conclusions concerning the suitability of virtual reality for perceptual research are drawn. The results obtained in virtual reality indicated a greater degree of perceptual distortion. The fixed frame of the computer screen can therefore be assumed to serve as an additional orientation cue.
This thesis explores the possibilities of probabilistic process modelling for Computer Supported Cooperative Work (CSCW) systems in order to predict the behaviour of the users present in such a system. Toward this objective, the applicability, advantages, limitations and challenges of probabilistic modelling are examined in the context of CSCW systems. Finally, as the primary goal, seven models are created and examined to show the feasibility of probabilistic process discovery and of predicting user behaviour in CSCW systems.
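One simple probabilistic model of user behaviour, a first-order Markov chain over logged actions, can be sketched as follows (the event log and the choice of a first-order model are illustrative assumptions, not necessarily one of the seven models built in the thesis):

```python
from collections import Counter, defaultdict

def fit_transitions(event_logs):
    """Estimate first-order transition probabilities from user event logs."""
    counts = defaultdict(Counter)
    for log in event_logs:
        for current, nxt in zip(log, log[1:]):
            counts[current][nxt] += 1
    return {a: {b: c / sum(nxts.values()) for b, c in nxts.items()}
            for a, nxts in counts.items()}

def predict_next(model, current):
    """Most probable next action given the current one."""
    return max(model[current].items(), key=lambda kv: kv[1])

logs = [
    ["login", "open_document", "edit", "save", "logout"],
    ["login", "open_document", "comment", "logout"],
    ["login", "open_document", "edit", "save", "logout"],
]
model = fit_transitions(logs)
print(predict_next(model, "open_document"))  # -> ('edit', 0.66...)
```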
One of the greatest goals in computer graphics is the aesthetic representation of objects. In addition to conventional methods, another field focuses on non-photorealistic rendering. So-called example-based rendering is an area in which users can transfer their art style to a pre-computed 3D rendering using a hand-painted template. Some algorithms already provide impressive results, but most of these procedures are offline methods and are not able to produce results in real time. For this reason, this work presents a method that satisfies this condition. In addition, the influence of the run-time reduction on the results is investigated. Requirements are defined against which the method and its results are examined. Other methods in this field are referenced and compared with regard to their results.
The extensive literature in the data visualization field indicates that the process of creating efficient data visualizations requires the data designer to have a large set of skills from different fields (such as computer science, user experience, and business expertise). However, there is a lack of guidance on the visualization process itself. This thesis aims to investigate the different processes for creating data visualizations and to develop an integrated framework that guides the process and enables the user to create more useful and usable data visualizations. Firstly, existing frameworks in the literature are identified, analyzed and compared. During this analysis, eight views of the visualization process are developed; these views represent the set of activities that should be carried out in the visualization process. Then, a preliminary integrated framework is developed based on an analysis of these findings. This integrated framework is tested in the field of Social Collaboration Analytics on an example from the UniConnect platform. Lastly, the integrated framework is refined and improved based on the results of the testing, with the help of diagrams, visualizations and textual descriptions. The results show that the visualization process is not of a waterfall type but an iterative methodology with distinct phases of work, demonstrating how to address the eight views with different levels of stakeholder involvement. The findings are the basis for a visualization process which can be used in future work to develop a fully functional methodology.
In augmented reality, it is of great importance that virtual objects are embedded into the camera image as realistically as possible; only then can the user be offered an immersive experience. Among other things, this includes handling the occlusion of these objects correctly. While various approaches to this occlusion problem already exist, this thesis presents a solution based on natural image matting. With the help of a depth camera, the camera image is divided into foreground and background, and the virtual object is then placed in the image. For regions in which the assignment to foreground or background is ambiguous, a transparency (alpha) value is estimated from known pixels. Methods are presented that allow image matting to run in real time. In addition, possible improvements to these methods are presented, and it is shown that they achieve a higher image quality for difficult scenes.
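For illustration, the division of the camera image into foreground, background and an unknown band based on the depth image can be sketched as follows (thresholds, band width and the naive alpha estimate are assumptions; the thesis uses natural image matting for the unknown region):

```python
import cv2
import numpy as np

def trimap_from_depth(depth, virtual_depth, band=15):
    """Build a foreground/background/unknown trimap from a depth image.

    Pixels closer than the virtual object count as foreground (they should
    occlude it), farther pixels as background; a morphological band around
    the boundary is marked unknown and left to the matting step.
    """
    fg = (depth < virtual_depth).astype(np.uint8) * 255
    kernel = np.ones((band, band), np.uint8)
    dilated, eroded = cv2.dilate(fg, kernel), cv2.erode(fg, kernel)
    trimap = np.full(depth.shape, 128, np.uint8)   # unknown
    trimap[eroded == 255] = 255                    # sure foreground
    trimap[dilated == 0] = 0                       # sure background
    return trimap

def naive_alpha(trimap):
    """Very rough alpha estimate: blur the trimap inside the unknown band.
    A real natural-image-matting step would use colour samples instead."""
    alpha = trimap.astype(np.float32) / 255.0
    return cv2.GaussianBlur(alpha, (21, 21), 0)
```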
In scientific data visualization, huge amounts of data are generated, which entails the task of analyzing them efficiently. This includes the reliable detection of important parts and a low expenditure of time and effort. This is especially important for large seismic volume datasets, which are required for the exploration of oil and gas deposits. Since the generated data are complex and a manual analysis is very time-consuming, a semi-automatic approach can, on the one hand, reduce the time required for the analysis and, on the other hand, offer more flexibility than a fully automatic approach.
This master's thesis introduces an algorithm which is capable of locating regions of interest in seismic volume data automatically by detecting anomalies in local histograms. Furthermore, the results are visualized and a variety of tools for the exploration and interpretation of the detected regions are developed. The approach is evaluated by experiments with synthetic data and in interviews with domain experts on the basis of real-world data. Finally, further improvements for integrating the algorithm into the seismic interpretation workflow are suggested.
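The idea of flagging regions whose local histogram deviates from the global one can be illustrated with a small sketch (window size, bin count and the chi-square distance are assumptions, not necessarily the thesis' exact measure):

```python
import numpy as np

def histogram_anomaly_map(volume, window=16, bins=32):
    """Score non-overlapping blocks of a volume by how strongly their local
    histogram deviates from the global histogram (chi-square distance);
    high scores mark candidate regions of interest."""
    lo, hi = volume.min(), volume.max()
    global_hist, _ = np.histogram(volume, bins=bins, range=(lo, hi), density=True)
    scores = {}
    for z in range(0, volume.shape[0], window):
        for y in range(0, volume.shape[1], window):
            for x in range(0, volume.shape[2], window):
                block = volume[z:z + window, y:y + window, x:x + window]
                local_hist, _ = np.histogram(block, bins=bins, range=(lo, hi),
                                             density=True)
                chi2 = np.sum((local_hist - global_hist) ** 2
                              / (global_hist + 1e-9))
                scores[(z, y, x)] = chi2
    return scores

volume = np.random.randn(64, 64, 64).astype(np.float32)
volume[20:36, 20:36, 20:36] += 3.0          # synthetic "anomaly"
scores = histogram_anomaly_map(volume)
print(max(scores, key=scores.get))          # block overlapping the anomaly
```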
Using semantic data from general-purpose programming languages does not provide the unified experience one would want for such an application. Static error checking is lacking, especially with regards to static typing of the data. Based on the previous work of λ-DL, which integrates semantic queries and concepts as types into a typed λ-calculus, this work takes its ideas a step further to meld them into a real-world programming language. This thesis explores how λ-DL's features can be extended and integrated into an existing language, researches an appropriate extension mechanism and produces Semantics4J, a JastAdd-based Java language semantic data extension for type-safe OWL programming, together with examples of its usage.
With the emergence of the current generation of head-mounted displays (HMDs), virtual reality (VR) is regaining much interest in the field of medical imaging and diagnosis. Room-scale exploration of CT or MRI data in virtual reality feels like an intuitive application. However, in VR, retaining a high frame rate is more critical than for conventional user interaction while seated in front of a screen: there is strong scientific evidence that low frame rates and high latency have a strong influence on the appearance of cybersickness. This thesis explores two practical approaches to overcome the high computational cost of volume rendering for virtual reality. One lies in exploiting coherency properties of the especially costly stereoscopic rendering setup; the main contribution is the development and evaluation of a novel acceleration technique for stereoscopic GPU ray casting. Additionally, an asynchronous rendering approach is pursued to minimize the amount of latency in the system. A selection of image warping techniques has been implemented and evaluated methodically, assessing their applicability to VR volume rendering.
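One classic way to exploit stereoscopic coherence, reprojecting the left-eye result to the right eye via per-pixel depth, can be sketched as follows (this is a generic illustration of depth-based image warping, not the acceleration technique developed in the thesis):

```python
import numpy as np

def warp_left_to_right(left_color, left_depth, focal_px, baseline_m):
    """Forward-warp the left-eye image to the right eye using per-pixel depth.

    Pixels are shifted horizontally by the stereo disparity
    d = focal_px * baseline_m / depth; holes (disocclusions) stay empty and
    would be filled by additional rays in a real renderer.
    """
    h, w = left_depth.shape
    right = np.zeros_like(left_color)
    depth_buf = np.full((h, w), np.inf)
    disparity = (focal_px * baseline_m / np.maximum(left_depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w and left_depth[y, x] < depth_buf[y, xr]:
                depth_buf[y, xr] = left_depth[y, x]   # keep the nearest sample
                right[y, xr] = left_color[y, x]
    return right
```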
Motion capture refers to the process of capturing, processing and translating real motions onto a 3D model. Not only in the movie and gaming industries does motion capture create an indispensable realism of human and animal movement; it is also used extensively in robotics, medical movement therapy, and AR and VR. In addition to the well-established optical systems, alternative systems based on inertial measurement units (IMUs) are increasingly being used, especially in the last three areas, because they do not rely on external cameras and therefore restrict the movement area considerably less.
Fast technical progress in the manufacturing of such IMUs allows building small sensors that can be worn on the body and transfer movements to a computer. The application of inertial systems in a motion capture context, however, is still at an early stage. Problems like drift can currently only be minimized by adding additional hardware for correcting the read data.
In this master thesis, an IMU-based motion capture system is designed and constructed. This includes the assembly of the hardware components as well as the processing of the received movement data on the software side and their application to a 3D model.
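A common software-side building block for such a system is the fusion of gyroscope and accelerometer data to limit orientation drift; a minimal complementary-filter sketch (the filter constant and axis conventions are assumptions) is shown below:

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into pitch/roll angles.

    gyro:  (N, 3) angular rates in rad/s
    accel: (N, 3) accelerations in m/s^2
    The gyroscope integration is accurate short-term but drifts; the
    accelerometer gives a drift-free but noisy gravity direction. The filter
    blends both, which is one common way to limit drift in software.
    """
    pitch = roll = 0.0
    angles = []
    for (gx, gy, gz), (ax, ay, az) in zip(gyro, accel):
        # Tilt angles implied by gravity as measured by the accelerometer.
        acc_pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
        acc_roll = np.arctan2(ay, az)
        # Blend integrated gyro angles with the accelerometer estimate.
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
        roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
        angles.append((pitch, roll))
    return np.array(angles)
```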
This thesis proposes the use of MSR (Mining Software Repositories) techniques to identify software developers with exclusive expertise about specific APIs and programming domains in software repositories. A pilot tool for finding such "Islands of Knowledge" in Node.js projects is presented and applied in a case study to the 180 most popular npm packages. It is found that on average each package has 2.3 Islands of Knowledge, which is possibly explained by the finding that npm packages tend to have only one main contributor. In a survey, the maintainers of 50 packages are contacted and asked for their opinions on the results produced by the tool. Together with their responses, this thesis reports on experiences made with the pilot tool and on how future iterations could produce even more accurate statements about the distribution of programming expertise in developer teams.
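The underlying idea, attributing files to a developer who contributed almost all of their changes, can be sketched as follows (the commit representation and the exclusivity threshold are assumptions, not the pilot tool's actual heuristics):

```python
from collections import Counter, defaultdict

def islands_of_knowledge(commits, threshold=0.9):
    """Find files that a single developer almost exclusively worked on.

    commits:   iterable of (author, file_path) pairs taken from the history
    threshold: minimum share of a file's commits required for exclusivity
    """
    per_file = defaultdict(Counter)
    for author, path in commits:
        per_file[path][author] += 1
    islands = {}
    for path, authors in per_file.items():
        author, count = authors.most_common(1)[0]
        if count / sum(authors.values()) >= threshold:
            islands[path] = author
    return islands

commits = [("alice", "lib/parser.js"), ("alice", "lib/parser.js"),
           ("alice", "lib/parser.js"), ("bob", "lib/cli.js"),
           ("alice", "lib/cli.js")]
print(islands_of_knowledge(commits))  # -> {'lib/parser.js': 'alice'}
```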
The Internet of Things (IoT) is a network of addressable, physical objects that contain embedded sensing, communication and actuating technologies to sense and interact with their environment (Geschickter 2015). Like every novel paradigm, the IoT sparks interest throughout all domains, both in theory and in practice, resulting in the development of systems that push technology to its limits. These limits become apparent when an increasing number of Things has to be managed across various contexts. A plethora of IoT architecture proposals have been developed and prototype products, such as IoT platforms, have been introduced. However, each of these architectures and products applies its very own interpretation of an IoT architecture and its individual components, so that the IoT is currently more an Intranet of Things than an Internet of Things (Zorzi et al. 2010). Thus, this thesis aims to develop a common understanding of the elements forming an IoT architecture and to provide high-level specifications in the form of a Holistic IoT Architecture Framework.
Design Science Research (DSR) is used in this thesis to develop the architecture framework based on the pertinent literature. The development of the Holistic IoT Architecture Framework includes the identification of two new IoT Architecture Perspectives that became apparent during the analysis of the IoT architecture proposals identified in the extant literature. While applying these novel perspectives, the need for a new component for the architecture framework, which was merely implicitly mentioned in the literature, became obvious as well. The components of various IoT architecture proposals as well as the novel component, the Thing Management System, were combined, consolidated and related to each other to develop the Holistic IoT Architecture Framework. Subsequently, it was shown that the specifications of the architecture framework are suitable to guide the implementation of a prototype.
This contribution provides a common understanding of the basic building blocks, actors and relations of an IoT architecture.
Coordination and awareness mechanisms are important in systems for Computer-Supported Cooperative Work (CSCW) and traditional groupware systems. They have been a key focus of research into collaborative groupware and its capability to enable people to efficiently collaborate and coordinate work. Until now, no classification of these mechanisms has been undertaken to identify commonalities and differences between coordination and awareness mechanisms and to show their significance in collaborative environments. In addition, there is little investigation of coordination and awareness mechanisms in new forms of groupware such as socially enabled Enterprise Collaboration Systems (ECS). Yet both in science and in practice, ECS incorporating social software have become increasingly important. Based on the combination of traditional groupware and social software, ECS also include coordination and awareness mechanisms that may simplify collaboration, but these have not yet been investigated.
Therefore, the aim of this thesis is to identify coordination and awareness mechanisms in the academic literature in order to provide a general overview of examples of such mechanisms. Additionally, this thesis aims to classify these examples. Based on an in-depth literature analysis, concepts described in the literature are chosen and applied with the intention of analysing the mechanisms and arriving at a classification. Based on the classification of the identified mechanisms, their commonalities and differences are examined and described to gain a better understanding of them. For illustration purposes, examples of coordination and awareness mechanisms and their application are portrayed; these examples refer to the classification groups derived, and their selection for visualization is based on significant differences in their functionality. Subsequently, the selected mechanisms, which are based more on traditional groupware, are checked to a limited extent as to whether they can be found in socially enabled ECS. The collaborative platform IBM Connections serves as a practical example of ECS incorporating social software; it is used at the University of Koblenz to run the platform "UniConnect". On this platform it is investigated which of the mechanism examples identified in the literature are applied in IBM Connections and which additional mechanisms are created by users. This work is a first step in the study of coordination and awareness mechanisms in socially enabled ECS, and since the social factor in collaborative work is new, it is also expected to uncover new mechanisms.
The purpose of this thesis is thus to collect and examine examples of coordination and awareness mechanisms from the literature and to analyse them, to provide a first overview of these mechanisms, and to classify them by investigating their commonalities. Beyond that, this thesis should provide an incentive for further research into coordination and awareness mechanisms in socially enabled ECS.
This thesis analyzes the online attention towards scientists and their research topics. The studies compare the attention dynamics towards the winners of important scientific prizes with scientists who did not receive a prize. Web signals such as Wikipedia page views, Wikipedia edits, and Google Trends were used as a proxy for online attention. One study focused on the time between the creation of the article about a scientist and the articles about their research topics. It was discovered that articles about research topics were created closer in time to the articles of prize winners than to those of scientists who did not receive a prize. One possible explanation could be that the research topics are more closely related to the scientist who received an award, which supports the view that these scientists introduced the topics to the public. Another study considered the public attention trends towards the related research topics before and after the page of a scientist was created. It was observed that after a page about a scientist was created, the research topics of prize winners received more attention than the topics of scientists who did not receive a prize. Furthermore, it was demonstrated that Nobel Prize winners receive less attention before winning the prize than the potential nominees from Thomson Reuters' list of Citation Laureates, and that their popularity declines faster after they receive it. It was also shown that it is difficult to predict the prize winners based on the attention dynamics towards them.
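One of the attention signals used here, Wikipedia page views, can be obtained from the public Wikimedia pageviews REST API; the following minimal sketch is illustrative only, with an invented article name and date range.

```python
import requests

def pageviews(article, start="20150701", end="20151231", project="en.wikipedia"):
    """Fetch daily page views for one article from the Wikimedia pageviews REST API."""
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"{project}/all-access/user/{article}/daily/{start}/{end}"
    )
    resp = requests.get(url, headers={"User-Agent": "attention-study-example"})
    resp.raise_for_status()
    return [(item["timestamp"], item["views"]) for item in resp.json()["items"]]

views = pageviews("Marie_Curie")
print(sum(v for _, v in views), "views in the sampled period")
```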
The purpose of this research is to examine various existing cloud-based Internet of Things (IoT) development platforms and to evaluate one platform (IBM Watson IoT) in detail using a use case scenario. The Internet of Things is an emerging technology with the vision of interconnecting the virtual world (e.g. clouds, social networks) and the physical world (e.g. devices, cars, fridges, people, animals) through Internet technology. For example, the IoT concept of smart cities, which has the objective of improving the efficiency and development of business, social and cultural services in the city, can be realised by using sensors, actuators, clouds and mobile devices (IEEE, 2015). A sensor (e.g. a temperature sensor) in a building (physical world) can send real-time data to the IoT cloud platform (virtual world), where it can be monitored, stored, analysed, or used to trigger an action (e.g. turning on the cooling system in the building if the temperature exceeds a threshold). Although the IoT creates vast opportunities in different areas (e.g. transportation, healthcare, the manufacturing industry), it also brings challenges such as standardisation, interoperability, scalability, security and privacy. In this research report, IoT concepts and related key issues are discussed.
The focus of this research is to compare various cloud-based IoT platforms in order to understand the business and technical features they offer. The cloud-based IoT platforms from IBM, Google, Microsoft, PTC and Amazon have been studied.
To design the research, the Design Science Research (DSR) methodology has been followed, and to model the real-time IoT system the IOT-A modelling approach has been used.
The comparison of the different cloud-based IoT development platforms shows that all of the studied platforms provide basic IoT functionality such as connecting IoT devices to the cloud-based platform, collecting data from the devices, data storage and data analytics. However, IBM's IoT platform appears to have an edge over the other platforms studied in this research because of its integrated run-time environment, which also makes it more developer-friendly. Therefore, IBM Watson IoT for Bluemix was selected for a further examination of its capabilities. The IBM Watson IoT for Bluemix offerings include analytics, risk management, connect and information management. A use case was implemented to assess the capabilities the IBM Watson IoT platform offers, and digital artifacts (i.e. applications) were produced to evaluate IBM's IoT solution. The results show that IBM offers a very scalable, developer- and deployment-friendly IoT platform. Its cognitive, contextual and predictive analytics provide promising functionality that can be used to gain insights from the IoT data transmitted by sensors and other IoT devices.
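A typical building block of such a use case is a device pushing sensor readings to the cloud platform over MQTT. The sketch below uses the paho-mqtt library; the organisation id, device credentials and topic naming are placeholders and assumptions rather than working values, so the exact connection parameters should be taken from the platform documentation.

```python
import json
import paho.mqtt.publish as publish

ORG = "myorg"                      # hypothetical organisation id
DEVICE_TYPE, DEVICE_ID = "sensor", "room-42"
TOKEN = "device-auth-token"        # hypothetical device token

payload = json.dumps({"d": {"temperature": 27.5}})

# Publish one event message; host, client id and topic follow the conventions
# assumed for the platform in this sketch.
publish.single(
    topic="iot-2/evt/status/fmt/json",
    payload=payload,
    hostname=f"{ORG}.messaging.internetofthings.ibmcloud.com",
    port=1883,
    client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}",
    auth={"username": "use-token-auth", "password": TOKEN},
)
```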
The aim of this thesis was to develop and evaluate a method that enables the use of traditional dialog marketing tools on the web. For this purpose, a prototype of a website with "extended real-time interaction (eEI)" capabilities was implemented and tested. The prototype was evaluated with a methodology based on the five-dimensional "e-service quality" measure by Gwo-Guang Lee and Hsiu-Fen Lin, whose foundation is the SERVQUAL model. A statistical analysis of the user study results showed a significant correlation between eEI and user satisfaction. Prior to the actual realization of eEI, the "Technology Acceptance Model" by Fred D. Davis was used to investigate currently used real-time interaction systems.
While Virtual Reality has been around for decades, it has gained new life in recent years. The release of the first consumer hardware devices allows fully immersive and affordable VR for users at home. This availability has led to a new focus of research on technical problems as well as psychological effects. The concepts of presence, describing the feeling of being in the virtual place, and body ownership, together with their impact, have been central research topics for a long time and are still not fully understood.
To enable further research in the area of Mixed Reality, we introduce a framework that integrates the user's body and surroundings into a visually coherent virtual environment. The first of two main aspects is to merge real and virtual objects into a shared environment in such a way that they are no longer visually distinguishable. To achieve this, the main focus is not on high graphical fidelity but on a simplified representation of reality. The essential question is: what level of visual realism is necessary to create a believable mixed reality environment that induces a sense of presence in the user? The second aspect considers the integration of virtual persons: can characters be recorded and replayed in such a way that they are perceived as believable entities of the world and therefore act as a part of the user's environment?
The purpose of this thesis was the development of a framework called Mixed Reality Embodiment Platform. This initial system implements fundamental functionality to be used as a basis for future extensions of the framework. We also provide a first application that enables user studies to evaluate the framework and contribute to the aforementioned research questions.
Prototype of a generic recommendation engine based on real-time association analyses with R
(2016)
In this thesis, a prototype of a recommendation engine for uncovering associations within a given data set was developed using the programming language R. The associations are computed in real time, and the analysis functions were programmed generically to allow data sets to be integrated quickly and parameterized easily. The development was motivated by the goal of partially automating data mining methods such as association analysis, so that solutions generated with them can be implemented more efficiently. The development process was successful overall, so that all basic functionalities are available in the sense of an evolutionary prototype.
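The thesis implements the association analysis in R; as an illustration of the same idea, the sketch below mines association rules from a toy transaction data set in Python with mlxtend. The transactions, support and confidence thresholds are invented for the example.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["milk", "coffee"],
    ["bread", "milk", "coffee"],
]

# one-hot encode the transactions, mine frequent itemsets, then derive rules
te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)
frequent = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```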
The publication of open source software aims to support the reuse, distribution and general utilization of software. This can only be enabled by the correct usage of open source software licenses. To this end, associations provide a multitude of open source software licenses with different features, from which a developer can choose in order to regulate the interaction with his or her software. Those licenses are the core theme of this thesis.
After an extensive literature review, two general research questions are elaborated in detail. First, an analysis of license usage in the open source sector is conducted to identify current trends and statistics. This includes questions concerning the distribution of licenses, the consistency of their usage, their association over a period of time and their publication.
Afterwards, the recommendation of licenses for specific projects is investigated. A recommendation logic is presented that includes several influences on a suitable license choice in order to generate a recommendation that is as applicable as possible. Besides the exact license features a user can choose from, different methods of ranking the recommendation results are proposed. This is based on the examination of the current situation of open source licensing and license suggestion. Finally, the logic is evaluated on the exemplary use case of the 101companies project.
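The following small sketch illustrates the general shape of such a feature-based recommendation: licenses are described by features, the user states preferences, and candidates are ranked by how many preferences they satisfy. The feature values and scoring are invented for illustration and are neither legal statements nor the thesis's actual logic.

```python
# Toy feature table; real license features are more numerous and nuanced.
LICENSES = {
    "MIT":        {"copyleft": False, "patent_grant": False, "permissive": True},
    "Apache-2.0": {"copyleft": False, "patent_grant": True,  "permissive": True},
    "GPL-3.0":    {"copyleft": True,  "patent_grant": True,  "permissive": False},
}

def recommend(desired):
    """Rank licenses by the number of desired feature values they match."""
    scores = {
        name: sum(feats.get(key) == value for key, value in desired.items())
        for name, feats in LICENSES.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(recommend({"copyleft": False, "patent_grant": True}))
```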
This thesis deals with the role of trust between capital seekers and capital providers on one of the largest crowdfunding platforms, Kickstarter. Its core is answering the research questions of how trust between capital providers and capital seekers arises in the context of crowdfunding, which factors influence the degree of trust in this relationship, and which signals are responsible for the formation of trust. The aim of this thesis is to review and, if necessary, extend the conceptual model of the factors influencing trust in the context of crowdfunding from the capital providers' perspective presented by Zerwas, Kilian and von Kortzfleisch in 2015.
Based on a literature review and qualitative, guideline-based interviews, the collected data are discussed, existing factors are verified and, in addition, new factors and signals are identified that influence the formation of trust on crowdfunding platforms.
As a result, a revision and extension of the initial model by the new factors geographical proximity, intermediary quality and understanding of the crowdfunding concept is proposed. Based on the frequency of mention and the relevance of the factors and influencing signals in the conducted interviews, a weighting of the individual factors is then carried out. Finally, implications and the significance for science and research are discussed.
One of the fundamental decisions during the development of any system is the representation of data. In computer graphics, objects are usually represented as sets of triangles. There are, however, many different variants, each with its own strengths and weaknesses. This thesis explores distance fields as a representation for objects. Distance fields are functions that map every point in space to the distance to the closest surface point. While this description is very simple, a number of interesting properties can be derived, allowing for a multitude of shapes, operations and effects. An overview of the necessary background and methods is given. Furthermore, some extended or new approaches are presented, such as displaying implicit surfaces, approximating indirect illumination or implementing a GPU tracer.
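As a brief illustration of the representation, the sketch below defines signed distance functions for a sphere and a plane, combines them by taking the minimum (union), and marches a ray with basic sphere tracing; the scene and parameters are invented for the example.

```python
import numpy as np

def sd_sphere(p, center, radius):
    return np.linalg.norm(p - center) - radius

def sd_plane(p, height=0.0):
    return p[1] - height

def scene(p):
    # union of two distance fields = minimum of their distances
    return min(sd_sphere(p, np.array([0.0, 1.0, 5.0]), 1.0), sd_plane(p))

def sphere_trace(origin, direction, max_steps=128, eps=1e-4):
    """March along the ray by the distance value until a surface is hit."""
    t = 0.0
    d = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        dist = scene(origin + t * d)
        if dist < eps:
            return t          # hit: distance along the ray
        t += dist             # safe step: nothing is closer than `dist`
    return None               # miss

print(sphere_trace(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])))
```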
In recent years, head-mounted displays (HMDs) and their ability to create virtual realities comparable with the real world have moved more into the focus of press coverage and consumers. The reason for this lies in constant improvements in available computing power, the miniaturisation of components and constantly shrinking power consumption, trends that originate in the general technical progress driven by advancements in the smartphone sector. This gives more people than ever access to the components required to create these virtual realities. At the same time, however, there is only limited research using the current generation of HMDs, especially research comparing the virtual and the real world against each other. The approach of this thesis is to look into the process of navigating both real and virtual spaces while using modern hardware and software. One of the key areas is spatial and peripheral perception, without which it would be difficult to navigate a given space. The influence of prior real and virtual experiences on these is another key aspect, and the final area of focus is the influence on the emotional state and how it compares to the real world. To research these influences, an experiment using the Oculus Rift DK2 HMD will be conducted in which subjects will be guided through a real space as well as a virtual model of it. Data will be gathered quantitatively using surveys, and the findings will be discussed based on a statistical evaluation. During these tests, the perception of distances and room size will be compared, as well as how it changes depending on the current reality. Furthermore, the influence of prior spatial activities both in the real and the virtual world will be looked into. Lastly, it will be checked how real these virtual worlds are and whether they are sufficiently sophisticated to trigger the same emotional responses as the real world.
This work covers techniques for interactive and physically-based rendering of hair for computer-generated imagery (CGI). To this end, techniques for the simulation and approximation of the interaction of light with hair are derived and presented. Furthermore, it is described how hair, despite such computationally expensive algorithms, can be rendered interactively. Techniques for computing the shadowing in hair as well as approaches to render hair as transparent geometry are also presented. A main focus of this work is the DBK-Buffer, which was conceived, implemented and evaluated. Using the DBK-Buffer, it is possible to render thousands of hairs as transparent geometry without depending on either the newest GPU hardware generation or a large amount of video memory. Moreover, a comprehensive evaluation of all the techniques described was conducted with respect to visual quality, performance and memory requirements. This revealed that hair can be rendered physically-based at interactive or even real-time frame rates.
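Hair cannot be shaded like a surface with a single normal; lighting models for hair instead work with the fibre's tangent direction. The classic Kajiya-Kay model sketched below is shown purely as an illustration of this idea and is not claimed to be the model derived in the thesis.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def kajiya_kay(tangent, light_dir, view_dir, shininess=32.0):
    """Tangent-based diffuse and specular terms for a hair fibre (Kajiya-Kay style)."""
    t, l, v = map(normalize, (tangent, light_dir, view_dir))
    t_dot_l = np.clip(np.dot(t, l), -1.0, 1.0)
    t_dot_v = np.clip(np.dot(t, v), -1.0, 1.0)
    sin_tl = np.sqrt(1.0 - t_dot_l ** 2)          # diffuse falls off with the fibre angle
    sin_tv = np.sqrt(1.0 - t_dot_v ** 2)
    diffuse = sin_tl
    specular = max(t_dot_l * t_dot_v + sin_tl * sin_tv, 0.0) ** shininess
    return diffuse, specular

print(kajiya_kay(np.array([1.0, 0.0, 0.0]),
                 np.array([0.0, 1.0, 0.0]),
                 np.array([0.0, 0.5, 1.0])))
```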
“Did I say something wrong?” A word-level analysis of Wikipedia articles for deletion discussions
(2016)
This thesis focuses on gaining linguistic insights into textual discussions on the word level. It was of special interest to distinguish messages that constructively contribute to a discussion from those that are detrimental to it. Thereby, we wanted to determine whether “I”- and “You”-messages are indicators for either of the two discussion styles. These messages are nowadays often recommended in guidelines for successful communication, and although their effects have been evaluated multiple times, a large-scale analysis has never been conducted. Thus, we used Wikipedia Articles for Deletion (short: AfD) discussions together with the records of blocked users and developed a fully automated creation of an annotated data set in which messages are labelled either constructive or disruptive. We applied binary classifiers to the data to determine characteristic words for both discussion styles, and thereby also investigated whether function words like pronouns and conjunctions play an important role in distinguishing the two. We found that “You”-messages were a strong indicator for disruptive messages, which matches their attributed effects on communication. However, we found “I”-messages to be indicative of disruptive messages as well, which is contrary to their attributed effects. The importance of function words could neither be confirmed nor refuted, and other characteristic words for either communication style were not found. Yet the results suggest that a different model might represent disruptive and constructive messages in textual discussions better.
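A condensed sketch of such a word-level analysis is shown below: labelled messages are vectorised with word counts and a linear classifier's weights are inspected for characteristic words. The four example messages and the choice of logistic regression are illustrative assumptions, not the exact setup of the thesis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = [
    "I think we could improve the sources of this article",   # constructive (0)
    "I would suggest we keep the section and expand it",       # constructive (0)
    "You clearly have no idea what you are talking about",     # disruptive  (1)
    "You are ruining this discussion on purpose",              # disruptive  (1)
]
labels = [0, 0, 1, 1]

vec = CountVectorizer(lowercase=True)
X = vec.fit_transform(messages)
clf = LogisticRegression().fit(X, labels)

# words with the most negative weights lean constructive, most positive lean disruptive
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: t[1])
print("most constructive words:", [w for w, _ in weights[:3]])
print("most disruptive words:  ", [w for w, _ in weights[-3:]])
```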
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially in medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts, and the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, most SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. The training data is assumed to be the result of any segmentation algorithm or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. In order to account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM; these probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an Imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering many more variations of shape examples. To assess the quality of the models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data. The results show that WRPCA in particular is a powerful tool to handle corrupted parts and yields reasonable models of higher quality than the initial segmentations.
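A compact sketch of the core of a Statistical Shape Model is given below: training shapes, each a flattened vector of corresponding landmark coordinates, are reduced to a mean shape plus principal modes of variation. The robust reconstruction of corrupted parts proposed in the thesis (Imputation, WRPCA) is not reproduced here, and the toy data is random.

```python
import numpy as np

def build_ssm(shapes, n_modes=2):
    """shapes: (n_samples, n_points * dim) array of corresponding landmarks."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # principal modes of variation via SVD of the centred data matrix
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    variances = (singular_values[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, variances

def synthesize(mean, modes, variances, coefficients):
    """Generate a plausible shape from mode coefficients given in standard deviations."""
    return mean + (np.asarray(coefficients) * np.sqrt(variances)) @ modes

training = np.random.rand(20, 30)       # 20 toy shapes with 15 2-D landmarks each
mean, modes, variances = build_ssm(training)
print(synthesize(mean, modes, variances, [1.5, -0.5]).shape)
```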
The goal of this thesis is to create and develop a concept for a mobile city guide combined with game-based content.
The application is intended to support flexible and independent exploration of the city of Koblenz.
Based on geographical data, the application provides historical information and interesting stories about various places. This information is combined with playful elements in order to create a motivating concept.
To this end, related approaches were examined and, combined with our own ideas, a new concept was developed. This concept was prototypically implemented as an Android application and afterwards evaluated by 15 test persons. A questionnaire was used to examine the operability, the motivational effect of the game patterns and the added value of the application.
Political and societal polarization is an interesting phenomenon about whose effects many different, sometimes contradictory, theories exist.
Polarization is measured in the literature with a variety of methods. This thesis provides an overview of existing polarization measures and introduces two novel measures from the field of spectral graph theory. The known and the newly developed measures are then applied to the LiquidFeedback data set of the Piratenpartei Deutschland. The result is that the measures partly arrive at different conclusions, because not all measures measure the same thing. To understand what the individual measures express, essential properties of polarization measures are worked out, and for each measure it is shown which properties it fulfils. The polarization measures discussed refer to the development of polarization between users of the LiquidFeedback system. When looking at individual persons and votes, it was noticed, among other things, that polarizing persons hold more power through delegations than the remaining persons, and that polarized proposals are implemented roughly twice as often.
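The two spectral measures introduced in the thesis are not reproduced here; as an illustration of the general idea, the sketch below computes the algebraic connectivity (the second-smallest Laplacian eigenvalue) of a small interaction graph, a standard spectral quantity that becomes small when the graph nearly decomposes into separate camps. The example graph is invented.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # camp 1
    ("x", "y"), ("y", "z"), ("x", "z"),   # camp 2
    ("c", "x"),                           # single bridge between the camps
])

# low algebraic connectivity indicates a graph that is easy to split into two groups
print(nx.algebraic_connectivity(g))
```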
In this thesis, we deal with the questions of whether challenge, flow and fun in computer games are related to each other, and what influence the motivational-psychological components of success motivation, failure motivation and the probability of success have. In addition, we want to know whether a free choice of the difficulty level is the optimal way to flow. To examine these theories, a study based on an online survey was conducted in which the participants played the game "flOw". The results were evaluated with the help of a two-factor analysis of variance with repeated measures and correlation tests. We found that there actually is a relation between challenge, flow and fun, and that motivation matters indirectly.
Heat exchangers are used for the thickening of various products or the desalination of saltwater, and they also serve as cooling units in industry. In such use, the stainless-steel heat-transferring elements come into contact with microorganism-containing media, such as river water or saltwater, and corrode. After at least two years of utilization, the material is covered with a bacterial slime called biofilm. This process is called biofouling; it causes losses in efficiency and creates huge costs depending on the cleaning technique and its efficiency. Cleaning a heat exchanger is very expensive and time-consuming and can only be done while the device is out of operation.
Changing the surface properties of materials is the best and easiest way to lengthen the initial phase of biofilm formation. This leads to less biofouling (Mogha et al. 2014).
Thin polymer films as novel materials have lower production costs than stainless steel and are easy to handle. Furthermore, they can be functionalized easily and can be bought in different sizes all over the world. Because of this, they can reduce the costs of cleaning and lead to a longer high-efficiency state of the heat exchanger. If the efficiency of the heat exchanger decreases, the thin polymer films can be replaced.
For a successful investigation of the microbial and the process-engineering challenges, a cooperation between the Technical University of Kaiserslautern (Chair of Separation Science and Technology) and the University of Koblenz-Landau (Working Group Microbiology) was established.
The aim of this work was the design, engineering and production of a reactor for the investigation of biofouling taking place on thin polymeric films and stainless steel; furthermore, an experimental design had to be established. Several requirements applied to these tasks. A real heat exchanger was therefore downscaled so that the process parameters are at least comparable. Although many commercial flow cell kits are available, reducing the costs by self-assembly increased the number of samples, providing a basis for statistical analysis. In addition, fast and minimally invasive online in-situ microscopy and Raman spectroscopy can be performed. By creating laminar flow and using a weir, we implemented a homogeneous inflow into the reactors. Reproducible data on biomass and cell number were generated.
The assessment of biomass and cell number is well established for drinking water analysis, and epifluorescence microscopy and gravimetric determination are the basic techniques for this work, too. Differences in cell number and biomass between surface modifications and materials are quantified and statistically analysed.
The wild-type strain Escherichia coli K12 and an inoculum of 500 ml of fresh water were used to describe the biofouling of the films. Thereby, we generated data with a natural bacterial community in media of unknown properties as well as data with well-known media properties, so the technical relevance of the data is ensured.
Free surface energy and surface roughness are the first attachment hurdles for bacteria. These parameters were measured according to DIN 55660 and DIN EN ISO 4287. The materials-science data were correlated with the number of cells and the biomass. This correlation acts as a basic link between biofouling, as a biologically induced parameter, and the material properties, so that material properties which reduce biofouling can be predicted.
By using Raman spectroscopy as a cutting-edge method, future investigations could be shortened. If biomass or cell number can be linked to the spectra, new functional materials can be investigated within a short time.