On-screen interactive presentations have gained immense popularity in the domain of attentive interfaces recently. These attentive screens adapt their behavior according to the user's visual attention. This thesis aims to introduce an application that enables such attentive interfaces to change their behavior not only according to gaze data but also according to facial features and expressions. The modern era requires new channels of communication and publication for advertising, and these ads need to be tailored to people's interests, age, and gender. When advertising, it is important to get a reaction from the user, but not every user is interested in providing feedback. In such a context, more advanced techniques are required that collect the user's feedback effortlessly. The main problem this thesis intends to solve is applying advanced gaze and face recognition techniques to collect data about users' reactions to different ads played on interactive screens. We aim to create an application that enables attentive screens to detect a person's facial features, expressions, and eye gaze. Eye gaze data lets us determine interest, while facial features let us estimate age and gender. All this information will help in optimizing the advertisements.
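As a rough, hedged illustration of the attention-measurement idea, the sketch below logs how long at least one face is visible while an ad plays, using OpenCV's bundled Haar cascade as a stand-in; real gaze estimation and the expression, age, and gender models would replace the placeholder, and nothing here is the thesis' actual implementation.

```python
# Minimal sketch: log how long a face is present (a crude attention proxy)
# while an ad plays. Real gaze, expression, age, and gender models would
# replace the placeholder check below.
import time
import cv2  # pip install opencv-python

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def attention_seconds(ad_id: str, duration: float = 10.0) -> float:
    """Return how many seconds at least one face was visible during the ad."""
    cap = cv2.VideoCapture(0)          # default webcam
    attended = 0.0
    start = prev = time.time()
    while time.time() - start < duration:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        now = time.time()
        if len(faces) > 0:             # placeholder: a visible face counts as attention
            attended += now - prev
        prev = now
    cap.release()
    return attended
```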
Blockchain in Healthcare
(2020)
The underlying characteristics of blockchain can facilitate data provenance, data integrity, data security, and data management, and it has the potential to transform the healthcare sector. Since the introduction of Bitcoin in the fintech industry, blockchain technology has been gaining a lot of traction, and its purpose is not limited to finance. This thesis highlights the inner workings of blockchain technology and its application areas, along with possible existing solutions. Blockchain could lay the path for a new revolution in conventional healthcare systems. We present how individual sectors within the healthcare industry could use blockchain and what solutions exist, and we also present our own concept, based on the Hyperledger framework, for improving the existing paper-based prescription management system. The results of this work suggest that healthcare can benefit from blockchain technology by enabling new ways of treating patients.
Since the invention of the U-Net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. Such architectures have also been applied successfully in other domains, winning numerous competitions in recent years. In addition, the transfer learning technique has created an opportunity to push state-of-the-art benchmarks to a higher level. This approach is particularly beneficial for the medical domain, where collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with Deep Learning and make three main contributions and release experimental results that have practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on a dataset of cervical spine MRI scans. Second, we use transfer learning from models trained on the ImageNet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on bias-field-corrected and raw MRI data. All code to reproduce the results is publicly available online.
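A minimal sketch of the pretrained-versus-random comparison, assuming the segmentation_models_pytorch package as the model builder (the thesis does not state which library it used):

```python
# Sketch of the pretrained-vs-random comparison, assuming the
# segmentation_models_pytorch package (an assumption, not the thesis' code).
import segmentation_models_pytorch as smp

def build_unet(pretrained: bool):
    # encoder_weights="imagenet" loads ImageNet-pretrained encoder weights;
    # None leaves the encoder randomly initialized.
    return smp.Unet(
        encoder_name="resnet34",
        encoder_weights="imagenet" if pretrained else None,
        in_channels=1,   # single-channel MRI slices (assumption)
        classes=1,       # binary segmentation mask (assumption)
    )

model_pretrained = build_unet(True)
model_random = build_unet(False)
# Both models are then trained identically and their scores compared.
```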
Constituent parsing attempts to extract syntactic structure from a sentence. Such parsing systems are helpful in many NLP applications such as grammar checking, question answering, and information extraction. This thesis implements a constituent parser for the German language using neural networks. In the past, recurrent neural networks have been used to build parsers and many other NLP applications. Here, self-attention modules are used intensively to encode sentences effectively. With multilayered self-attention networks, constituent parsing achieves an F1 score of 93.68%. This is improved further by using both character and word embeddings as the input representation: an F1 score of 94.10% is the best achieved by the constituent parser using only the provided dataset. With the help of external datasets such as the German Wikipedia, pre-trained ELMo models are used along with self-attention networks, achieving an F1 score of 95.87%.
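For reference, the bracketing F1 scores quoted above are conventionally computed over labeled constituent spans; a small generic sketch of that evaluation logic (not the thesis' code):

```python
# Bracketing F1: constituents are compared as labeled spans (label, start, end).
from collections import Counter

def bracket_f1(gold: list, predicted: list) -> float:
    """gold/predicted: lists of (label, start, end) constituent spans."""
    g, p = Counter(gold), Counter(predicted)
    matched = sum((g & p).values())          # spans present in both trees
    if matched == 0:
        return 0.0
    precision = matched / sum(p.values())
    recall = matched / sum(g.values())
    return 2 * precision * recall / (precision + recall)

gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)]
pred = [("S", 0, 5), ("NP", 0, 2), ("VP", 3, 5)]
print(round(bracket_f1(gold, pred), 4))      # 0.6667
```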
This thesis is devoted to the challenges of and solutions for human resources management (HRM) in international organizations. The aim is to investigate methodological approaches to assessing HRM challenges and solutions, to apply them in practice, and to develop ways of improving the HRM of a particular enterprise. The practical research question investigated is: "Is the Ongoing Professional Development – Strategic HRM (OPD-SHRM) model a better solution for the HRM system of PrJSC "Philip Morris Ukraine"?"
To achieve the aim of this work and answer the research question, we study theoretical approaches to explaining and assessing HRM in section 1, analyze the HRM system of an international enterprise in section 2, and then synthesize theory and practice to find their intersection points in section 3.
The research findings indicate that the main challenge of HRM is balancing individual and organizational interests, and that implementing OPD-SHRM is one solution. Switching the focus from satisfaction toward success will bring both tangible and intangible benefits for individuals and the organization. In the case of PrJSC "Philip Morris Ukraine", the maximum forecasted increase is 330% in net profit, 350% in labor productivity, and 26% in the Employee Development and Engagement Index.
Current political issues are often reflected in social media discussions, gathering politicians and voters on common platforms. As these discussions can affect the public perception of politics, the inner dynamics and backgrounds of such debates are of great scientific interest. This thesis treats user-generated messages from a recent dataset of considerable relevance as time series and applies a topic-based analysis of inspiration and agenda setting to them. The Institute for Web Science and Technologies of the University of Koblenz-Landau has collected Twitter data generated by candidates of the 2019 European Parliament election. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and media on online debates. An algorithm to cluster tweets into topical threads is introduced. Subsequently, sequential association rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
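To make the rule-mining step concrete, here is a hedged sketch of mining simple sequential rules of the form "topic A is later followed by topic B" from time-ordered topic threads; the algorithm and thresholds are illustrative assumptions, not the thesis' exact method:

```python
# Illustrative sketch: mine sequential rules "A is later followed by B"
# from chronologically ordered topic threads, with support and confidence.
from collections import Counter
from itertools import combinations

def sequential_rules(threads, min_support=2, min_confidence=0.5):
    pair_counts, item_counts = Counter(), Counter()
    for topics in threads:                     # topics: chronological order
        seen = set()
        for a, b in combinations(topics, 2):   # a occurs before b
            if a != b and (a, b) not in seen:
                pair_counts[(a, b)] += 1
                seen.add((a, b))
        for t in set(topics):
            item_counts[t] += 1
    rules = []
    for (a, b), n in pair_counts.items():
        conf = n / item_counts[a]
        if n >= min_support and conf >= min_confidence:
            rules.append((a, b, n, conf))
    return rules

threads = [["brexit", "climate", "tax"],
           ["brexit", "climate"],
           ["climate", "tax"]]
print(sequential_rules(threads))
# [('brexit', 'climate', 2, 1.0), ('climate', 'tax', 2, 0.666...)]
```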
Software architectures that have grown in an uncontrolled way are typically characterized by missing or hard-to-trace structures. Possible reasons include inadequate definitions or slow erosion; this is also known as the "Big Ball of Mud". In the long term, such architectural deficiencies not only increase development costs but can ultimately prevent changes entirely.
A software architecture therefore needs continuous evolution in order to counteract such effects. A good software architecture supports software development and increases productivity. At the source code level, established practices for the controlled improvement of quality already exist. In contrast, there are no generally applicable procedures for improving a software architecture that can be applied independently of the use case. This is where the present work comes in.
Previous work either addresses only parts of the problem or, where procedures for making architectural decisions do exist, operates on a highly abstract level without practical examples. This thesis presents a lightweight procedure for the targeted improvement of a software architecture. The procedure is based on a generic problem-solving process, from which a process for solving software architecture problems has been developed. To limit the scope, the work focuses on architectural problems caused by required variability and by external dependencies.
The scientific methodology underlying this work follows Design Science Research (DSR). Over several iterations, a procedure was developed that is aimed at software developers with two to three years of experience and knowledge of the fundamentals of software development and software architecture. Five steps, including references to informative literature, then guide users through the process of improving a software architecture in a targeted way.
Belief revision is the subarea of knowledge representation which studies the dynamics of the epistemic states of an agent. In the classical AGM approach, contraction, as part of belief revision, deals with the removal of beliefs from knowledge bases. This master's thesis presents the study and implementation of concept contraction in the Description Logic EL. Concept contraction deals with the following situation: given two concepts C and D such that C is subsumed by D, how can C be changed so that it is no longer subsumed by D while remaining as similar as possible to the original C? This approach to belief change differs from related work because it deals with contraction at the level of concepts rather than TBoxes and ABoxes in general. The main contribution of the thesis is the implementation of concept contraction. The implementation provides insight into the complexity of contraction in EL, which is tractable since the main inference task in EL is also tractable. The implementation consists of five algorithms that are necessary for concept contraction. The algorithms are described, illustrated with examples, and analyzed in terms of time complexity. Furthermore, we propose a new approach for a selection function, adapted to concept contraction. The selection function uses metadata about the concepts in order to select the best ones from an input set; as an important part of concept contraction, it is responsible for selecting the concepts that are as similar as possible to C. The metadata is modeled in a framework that we designed based on standard metadata frameworks. Lastly, we have successfully implemented concept contraction in Python, and the results are promising.
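As an illustration of the selection-function idea, the sketch below ranks candidate concepts by assumed metadata fields; the concrete fields (shared_atoms, edit_distance) are invented for this example and are not the thesis' metadata framework:

```python
# Illustrative metadata-based selection function: pick the candidate concept
# judged most similar to the original. The metadata fields are assumptions.
def select_best(candidates, metadata):
    """candidates: concept names no longer subsumed by D.
    metadata: concept name -> dict with assumed similarity indicators."""
    def score(concept):
        m = metadata[concept]
        # More atoms shared with the original concept and a smaller
        # structural change both indicate higher similarity.
        return (m["shared_atoms"], -m["edit_distance"])
    return max(candidates, key=score)

metadata = {
    "C1": {"shared_atoms": 3, "edit_distance": 1},
    "C2": {"shared_atoms": 3, "edit_distance": 2},
    "C3": {"shared_atoms": 1, "edit_distance": 1},
}
print(select_best(["C1", "C2", "C3"], metadata))   # C1
```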
Constructing a business process model manually is a highly complex and error-prone task which takes a lot of time and deep insight into the organizational structure, its operations, and its business rules. To improve the output of business analysts dealing with this process, researchers have introduced different techniques that support them during construction with helpful recommendations. These recommendation systems vary both in what they recommend in the first place and in the calculations taking place under the hood to recommend the most fitting element to the user. After a broad introduction to the field of business process modeling and its basic recommendation structures, this work takes a closer look at diverse proposals and descriptions published in the current literature regarding implementation strategies that effectively and efficiently assist modelers during business process model creation. A critical analysis of the selected literature points out the strengths and weaknesses of the approaches, their studies, and their descriptions. As a result, the final concept matrix in this work gives a precise and helpful overview of the key features and recommendation methods used and implemented in previous research, to pinpoint an entry into future work without the downsides already spotted by fellow researchers.
Commonsense reasoning can be seen as a process of identifying dependencies among events and actions. Understanding the circumstances surrounding these events requires background knowledge with sufficient breadth to cover a wide variety of domains. In recent decades, there has been a lot of work on extracting commonsense knowledge; a number of these projects provide their collected data as semantic networks, such as ConceptNet and CausalNet. In this thesis, we undertake the Choice Of Plausible Alternatives (COPA) challenge, a problem set of 1000 questions written in multiple-choice format with a premise and two alternative choices per question. Our approach differs from previous work by using shortest paths between concepts in a causal graph, with the edge weights serving as a causality metric. We use CausalNet as the primary network, implement a few design choices to explore the strengths and drawbacks of this approach, and propose an extension using ConceptNet by leveraging its commonsense knowledge base.
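A minimal sketch of the shortest-path idea, assuming a weighted causal graph in which a lower edge weight means a stronger causal association; the toy graph and weights are invented for illustration:

```python
# Toy shortest-path plausibility: the alternative with the cheaper causal
# path from the premise is the more plausible answer. Graph is invented.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("rain", "wet_street", 0.2),
    ("rain", "umbrella", 0.3),
    ("wet_street", "slip", 0.4),
    ("umbrella", "slip", 1.5),
])

def path_cost(premise: str, alternative: str) -> float:
    try:
        return nx.shortest_path_length(G, premise, alternative, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")

def answer(premise, alt1, alt2):
    return alt1 if path_cost(premise, alt1) < path_cost(premise, alt2) else alt2

print(answer("rain", "slip", "umbrella"))   # umbrella (cost 0.3 < 0.6)
```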
Implementation of Agile Software Development Methodology in a Company – Why? Challenges? Benefits?
(2019)
The software development industry is evolving day by day. The introduction of agile software development methodologies was a tremendous structural change for companies. Agile transformation offers many opportunities and benefits to existing and newly founded companies, but along with the benefits, it also brings many unforeseen challenges. New entrants have the advantage of being flexible and able to cope with environmental, consumer, and cultural changes, whereas existing companies are bound to rigid structures.
The goal of this research is to gain deep insight into agile software development methodology, the agile manifesto, and the principles behind it: the prerequisites a company must know before implementing agile software development, the benefits a company can achieve by implementing it, and the significant challenges a company can face during the implementation.
The research objectives of this study help to generate strong, motivated research questions. These research questions cover the cultural aspects of company agility, the values and principles of agile, and the benefits and challenges of agile implementation. The project management triangle shows how benefits in cost, time, and quality can be achieved by implementing agile methodologies. Six significant areas are explored, showing the different challenges a company can face when implementing agile software development methodology. In the end, after an in-depth systematic literature review, conclusions are drawn, followed by open topics for future work and recommendations on implementing agile software development methodology in a company.
Business Process Querying (BPQ) is a discipline in the field of Business Process Management which helps experts to understand existing process models and accelerates the development of new ones. Its queries can fetch and merge these models, answer questions regarding the underlying process, and conduct compliance checking in return. Many languages have been deployed in this discipline, but two language types are dominant: logic-based languages use temporal logic to verify models as finite state machines, whereas graph-based languages use pattern matching to retrieve subgraphs of model graphs directly. This thesis maps the features of each language type to features of the other to identify strengths and weaknesses. As an example, the features of Computation Tree Logic (CTL) and the Diagramed Modeling Language (DMQL) are mapped to one another. CTL explores the valid state space and is thus better suited for behavioral querying; lacking certain structural features and counting mechanisms, it is not appropriate for querying structural properties. In contrast, DMQL issues structural queries, and its patterns can reconstruct any CTL formula. However, they do not always achieve exactly the same semantics: patterns treat conditional flow as sequential flow by ignoring its conditions. As a result, retrieved mappings are invalid process execution sequences, i.e. false positives, in certain scenarios. DMQL can be used for behavioral querying if these are absent or acceptable. In conclusion, both language types have strengths and are specialized for different BPQ use cases, but in certain scenarios graph-based languages can be applied to both. Integrating the evaluation of conditions would remove the need for logic-based languages in BPQ completely.
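For illustration, a behavioral property of the kind CTL can verify over a process model, here an invented compliance rule stating that every received request is eventually answered on all execution paths:

```latex
% Invented example: on every path, a received request is eventually answered.
\[
  \mathbf{AG}\bigl(\mathit{requestReceived} \rightarrow \mathbf{AF}\,\mathit{requestAnswered}\bigr)
\]
```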
Data visualization is an effective way to explore data. It helps people gain valuable insight into the data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job: users have to manually explore all possible visualizations and decide on the ones that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and appealing appearance. To address these challenges, we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a dataset, creates a list of combinations of data attributes for scoring; and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated against these criteria. Only those visualizations that score highly enough are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results are presented.
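A hedged sketch of the two-module pipeline: enumerating attribute combinations and keeping only charts that score above a threshold. The criteria, weights, and threshold below are invented stand-ins, not the tool's actual ten-criteria metric:

```python
# Illustrative two-module pipeline: detect candidate attribute pairs,
# score each against weighted criteria, keep the "good" ones.
from itertools import combinations

CRITERIA = [            # invented criteria and weights
    ("low_overplotting", 2.0),
    ("clear_trend",      3.0),
    ("easy_to_decode",   2.0),
]

def detect(columns):
    """Module 1: candidate attribute pairs to visualize."""
    return list(combinations(columns, 2))

def rank(candidates, judge, threshold=4.0):
    """Module 2: score each candidate; keep those above the threshold."""
    scored = []
    for cand in candidates:
        score = sum(w for name, w in CRITERIA if judge(cand, name))
        if score >= threshold:
            scored.append((cand, score))
    return sorted(scored, key=lambda x: -x[1])

# A real judge would inspect the data; here a trivial stand-in:
judge = lambda cand, criterion: "price" in cand
print(rank(detect(["price", "date", "region"]), judge))
```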
Clubs, such as the Scouts, rely on the work of their volunteer members, who have a variety of tasks to accomplish. Often there are sudden changes in their organizational teams and offices, whereby planning steps are lost and inexperience in planning arises. Since these special requirements are not covered by existing tools, ScOuT, a planning tool for organization administration, is designed and developed in this work to support clubs with regard to the problems mentioned. The focus was on identifying and applying various suitable guidelines and heuristic methods to create a usable interface. The developed product was evaluated empirically for usability through a user survey.
The results of this study show that a high degree of the desired goal could already be reached by incorporating the guidelines and methods. From this it can be concluded that, with the help of user-specific concept ideas and the application of suitable guidelines and methods, a sound basis for a usable application to support clubs can be created.
The goal of simulations in computer graphics is to simulate realistic phenomena of materials. To this end, internal and external forces are accumulated in each time step. From these, new velocities are calculated that ultimately change the positions of the geometry or particles. Position Based Dynamics (PBD) omits this velocity layer and works directly on the positions. Constraints are a set of rules defining the simulated material; these rules must not be violated throughout the simulation. If a constraint is violated, the offending positions are changed so that the constraint is fulfilled once again. In this work, a PBD framework is implemented that allows simulations of solids and fluids. Constraints are solved using GPU implementations of Gauss-Seidel and Gauss-Jacobi solvers. The results are physically plausible simulations that are real-time capable.
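A minimal CPU sketch of the PBD projection step for a distance constraint, iterated Gauss-Seidel style over a small particle chain; the thesis uses GPU solvers, so this only illustrates the core idea:

```python
# Standard PBD distance-constraint projection, iterated Gauss-Seidel style.
import numpy as np

def project_distance(p1, p2, rest_length, w1=1.0, w2=1.0):
    """Move both positions so that |p1 - p2| == rest_length again."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return p1, p2
    correction = (dist - rest_length) * d / dist
    p1 = p1 + (w1 / (w1 + w2)) * correction      # w = inverse mass
    p2 = p2 - (w2 / (w1 + w2)) * correction
    return p1, p2

# One small chain of particles, solved over several iterations:
positions = [np.array([0.0, 0.0]), np.array([1.5, 0.0]), np.array([2.0, 0.0])]
constraints = [(0, 1, 1.0), (1, 2, 1.0)]         # (i, j, rest_length)
for _ in range(10):                              # solver iterations
    for i, j, rest in constraints:               # Gauss-Seidel: in-place updates
        positions[i], positions[j] = project_distance(positions[i], positions[j], rest)
```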
The erosion of the closed innovation paradigm in conjunction with increasing competitive pressure has boosted the interest of both researchers and organizations in open innovation. Despite such rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for such reservation are the pertinent complexity of transitioning toward open innovation and a lack of understanding of the procedures required for such endeavors. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving, and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. As a result of this structured depiction of the transition process, a guideline is derived. This guideline includes the commonly employed actions of successful practitioners of open innovation, which may serve as a baseline for interested parties of the paradigm. With the derivation of the guideline and concise depiction of the individual transitional phases, this thesis consequently reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
With the appearance of modern virtual reality (VR) headsets on the consumer market, VR technology has seen the biggest boom in its history. Naturally, this was accompanied by an increasing focus on the problems of current VR hardware; control in VR in particular has always been a complex topic.
One possible solution is the Leap Motion, a hand tracking device that was initially developed for desktop use, but with the last major software update it can be attached to standard VR headsets. This device allows very precise tracking of the user’s hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion to provide a natural method of interaction between the user and the VR environment. After that, subject tests are performed to evaluate their performance and compare them to traditional VR controllers.
Data flow models in the literature are often very fine-grained, which carries over to the data flow analyses performed on them and thus makes those analyses harder to understand. Since a data flow model that abstracts from the majority of implementation details of the modeled program allows for potentially easier-to-understand data flow analyses, this master's thesis deals with the specification and construction of a highly abstracted data flow model and the application of data flow analyses to this model. The model and the analyses performed on it were developed in a test-driven manner, so that a wide range of possible data flow scenarios could be covered. As a concrete data flow analysis, a static security check in the form of a detection of insufficient user input sanitization was performed. To date, there is no data flow model at a similarly high level of abstraction. The proposed solution is therefore unique and enables developers without expertise in data flow analysis to perform such analyses.
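To illustrate what such a check can look like on an abstract data flow model, here is a hedged sketch that propagates taint from input sources along data flow edges and flags sinks reached without passing a sanitizer node; the graph representation is an assumption, not the thesis' model:

```python
# Illustrative taint propagation over an abstract data flow graph:
# flag sinks reachable from input sources without passing a sanitizer.
from collections import deque

def find_unsanitized_flows(edges, sources, sanitizers, sinks):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    tainted, queue = set(sources), deque(sources)
    while queue:                       # BFS over the data flow graph
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt in sanitizers:      # taint stops at sanitizers
                continue
            if nxt not in tainted:
                tainted.add(nxt)
                queue.append(nxt)
    return tainted & set(sinks)        # sinks reached by unsanitized input

edges = [("user_input", "build_query"), ("build_query", "db_exec"),
         ("user_input", "escape"), ("escape", "db_exec2")]
print(find_unsanitized_flows(
    edges, sources={"user_input"}, sanitizers={"escape"},
    sinks={"db_exec", "db_exec2"}))
# {'db_exec'}  -> db_exec2 is only reached through the sanitizer
```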