Master's Thesis
One of the greatest goals in computer graphics is the aesthetic representation of objects. In addition to conventional methods, another field focuses on non-photorealistic rendering. So-called example-based rendering is an area in which users can transfer their art style to a pre-computed 3D rendering using a hand-painted template. Some algorithms already provide impressive results, but most of these procedures count as offline methods and are not able to produce results in real time. For this reason, this work presents a method that satisfies this condition. In addition, the influence of the run-time reduction on the results is investigated. Requirements are defined against which the method and its results are evaluated. Other methods in this field are referenced and compared with respect to their results.
Ontologies are valuable tools for knowledge representation and important building blocks of the Semantic Web. They are not static and can change over time. Changing an ontology can be necessary for various reasons: the domain represented by an ontology can change, or an ontology is reused and must be adapted to the new context. In addition, modeling errors may have been introduced into the ontology which must be found and removed. The non-triviality of the change process has led to the emergence of ontology change as a field of research in its own right. The removal of knowledge from ontologies is an important aspect of this change process, because even the addition of new knowledge to an ontology potentially requires the removal of older, conflicting knowledge. Such a removal must be performed in a thought-out way. A naïve change of concepts within the ontology can easily remove other, unrelated knowledge or alter the semantics of concepts in an unintended way [2]. For these reasons, this thesis introduces a formal operator for the fine-grained retraction of knowledge from EL concepts, which is partially based on the postulates for belief set contraction and belief base contraction [3, 4, 5] and on the work of Suchanek et al. [6]. To this end, a short introduction to ontologies and OWL 2 is given and the problem of ontology change is explained. It is then argued why a formal operator can support this process and why the Description Logic EL provides a good starting point for the development of such an operator. After this, a general introduction to Description Logic is given, including its history, an overview of its applications, and common reasoning tasks in this logic. Following this, the logic EL is defined. In a next step, related work is examined and it is shown why the recovery postulate and the relevance postulate cannot be naïvely employed in the development of an operator that removes knowledge from EL concepts.
Following this, the requirements for the operator are formulated and properties are given which are mainly based on the postulates for belief set and belief base contraction. Additional properties are developed which make up for the non-applicability of the recovery and relevance postulates. After this, a formal definition of the operator is given and it is shown that the operator is applicable to the task of fine-grained removal of knowledge from EL concepts. In a next step, it is proven that the operator fulfills all previously defined properties. It is then demonstrated how the operator can be combined with laconic justifications [7] to assist a human ontology editor by automatically removing unwanted consequences from an ontology. Building on this, a plugin for the ontology editor Protégé is introduced, based on algorithms derived from the formal definition of the operator. The content of this work is then summarized and a final conclusion is drawn. The thesis closes with an outlook on possible future work.
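The conjunct-level view of retraction can be illustrated with a toy sketch. This is an illustration of the general idea only, not the thesis's formal operator: the concept encoding, the function name, and the example concept are all hypothetical.

```python
# Toy illustration only (not the thesis's operator): an EL concept as a
# conjunction of atomic names and existential restrictions, where a
# fine-grained retraction drops a single conjunct -- possibly nested
# inside an existential restriction -- instead of deleting the whole
# axiom. Removing a conjunct can only make the concept more general.

# Concept: Person AND EXISTS hasChild.(Person AND Doctor)
concept = {"atoms": {"Person"},
           "exists": {"hasChild": {"atoms": {"Person", "Doctor"},
                                   "exists": {}}}}

def retract_atom(c, path, atom):
    """Remove `atom` from the conjunct reached by following the roles
    in `path`; returns a new (more general) concept, leaving the
    original untouched."""
    new = {"atoms": set(c["atoms"]), "exists": dict(c["exists"])}
    if not path:
        new["atoms"].discard(atom)
    else:
        role, rest = path[0], path[1:]
        new["exists"][role] = retract_atom(c["exists"][role], rest, atom)
    return new

# Retract Doctor inside hasChild: Person AND EXISTS hasChild.Person remains.
g = retract_atom(concept, ["hasChild"], "Doctor")
print(g["exists"]["hasChild"]["atoms"])  # {'Person'}
```

The point of the sketch is the granularity: only the targeted conjunct disappears, while the surrounding structure of the concept is preserved.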
This thesis deals with the development and evaluation of a concept for novel interaction with ubiquitous user interfaces. To evaluate this interaction concept, a prototype was implemented using an existing head-mounted display solution and an Android smartphone.
Furthermore, in the course of this thesis, a concrete use case for this prototype, the navigation through a city block with the aid of an electronic map, was developed and built as an executable application to help evaluate the quality of the interaction concept. In doing so, fundamental research results were achieved.
The erosion of the closed innovation paradigm in conjunction with increasing competitive pressure has boosted the interest of both researchers and organizations in open innovation. Despite such rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for such reservation are the pertinent complexity of transitioning toward open innovation and a lack of understanding of the procedures required for such endeavors. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving, and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. As a result of this structured depiction of the transition process, a guideline is derived. This guideline includes the commonly employed actions of successful practitioners of open innovation, which may serve as a baseline for interested parties of the paradigm. With the derivation of the guideline and concise depiction of the individual transitional phases, this thesis consequently reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
This thesis deals with Responsible Research and Innovation (RRI). The objectives of the work are to clarify the term RRI and to identify the current state of research in the field. RRI is a concept significantly shaped by the European Commission (EC). It is composed of responsibility, research, and innovation, which are first considered individually. According to the definition by Clausen (2009) used here, responsibility is made up of a subject ("who takes responsibility"), an object ("for what"), one or several addressees ("to whom"), and authorities ("and why"). In the conceptual chapter of this thesis, the role of organisations as subjects of responsibility is grounded in the fact that they work toward a certain purpose. More importantly, though, society, as an important addressee and authority of responsibility, increasingly calls for organisations to take responsibility for the consequences and wider impacts of their actions. The field of sustainability, with its pillars of economic, ecological, and social issues, is considered a key object of responsibility. Innovation, and research as an integral part of it, are understood as complex processes aiming at reaching corporate goals in novel ways. By means of a literature analysis according to Webster & Watson (2002), the current state of research on RRI is explored. Most of the literature used is identified as conceptual in nature. Despite the great uncertainty that still prevails, four dimensions of RRI can be identified: Deliberation, Anticipation, Reflexivity, and Responsiveness. Taking into account the results of the literature review, the dimensions offered by the EC are classified as important activities rather than conceptual dimensions. Research and innovation in alignment with RRI involve stakeholders, particularly society, in the process right from the beginning; potential impacts are examined, and motivations and underlying values are continually reflected upon.
The most important aspect is the adaptability of the process in response to the results from the other dimensions. All activities are based on responsibility and sustainability and are integrated through appropriate management and governance. Beyond conceptual work, little research has been done, especially with regard to the operationalisation and implementation of RRI. For the latter, embedding in a stage-gate process according to Cooper (2010) is suggested here. However, the actual design of the conceptual dimensions still needs to be specified. In addition, overall awareness of RRI needs to be sharpened and a commitment to RRI made.
We examine the systematic underrecognition of female scientists (the Matilda effect) by exploring the citation network of papers published in the American Physical Society (APS) journals. Our analysis shows that articles written by men (as first author, last author, and dominant gender of authors) receive more citations than similar articles written by women, after controlling for the journal, year, and content of the publication. The statistical significance of the overlap between the articles' lists of references was used as the measure of similarity between articles. In addition, we found that men are less likely to cite articles written by women, and women are less likely to cite articles written by men. Because the majority of authors who published in APS journals are male (85%), this pattern leads to articles written by men receiving more citations than similar articles written by women. We also observed that the Matilda effect diminishes for articles published in the journals with the highest impact factors; in other words, readers' evaluation of articles published in these journals is not significantly affected by the gender of the authors. Finally, we suggest a method that editors of academic journals can apply to reduce this evaluation bias to some extent: editors can use our proposed method to identify missing citations and complete bibliographies. This policy can reduce the bias because we observed that papers written by female scholars (as first author, last author, and dominant gender of authors) miss more citations than articles written by male scholars.
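The overlap-based similarity measure can be sketched as a hypergeometric tail test: how surprising is it that two reference lists share k entries if citations were drawn at random from a common pool? This is a simplified reading of such an overlap measure, not the study's exact procedure; the function name and pool size are illustrative.

```python
from math import comb

def overlap_pvalue(n, a, b, k):
    """n citable papers in the pool; the two articles cite a and b of
    them respectively, sharing k. Returns P(overlap >= k) under the
    null hypothesis of random citing (hypergeometric tail)."""
    return sum(comb(b, i) * comb(n - b, a - i)
               for i in range(k, min(a, b) + 1)) / comb(n, a)

# Two articles citing 20 papers each out of a pool of 1000: sharing
# five references is very unlikely by chance, so the articles are
# plausibly similar in content.
print(overlap_pvalue(1000, 20, 20, 5) < 0.01)
```

A small p-value marks a pair of articles as "similar", which is the precondition for comparing their citation counts in a controlled way.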
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially in medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts, and the acquisition of such data is a time-consuming, error-prone, and subjective process. Due to this effort, most SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods are proposed for automatically constructing a reliable SSM without the need for manual image interpretation by experts. The training data is thus assumed to be the result of an arbitrary segmentation algorithm, or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. To account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM; these probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more variations of shape examples. To assess the quality of the resulting models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data. The results show that WRPCA in particular is a powerful tool for handling corrupted parts and yields reasonable models of higher quality than the initial segmentations.
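The basic imputation idea can be sketched in a few lines. This is a deliberately simplified illustration (column-mean imputation over the reliable values of the other shapes), not the thesis's probability-weighted imputation or WRPCA; shape vectors and the corruption mask are illustrative.

```python
# Simplified sketch: corrupted landmark coordinates (marked None) are
# reconstructed from the reliable values of the other training shapes.

def impute_shapes(shapes):
    """shapes: list of equal-length landmark vectors; None marks a
    corrupted coordinate. Returns copies with each None replaced by
    the mean of the reliable values at that position."""
    n = len(shapes[0])
    imputed = [list(s) for s in shapes]
    for j in range(n):
        reliable = [s[j] for s in shapes if s[j] is not None]
        mean_j = sum(reliable) / len(reliable)
        for s in imputed:
            if s[j] is None:
                s[j] = mean_j
    return imputed

shapes = [
    [1.0, 2.0, 3.0],
    [1.2, None, 2.8],   # second coordinate corrupted
    [0.8, 2.2, None],   # third coordinate corrupted
]
result = impute_shapes(shapes)
print(result[1])  # corrupted entry replaced by the column mean
```

With many training shapes, the reliable majority dominates each coordinate, which is what makes statistical reconstruction of corrupted parts plausible.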
This work covers techniques for the interactive and physically-based rendering of hair for computer-generated imagery (CGI). To this end, techniques for simulating and approximating the interaction of light with hair are derived and presented. Furthermore, it is described how hair, despite such computationally expensive algorithms, can be rendered interactively. Techniques for computing the shadowing in hair as well as approaches for rendering hair as transparent geometry are also presented. A main focus of this work is the DBK-Buffer, which was conceived, implemented, and evaluated. Using the DBK-Buffer, it is possible to render thousands of hairs as transparent geometry without depending on either the newest GPU hardware generation or a great amount of video memory. Moreover, a comprehensive evaluation of all the techniques described was conducted with respect to visual quality, performance, and memory requirements. This revealed that hair can be rendered physically-based at interactive or even real-time frame rates.
The application of artificial intelligence to digital games has become more and more successful in recent years. A drawback is that such systems need a great deal of computing power to achieve good results; the more complex the game, the more computing power is needed. In this thesis, a strategy-learning system is implemented that is based on crowd-learned heuristics given in a wiki. The research follows the Design Science Research Methodology. The implemented system is applied to the game Dominion. To this end, an ontology for Dominion is designed, and a mapping language is defined and implemented in the system, which allows information in the wiki to be mapped to an ontology. Furthermore, metrics to rate the found strategies are defined. Using the system, users can enter a mapping for the information transfer and apply it. They can also select cards from Dominion, for which the system determines and rates strategies. Finally, the system is evaluated by Dominion players, who rate both the strategies found by the system and the defined metrics.
In this master's thesis, several helpful new features are added to the Spanning Tree Simulator. This simulator was created by Andreas Sebastian Janke in his bachelor's thesis [Jan10b] in 2010. It can visualize networks which are defined in a configuration file; each file is an XML representation of a network consisting of switches and hosts. After loading such a file into the program, it is possible to run the Spanning Tree Algorithm IEEE 802.1D. In contrast to the previous version, only the switches are implemented as threads. When the algorithm finishes, a spanning tree has been built, which means that messages can no longer run into loops. This is important because loops can cause a total breakdown of communication in a network if the running routing protocols cannot handle them.
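The loop-breaking effect of a spanning tree can be sketched with a simple breadth-first traversal. Note that this centralized sketch only illustrates the result, a loop-free subset of links; it is not the distributed, threaded IEEE 802.1D protocol the simulator implements.

```python
from collections import deque

def spanning_tree(adjacency, root):
    """adjacency: dict switch -> iterable of neighbouring switches.
    Returns the set of links kept in a loop-free spanning tree."""
    visited = {root}
    tree = set()
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in visited:
                visited.add(v)
                tree.add(frozenset((u, v)))  # keep this link
                queue.append(v)
    return tree

# Three switches wired in a triangle: one redundant link is dropped,
# so frames can no longer circulate forever.
net = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
tree = spanning_tree(net, "A")
print(len(tree))  # 2 links remain out of 3
```

In the real protocol the switches reach the same kind of loop-free topology by exchanging BPDUs rather than by a central traversal.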
In a software reengineering task, legacy systems are adapted, with computer aid, to new requirements. For this, an efficient representation of all data and information is needed. TGraphs are a suitable representation because all vertices and edges are typed and may have attributes. Furthermore, there exists a global sequence of all graph elements, and for each vertex there exists a sequence of all incidences. In this thesis, the Extractor Description Language (EDL) was developed. It can be used to generate an extractor from a syntax description that is extended by semantic actions; the generated extractor creates a TGraph representation of the input data. In contrast to classical parser generators, EDL supports ambiguous grammars, modularization, symbol table stacks, and island grammars. These features simplify the creation of the syntax description. The collected requirements for EDL are used to determine an existing parser generator suitable for realizing them. After that, the syntax and semantics of EDL are described and implemented using this parser generator. Subsequently, two extractors, one for XML and one for Java, are created with the help of EDL. Finally, the time they need to process some input data is measured.
The discussion of the minimum wage is a perennially topical one and, at the turn of the year 2011/2012, when this thesis was written, received particular attention from politics and business. The topicality of this subject and its dynamics become especially apparent in the fact that, upon examination of the German literature on the topic, many of its statements and theses no longer hold. The quotation cited at the outset from the incumbent Federal Minister of Labour and Social Affairs, Ursula von der Leyen, expresses that a consensus now exists in politics that full-time employees must be able to secure their livelihood from their income. For the Christian Democratic governing party, this represents a change of doctrine. While the CDU relied on collective bargaining coverage in recent decades and categorically rejected a minimum wage, it is now moving toward making wage floors for all sectors a goal of its government work. This is largely attributable to the fact that, in recent years, wage dispersion in the labour market, traditionally low in Germany, has developed in a strongly divergent way.
A further reason is the declining role of collective bargaining coverage in recent years. As a consequence of these developments, 1.2 million people, i.e. four percent of the employed, work for wages below five euros. A further 2.2 million people work for hourly wages below six euros, 3.7 million earn below seven euros, and 5.1 million work for wages below eight euros. The question of the extent to which a life in dignity is possible under these conditions occupies large parts of society, for the volume and wage level of the low-wage sector have thereby sunk to a level that is no longer easy to justify socially and politically. To avert this development, the economic policy instrument of the minimum wage is frequently brought into the discussion as a suitable means, and many states have employed the minimum wage in different ways in the past. The introduction of a comprehensive minimum wage in the Federal Republic is advocated above all with the following aims.
On the one hand, the minimum wage is intended to ensure that full-time employees earn an income that at least corresponds to their sociocultural subsistence minimum. On the other hand, its introduction is meant to curb the need to top up wages with unemployment benefit II (Arbeitslosengeld II) and thus relieve the public purse. Opponents of the minimum wage categorically reject the introduction of a comprehensive, generally binding minimum wage, above all on the basis of labour market theory. They take the view that the mechanisms of the labour market have a self-regulating effect and, complemented by collective bargaining autonomy, are sufficiently regulated. The threatened destruction of existing jobs and the failure to create new ones are further arguments raised against the introduction of a minimum wage.
In addition, depending on the school of thought and position, theory can predict opposite effects on the labour market. Against the background of the current debate, the present thesis investigates the effects of introducing a minimum wage. To arrive at an objective assessment of the problem at hand, computer-based agent-based simulation with NetLogo is used. By means of a fictitious market with fictitious actors ("agents"), an attempt is made to construct a model representation of the labour market. In particular, it is to be examined to what extent the introduction of a minimum wage, whether sector-specific or comprehensive, influences the level of employment and the level of wages.
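The core mechanism of such an agent-based labour market model can be sketched in a few lines of Python (the thesis itself uses NetLogo; the wage distribution, the matching rule, and all constants here are purely illustrative):

```python
import random

# Toy sketch: each job offer pays its productivity; a minimum wage
# blocks every match below the floor, so employment and the average
# paid wage can be compared with and without it.

def simulate(num_workers, productivities, min_wage, seed=0):
    rng = random.Random(seed)
    wages = []
    for _ in range(num_workers):
        offer = rng.choice(productivities)  # wage equals job productivity
        if offer >= min_wage:
            wages.append(offer)             # match succeeds
    employment = len(wages) / num_workers
    avg_wage = sum(wages) / len(wages) if wages else 0.0
    return employment, avg_wage

jobs = [4, 5, 6, 7, 8, 9, 10, 11, 12]       # euros per hour
no_floor = simulate(1000, jobs, min_wage=0)
with_floor = simulate(1000, jobs, min_wage=8.5)
print(no_floor, with_floor)
```

Even this toy model reproduces the trade-off at the heart of the debate: the wage floor raises the average paid wage while pricing low-productivity jobs out of the market.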
In current research on autonomous mobile robots, path planning is still a very important issue.
This master's thesis deals with various path planning algorithms for the navigation of such mobile systems. The task is not only to determine a collision-free trajectory from one point to another: the path should also be optimal and comply with all vehicle-specific constraints. Autonomous driving in an unknown and dynamic environment in particular poses a major challenge, because closed-loop control is necessary and thus a certain dynamism is demanded of the planner.
Two types of algorithms are presented. First, path planners based on the common graph search algorithm A*: A*, Anytime Repairing A*, Lifelong Planning A*, D* Lite, Field D*, and Hybrid A*. Second, algorithms based on the probabilistic planning algorithm Rapidly-exploring Random Tree (RRT, RRT*, Lifelong Planning RRT*), as well as some extensions and heuristics. In addition, methods for collision avoidance and path smoothing are presented. Finally, these algorithms are evaluated and compared with each other.
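As a reference point for the graph-search family, a minimal A* on a 4-connected grid might look as follows (an illustrative sketch with a Manhattan-distance heuristic, not code from the thesis):

```python
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' marks an obstacle. Returns the
    length of a shortest collision-free path, or None."""
    def h(p):  # admissible Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]       # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        if g > best.get((r, c), float("inf")):
            continue                        # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#'
                    and g + 1 < best.get((nr, nc), float("inf"))):
                best[(nr, nc)] = g + 1
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # length of the shortest path
```

The variants listed above (Anytime Repairing A*, D* Lite, etc.) keep this basic best-first expansion but add inflated heuristics or incremental replanning on top of it.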
Blockchain in Healthcare
(2020)
The underlying characteristics of blockchain can facilitate data provenance, data integrity, data security, and data management, and it has the potential to transform the healthcare sector. Since the introduction of Bitcoin in the fintech industry, blockchain technology has been gaining a lot of traction, and its purpose is not limited to finance. This thesis highlights the inner workings of blockchain technology and its application areas, together with possible existing solutions. Blockchain could pave the way for a new revolution in conventional healthcare systems. We present how individual sectors within the healthcare industry could use blockchain and which solutions exist. We also present our own concept, based on the Hyperledger framework, for improving the existing paper-based prescription management system. The results of this work suggest that healthcare can benefit from blockchain technology through the new ways in which patients can be treated.
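The integrity property that motivates such designs can be illustrated with a toy hash chain. This is a minimal sketch of the general principle only, not the Hyperledger-based prescription concept of the thesis; the record fields are made up.

```python
import hashlib, json

# Each block stores the hash of its predecessor, so altering an
# earlier prescription record invalidates every later link.

def make_block(prev_hash, record):
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    prev = "0" * 64                      # genesis predecessor
    for block in chain:
        body = json.dumps({"prev": block["prev"], "record": block["record"]},
                          sort_keys=True)
        if (block["prev"] != prev
                or block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = block["hash"]
    return True

chain = [make_block("0" * 64, {"rx": "amoxicillin", "patient": "p1"})]
chain.append(make_block(chain[-1]["hash"], {"rx": "ibuprofen", "patient": "p2"}))
print(valid(chain))                      # the untampered chain verifies
chain[0]["record"]["rx"] = "oxycodone"   # tamper with history
print(valid(chain))                      # tampering is detected
```

Production frameworks such as Hyperledger add consensus, permissioning, and smart contracts on top, but the tamper-evidence rests on this same chaining of hashes.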
In this work, a framework is developed and used to create an evaluation scheme for text processing tools. The evaluation scheme follows a model-dependent software evaluation approach, and the focus of the model-dependent part is the text-processing process, which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data, a German court document is used that contains two incidents of extortion racketeering from 2011 and 2012. The evaluation of six different tools shows that one tool offers excellent results for the given dataset when compared to manual results: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also offer good results with minor drawbacks. The biggest drawback for some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be enhanced by self-written code, which allows users with development experience to apply additional methods.
Today, augmented reality is becoming more and more important in several areas, such as industrial sectors, medicine, and tourism. This gain in importance can easily be explained by its powerful extension of real-world content: augmented reality has become a way to explain and enhance real-world information. Yet, to create a system which can enhance a scene with additional information, the relation between the system and the real world must be known. A commonly used method to establish this relationship is optical tracking, in which the system calculates its relation to the real world from camera images. To do so, a known reference is needed in the scene to serve as an orientation. Today, this is mostly a 2D marker or a 2D texture placed in the real-world scenery, which constitutes an intrusion into the scene. It is therefore desirable that the system works without such additional aids. A strategy that does not manipulate the scene is object tracking, in which any object from the scene can be used as a reference for the system. As an object is far more complex than a marker, it is harder for the system to establish its relationship with the real world; that is why most methods for 3D object tracking simplify the problem by not using the whole object as a reference. The focus of this thesis is to research how a whole object can be used as a reference such that either the system or the camera can be moved through any 360-degree angle around the object without losing the relation to the real world. The augmented reality framework VisionLib is used as a basis. Extensions to this system for 360-degree tracking are implemented in different ways, analyzed in the scope of this work, and compared with one another. The best results were achieved by improving the reinitialization process: with this extension, current camera images of the scene are given to the system, and with their help the system can recover the relation to the real world faster in case it goes missing.
This master's thesis provides a comprehensive overview of the variety of security models by describing, classifying, and comparing selected security models. Security models describe, in an abstract way, the security-relevant components and relationships of a system; with security models, complex matters can be illustrated and analyzed.
Since security models address different security aspects, this thesis develops a classification scheme that describes the structural and conceptual characteristics of the models with respect to the underlying security aspects. Within the classification scheme, three fundamental model classes are formed: access control models, information flow models, and transaction models.
The security models are compared both directly and indirectly. In the latter case, they are assigned to one or more model classes of the classification scheme. This classification makes it possible to draw conclusions about the security aspects considered and about the structural and conceptual characteristics of a security model in relation to the other security models. In the direct comparison, the properties and aspects of the security models are examined, based on selected criteria, orthogonally to the model classes.
This master's thesis addresses the evaluation of a voice-controlled solution with multimodal input in a production setting. The usability and user-friendliness of a selected spoken dialog system were assessed. The assessment was carried out by means of user tests and a modified SASSI questionnaire. In addition, technical criteria such as the word error rate and the out-of-grammar rate were taken into account. For the experiment, two different scenarios from a real production environment were defined, in which the participants had to complete various tasks with the help of the test system. Interaction with the spoken dialog system took place via spoken commands, which were defined by a grammar. The voice commands were transmitted to the speech system via a WLAN headset. During the experiment, the participants' statements were logged and the technical criteria recorded.
The result of the evaluation shows that the spoken dialog system exhibits high quality with regard to usability and user-friendliness. The word error rate and the out-of-grammar rate turned out to be very low, and the system was rated distinctly positively by the users. Nevertheless, some points of criticism were raised that can contribute to improving the system.
Opinion Mining
(2010)
Opinion mining is a method with which opinions can be filtered from the WWW and processed intelligently in order to produce sentiment analyses for individual products and services. Opinion mining thus offers companies an innovative way to make their marketing communication more plannable and more effective, and to achieve a markedly higher degree of involvement and relevance within specific target groups. First, the theoretical foundations of Web 2.0 and relevant business intelligence methods are explained. The empirical study forms the focus of the thesis: the development of a process model is followed by the practical application of this model to a product. The results are evaluated and discussed in a review scan with regard to research and practice.
In addition to rendering, the computing power of modern graphics cards is increasingly being used for general-purpose computations (GPGPU). Various options are available for this, ranging from the use of the rendering pipeline to dedicated interfaces. In this thesis, four different GPGPU methods are examined: render-to-texture, transform feedback, compute shaders, and OpenCL. Using particle systems, they are compared with respect to required computation time, GPU utilization, lines of code, and portability. To this end, the N-body problem, Smoothed Particle Hydrodynamics, and a particle swarm were implemented as particle systems. It was shown that OpenCL and compute shaders in particular deliver very good results.
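As a point of reference for the N-body benchmark, the per-particle update that the GPGPU variants parallelize can be sketched on the CPU (plain Python with illustrative constants; this is not one of the four GPU implementations compared in the thesis):

```python
import math

# Direct O(n^2) 2D gravity: every particle accumulates the pull of all
# others, then positions and velocities are integrated. On the GPU,
# the outer loop over i runs as one thread/invocation per particle.

def nbody_step(positions, velocities, masses, dt=0.01, g=1.0, eps=1e-3):
    n = len(positions)
    accels = []
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.sqrt(dx * dx + dy * dy + eps)  # eps softens r -> 0
            f = g * masses[j] / (r * r * r)
            ax += f * dx
            ay += f * dy
        accels.append((ax, ay))
    for i in range(n):
        vx = velocities[i][0] + accels[i][0] * dt
        vy = velocities[i][1] + accels[i][1] * dt
        velocities[i] = (vx, vy)
        positions[i] = (positions[i][0] + vx * dt, positions[i][1] + vy * dt)
    return positions, velocities

pos = [(0.0, 0.0), (1.0, 0.0)]
vel = [(0.0, 0.0), (0.0, 0.0)]
pos, vel = nbody_step(pos, vel, masses=[1.0, 1.0])
print(pos)  # the two bodies have moved toward each other
```

Because each particle's acceleration depends only on the previous positions, the inner accumulation is embarrassingly parallel, which is why all four GPGPU methods map onto it naturally.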
Pedestrian detection in digital images is a task of huge importance for the development of automatic systems and for improving the interaction of computer systems with their environment. The challenges such a system has to overcome are the high variance of the pedestrians to be recognized and the unstructured environment. For this thesis, a complete system for pedestrian detection was implemented according to a state-of-the-art technique. A novel insight about precomputing the Color Self-Similarity accelerates the computations by a factor of four. The complete detection system is described and evaluated, and was published under an open source license.
Data visualization is an effective way to explore data. It helps people gain valuable insight by placing data in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial task. Users have to manually explore all possible visualizations and decide upon those that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and an appealing appearance. To address these challenges we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a data set, creates a list of combinations of data attributes for scoring, and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated against these criteria. Only visualizations that receive a sufficient score are presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results are presented.
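The two-module pipeline can be sketched as follows; this is a hedged toy version in which the candidate charts, criteria and threshold are invented examples, not the tool's actual ten-criteria metric.

```python
# Illustrative two-module pipeline: enumerate candidate charts, score them
# against a list of boolean criteria, keep those above a threshold.
from itertools import combinations

def candidate_charts(attributes):
    """Module 1: all attribute pairs that could form a two-variable chart."""
    return list(combinations(attributes, 2))

def score_chart(chart, criteria):
    """Module 2: sum of boolean criteria, each worth one point."""
    return sum(1 for passes in criteria(chart) if passes)

def good_charts(attributes, criteria, threshold):
    return [c for c in candidate_charts(attributes)
            if score_chart(c, criteria) >= threshold]
```

With a toy criteria function such as `lambda chart: ["price" in chart, "date" in chart]` and threshold 2, only the ("price", "date") chart survives the ranking.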
This thesis examines the role of trust between funders and fundraisers on one of the largest crowdfunding platforms, Kickstarter. Its core questions are how trust between funders and fundraisers arises in the context of crowdfunding, which factors influence the degree of trust in this relationship, and which signals are responsible for building trust. The aim is to validate and, where necessary, extend the conceptual model of trust-influencing factors in crowdfunding from the funders' perspective presented by Zerwas, Kilian and von Kortzfleisch in 2015.
Based on a literature review and qualitative, guideline-oriented interviews, the collected data are discussed, existing factors are verified, and new factors and signals that influence the formation of trust on crowdfunding platforms are identified.
As a result, a revision and extension of the original model by the new factors geographical proximity, intermediary quality and understanding of the crowdfunding concept is proposed. Based on the frequency of mention and the relevance of the factors and influencing signals in the interviews, the individual factors are furthermore weighted. Finally, implications for science and research are discussed.
One of the fundamental decisions during the development of any system is the representation of data. In computer graphics, objects are usually represented as sets of triangles; there are, however, many different variants with their own strengths and weaknesses. This thesis explores distance fields as a representation for objects. A distance field is a function which maps every point in space to the distance to the closest surface point. While this description is very simple, a number of interesting properties can be derived from it, allowing for a multitude of shapes, operations and effects. An overview of the necessary background and methods is given. Furthermore, some extended or new approaches are presented, such as displaying implicit surfaces, approximating indirect illumination and implementing a GPU tracer.
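The definition above translates almost directly into code. Below is a minimal sketch of a signed distance function (negative inside the surface) and the simple min/max operations under which shapes combine; the sphere primitive is a standard example, not necessarily one used in the thesis.

```python
import math

# A signed distance function (SDF) maps a point to its distance to the
# closest surface point; the sign distinguishes inside from outside.
def sd_sphere(p, center, radius):
    return math.dist(p, center) - radius

def sd_union(d1, d2):         # surface of either shape
    return min(d1, d2)

def sd_intersection(d1, d2):  # surface of the overlap
    return max(d1, d2)
```

These closed-form combination operators are one of the "interesting properties" of the representation: constructive solid geometry falls out of two one-line functions.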
In augmented reality, it is crucial that virtual objects are embedded into the camera image as realistically as possible; only then can the user be offered an immersive experience. Among other things, this requires handling the occlusion of these objects correctly. While various approaches to this occlusion problem already exist, this thesis presents a solution based on natural image matting. Using a depth camera, the camera image is divided into foreground and background, and the virtual object is then placed in the image. For regions in which the assignment to foreground or background is ambiguous, a transparency value is estimated from known pixels. Methods are presented that make real-time image matting possible. In addition, improvements to these methods are presented, and it is shown that they achieve higher image quality for difficult scenes.
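The transparency estimate for an ambiguous pixel can be sketched with the classic compositing model C = αF + (1-α)B: given a known foreground colour F and background colour B, the observed colour C is projected onto the F-B line. This is a generic matting building block, assumed here for illustration rather than taken from the thesis.

```python
# Estimate alpha for an uncertain pixel from known foreground (f) and
# background (b) colours via the compositing model c = alpha*f + (1-alpha)*b.
def estimate_alpha(c, f, b):
    num = sum((ci - bi) * (fi - bi) for ci, fi, bi in zip(c, f, b))
    den = sum((fi - bi) ** 2 for fi, bi in zip(f, b)) or 1e-8
    alpha = num / den
    return min(1.0, max(0.0, alpha))  # clamp to the valid [0, 1] range
```

A pixel whose colour lies halfway between foreground and background receives α = 0.5, i.e. it is composited half-transparently over the virtual object.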
Within the design thinking process, various kinds of creativity techniques are used. Owing to increasing globalisation, collaborations in which the project participants work at distributed locations have become ever more common, which makes digitising the design process worthwhile. The aim of this study is therefore to create an evaluation scheme that measures the suitability of digital creativity techniques for entrepreneurial design thinking. Furthermore, it examines to what extent e-learning systems can usefully be combined with digital creativity techniques, made concrete using the e-learning software "WebCT". This leads to the following research question: which digital creativity techniques are suitable for use in entrepreneurial design thinking on the e-learning platform "WebCT"? First, a literature review is conducted on entrepreneurial design thinking, classical and digital creativity techniques, and group work, including content management, e-learning systems and the "WebCT" platform. A qualitative study follows. Based on the existing literature, an evaluation scheme is created that measures which of the digital creativity techniques considered are best suited for use in entrepreneurial design thinking. Building on this, the digitised design thinking process is linked to the e-learning platform "WebCT". Finally, it is discussed to what extent this combination can be considered useful.
Large amounts of qualitative data make the use of computer-assisted methods for their analysis inevitable. This thesis introduces Text Mining as an interdisciplinary approach, as well as the methods established in the empirical social sciences for analyzing written utterances. On this basis, a process of extracting concept networks from texts is outlined and the possibilities of utilizing natural language processing methods within it are highlighted. The core of this process is text processing, whose execution requires software solutions that support both manual and automated work. The requirements to be met by these solutions, against the background of the initiating project GLODERS, which is devoted to investigating extortion racket systems as part of the global financial system, are presented, and their fulfilment by the two most preeminent candidates is reviewed. The gap between theory and practical application is closed by a prototypical application of the method to a data set of the research project utilizing the two given software solutions.
Influence of an adjustment tool on usability in unsupervised eye tracking systems
(2015)
Eye gaze trackers are devices that estimate a person's direction of gaze. Besides usability testing, eye tracking also allows persons with decreased limb mobility to control and interact with a computer. The quality and availability of eye tracking equipment have been increasing while costs have been decreasing. This development opens new markets, with eye tracking serving as an additional input dimension for a variety of applications. Up to now, eye tracking has been supervised by qualified experts who ensured that the important usage conditions, such as the position in front of the eye tracking device, the calibration and the light conditions, were maintained.
This thesis examines an adjustment tool that helps the user take up the correct position in front of the eye tracker and keep it during the experiment. Furthermore, the accuracy under head movement is analysed. In the experiment, a remote eye gaze tracker was used to control a game character in the video game 'Schau Genau!'. The goal was to determine whether the game is playable without the barrier of adjustment and calibration. The results show that taking up a position in front of an eye tracker is not a problem, but keeping it is: small changes of the head position after the calibration process lead to a loss of accuracy. Skipping the calibration and reusing another person's calibration shows far larger deviations, and additional head movement increases the error rate and makes controlling more difficult.
The purpose of this research is to examine various existing cloud-based Internet of Things (IoT) development platforms and to evaluate one platform (IBM Watson IoT) in detail using a use case scenario. The Internet of Things is an emerging technology with the vision of interconnecting the virtual world (e.g. clouds, social networks) and the physical world (e.g. devices, cars, fridges, people, animals) through Internet technology. For example, the IoT concept of smart cities, which aims to improve the efficiency and development of business, social and cultural services in the city, can be realised by using sensors, actuators, clouds and mobile devices (IEEE, 2015). A sensor (e.g. a temperature sensor) in a building (physical world) can send real-time data to the IoT cloud platform (virtual world), where it can be monitored, stored, analysed, or used to trigger some action (e.g. turning on the building's cooling system if the temperature exceeds a threshold). Although the IoT creates vast opportunities in different areas (e.g. transportation, healthcare, the manufacturing industry), it also brings challenges such as standardisation, interoperability, scalability, security and privacy. In this research report, IoT concepts and related key issues are discussed.
The focus of this research is to compare various cloud-based IoT platforms in order to understand the business and technical features they offer. The cloud-based IoT platforms from IBM, Google, Microsoft, PTC and Amazon have been studied.
To design the research, the Design Science Research (DSR) methodology has been followed, and to model the real-time IoT system the IOT-A modelling approach has been used.
The comparison of the different cloud-based IoT development platforms shows that all of the studied platforms provide basic IoT functionality such as connecting IoT devices to the cloud-based platform, collecting data from the devices, data storage and data analytics. However, IBM's IoT platform appears to have an edge over the other platforms studied in this research because of its integrated run-time environment, which also makes it more developer-friendly. Therefore, IBM Watson IoT for Bluemix was selected for a further examination of its capabilities. Its offerings include analytics, risk management, connect and information management. A use case was implemented to assess the capabilities the IBM Watson IoT platform offers, and digital artifacts (i.e. applications) were produced to evaluate IBM's IoT solution. The results show that IBM offers a very scalable, developer- and deployment-friendly IoT platform. Its cognitive, contextual and predictive analytics provide promising functionality that can be used to gain insights from the IoT data transmitted by sensors and other IoT devices.
The strategic placement of branches in the banking sector is important for improving client access, expanding market reach, and overall business success. This thesis uses Geographic Information Systems (GIS) and machine learning clustering algorithms, including K-Means, DBSCAN and Hierarchical Clustering, to predict optimal locations for new bank branches. First, it analyzes the points of interest (POIs) around existing bank branches at different locations across five German cities: Koblenz, Dresden, Dortmund, Rostock and Hannover. By analyzing the types of POIs around these branches, it identifies factors contributing to increased customer footfall and possible branch success. The geospatial data used in this thesis are extracted via the OpenStreetMap API. A scoring mechanism assigning scores from 0 to 10 to different POIs is then developed. This scored POI list is integrated with the clustering algorithms to predict new branch locations, enhancing strategic planning in the banking sector. The approach extends well beyond the banking industry and can be applied in a wider range of fields, such as location-based services and spatial decision support systems.
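The clustering step can be illustrated with a minimal pure-Python K-Means: POI coordinates are grouped and each cluster centroid becomes a candidate branch location. This is a hedged reimplementation for illustration only; the thesis presumably uses library implementations with real OSM coordinates.

```python
import random

# Minimal K-Means over 2D points; centroids approximate candidate locations.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            j = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                            + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):  # move centroids to cluster means
            if cl:
                centroids[j] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids
```

For two tight groups of POIs, the two returned centroids settle near the group centers, which is exactly the "predicted location" reading used above.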
We present the conceptual and technological foundations of a distributed natural language interface employing a graph-based parsing approach. The parsing model developed in this thesis generates a semantic representation of a natural language query in a three-stage, transition-based process using probabilistic patterns. The semantic representation of a query is modeled as a graph, which represents entities as nodes connected by edges representing relations between entities. The presented system architecture provides the concept of a natural language interface that is independent of both the vocabularies included for parsing the syntax and semantics of the input query and the knowledge sources consulted for retrieving search results. This is achieved by modularizing the system's components and addressing external data sources through flexible modules which can be modified at runtime. We evaluate the system's performance by testing the accuracy of the syntactic parser, the precision of the retrieved search results, and the speed of the prototype.
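The graph representation described above (entities as nodes, relations as labelled edges) can be sketched as a tiny data structure; the entity and relation names below are invented examples, not the thesis vocabulary.

```python
# Minimal query graph: nodes are entities, edges are (subject, relation,
# object) triples, mirroring the semantic representation described above.
class QueryGraph:
    def __init__(self):
        self.nodes = set()
        self.edges = []  # (subject, relation, object) triples

    def add_relation(self, subj, rel, obj):
        self.nodes.update((subj, obj))
        self.edges.append((subj, rel, obj))

    def relations_of(self, node):
        """All outgoing (relation, object) pairs of an entity node."""
        return [(r, o) for s, r, o in self.edges if s == node]
```

A transition-based parser would build such a graph incrementally, adding one node or edge per transition.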
Human detection is a key element of human-robot interaction. More and more robots are used in human environments and are expected to react to the behavior of people. Before a robot can interact with a person, it must be able to detect that person. This thesis presents a system for detecting humans and their hands using an RGB-D camera. To recognize a person, model-based hypotheses for possible positions of humans are created first. From the visible upper parts of the body, new features based on the relief and width of a person's head and shoulders are extracted. The hypotheses are verified by classifying the features with a support vector machine (SVM). The system is able to detect people in different poses: both sitting and standing humans are found using the visible upper body. Moreover, the system is able to recognize whether a human is facing or averting the sensor. If the human is facing the sensor, the color information and the distance between hand and body are used to detect the positions of the person's hands. This information is useful for gesture recognition and can thus further enhance human-robot interaction.
The goal of simulations in computer graphics is the realistic reproduction of material phenomena. To this end, internal and external forces are accumulated in each time step, new velocities are calculated from them, and these ultimately change the positions of geometry or particles. Position Based Dynamics (PBD) omits the velocity layer and works directly on the positions. Constraints are a set of rules defining the simulated material; these rules must not be violated during the simulation. If a violation occurs, the offending positions are changed so that the constraints are fulfilled once again. In this work a PBD framework is implemented that allows the simulation of solids and fluids. Constraints are solved using GPU implementations of Gauss-Seidel and Gauss-Jacobi solvers. The results are physically plausible, real-time-capable simulations.
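The core PBD idea of moving positions directly can be sketched with the standard distance constraint between two particles; this is a generic textbook projection (Gauss-Seidel style), assumed here for illustration rather than taken from the thesis implementation.

```python
import math

# Project a distance constraint |p1 - p2| = rest_len by moving the positions
# directly, weighted by inverse masses w1 and w2; velocities are untouched.
def project_distance(p1, p2, rest_len, w1=1.0, w2=1.0):
    d = math.dist(p1, p2)
    if d == 0.0 or w1 + w2 == 0.0:
        return p1, p2  # degenerate or both particles fixed
    corr = (d - rest_len) / (d * (w1 + w2))  # violation per unit weight
    delta = [(b - a) * corr for a, b in zip(p1, p2)]
    p1 = [a + w1 * dx for a, dx in zip(p1, delta)]
    p2 = [b - w2 * dx for b, dx in zip(p2, delta)]
    return p1, p2
```

A Gauss-Seidel solver iterates such projections constraint by constraint, reusing the updated positions immediately; a Gauss-Jacobi solver computes all corrections from the old positions and applies them together, which is what makes it easy to parallelise on the GPU.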
An augmented reality approach for exploring modular OSGi software systems is presented. The prototype is implemented using the Microsoft HoloLens. Modules such as components and packages are displayed as a virtual city. This approach enables the user to explore the software architecture through intuitive navigation: voice input, gaze and gesture control. A multifunctional user interface is presented that can be adapted to different target groups; many published visualizations lack a clear definition of their target group. The concept can easily be transferred to other representations, such as the island metaphor. First results of an evaluation, obtained through short structured interviews, are presented: the participants had to solve four program comprehension tasks and estimate their effort and workload. The results form a good basis for further research on software visualization in augmented reality.
Coordination and awareness mechanisms are important in systems for Computer-Supported Cooperative Work (CSCW) and traditional groupware. They have been a key focus of research into collaborative groupware and its capability to enable people to collaborate and coordinate work efficiently. Until now, no classification of these mechanisms has been undertaken to identify their commonalities and differences and to show their significance in collaborative environments. In addition, there is little investigation of coordination and awareness mechanisms in new forms of groupware such as socially enabled Enterprise Collaboration Systems (ECS). Indeed, both in science and in practice, ECS incorporating social software have become increasingly important. Combining traditional groupware and social software, ECS also include coordination and awareness mechanisms that may simplify collaboration, but these have not yet been investigated.
Therefore, the aim of this thesis is to identify coordination and awareness mechanisms in the academic literature, to provide a general overview of examples of those mechanisms, and to classify them. Based on a deep literature analysis, concepts described in the literature are chosen and applied with the intention of analysing the mechanisms and reaching a classification. Based on this classification, the commonalities and differences of the identified mechanisms are examined and described to gain a better understanding of them. For illustration, examples of coordination and awareness mechanisms and their application are portrayed; these examples refer to the derived classification groups, and their selection for visualization is based on significant differences in functionality. Subsequently, the selected mechanisms, which stem mainly from traditional groupware, are checked, to a limited extent, as to whether they can be found in socially enabled ECS. The collaborative platform IBM Connections serves as a practical example of ECS incorporating social software; it is used at the University of Koblenz to run the platform "UniConnect". On this platform it is investigated which of the mechanism examples identified in the literature are applied in IBM Connections and which additional mechanisms are created by users. This work is a first step in the study of coordination and awareness mechanisms in socially enabled ECS, and new mechanisms are expected to be detected, since the social factor in collaborative work is new.
The purpose of this thesis is thus to collect and analyse examples of coordination and awareness mechanisms from the literature, to provide a first overview of the mechanisms, and to classify them by investigating their commonalities. Beyond this, the thesis should give an incentive for further investigation of coordination and awareness mechanisms in socially integrated ECS.
The mitral valve is one of the four valves in the human heart. It is located in the left heart chamber and its function is to control the blood flow from the left atrium to the left ventricle. Pathologies can lead to malfunctions of the valve, so that blood can flow back into the atrium. Patients with a faulty mitral valve function may suffer from fatigue and chest pain. The functionality can be surgically restored, which is often a long and exhausting intervention. Thorough planning is necessary to ensure a safe and effective surgery. This can be supported by creating pre-operative segmentations of the mitral valve; a post-operative analysis can determine the success of an intervention. This work combines existing and new ideas into a new approach for (semi-)automatically creating such valve models. The manual part guarantees a high-quality model and reliability, whereas the automatic part contributes to saving valuable labour time.
The main contributions of the automatic algorithm are an estimated semantic separation of the two mitral valve leaflets and an optimization process capable of finding the coaptation line and area between them. The segmentation method can perform a fully automatic segmentation of the mitral leaflets if the annulus ring is already given. The intermediate steps of this process are integrated into a manual segmentation method so that a user can guide the whole procedure. The quality of the valve models generated by the proposed method is measured by comparing them to completely manually segmented models. This shows that commonly used measures of segmentation quality are too general and do not suffice to reflect the real quality of a model. Consequently, the work at hand introduces a set of measurements that qualify a mitral valve segmentation in more detail and with respect to anatomical landmarks. Besides the intra-operative support for a surgeon, a segmented mitral valve provides additional benefits: the ability to obtain and objectively describe the valve anatomy patient-specifically may form the basis for future medical research in this field, and automation allows large data sets to be processed with reduced dependence on experts. Furthermore, simulation methods that use the segmented models as input may predict the outcome of a surgery.
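To see why a single global overlap score is "too general", consider the Dice coefficient, one of the commonly used segmentation quality measures: it reduces the comparison of two masks to one number and cannot localise errors at anatomical landmarks. A minimal sketch:

```python
# Dice similarity of two equally sized binary masks (given as flat lists).
# A global overlap score like this cannot say *where* a segmentation fails.
def dice(mask_a, mask_b):
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

Two models with identical Dice values can differ badly along the coaptation line, which motivates the landmark-specific measurements introduced in the thesis.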
Object recognition is a well-investigated area of image-based computer vision, and several methods have been developed. Approaches based on Implicit Shape Models, which separate objects into fundamental visual parts and the spatial relationships between those parts, have recently become popular for recognizing objects in 2D images; this knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like the Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing Implicit Shape Model approaches to the task of 3D object recognition.
Stylized image triangulation
(2019)
Stylized image triangulation is a popular tool of image processing; results can be found on magazine covers or are sold as pieces of art. Common use cases are filters in mobile apps or programs dedicated to automated triangulation. This thesis builds on a paper that achieves new results in visual and technical quality by formulating adaptive dynamic triangulation as an optimization problem. One aim of this thesis is to make this approach accessible to as many users as possible. To reach users, a mobile app called "Mesh" is designed and implemented. A client-host system is presented that relieves the app of the resource-intensive computation of the result; transferring the approach to a CPU-based solution is therefore part of the thesis. In addition, a web server is implemented that handles the communication between app and algorithm. "Mesh" enables the user to send an arbitrary image to the server and download the result.
Part of the research deals with optimizing the method. As the main step, the gradient descent, which minimizes an approximation error, is examined with three different approaches that redefine the movement of a point: limiting the directions of movement in a meaningful manner, diagonal directions, and dynamic repositioning of points. The results show no improvement of visual quality when using diagonal instead of horizontal and vertical steps. Limiting the step opportunities by disallowing a point to return to its last position results in a loss of visual quality, but reaches a given global error earlier. The dynamic repositioning rests upon a vector-based solution that weights the directions and applies a factor to each of them; this results in a longer computation time but also in higher visual quality.
Inspired by the work of Josh Bryan, another part of the research aims at imitating an artist's style. Using pseudo-random events combined with a geometry shader, a more natural look is achieved; this method illustrates a way of adding minor details to a rendering. To imitate an artist's work, a more complex and more precise triangulation is needed. As a last aspect, a render style is presented. The style moves the triangles of a triangulation apart from a chosen center; the arbitrary placement of this center makes the render style usable in animations.
In this master's thesis the principle of hybrid ray tracing, a rasterization pipeline that includes ray tracing techniques for certain effects, is explained. The implementation of an application is documented in which ray tracing is used to calculate shadows, ambient occlusion and reflections, which are then combined with direct lighting. Hybrid ray tracing is based on the idea of combining the performance and flexibility of rasterization-based approaches with ray tracing, to overcome the limitation of not being able to access the complete surrounding geometry at any point in the scene.
While describing the implementation of said application, the RTX API used for ray tracing is explained, as well as Vulkan, the graphics API used.
Based on the results and the insights gained while using the RTX API, it is assessed with regard to its usage scenarios and technical maturity.
Heat exchangers are used for thickening various products or for the desalination of salt water; they also serve as cooling units in industry. In these applications, the stainless steel heat-transferring elements come into contact with media containing microorganisms, such as river water or salt water, and corrode. After at least two years of use the material is covered with a bacterial slime called biofilm. This process, called biofouling, causes a loss in efficiency and creates considerable costs that depend on cleaning technique and efficiency. Cleaning a heat exchanger is very expensive and time consuming, and can only be done while the device is out of service.
Changing the surface properties of materials is the best and easiest way to lengthen the initial phase of biofilm formation, which leads to less biofouling (Mogha et al. 2014).
Thin polymer films as novel materials are cheaper to produce than stainless steel and easy to handle. Furthermore, they can be functionalized easily and can be bought in different sizes all over the world. They can therefore reduce the cost of cleaning and keep the heat exchanger in a high-efficiency state for longer: if the efficiency of the heat exchanger decreases, the thin polymer films can simply be replaced.
For a successful investigation of the microbial and process engineering challenges, a cooperation between the Technical University of Kaiserslautern (Chair of Separation Science and Technology) and the University of Koblenz-Landau (Working Group Microbiology) was established.
The aim of this work was the design engineering and construction of a reactor for investigating the biofouling that takes place on thin polymer films and stainless steel, and the establishment of an experimental design. Several requirements had to be met: a real heat exchanger was scaled down so that the process parameters remain at least comparable. Although many commercial flow cell kits are available, reducing costs by self-assembly increased the number of samples and thus provided a basis for statistical analysis. In addition, fast and minimally invasive online in-situ microscopy and Raman spectroscopy can be performed. By creating laminar flow and using a weir we achieved a homogeneous inflow to the reactors, and reproducible data on biomass and cell number were generated.
The assessment of biomass and cell number is well established in drinking water analysis; epifluorescence microscopy and gravimetric determination are the basic techniques of this work, too. Differences in cell number and biomass between surface modifications and materials are quantified and statistically analysed.
The wild-type strain Escherichia coli K12 and an inoculum of 500 ml fresh water were used to describe the biofouling of the films. We thereby generated data both with a natural bacterial community in media of unknown properties and with well-known media properties, ensuring the technical relevance of the data.
Free surface energy and surface roughness are the first attachment hurdles for bacteria. These parameters were measured according to DIN 55660 and DIN EN ISO 4287, and the materials science data were correlated with cell number and biomass. This correlation provides a basic link between biofouling, as a biologically induced parameter, and the material properties, so that material properties which reduce biofouling can be predicted.
Using Raman spectroscopy as a cutting-edge method could shorten future investigations: if biomass or cell number can be linked to the spectra, new functional materials can be investigated in a short time.
Data Mining in Soccer
(2014)
The term Data Mining describes applications that extract useful information from large datasets. Since the 2011/2012 season of the German soccer league, extensive data from the first and second Bundesliga have been recorded and stored; up to 2,000 events are recorded per game.
The question arises whether Data Mining can extract patterns from this extensive data that could be useful to soccer clubs.
In this thesis, Data Mining is applied to the data of the first Bundesliga to measure the value of individual soccer players for their club. For this purpose, the state of the art and the available data are described, and classification, regression analysis and clustering are applied to the data. The thesis focuses on qualitative characteristics of soccer players, such as nomination for the national squad or the marks players receive for their playing performance. Additionally, it considers the playing style of the available players and examines whether predictions for upcoming seasons are possible. The value of individual players is determined using regression analysis and a combination of cluster analysis and regression analysis.
Even though not all applications achieve sufficient results, this thesis shows that Data Mining has the potential to be applied to soccer data. The value of a player can be measured with the two approaches, allowing a simple visualization of a player's importance for his club.
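The regression-based valuation described above can be sketched at a toy scale. The features, values, and the mark scale below are hypothetical illustrations, not the thesis' actual data: an ordinary least-squares model relates a few per-season player features to the average match mark (lower marks being better in the German grading convention).

```python
import numpy as np

# Hypothetical per-season features for five players: goals, assists,
# share of minutes played. The targets are average match marks
# (lower = better), mirroring the thesis' regression-based valuation.
X = np.array([
    [5, 3, 0.9],
    [2, 1, 0.5],
    [8, 6, 1.0],
    [0, 0, 0.2],
    [4, 4, 0.8],
], dtype=float)
y = np.array([2.5, 3.5, 2.0, 4.5, 2.8])

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_mark(goals, assists, minutes):
    """Predicted average match mark for a hypothetical player profile."""
    return float(coef @ np.array([1.0, goals, assists, minutes]))
```

A more productive profile should receive a better (lower) predicted mark than an inactive one, which is the basic signal such a valuation extracts.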
Advanced Auditing of Inconsistencies in Declarative Process Models using Clustering Algorithms
(2021)
For an organization's business process to be compliant, it is essential to ensure a consistent process. Whether a process is consistent depends on the business rules of that process: if the process adheres to these rules, it is compliant and efficient. For large processes, checking this is quite a challenge. An inconsistency in a process can quickly lead to a non-functional process, which is a severe problem for organizations. This thesis presents a novel auditing approach for handling inconsistencies from a post-execution perspective. The tool identifies run-time inconsistencies and visualizes them in heatmaps. These plots aim to help modelers observe the most problematic constraints and make the right remodeling decisions. Modelers can set many variables in the tool to obtain different heatmap representations that capture all perspectives of the problem. The heatmap sorts and shows the run-time inconsistency patterns, so that a modeler can decide which constraints are highly problematic and should be remodeled. The tool can be applied to real-life data sets in a reasonable run-time.
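The data behind such a heatmap can be sketched as a constraint-by-trace matrix of violation counts. The constraint names and traces below are hypothetical placeholders, not from the thesis' data sets; the sketch only shows the aggregation and ranking step that precedes plotting.

```python
from collections import Counter

# Hypothetical post-execution log: each entry names a declarative
# constraint and the trace in which it was violated at run time.
violations = [
    ("Response(A,B)", "t1"), ("Response(A,B)", "t2"), ("Response(A,B)", "t2"),
    ("Precedence(C,D)", "t1"),
    ("NotCoexistence(A,E)", "t3"), ("NotCoexistence(A,E)", "t1"),
]

counts = Counter(violations)
constraints = sorted({c for c, _ in violations})
traces = sorted({t for _, t in violations})

# Heatmap matrix: rows = constraints, columns = traces,
# cells = run-time violation counts.
heatmap = [[counts[(c, t)] for t in traces] for c in constraints]

# Rank constraints by total violations, most problematic first.
ranking = sorted(constraints,
                 key=lambda c: -sum(counts[(c, t)] for t in traces))
```

Feeding `heatmap` to any plotting library then yields the kind of visualization the thesis describes, with `ranking` determining the row order.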
This thesis investigates how model errors affect the positioning accuracy and handling when maneuvering with a driver assistance system. Particular emphasis is placed on determining error bounds. The question addressed is how large the input error may be for the assistance system to still exhibit sufficient quality with respect to precision and robustness. To this end, the errors are first examined quantitatively on the basis of the kinematic model. A qualitative examination based on systematic experiments follows. First, a controller is developed with which a maneuver can be simulated using the visual information provided by the assistance system.
Then a method is presented for evaluating the maneuver against defined error bounds. To search a large space of possible error combinations efficiently, the probabilistic method of the annealed particle filter is used. Finally, systematic experiments are carried out in a test environment. For further evaluation of the assistance system in a controlled environment, it was ported to the RODOS simulation environment in cooperation with the Fraunhofer ITWM in Kaiserslautern.
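The annealed-particle-filter search over error combinations can be illustrated with a minimal sketch. The deviation function, parameter names, and schedule below are invented placeholders, not the thesis' kinematic model: the point is only the mechanism of sharpening the weighting layer by layer so the particle cloud concentrates on the worst-case error combinations.

```python
import math
import random

random.seed(0)

# Hypothetical deviation model: how far the assistant's predicted pose
# drifts for a combination of model errors (length error dl, angle error da).
def deviation(dl, da):
    return abs(1.5 * dl) + abs(2.0 * da)

def annealed_search(n=200, layers=5):
    """Minimal annealed particle filter over the error box [-1, 1]^2."""
    particles = [(random.uniform(-1, 1), random.uniform(-1, 1))
                 for _ in range(n)]
    for layer in range(layers):
        beta = 2.0 ** layer  # annealing schedule: weighting sharpens per layer
        weights = [math.exp(beta * deviation(dl, da)) for dl, da in particles]
        # Resample proportionally to weight, then diffuse with shrinking noise.
        particles = random.choices(particles, weights=weights, k=n)
        spread = 0.3 / (layer + 1)
        particles = [(min(1, max(-1, dl + random.gauss(0, spread))),
                      min(1, max(-1, da + random.gauss(0, spread))))
                     for dl, da in particles]
    return max(particles, key=lambda p: deviation(*p))

worst = annealed_search()
```

In the thesis' setting, `deviation` would be replaced by a full maneuver simulation, and the returned `worst` combination marks where the assistance system's error tolerance is exhausted.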
Data flow models in the literature are often very fine-grained, which carries over to the data flow analyses performed on them and thus makes those analyses harder to understand. Since a data flow model that abstracts from most implementation details of the modeled program allows for potentially easier-to-understand data flow analyses, this master's thesis deals with the specification and construction of a highly abstracted data flow model and the application of data flow analyses on this model. The model and the analyses performed on it have been developed in a test-driven manner, so that a wide range of possible data flow scenarios could be covered. As a concrete data flow analysis, a static security check in the form of a detection of insufficient user input sanitization has been performed. To date, there is no data flow model on a similarly high level of abstraction. The proposed solution is therefore unique and enables developers without expertise in data flow analysis to perform such analyses.
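The core of such an insufficient-sanitization check is a taint propagation over the abstract flow graph. The node names and edges below are hypothetical, not the thesis' model: data entering at a source is tainted, a sanitizer clears the taint, and any sink reachable while still tainted is flagged.

```python
# Hypothetical coarse-grained data flow model: nodes are program elements,
# edges mean "data flows from -> to".
flows = {
    "read_input": ["build_query", "escape_html"],
    "escape_html": ["render_page"],
    "build_query": ["run_sql"],
}
sources = {"read_input"}        # where user input enters
sanitizers = {"escape_html"}    # nodes that neutralize tainted data
sinks = {"run_sql", "render_page"}  # security-sensitive consumers

def tainted_sinks():
    """Propagate taint along flow edges, stopping at sanitizers."""
    found, seen = set(), set()
    stack = list(sources)
    while stack:
        node = stack.pop()
        if node in seen or node in sanitizers:
            continue  # already handled, or taint is cleared here
        seen.add(node)
        if node in sinks:
            found.add(node)  # tainted data reaches a sink unsanitized
        stack.extend(flows.get(node, []))
    return found
```

Here the path through `escape_html` is safe, while the direct path into `run_sql` is reported, which is exactly the kind of finding a high-level model makes easy to explain.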
Prototyp einer generischen Recommendation Engine basierend auf Echtzeit-Assoziationsanalysen mit R
(2016)
In this thesis, a prototype of a recommendation engine for uncovering associations within a given data set was developed using the programming language R. The associations are computed in real time, and the analysis functions were programmed generically to allow data sets to be integrated quickly and parameterized easily. The development was motivated by the goal of partially automating data mining methods such as association analysis, so that solutions built on them can be implemented more efficiently. The development process was successful overall, so that all basic functionalities are available in the sense of an evolutionary prototype.
Political and societal polarization is an interesting phenomenon, about whose effects many different, and in part contradictory, theories exist.
In the literature, polarization is measured with a variety of methods. This thesis gives an overview of existing polarization measures and introduces two novel measures from the field of spectral graph theory. The known and the newly developed measures are then applied to the LiquidFeedback data set of the German Pirate Party. As a result, the measures partly come to different conclusions, because not all measures measure the same thing. To understand what the individual measures express, essential properties of polarization measures are worked out, and for each measure it is shown which of these properties it satisfies. The polarization measures discussed refer to the development of polarization between users of the LiquidFeedback system. When examining individual persons and votes, it was observed, among other things, that polarizing persons hold more power through delegations than the remaining persons, and that polarized proposals are implemented about twice as often.
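One standard spectral quantity that can serve as a polarization signal is the algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian. The thesis' actual measures are not specified here; this sketch only illustrates the general idea on invented toy graphs: two tightly knit camps joined by a single edge yield a value near zero, while a well-mixed graph yields a large one.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian of an undirected graph."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

# Two triangles joined by a single bridge edge (a "polarized" community graph)...
polarized = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
# ...versus the complete graph on six vertices (well mixed).
mixed = [[0 if i == j else 1 for j in range(6)] for i in range(6)]
```

For the complete graph K6 the value is exactly 6, while the bridged triangles score far lower, so a low algebraic connectivity of an interaction graph indicates two camps with little exchange between them.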
This thesis is devoted to the challenges and solutions of human resources management (HRM) in international organizations. The aim is to investigate methodological approaches to assessing HRM challenges and solutions, to apply them in practice, and to develop ways of improving the HRM of a particular enterprise. The practical research question investigated is: “Is the Ongoing Professional Development – Strategic HRM (OPD-SHRM) model a better solution for the HRM system of PrJSC “Philip Morris Ukraine”?”
To achieve the aim of this work and to answer the research question, we study theoretical approaches to explaining and assessing HRM in section 1, analyze the HRM system of an international enterprise in section 2, and then synthesize theory and practice to find their intersection points in section 3.
Research findings indicate that the main challenge of HRM is to balance individual and organizational interests. Implementation of OPD-SHRM is one of the solutions: switching focus from satisfaction towards success will bring both tangible and intangible benefits for individuals and the organization. In the case of PrJSC “Philip Morris Ukraine”, the maximum forecasted increase is 330% in net profit, 350% in labor productivity, and 26% in the Employee Development and Engagement Index.