This thesis adapts the simulation game "Datenschutz 2.0", developed by Dietz and Oppermann, to the everyday life of today's students, enables its use in the Sekundarstufe II (upper secondary level), and resolves the game's technical and legal issues. The topic of data protection addressed by the game is anchored in the Rhineland-Palatinate computer science curriculum for the Sekundarstufe II. There, the term data protection appears in the unit "Datenerhebung unter dem Aspekt Datenschutz beurteilen" (assessing data collection under the aspect of data protection). However, the game does not collect data; instead, the data traces students leave behind themselves are examined. This form of data protection is found in the basic course in the proposed unit "Datensicherheit unter der Berücksichtigung kryptologischer Verfahren erklären und beachten" (explaining and observing data security with regard to cryptographic methods) under the topic of communication in computer networks. In the advanced course, data security appears in the unit and topic of the same name, as well as in the unit "Datenerhebung unter dem Aspekt Datenschutz beurteilen" under the topic of the interaction between computer systems, the individual, and society.
Artificial neural networks are a popular field of research in artificial intelligence. The increasing size and complexity of huge models entail certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to choose efficient architectures for different tasks. It proves to be challenging to solve these problems, and with a lack of insightful representations of neural networks, this state of affairs becomes entrenched. With these difficulties in mind, a novel 3D visualization technique is introduced. Attributes for trained neural networks are estimated by utilizing established methods from the area of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the importance of different parts of the neural network. A combination of the importance values with various methods like edge bundling, ray tracing, 3D impostors and a special transparency technique results in a 3D model representing a neural network. The validity of the extracted importance estimations is demonstrated and the potential of the developed visualization is explored.
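As a rough illustration of how such importance values could be obtained, the following sketch reads per-channel importance from the learned batch-normalization scale factors, a common proxy known from network pruning; the pretrained ResNet serves as an illustrative stand-in, not the model analyzed in the thesis.

```python
import torch
import torchvision.models as models

# Pretrained network as an illustrative stand-in for the analyzed model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# For every BatchNorm layer, the magnitude of the learned scale gamma is
# used as a per-channel importance estimate (a larger |gamma| means the
# channel contributes more to the layer's output).
importance = {}
for name, module in model.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        gamma = module.weight.detach().abs()
        importance[name] = gamma / gamma.sum()  # normalize per layer

# Layer-level summaries, e.g. for coloring parts of a 3D model.
for name, imp in importance.items():
    print(f"{name}: {imp.numel()} channels, max importance {imp.max():.4f}")
```

Such per-layer values could then be mapped onto transparency or color in the 3D model.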
An augmented reality approach for exploring modular OSGi software systems is presented. The prototype is implemented using the Microsoft HoloLens. Modules, such as components and packages, are visualized as a virtual city. This approach allows the user to explore the software architecture through intuitive navigation: voice input, gaze control and gesture control. A multifunctional user interface is introduced that can be adapted to different target groups; many published visualizations lack a clear definition of their target group. The concept can easily be transferred to other representations, such as the island metaphor. First results of an evaluation, obtained through short structured interviews, are presented. The participants had to solve four program comprehension tasks and estimate their effort as well as their workload. The results form a good basis for further research in the field of software visualization in augmented reality.
Tractography on HARDI data
(2011)
Diffusion weighted imaging is an important modality in clinical imaging and the only possibility to gain insight into the human brain noninvasively and in vivo. The applications of this imaging technique are diversified. It is used to study the brain, its structure, development and the functionality of the different areas. Further important fields of application are neurosurgical planning, the examination of pathologies, and the investigation of Alzheimer's disease, strokes, and multiple sclerosis. This thesis gives a brief introduction to MRI and diffusion MRI. Based on this, the most widely used data representation in clinical diffusion MRI, the diffusion tensor, is introduced. As the diffusion tensor suffers from severe limitations, new techniques subsumed under the term HARDI (high angular resolution diffusion imaging) are introduced and discussed in detail. Further, an extensive introduction to tractography, i.e. approaches that aim at reconstructing neuronal fibers, is given. Based on the knowledge from the theoretical part, established tractography algorithms are redesigned to handle HARDI data and, thus, improve the reconstruction of neuronal fibers. Among these algorithms, a novel approach is presented that successfully reconstructs fibers on phantom data as well as on human brain data. Further, a novel global classification approach is presented to cluster voxels according to their diffusion properties.
In augmented reality, it is of great importance that virtual objects are embedded into the camera image as realistically as possible; only then can the user be offered an immersive experience. Among other things, this requires handling the occlusion of these objects correctly. While various approaches to this occlusion problem already exist, this thesis presents a solution based on natural image matting. Using a depth camera, the camera image is divided into foreground and background, and the virtual object is then placed in the image. For regions in which the assignment to foreground or background is ambiguous, a transparency (alpha) value is estimated from known pixels. Methods are presented that allow the image matting to run in real time. In addition, improvements to these methods are presented, and it is shown that they achieve a higher image quality for difficult scenes.
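A deliberately simplified sketch of the pipeline described above (not the thesis's actual matting method): a trimap is derived from the depth image relative to the virtual object's depth, and for unknown pixels an alpha value is estimated by projecting the pixel color onto the line between the mean foreground and background colors; the band width is an illustrative parameter.

```python
import numpy as np

def depth_trimap(depth, virtual_depth, band=0.05):
    """Split a depth image into foreground (in front of the virtual object),
    background, and an unknown band around the threshold."""
    trimap = np.where(depth < virtual_depth - band, 1.0, 0.0)
    trimap[np.abs(depth - virtual_depth) <= band] = 0.5  # unknown region
    return trimap

def estimate_alpha(image, trimap):
    """Rough alpha estimate for unknown pixels via the matting equation,
    using the mean known foreground and background colors (both regions
    are assumed to be non-empty)."""
    fg = image[trimap == 1.0].mean(axis=0)
    bg = image[trimap == 0.0].mean(axis=0)
    alpha = trimap.copy()
    unknown = trimap == 0.5
    diff = fg - bg
    alpha[unknown] = np.clip(
        (image[unknown] - bg) @ diff / (diff @ diff + 1e-8), 0.0, 1.0)
    return alpha
```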
Tracking is an integral part of many modern applications, especially in areas like autonomous systems and Augmented Reality. There is a wide array of approaches for performing tracking; one that has become a subject of research only recently is the use of Neural Networks. In the scope of this master's thesis, an application is developed that uses such a Neural Network for the tracking process. This also requires the creation of training data as well as the creation and training of a Neural Network. Subsequently, the use of Neural Networks for tracking is analyzed and evaluated with respect to several aspects: the quality of the tracking for different degrees of freedom, the impact of the Neural Network on the application's performance, the amount of required training data, the influence of the network architecture, and the importance of providing depth data as part of the network's input. This should provide an insight into how relevant this approach could be for adoption in future products.
Stylized image triangulation
(2019)
Stylized image triangulation is a popular tool of image processing. Results can be found on magazine covers or bought as a piece of art. Common use cases are filters by mobile apps or programs dedicated to automated triangulation. This thesis is based upon a paper that achieves new results by formulating adaptive dynamic triangulation as an optimization problem. With this approach, new results concerning visual and technical quality are accomplished. One aim of this thesis is to make this approach accessible to as many users as possible. To reach users, a mobile app called "Mesh" is designed and implemented. A client-host system is presented that relieves the app from the resource-intensive computation of the result; transferring the approach to a CPU-based solution is therefore part of the thesis. Also, a webserver is implemented that handles the communication between app and algorithm. "Mesh" enables the user to send an arbitrary image to the server and download the result.
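As a sketch of what such a client-host interface might look like, here is a minimal Flask server with hypothetical endpoints; the route names, the job-id scheme and the stubbed triangulation call are assumptions for illustration, not the thesis's actual implementation.

```python
import uuid
from pathlib import Path
from flask import Flask, request, send_file

app = Flask(__name__)
RESULTS = Path("results")
RESULTS.mkdir(exist_ok=True)

def run_triangulation(stream, out_path):
    # Placeholder for the CPU-based triangulation; here the input image
    # is simply copied through unchanged.
    out_path.write_bytes(stream.read())

@app.route("/triangulate", methods=["POST"])
def triangulate():
    """Accept an uploaded image, run the (stubbed) triangulation on the
    host, and return a job id under which the result can be fetched."""
    job_id = uuid.uuid4().hex
    run_triangulation(request.files["image"].stream, RESULTS / f"{job_id}.png")
    return {"job": job_id}

@app.route("/result/<job_id>")
def result(job_id):
    """Let the app download the finished triangulation."""
    return send_file(RESULTS / f"{job_id}.png")
```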
Part of the research deals with optimizing the method. As the main step, the gradient descent method, which minimizes an approximation error, is examined with three different approaches that re-define the movement of a point: a meaningful limitation of the directions of movement, diagonal directions, and a dynamic repositioning of points. Results show no improvement of visual quality when using diagonal instead of horizontal and vertical steps. Limiting the step opportunities by disallowing a point to return to its previous position results in a loss of visual quality, but reaches a given global error earlier. The dynamic repositioning rests upon a vector-based solution that weights the directions and applies a factor to each of them. This results in a longer computation time but also in a higher visual quality.
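The following sketch illustrates the kind of discrete point move compared in this part, assuming a generic per-point error function; the step sets and the restriction against returning to the previous position mirror the limitations discussed above, while the error callback itself is left abstract.

```python
# Candidate step sets under comparison: axis-aligned vs. diagonal moves.
AXIS_STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
DIAGONAL_STEPS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def descend_point(point, error_fn, steps, forbidden=None):
    """Greedy discrete descent: try each candidate step and keep the one
    that lowers the approximation error the most. Passing the previous
    position as `forbidden` mimics the 'no going back' limitation."""
    best_pos, best_err = point, error_fn(point)
    for dx, dy in steps:
        cand = (point[0] + dx, point[1] + dy)
        if cand == forbidden:
            continue
        err = error_fn(cand)
        if err < best_err:
            best_pos, best_err = cand, err
    return best_pos, best_err
```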
Inspired by the work of Josh Bryan, another part of the research aims at imitating an artist's style. With the use of pseudo-random events combined with a geometry shader, a more natural look shall be achieved. This method illustrates a way of adding minor details to a rendering. To imitate an artist's work, a more complex and more precise triangulation is needed. As the last aspect, a render style is presented which moves the triangles of a triangulation apart from a chosen center. The arbitrary placement of this center enables the render style to be used in animations.
One of the greatest goals in computer graphics is the aesthetic representation of objects. In addition to conventional methods, another field focuses on non-photorealistic renderings. So-called example-based rendering is an area where users can transfer their art style to a pre-computed 3D rendering using a hand-painted template. Some algorithms already provide impressive results, but most of these procedures are offline methods and are not able to produce results in real time. For this reason, this work shows a method that satisfies this condition. In addition, the influence of the run-time reduction on the results is investigated. Requirements are defined against which the method and its results are examined. Other methods in this field are referenced and compared with their results.
The mitral valve is one of the four valves in the human heart. It is located in the left heart chamber and its function is to control the blood flow from the left atrium to the left ventricle. Pathologies can lead to malfunctions of the valve so that blood can flow back into the atrium. Patients with a faulty mitral valve function may suffer from fatigue and chest pain. The functionality can be surgically restored, which is often a long and exhausting intervention. Thorough planning is necessary to ensure a safe and effective surgery. This can be supported by creating pre-operative segmentations of the mitral valve, while a post-operative analysis can determine the success of an intervention. This work combines existing and new ideas into a new approach to (semi-)automatically create such valve models. The manual part guarantees a high-quality model and reliability, whereas the automatic part contributes to saving valuable labour time.
The main contributions of the automatic algorithm are an estimated semantic separation of the two leaflets of the mitral valve and an optimization process that is capable of finding a coaptation line and area between the leaflets. The segmentation method can perform a fully automatic segmentation of the mitral leaflets if the annulus ring is already given. The intermediate steps of this process are integrated into a manual segmentation method so that a user can guide the whole procedure. The quality of the valve models generated by the proposed method is measured by comparing them to completely manually segmented models. This shows that commonly used measures of segmentation quality are too general and do not suffice to reflect the real quality of a model. Consequently, this work introduces a set of measurements that can qualify a mitral valve segmentation in more detail and with respect to anatomical landmarks. Besides the intra-operative support for a surgeon, a segmented mitral valve provides additional benefits: the ability to obtain and objectively describe the valve anatomy patient-specifically may be the basis for future medical research in this field, and automation allows processing large data sets with reduced expert dependency. Further, simulation methods that use the segmented models as input may predict the outcome of a surgery.
In scientific data visualization, huge amounts of data are generated, which implies the task of analyzing them in an efficient way. This includes the reliable detection of important parts as well as a low expenditure of time and effort. This is especially important for the large seismic volume datasets that are required for the exploration of oil and gas deposits. Since the generated data is complex and a manual analysis is very time-intensive, a semi-automatic approach could, on the one hand, reduce the time required for the analysis and, on the other hand, offer more flexibility than a fully automatic approach.
This master's thesis introduces an algorithm which is capable of locating regions of interest in seismic volume data automatically by detecting anomalies in local histograms. Furthermore, the results are visualized and a variety of tools for the exploration and interpretation of the detected regions are developed. The approach is evaluated in experiments with synthetic data and in interviews with domain experts on the basis of real-world data. Conclusively, further improvements to integrate the algorithm into the seismic interpretation workflow are suggested.
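A compact sketch of the basic detection idea, assuming non-overlapping cubic blocks and a chi-square distance between local and global intensity histograms; window size, bin count and cutoff are illustrative parameters, not the thesis's tuned values.

```python
import numpy as np

def local_histogram_anomalies(volume, window=8, bins=32, top_percent=5.0):
    """Score every block of a 3D volume by how much its local intensity
    histogram deviates from the global one; return the block origins of
    the most anomalous blocks."""
    global_hist, edges = np.histogram(volume, bins=bins, density=True)
    scores = {}
    for z in range(0, volume.shape[0] - window + 1, window):
        for y in range(0, volume.shape[1] - window + 1, window):
            for x in range(0, volume.shape[2] - window + 1, window):
                block = volume[z:z+window, y:y+window, x:x+window]
                h, _ = np.histogram(block, bins=edges, density=True)
                # Chi-square distance between local and global histogram.
                chi2 = np.sum((h - global_hist) ** 2 / (h + global_hist + 1e-12))
                scores[(z, y, x)] = chi2
    cutoff = np.percentile(list(scores.values()), 100.0 - top_percent)
    return [pos for pos, s in scores.items() if s >= cutoff]
```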
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts. Unfortunately, the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, the majority of SSMs is based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. The training data is assumed to be the result of any segmentation algorithm, or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. In order to account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM; these probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more variations of shape examples. To assess the quality of the models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data. The results show that especially WRPCA is a powerful tool for handling corrupted parts and yields reasonable models of higher quality than the initial segmentations.
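For reference, the core of a standard PCA-based point-distribution model, which the robust variants above extend; the training shapes are assumed to be already aligned and flattened into vectors, and neither the imputation step nor WRPCA is part of this sketch.

```python
import numpy as np

def build_ssm(shapes):
    """Build a linear shape model from an (n_samples, n_points * dim)
    matrix of aligned, flattened training shapes."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of variation.
    _, singular, modes = np.linalg.svd(centered, full_matrices=False)
    variances = singular ** 2 / (len(shapes) - 1)
    return mean, modes, variances

def sample_shape(mean, modes, variances, coeffs):
    """Generate a plausible shape: mean plus a weighted sum of modes,
    with weights given in standard deviations per mode."""
    b = np.asarray(coeffs) * np.sqrt(variances[:len(coeffs)])
    return mean + b @ modes[:len(coeffs)]
```

In the approach described above, unreliable boundary areas would enter with low weights, and WRPCA would, roughly speaking, take the place of the plain SVD.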
One of the fundamental decisions during the development of any system is the representation of data. In computer graphics, objects are usually represented as sets of triangles. There are, however, many different variants with their own strengths and weaknesses. This thesis explores distance fields as a representation for objects. Distance fields are functions which map every point in space to the distance to the closest surface point. While this description is very simple, a number of interesting properties can be derived, allowing for a multitude of shapes, operations and effects. An overview of the necessary background and methods is given. Furthermore, some extended or new approaches are presented, such as displaying implicit surfaces, approximating indirect illumination, and implementing a GPU tracer.
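Since the definition above is compact, a small sketch may help: a signed distance function for a sphere and the classic sphere-tracing loop, which uses the distance value as a safe step size along the ray. This is the textbook technique, not code from the thesis.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, far=100.0):
    """March along the ray; the SDF value at the current point is the
    largest step guaranteed not to skip over a surface."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # hit: distance along the ray
        t += d
        if t > far:
            break
    return None  # miss

# Example: a ray down the z-axis hits a unit sphere centered at z = 2
# at t of about 1.
t_hit = sphere_trace(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                     lambda p: sdf_sphere(p, np.array([0.0, 0.0, 2.0]), 1.0))
```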
Object recognition is a well-investigated area in image-based computer vision, and several methods have been developed. Approaches based on Implicit Shape Models have recently become popular for recognizing objects in 2D images; they separate objects into fundamental visual object parts and spatial relationships between the individual parts, and this knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like the Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing Implicit Shape Model approaches to the task of 3D object recognition.
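To make the part-based voting idea concrete, here is a minimal 2D Hough-voting sketch in the spirit of Implicit Shape Models; the toy codebook and feature list are invented for illustration and stand in for what would be learned from training data.

```python
import numpy as np
from collections import defaultdict

def ism_vote(features, codebook, grid=1.0):
    """Every detected feature casts votes for possible object centers
    using the offsets its visual word was observed with in training;
    the strongest vote cell wins."""
    votes = defaultdict(float)
    for pos, word in features:                    # (x, y) position, word id
        for offset, weight in codebook.get(word, []):
            cell = tuple(np.round((np.asarray(pos) + offset) / grid))
            votes[cell] += weight
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

# Toy codebook: visual word -> list of (offset to object center, weight).
codebook = {7: [(np.array([5.0, 0.0]), 1.0)],
            3: [(np.array([-2.0, 1.0]), 0.5)]}
features = [((10.0, 10.0), 7), ((17.0, 9.0), 3)]
print(ism_vote(features, codebook))   # both features agree on (15, 10)
```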
Point Rendering
(2021)
In this thesis, different methods for rendering point data are shown and compared with each other. The methods can be divided into two categories. On the one hand, visual methods are introduced that deal strictly with the display of point primitives. The main problem here lies in the depiction of surfaces, since point data, unlike traditional triangle meshes, does not contain any connectivity information. On the other hand, data structures are shown that enable real-time rendering of large point clouds. Point clouds often contain large amounts of data, since they are mostly generated through 3D scanning processes such as laser scanning and photogrammetry.
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually.
The approach taken for optimization is then generalized into a recipe. Programmers can use it as a guide when approaching large and complex GPU programs. In addition, the per-warp job queue, a design pattern used for load balancing among threads in a CUDA thread block, is discussed in detail.
The Microsoft Kinect made it possible for the first time to capture synchronized color and depth data (RGB-D) without great financial expense, and new possibilities for research opened up. As technology advances, mobile devices are capable of more and more. Lenovo and Asus offer the first commercially available devices with RGB-D sensing. With integrated functions for localization, environment recognition, and depth perception provided by Google's Tango platform, first tests already exist in various areas of computer vision, e.g. mapping. This thesis examines to what extent a Tango device is suitable for object recognition. RGB-D data are extracted from the output of the Tango device and processed for object recognition. An overview of the current state of research and some fundamentals of the Tango platform are given, and existing approaches and methods for object recognition on mobile devices are examined. The implemented recognition is trained and tested on a self-created database of RGB-D images. Besides presenting the results, improvements and extensions for the recognition are proposed.
With the appearance of modern virtual reality (VR) headsets on the consumer market, there has been the biggest boom in the history of VR technology. Naturally, this was accompanied by an increasing focus on the problems of current VR hardware. Control in VR, in particular, has always been a complex topic.
One possible solution is the Leap Motion, a hand tracking device that was initially developed for desktop use, but since its last major software update it can be attached to standard VR headsets. This device allows very precise tracking of the user's hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion to provide a natural method of interaction between the user and the VR environment. After that, subject tests are performed to evaluate their performance and compare them to traditional VR controllers.
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics that is able to model materials that were previously very challenging to animate [1, 2]. Apart from simulating singular materials, the simulation of multiple materials that interact with each other introduces new challenges. This is the focus of this thesis. It will be shown that the self-collision capabilities of the MPM can naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It will furthermore be shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be achieved if multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward and simple way and takes advantage of the rasterization pipeline to resolve write conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
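To illustrate the multi-grid idea on a toy scale, the following 1D sketch gives each of two materials its own grid and couples overlapping nodes with a symmetric drag term; the drag is a simplified stand-in for the user-controllable interaction terms described above, and all constants and particle setups are invented.

```python
import numpy as np

N, dx, dt, drag = 32, 1.0 / 32, 1e-4, 50.0
rng = np.random.default_rng(0)

# Two illustrative particle sets: positions in [0, 1), velocities, masses.
xp_a, vp_a, mp_a = rng.uniform(0.2, 0.5, 64), np.full(64, 1.0), np.full(64, 1e-3)
xp_b, vp_b, mp_b = rng.uniform(0.4, 0.7, 64), np.full(64, -1.0), np.full(64, 1e-3)

def p2g(xp, vp, mp):
    """Scatter particle mass and momentum onto a grid with linear weights."""
    mass, mom = np.zeros(N + 1), np.zeros(N + 1)
    for x, v, m in zip(xp, vp, mp):
        i = int(x / dx)              # left node of the particle's cell
        w = x / dx - i               # linear interpolation weight
        for node, wn in ((i, 1.0 - w), (i + 1, w)):
            mass[node] += wn * m
            mom[node] += wn * m * v
    return mass, mom

# Each material owns its own grid: P2G runs independently per material.
mass_a, mom_a = p2g(xp_a, vp_a, mp_a)
mass_b, mom_b = p2g(xp_b, vp_b, mp_b)
v_a = np.divide(mom_a, mass_a, out=np.zeros(N + 1), where=mass_a > 0)
v_b = np.divide(mom_b, mass_b, out=np.zeros(N + 1), where=mass_b > 0)

# Coupling: on nodes covered by both materials, a drag term pulls the two
# grid velocity fields toward each other (simplified, not mass-weighted).
both = (mass_a > 0) & (mass_b > 0)
rel = v_b[both] - v_a[both]
v_a[both] += dt * drag * rel
v_b[both] -= dt * drag * rel
```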
In this thesis, we deal with the question of whether challenge, flow and fun in computer games are related to each other, and what influence the motivational-psychological components (motivation of success, motivation of failure, and the chance of success) have. In addition, we want to know whether a free choice of the level of difficulty is the optimal way to flow. To examine these theories, a study based on an online survey was conducted, in which the participants played the game "flOw". The results were evaluated with the help of a two-factorial analysis of variance with repeated measurement and tests of correlation. We found that there is indeed a relation between challenge, flow and fun, and that motivation matters indirectly.