This thesis presents gesture-based interaction with primitives in three-dimensional space. Functions that are difficult to perform by gestures without any absolute sense of position are implemented with a touchscreen. Besides the touchscreen, a second input device, a Leap Motion, is used to obtain data on the motion of the hand; the Leap Motion captures this data with two CCD cameras and three infrared LEDs. The interactions that can be performed without feedback on the absolute position are translation, rotation and scaling. These three, together with movement through space, are implemented as gestures in this thesis, using Blender with the Blender Game Engine and Python. The only function implemented for the touchscreen is selecting an object. In addition, a comparable mouse control was developed to contrast it with the gesture control. There are two major differences between the two: gesture control operates in three-dimensional space but most people are not yet used to it, whereas mouse control only offers two-dimensional input but is familiar to most people. The evaluation should reveal whether people prefer interaction by mouse or by gestures. The result shows that the mouse control is preferred; however, in some test categories the gestures come quite close to the results of the mouse.
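As a rough illustration of how such a gesture mapping can look, the following sketch translates an object in the Blender Game Engine from the palm position reported by the Leap Motion Python SDK. The scale factor, the grab threshold and the axis mapping are assumptions of this sketch, not the thesis' actual implementation.

```python
# Hypothetical sketch: move the object this script is attached to using the
# palm position from the Leap Motion SDK inside the Blender Game Engine.
import Leap   # Leap Motion Python SDK (v2)
import bge    # Blender Game Engine API

SCALE = 0.01                      # maps millimetres from the sensor to Blender units
controller = Leap.Controller()

def translate_with_hand():
    own = bge.logic.getCurrentController().owner   # object controlled by this logic brick
    frame = controller.frame()
    if frame.hands.is_empty:
        return
    hand = frame.hands.frontmost
    if hand.grab_strength > 0.8:                   # closed fist starts the gesture (assumed threshold)
        pos = hand.palm_position                   # Leap.Vector in millimetres, y up
        # assumed axis conversion: Leap (x, y, z) -> Blender (x, -z, y)
        own.worldPosition = [pos.x * SCALE, -pos.z * SCALE, pos.y * SCALE]
```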
Real-time graphics applications are becoming more realistic, and approximating real-world illumination becomes more feasible as graphics hardware improves. Using a wide variety of algorithms and ideas, graphics processing units (GPUs) can simulate complex lighting situations, rendering computer-generated imagery with complicated effects such as shadows, refraction and reflection of light. Reflections in particular improve realism, because they make shiny materials, e.g. brushed metals, wet surfaces like puddles or polished floors, appear more realistic and reveal information about their properties such as roughness and reflectance. Moreover, reflections can become more complex depending on the view: a wet surface such as a street during rain, for example, reflects lights depending on the distance of the viewer, resulting in a streakier reflection that looks more stretched the farther the viewer is located from the light source. This bachelor thesis aims to give an overview of the state of the art in rendering reflections. Understanding light is a prerequisite for understanding reflections, and therefore a physical model of light and its reflection is covered in section 2, followed by the motivational section 2.2, which gives visually appealing examples of reflections from the real world and the media. Turning to rendering techniques, the main principle is first explained in section 3, followed by a short overview of a wide variety of approaches that try to generate correct reflections in section 4. This thesis describes the implementation of three major algorithms that produce plausible local reflections. To that end, the developed framework is described in section 5; then three algorithms are covered that are common in most current game and graphics engines: screen space reflections (SSR), parallax-corrected cube mapping (PCCM) and billboard reflections (BBR). After describing their functional principle, they are analysed with regard to their visual quality and their suitability for real-time application, and finally they are compared with each other to investigate their respective advantages and disadvantages. In conclusion, the gained experience is summarised by listing advantages and disadvantages of each technique and giving suggestions for improvements. A short outlook is given on upcoming real-time rendering techniques for the creation of reflections as specular effects.
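To make the core idea of SSR concrete, the following sketch performs the screen-space ray march in NumPy instead of a fragment shader: the view ray is reflected at the surface normal and stepped through view space, and each sample is projected and tested against the depth buffer. The pinhole intrinsics, step count and thickness value are assumptions; production implementations run this per pixel on the GPU, typically with hierarchical or DDA-based marching and a self-intersection bias.

```python
# Minimal, illustrative sketch of the SSR ray march (not the thesis' exact code).
import numpy as np

def ssr_trace(pos_vs, normal_vs, depth_buffer, fx, fy, cx, cy,
              steps=64, max_dist=5.0, thickness=0.05):
    """March the reflected ray in view space against a per-pixel depth buffer.
    pos_vs, normal_vs: view-space position/normal of the shaded point (camera looks down -z).
    Returns the pixel whose colour the reflection can reuse, or None."""
    view_dir = pos_vs / np.linalg.norm(pos_vs)              # direction from eye to surface
    refl = view_dir - 2.0 * np.dot(view_dir, normal_vs) * normal_vs
    h, w = depth_buffer.shape
    for i in range(1, steps + 1):
        sample = pos_vs + refl * (max_dist * i / steps)     # point along the reflected ray
        depth = -sample[2]                                   # positive view-space depth
        if depth <= 0.0:
            return None
        u = int(fx * sample[0] / depth + cx)                 # project to pixel coordinates
        v = int(fy * sample[1] / depth + cy)
        if not (0 <= u < w and 0 <= v < h):
            return None                                      # ray left the screen
        scene_depth = depth_buffer[v, u]
        if scene_depth < depth <= scene_depth + thickness:   # ray passed just behind geometry
            return (u, v)
    return None
```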
In addition to rendering, the computing power of modern graphics cards is increasingly used for general-purpose computations (GPGPU). Various options are available for this, ranging from use of the rendering pipeline to dedicated interfaces. In this thesis, four different GPGPU methods are examined: render-to-texture, transform feedback, compute shaders and OpenCL. Using particle systems, they are compared with respect to required computation time, GPU utilisation, lines of code and portability. To this end, the N-body problem, smoothed particle hydrodynamics and a particle swarm were implemented as particle systems. It could be shown that OpenCL and compute shaders in particular deliver very good results.
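For reference, the sketch below shows one N-body integration step, i.e. the per-particle work that each of the compared GPGPU paths performs in its kernel; vectorised NumPy stands in here for the per-work-item code, and the softening factor and time step are assumed values.

```python
# Reference sketch of one explicit-Euler N-body step (CPU stand-in for the GPU kernels).
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, eps=1e-3, G=1.0):
    # pairwise displacement vectors r_ij = p_j - p_i, shape (N, N, 3)
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = np.sum(diff * diff, axis=-1) + eps * eps         # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                             # no self-interaction
    acc = G * np.sum(diff * (mass[None, :, None] * inv_d3[:, :, None]), axis=1)
    vel = vel + acc * dt                                      # integrate velocity, then position
    pos = pos + vel * dt
    return pos, vel
```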
This thesis gives an overview of the mechanics and implementation of current voxelization strategies on the GPU. In addition to established voxelization procedures using the rasterization pipeline, new possibilities arising from GPGPU programming are examined. The implementation of several methods is explained on the basis of the programming language C++ and the graphics library OpenGL. The methods are compared in terms of performance and quality of the resulting voxelization and are evaluated critically with regard to possible use cases. Furthermore, two exemplary applications are detailed that use a voxelized scene to augment established real-time graphics techniques. To this end, the concepts and implementations of transmittance shadow mapping and of reflective shadow mapping utilizing a voxel-based ambient occlusion effect are explained. Finally, the continuing relevance of voxelization is put into prospect by addressing recent research and further enhancements and applications of the presented methods.
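As a crude CPU reference for what a surface voxelization produces, the sketch below marks every grid cell hit by points sampled on the triangles of a mesh. This is not one of the GPU methods examined in the thesis; grid resolution and sampling density are arbitrary assumptions.

```python
# Naive point-sampling surface voxelization (CPU reference only).
import numpy as np

def voxelize(vertices, triangles, resolution=64, samples=256):
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    cell = np.maximum((hi - lo) / resolution, 1e-9)           # avoid zero-sized cells
    grid = np.zeros((resolution,) * 3, dtype=bool)
    for tri in triangles:
        a, b, c = vertices[tri]
        for _ in range(samples):
            u, v = np.random.rand(2)                          # random barycentric coordinates
            if u + v > 1.0:
                u, v = 1.0 - u, 1.0 - v
            p = a + u * (b - a) + v * (c - a)                 # point on the triangle surface
            idx = np.minimum(((p - lo) / cell).astype(int), resolution - 1)
            grid[tuple(idx)] = True                           # mark the voxel containing the sample
    return grid
```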
Augmented Reality has many areas of application. It can be used to simplify everyday life as well as working processes. However, since many manufacturers offer greatly varying systems, choosing the right system for an application, as well as cross-platform development, is difficult. This thesis attempts to develop an application that can be used to simulate Augmented Reality devices on Virtual Reality systems, which should simplify both the choice of a system and cross-platform development.
Since the simulation is designed to run on mobile devices, it should be possible to render high-quality, realistic environments in advance as a panoramic image. On a Virtual Reality device, these panoramas need to be displayed as a stereoscopic image, and several methods are presented that can be used to perform this conversion. An editor is created that allows scenes to be built, Augmented Reality devices to be configured, and both to be displayed on a Virtual Reality system. The thesis closes with a test that examines the quality of the simulation as well as possible improvements.
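The core of displaying such a pre-rendered panorama on a head-mounted display is mapping a per-eye view direction to texture coordinates in the panoramic image; the sketch below does this for an equirectangular layout, which is an assumption of the sketch rather than a detail stated in the abstract.

```python
# Map a unit view direction to (u, v) texture coordinates of an equirectangular panorama.
import numpy as np

def panorama_uv(direction):
    x, y, z = direction / np.linalg.norm(direction)
    longitude = np.arctan2(x, -z)             # angle around the vertical axis, camera faces -z
    latitude = np.arcsin(y)                   # angle above the horizon
    u = longitude / (2.0 * np.pi) + 0.5       # wrap horizontally across the full image width
    v = 0.5 - latitude / np.pi                # top of the image is straight up
    return u, v
```

A stereoscopic view would perform this lookup once per eye, either with one panorama per eye or with a small per-eye offset, depending on how the environment was pre-rendered.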
The topic of this thesis is the development of a hardware-accelerated single-image compression for video transmission. Methods for single-image compression have existed for a long time. However, the common methods do not meet the real-time and performance requirements needed to be usable during a video transmission without noticeable latency. In this thesis, one of the most common image compression algorithms is examined for parallelisability with the help of the graphics card, in order to achieve real-time capability during compression and decompression of computer-generated images. The results are evaluated and placed in the context of current parallelised compression techniques.
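The abstract does not name the algorithm. Purely as an illustration of why such schemes parallelise well, the sketch below shows the transform-and-quantise step of a JPEG-style codec on one 8x8 block: every block is independent, so each can be handled by its own GPU thread group. The quantisation value is an arbitrary assumption.

```python
# Illustration only: DCT + quantisation of one 8x8 block, assuming a JPEG-like scheme.
import numpy as np

def dct_matrix(n=8):
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos((2 * i + 1) * k * np.pi / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)                # orthonormal DCT-II basis
    return m

def compress_block(block, q=16):
    """Transform and quantise one 8x8 block of grey values in the range 0..255."""
    D = dct_matrix(8)
    coeffs = D @ (block - 128.0) @ D.T        # 2-D DCT of the centred block
    return np.round(coeffs / q).astype(np.int16)
```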
The goal of this thesis is to develop a concept for a mobile city guide combined with game-based content.
The application is intended to support flexible and independent exploration of the city of Koblenz.
Based on geographical data, the application provides historical information and interesting stories about various places. This information is combined with playful elements in order to create a motivating concept.
To this end, related approaches were examined and, combined with own ideas, a new concept was developed. This concept was implemented prototypically as an Android application and afterwards evaluated by 15 test subjects. A questionnaire was used to examine the usability, the motivational effect of the game patterns and the added value of the application.
A Kinect device can record color and depth images simultaneously. This thesis is an attempt to use the depth image to manipulate lighting information and material properties in the color image. The presented method of lighting and material manipulation needs a light simulation of the lighting conditions at the time the image was recorded; it is used to transfer information from a new light simulation directly back into the color image. Since the simulations are performed on a three-dimensional model, a way is sought to generate a model from a single depth image. At the same time, the text addresses the problems of the depth data acquisition of the Kinect sensor. An editor is designed to make lighting and material manipulations possible. To generate a light simulation, some simple, real-time capable rendering methods and lighting models are proposed. They are used to insert new illumination, shadows and reflections into the scene. Simple environments with well-defined lighting conditions are manipulated in experiments to show the limits and possibilities of the device and the techniques being used.
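A first step towards a model from a single depth image is back-projecting each depth pixel into a camera-space point with the pinhole model, as sketched below. The intrinsics used are typical published values for the Kinect v1 depth camera and serve only as assumptions; pixels without a depth measurement are skipped.

```python
# Back-project a Kinect depth image into a camera-space point cloud.
import numpy as np

def depth_to_points(depth, fx=585.0, fy=585.0, cx=319.5, cy=239.5):
    """depth: HxW array of depth values in metres. Returns an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    valid = z > 0                                    # the sensor reports 0 where depth is unknown
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```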
The natural process of withering is a complex one that is influenced by a wide range of parameters. Its main components are the shrinking of the fruit volume as a result of water loss through transpiration, as well as changes of the fruit's colour and surface. Methods have already been developed that simulate these properties by means of parameterisation and physically based approaches. The application created in this thesis simulates the fruit flesh with a tetrahedral mesh and the change of the skin by means of dynamic texture adaptation. The developed algorithm runs in linear time, and its results are presented on self-created fruit models.
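A toy sketch of the driving quantity, volume loss through transpiration, is given below: the volume decays exponentially and the mesh is scaled towards its centroid accordingly. The thesis itself works on a tetrahedral mesh with dynamic texture adaptation; the decay rate, time step and uniform-scaling simplification here are assumptions of the sketch.

```python
# Toy withering step: shrink a mesh so its volume follows V(t) = V0 * exp(-rate * t).
import numpy as np

def wither_step(vertices, rate=0.02, dt=1.0):
    volume_factor = np.exp(-rate * dt)            # remaining fraction of volume this step
    scale = volume_factor ** (1.0 / 3.0)          # uniform scale yielding that volume change
    centroid = vertices.mean(axis=0)
    return centroid + (vertices - centroid) * scale
```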
As an image synthesis method, ray tracing is relevant for many application areas because it can simulate aspects of light transport in a physically correct way. Due to the high computational cost, acceleration data structures and parallel processing are necessary. GPUs are now highly parallel, programmable processors with numerous cores and, owing to their high performance, are well suited to solving expensive, parallelisable problems. This thesis deals with GPU ray tracing accelerated by bounding volume hierarchies (BVH). Based on relevant publications on the construction and traversal of BVHs and on mapping the ray tracing process onto the GPU architecture, a GPU ray tracer is designed and developed. While the BVH is built beforehand on the host, the entire ray tracing process is executed completely on the GPU by several kernels. The kernels are implemented as OpenGL compute shader programs, and splitting the ray tracer into several kernels is motivated by the GPU architecture and the SIMT execution model. Two variants are considered for the memory layout of the binary BVHs, a classic layout and an MBVH layout, with the MBVH organisation proving to be more efficient. In addition, several variants of stackless and stack-based traversal are implemented and evaluated. The GPU ray tracer structured into several kernels is also compared with a single-kernel version. The best results are achieved by the stackless traversal with a while-while scheme and the MBVH layout within the multi-kernel GPU ray tracer.
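For orientation, the sketch below shows plain stack-based BVH traversal with the slab test for ray/AABB intersection, corresponding to the work one GPU thread performs per ray. The node layout (explicit left/right children, leaf triangle list) and the absence of SIMT-specific reordering or stackless bookkeeping are simplifications of this CPU sketch, not the thesis' kernel code.

```python
# Stack-based BVH traversal for a single ray (CPU sketch).
import numpy as np

def hit_aabb(origin, inv_dir, lo, hi):
    t0 = (lo - origin) * inv_dir                  # slab entry/exit distances per axis
    t1 = (hi - origin) * inv_dir
    tmin = np.max(np.minimum(t0, t1))
    tmax = np.min(np.maximum(t0, t1))
    return tmax >= max(tmin, 0.0)

def traverse(root, origin, direction, intersect_triangle):
    inv_dir = 1.0 / direction                     # assumes no zero direction components
    closest = None
    stack = [root]
    while stack:
        node = stack.pop()                        # visit the next node
        if not hit_aabb(origin, inv_dir, node.lo, node.hi):
            continue
        if node.is_leaf:
            for tri in node.triangles:            # test all primitives stored in the leaf
                hit = intersect_triangle(origin, direction, tri)
                if hit and (closest is None or hit.t < closest.t):
                    closest = hit
        else:
            stack.append(node.left)               # defer both children to the stack
            stack.append(node.right)
    return closest
```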