This thesis investigates the possibilities for real-time visualization of OpenVDB files. The basics of OpenVDB and its capabilities, as well as NanoVDB, its GPU-oriented counterpart, were studied. A system was developed using PNanoVDB, its graphics-API port. Techniques were explored to improve and accelerate a single-ray ray-tracing approach. To demonstrate real-time capability, two single-scattering approaches were also implemented; one of these was selected, further investigated and optimized to achieve interactive real-time rendering.
It is important to give artists immediate feedback on their adjustments, as well as the ability to change all parameters, in order to ensure a user-friendly creation process.
In addition to the rendered output, benchmarks were collected to compare the different improvement approaches and to demonstrate their relevance. Attention was paid to rendering times and memory consumption on the GPU to ensure optimal use of resources. When rendering OpenVDB files, a special focus was put on the integrability and extensibility of the program, to allow easy integration into an existing real-time renderer such as U-Render.
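As a rough illustration of the single-scattering idea mentioned above, the sketch below ray-marches a density volume and, at each sample, traces a secondary ray toward the light to estimate transmittance. The `density()` callable, step sizes and extinction coefficient are hypothetical placeholders; the thesis implementation reads the volume through PNanoVDB on the GPU and applies further optimizations not shown here.

```python
import math

def transmittance(density, p, light_dir, step=0.5, max_dist=50.0, sigma_t=0.8):
    """Estimate transmittance along a shadow ray by accumulating optical depth."""
    tau, t = 0.0, 0.0
    while t < max_dist:
        q = [p[i] + t * light_dir[i] for i in range(3)]
        tau += density(q) * sigma_t * step
        t += step
    return math.exp(-tau)

def single_scatter(density, origin, direction, light_dir,
                   light_col=(1.0, 1.0, 1.0), step=0.25, max_dist=100.0, sigma_t=0.8):
    """March the primary ray and accumulate in-scattered light weighted by
    the transmittance so far (isotropic phase folded into light_col)."""
    radiance = [0.0, 0.0, 0.0]
    trans, t = 1.0, 0.0
    while t < max_dist and trans > 0.01:          # early out once nearly opaque
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = density(p)
        if d > 0.0:
            attenuation = d * sigma_t * step
            light_t = transmittance(density, p, light_dir, sigma_t=sigma_t)
            for i in range(3):
                radiance[i] += trans * attenuation * light_t * light_col[i]
            trans *= math.exp(-attenuation)
        t += step
    return radiance, trans
```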
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is several times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
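For orientation, the core of the SURF descriptor that such a GLSL port has to map onto fragment-shader passes and texture channels can be summarized as follows: around each interest point, a 4x4 grid of subregions accumulates wavelet responses dx, dy and their absolute values into a 64-dimensional vector. The sketch below is a simplified, upright CPU version in Python that uses finite differences instead of true Haar wavelets and skips scale and orientation handling; it only illustrates the data layout, not the thesis implementation.

```python
import numpy as np

def upright_surf_descriptor(image, x, y, region=20, grid=4):
    """Simplified upright SURF-like descriptor: 4x4 subregions, each contributing
    (sum dx, sum dy, sum |dx|, sum |dy|) -> 64 values, L2-normalized.
    Assumes (x, y) lies far enough from the image border."""
    half = region // 2
    patch = image[y - half:y + half, x - half:x + half].astype(np.float32)
    dx = np.gradient(patch, axis=1)   # stand-in for the Haar wavelet response in x
    dy = np.gradient(patch, axis=0)   # stand-in for the Haar wavelet response in y
    cell = region // grid
    desc = []
    for gy in range(grid):
        for gx in range(grid):
            sx = dx[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell]
            sy = dy[gy * cell:(gy + 1) * cell, gx * cell:(gx + 1) * cell]
            desc += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    desc = np.asarray(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```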
This diploma thesis shows how ambient occlusion in a scene of freely transformable rigid bodies can be computed and displayed at interactive frame rates using Coherent Shadow Maps (CSMs). The advantages of ambient occlusion over classical local illumination models are pointed out; the starting point there is the approximation of ambient light and indirect illumination by a constant color value. The simplifying assumptions made in comparison to a full global illumination simulation are also traced: strictly speaking, ambient occlusion is therefore only valid for direct illumination of diffuse materials with constant radiance. To provide a comprehensive overview, numerous existing approaches to computing ambient occlusion are presented in detail. Subsequently, the computation of visibility using CSMs is discussed. It is shown how the compression rate of CSMs can be increased significantly by distributing the depth maps to be compressed evenly, and an XML-conformant file format is introduced that allows arbitrary reuse of depth maps once they have been compressed.
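For reference, the quantity being approximated here is the standard ambient occlusion integral over the hemisphere Ω at a surface point p with normal n, where the binary visibility V (evaluated from the coherent shadow maps in this thesis) is 1 for unoccluded directions and 0 otherwise:

```latex
AO(p, n) = \frac{1}{\pi} \int_{\Omega} V(p, \omega)\,\langle n, \omega \rangle \,\mathrm{d}\omega
```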
This thesis describes the conception, implementation and evaluation of a collaborative multiplayer game for preschoolers on mobile devices.
The main objective of this thesis is to find out whether mobile devices such as smartphones and tablet computers are suitable for interaction by children. In order to develop this kind of game, the relevant aspects were researched. On this basis a game was designed, which was finally tested by preschoolers.
Computer graphics deals with the generation of virtual images. In 3D computer graphics, the displayed objects are described in three-dimensional space, using various generation techniques. One of these so-called renderers is ray tracing, which is very popular in computer graphics because of the image quality it achieves at manageable complexity, and which continually strives for more realistic results. In the past, ray tracing has therefore been extended, for example, with global illumination models or with reflective and transparent objects. However, one important aspect that can also significantly increase the degree of realism has often been neglected: the camera. Even today, a simplified pinhole camera model is usually assumed; such models therefore do not meet the demands of physically correct rendering methods. A truly comprehensive, correct reproduction of a scene must not ignore the fact that the generated image is decisively influenced by a lens system. This thesis therefore presents a physically correct camera model that takes the geometric properties of the lens system into account and correctly computes the exposure on the image plane.
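To make the contrast with the pinhole model concrete, the following sketch generates a primary ray under a simple thin-lens approximation: a point on the aperture is sampled and the ray is redirected through the point on the plane of focus that the ideal pinhole ray would hit. The thesis goes further and traces the actual geometry of the lens system; this sketch, with made-up parameters, only illustrates why the camera model changes the rendered image.

```python
import math, random

def thin_lens_ray(pinhole_dir, focal_distance, aperture_radius):
    """Turn a normalized pinhole ray direction (camera space, camera looking
    along +z, origin at the lens center) into a depth-of-field ray."""
    # Point on the plane of focus hit by the ideal pinhole ray.
    t = focal_distance / pinhole_dir[2]
    focus = [pinhole_dir[i] * t for i in range(3)]
    # Uniformly sample a point on the circular aperture.
    r = aperture_radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    origin = [r * math.cos(phi), r * math.sin(phi), 0.0]
    # New direction from the aperture sample through the in-focus point.
    d = [focus[i] - origin[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    return origin, [c / length for c in d]
```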
Magnetic resonance (MR) tomography is an imaging method that is used to expose the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique which enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo, and which allows predictions about the structure and integrity of those connections. In clinical routine this modality is applied in the planning phase of neurosurgical operations, such as tumor resections. This is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. At the current resolution of diffusion-weighted data, single white matter axons generally cannot be resolved; the captured signal rather describes whole fiber bundles. In addition, different complex fiber configurations often occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method which uses local information to reconstruct global connectivity, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods and their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme based on diffusion tensor metrics. The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along the potential tracking directions and is able to reproduce splitting fiber tracts.
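The final step, integrating along the potential tracking directions, can be pictured as a simple streamline integration: starting from a seed point, repeatedly pick the direction (among those proposed for the current position) that is most consistent with the previous step and advance by a small step size. The sketch below is a generic deterministic tracker with simplified termination criteria; `tracking_directions()` is a hypothetical stand-in for the global classifier described above, assumed to return unit vectors.

```python
import numpy as np

def track_fiber(seed, tracking_directions, step=0.5, max_steps=2000, min_dot=0.5):
    """Deterministic streamline tractography sketch: follow the proposed
    directions, preferring the one closest to the current heading."""
    points = [np.asarray(seed, dtype=float)]
    heading = None
    for _ in range(max_steps):
        dirs = tracking_directions(points[-1])   # hypothetical: validated ODF maxima
        if len(dirs) == 0:
            break
        if heading is None:
            best = np.asarray(dirs[0], dtype=float)
        else:
            # Choose the direction most aligned with the previous step (sign-corrected).
            dots = [abs(np.dot(heading, d)) for d in dirs]
            best = np.asarray(dirs[int(np.argmax(dots))], dtype=float)
            if np.dot(heading, best) < 0:
                best = -best
            if np.dot(heading, best) < min_dot:  # stop at implausibly sharp turns
                break
        heading = best / np.linalg.norm(best)
        points.append(points[-1] + step * heading)
    return np.array(points)
```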
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost, as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
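A minimal example of the kind of seeded, pseudo-random generation described above: each galaxy sector derives its random seed from its coordinates, so the same stars and planets are regenerated deterministically whenever that sector is revisited. The class names, probabilities and attributes below are made up for illustration and do not reflect the actual models used in the thesis.

```python
import random

SPECTRAL_CLASSES = ["O", "B", "A", "F", "G", "K", "M"]
CLASS_WEIGHTS    = [0.00003, 0.13, 0.6, 3.0, 7.6, 12.1, 76.5]  # rough relative frequencies

def generate_sector(sector_x, sector_y, galaxy_seed=42, star_chance=0.15):
    """Deterministically generate the stars of one galaxy sector from its coordinates."""
    rng = random.Random(galaxy_seed * 1_000_003 + sector_x * 1009 + sector_y)
    stars = []
    if rng.random() < star_chance:
        spectral = rng.choices(SPECTRAL_CLASSES, weights=CLASS_WEIGHTS)[0]
        planets = [{"orbit_au": round(0.4 + 0.35 * 2 ** i * rng.uniform(0.8, 1.2), 2),
                    "has_life": rng.random() < 0.01}
                   for i in range(rng.randint(0, 8))]
        stars.append({"class": spectral, "planets": planets})
    return stars
```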
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time. The implementation makes use of the OpenCL standard. The research focus of this thesis is to analyse the performance of this combination of components.
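For context, the per-particle update at the heart of such a simulation is essentially an explicit Euler step executed over all particles per frame; the thesis runs this as OpenCL kernels, while the sketch below writes it out in plain Python with made-up constants purely for illustration.

```python
import random

GRAVITY = (0.0, -9.81, 0.0)
DRAG = 0.98

def spawn_particle(origin, speed=15.0):
    """Give each particle a random outward velocity and a limited lifetime."""
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = sum(c * c for c in d) ** 0.5 or 1.0
    return {"pos": list(origin),
            "vel": [speed * c / n for c in d],
            "life": random.uniform(0.5, 2.0)}

def update_particle(p, dt):
    """One explicit Euler step: integrate velocity and position, age the particle."""
    for i in range(3):
        p["vel"][i] = DRAG * p["vel"][i] + GRAVITY[i] * dt
        p["pos"][i] += p["vel"][i] * dt
    p["life"] -= dt
    return p["life"] > 0.0   # False -> particle can be removed or respawned
```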
Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor thesis describes the conception and implementation of translation software for the Android platform. The specific feature of the software is independent text recognition based on the camera view. This approach aims to enhance and accelerate the translation process in certain situations. After an introduction to text recognition, the underlying technologies, the Android operating system and useful related applications are described. Then the concept of the software is developed and the implementation examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
A Kinect device has the ability to record color and depth images simultaneously. This thesis is an attempt to use the depth image to manipulate lighting information and material properties in the color image. The presented method of lighting and material manipulation needs a light simulation of the lighting conditions at the time the image was recorded; it is used to transfer information from a new light simulation directly back into the color image. Since the simulations are performed on a three-dimensional model, a way is sought to generate such a model from a single depth image. At the same time, the thesis addresses the problems of depth data acquisition with the Kinect sensor. An editor is designed to make lighting and material manipulations possible. To generate a light simulation, some simple, real-time capable rendering methods and lighting models are proposed. They are used to insert new illumination, shadows and reflections into the scene. Simple environments with well-defined lighting conditions are manipulated in experiments to show the boundaries and possibilities of the device and the techniques used.
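The first step of the editor described above, turning a single Kinect depth image into a three-dimensional model, amounts to back-projecting each depth pixel through the pinhole intrinsics of the depth camera. The sketch below shows this back-projection; the intrinsic parameters are typical published approximations for the Kinect and merely stand in for a proper calibration.

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics (placeholder values, not a calibration).
FX, FY = 594.2, 591.0
CX, CY = 320.0, 240.0

def depth_to_points(depth):
    """Back-project a depth image (meters, shape HxW) to an Nx3 point cloud
    in the camera coordinate system; invalid pixels (depth == 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]
```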