006 Special Computer Methods
This thesis explores a 3D object detection and pose estimation approach based on the point pair feature method presented by Drost et al. [Dro+10]. While pose estimation methods have improved considerably, pose estimation remains a challenging problem in the field of computer vision. In this work, we implemented a program that takes point cloud scenes as input and returns the detected objects together with their estimated poses. The program covers the full object detection pipeline: during an offline phase, it processes 3D models, extracts their point pair features, and builds a global descriptor from them. During an online phase, the same features are extracted from a point cloud scene and matched against the model features. A voting scheme then retrieves potential poses of the object. Finally, the poses are clustered and post-processed to deliver a result. The program was tested on simulated and real data. We evaluate these tests and present the final results, discussing the accuracy of the detections and the estimated poses.
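To make the descriptor underlying this pipeline concrete, the following minimal C++ sketch computes and quantizes a single point pair feature in the spirit of [Dro+10]; the vector type, the quantization step sizes, and their use as a hash key are illustrative assumptions, not the thesis code.

```cpp
// Minimal sketch of the point pair feature from [Dro+10]; types and
// quantization steps are illustrative assumptions, not the thesis code.
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Angle in [0, pi] between two (not necessarily unit) vectors.
static float angle(const Vec3& a, const Vec3& b) {
    float c = dot(a, b) / (norm(a) * norm(b));
    return std::acos(std::fmax(-1.0f, std::fmin(1.0f, c))); // clamp against rounding
}

// F(m1, m2) = (|d|, angle(n1, d), angle(n2, d), angle(n1, n2)), d = m2 - m1,
// for two points m1, m2 with surface normals n1, n2.
std::array<float, 4> pointPairFeature(const Vec3& m1, const Vec3& n1,
                                      const Vec3& m2, const Vec3& n2) {
    Vec3 d{m2.x - m1.x, m2.y - m1.y, m2.z - m1.z};
    return { norm(d), angle(n1, d), angle(n2, d), angle(n1, n2) };
}

// Offline phase: each feature is quantized and used as a key into the
// global model descriptor (a hash table from keys to model point pairs).
std::array<int, 4> quantize(const std::array<float, 4>& f,
                            float distStep, float angleStep) {
    return { int(f[0] / distStep), int(f[1] / angleStep),
             int(f[2] / angleStep), int(f[3] / angleStep) };
}
```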
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics, able to model materials that were previously very challenging to animate [1, 2]. Beyond simulating single materials, the simulation of multiple materials that interact with each other introduces new challenges, which are the focus of this thesis. It is shown that the self-collision capabilities of the MPM naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It is furthermore shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be reproduced when multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward way and takes advantage of the rasterization pipeline to resolve write conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
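As a heavily simplified illustration of the multi-grid idea, the 1D C++ sketch below gives each material its own grid and couples the grids with a drag-like exchange term, one possible choice of the freely changeable interaction terms mentioned above; all types, weights, and the coupling coefficient are assumptions, not the thesis implementation.

```cpp
// Heavily simplified 1D multi-grid MPM sketch: each material owns its own
// grid; particles scatter mass and momentum to their material's grid, and
// the grids exchange momentum through a freely choosable interaction term.
#include <cstddef>
#include <vector>

struct Particle { float x, v, mass; int material; };

struct Grid {
    std::vector<float> mass, momentum;
    explicit Grid(int nodes) : mass(nodes, 0.0f), momentum(nodes, 0.0f) {}
};

// Particle-to-grid transfer with linear (tent) weights over two nodes;
// assumes all particles lie inside the grid.
void p2g(const std::vector<Particle>& ps, std::vector<Grid>& grids, float dx) {
    for (const Particle& p : ps) {
        int i = int(p.x / dx);                  // left grid node
        float w1 = p.x / dx - i, w0 = 1.0f - w1;
        Grid& g = grids[p.material];
        g.mass[i]     += w0 * p.mass;  g.momentum[i]     += w0 * p.mass * p.v;
        g.mass[i + 1] += w1 * p.mass;  g.momentum[i + 1] += w1 * p.mass * p.v;
    }
}

// Drag-like momentum exchange between two material grids. k = 0 leaves the
// grids independent; a very large k drives both toward a common velocity,
// which is how single-grid behavior emerges as a special case.
void couple(Grid& a, Grid& b, float k, float dt) {
    for (std::size_t i = 0; i < a.mass.size(); ++i) {
        if (a.mass[i] <= 0.0f || b.mass[i] <= 0.0f) continue;
        float va = a.momentum[i] / a.mass[i];
        float vb = b.momentum[i] / b.mass[i];
        float impulse = k * (vb - va) * dt;     // conserves total momentum
        a.momentum[i] += impulse;
        b.momentum[i] -= impulse;
    }
}
```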
The goal of simulations in computer graphics is to reproduce realistic material behavior. To this end, internal and external forces are accumulated in each timestep; from these, new velocities are computed that ultimately change the positions of geometry or particles. Position Based Dynamics (PBD) omits the velocity layer and works directly on the positions. Constraints are a set of rules defining the simulated material, and they must not be violated during the simulation. If a violation occurs, the offending positions are adjusted so that the constraints are fulfilled again. In this work, a PBD framework is implemented that allows simulations of solids and fluids. Constraints are solved using GPU implementations of Gauss-Seidel and Gauss-Jacobi solvers. The results are physically plausible simulations that run in real time.
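The core of such a solver is the projection of individual constraints. The following CPU sketch shows a Gauss-Seidel sweep over distance constraints in the textbook PBD formulation; the names and the fixed iteration count are assumptions, and the thesis's solvers run on the GPU rather than on the CPU as here.

```cpp
// Minimal CPU sketch of PBD constraint projection: a Gauss-Seidel sweep
// over distance constraints C(pi, pj) = |pi - pj| - rest.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct DistanceConstraint { int i, j; float rest; };

void solveGaussSeidel(std::vector<Vec3>& p, const std::vector<float>& invMass,
                      const std::vector<DistanceConstraint>& cs, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (const DistanceConstraint& c : cs) {
            Vec3 d{p[c.i].x - p[c.j].x, p[c.i].y - p[c.j].y, p[c.i].z - p[c.j].z};
            float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
            float wSum = invMass[c.i] + invMass[c.j];
            if (len < 1e-8f || wSum <= 0.0f) continue;
            // Move both points along the constraint direction, weighted by
            // inverse mass, so the violated constraint is fulfilled again.
            float s = (len - c.rest) / (wSum * len);
            p[c.i].x -= invMass[c.i] * s * d.x;  p[c.j].x += invMass[c.j] * s * d.x;
            p[c.i].y -= invMass[c.i] * s * d.y;  p[c.j].y += invMass[c.j] * s * d.y;
            p[c.i].z -= invMass[c.i] * s * d.z;  p[c.j].z += invMass[c.j] * s * d.z;
        }
    }
}
```

A Gauss-Jacobi variant would instead accumulate all corrections against the old positions and apply them in one step, which trades convergence speed for parallelism on the GPU.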
Technological advances in integrated semiconductor technology, which among other things have contributed to the increased performance of camera sensors, have so far focused primarily on sensor speed and resolution. This ongoing development, however, directly affects the physical behavior of a camera and thus has consequences for the achievable geometric accuracy of a photogrammetric 3D reconstruction. The latter has not yet been a focus of research and is the task that this work addresses from the perspective of photogrammetry and metrology. Current investigations and experience from industrial projects show that the geometric-physical behavior of digital cameras is not yet modeled adequately for the highest photogrammetric demands. Direct statements about the achievable accuracy for given hardware have therefore so far proven insufficient, and the imprecise modeling also reduces the reliability of the obtained results. For developers of precise camera-based measurement methods it follows that an optimal estimate of the geometric accuracy, and thus full exploitation of the measuring camera, requires suitable mathematical models that describe the geometric-physical behavior as well as possible. This work describes how the achievable accuracy of a bundle adjustment can be approximated a priori using the EMVA 1288 standard. An important subtask in this context is the creation of an optimal measurement configuration. This includes investigations of the commonly used calibration bodies and the elimination of systematic errors before and after the bundle adjustment. To detect these systematic effects, a method based on statistical learning is described and investigated. Only when all accuracy-reducing influences are accounted for does the sensor's contribution become visible in the measurement data, and thus mathematically parameterizable. The sensor's influence on the achievable accuracy of the bundle adjustment is described in three steps. The first step describes the relationship between selected EMVA 1288 parameters and the uncertainty of a gray value. The second step models this gray value uncertainty as the center uncertainty of a target; to describe this uncertainty within the bundle adjustment, a stochastic model based on the EMVA 1288 standard is proposed. Starting from the noise of the target center, the third step approximates the uncertainty in object space by way of example, using physically oriented simulations. The effectiveness of the proposed methods is demonstrated with real calibrations. Finally, the proposed methods and the achieved results are discussed, together with an outlook on future investigations.
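The first of the three steps builds on the EMVA 1288 linear camera model. As a sketch in standard EMVA 1288 notation (the thesis's exact parametrization may differ), the mean gray value and its temporal variance follow from gain, dark noise, quantization noise, and shot noise:

```latex
% EMVA 1288 linear camera model: mean gray value and temporal variance.
% K: overall system gain [DN/e-], mu_e: mean number of photoelectrons,
% mu_d / sigma_d: dark signal and dark noise [e-],
% sigma_q^2 = 1/12 DN^2: quantization noise; the last term is shot noise.
\begin{align}
  \mu_y &= K\,(\mu_e + \mu_d)\\
  \sigma_y^2 &= K^2\,\sigma_d^2 + \sigma_q^2 + K\,(\mu_y - \mu_{y,\mathrm{dark}})
\end{align}
```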
In computer graphics, real-time rendering of hair and fur has long posed a problem. Computing lighting, shading, and transparency requires considerable computational effort, which degrades performance. Improved hardware and new techniques, however, now make it possible to simulate such complex effects in real time. This thesis explains the fundamentals of hair rendering. In addition, a real-time demo was implemented as part of this work, and its underlying techniques and functionality are described. To evaluate the demo, the achievable number of frames per second was measured for models of varying complexity. Finally, the results were compared with images of real hair.
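A common basis for the strand lighting referred to above is the Kajiya-Kay shading model; the sketch below shows its diffuse and specular terms. Whether the demo uses exactly this model is an assumption.

```cpp
// Hedged sketch of the Kajiya-Kay hair shading model. T is the strand
// tangent, L the light direction, V the view direction, all unit length.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

float kajiyaKay(const Vec3& T, const Vec3& L, const Vec3& V,
                float kd, float ks, float shininess) {
    float tl = dot(T, L), tv = dot(T, V);
    float sinTL = std::sqrt(std::max(0.0f, 1.0f - tl * tl)); // sin of angle(T, L)
    float sinTV = std::sqrt(std::max(0.0f, 1.0f - tv * tv)); // sin of angle(T, V)
    float diffuse  = kd * sinTL;  // strands are brightest when lit perpendicular
    float specular = ks * std::pow(std::max(0.0f, tl * tv + sinTL * sinTV), shininess);
    return diffuse + specular;
}
```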
This thesis tests several methods and measures in path tracing for selecting either the Line Space or the Bounding Volume Hierarchy (BVH) data structure, in order to exploit the advantages of both. The structures are defined locally around each object, and each Line Space shaft contains a single candidate ID. The implementation is a C++ and OpenGL framework in which compute shaders handle the path tracing and Line Space generation. The measures include a probability distribution, the effect dependency, and a distance threshold, and are tested against several different scenes. In most situations the results show a noticeable increase in performance, in some cases with only minor visual differences, with the probability measure producing the highest quality images for a given performance. The fundamental problems of the Line Space, namely its high memory consumption and long generation time compared to the BVH, still persist despite the object-local structure, the minimal amount of data per shaft, and the compute shader implementation.
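As a rough illustration of such a selection measure, the sketch below applies a distance threshold: distant queries use the approximate Line Space lookup and fall back to the exact BVH on a miss. The Ray and Hit types, the stubs, and the decision rule are illustrative assumptions, not the thesis implementation.

```cpp
// Illustrative distance-threshold selection between the approximate
// Line Space and the exact BVH, per object.
struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { bool valid; float t; int primitive; };

// Stubs standing in for the real traversal kernels.
Hit traverseBVH(const Ray&)     { return {false, 0.0f, -1}; } // exact, slower
Hit lookupLineSpace(const Ray&) { return {false, 0.0f, -1}; } // one candidate ID per shaft

// Use the Line Space only for rays starting far from the object, where the
// error of a single candidate per shaft is least visible; otherwise, or on
// a miss, fall back to the exact BVH.
Hit intersectObject(const Ray& ray, float originDistance, float threshold) {
    if (originDistance > threshold) {
        Hit h = lookupLineSpace(ray);
        if (h.valid) return h;
    }
    return traverseBVH(ray);
}
```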
This thesis presents two methods for the computation of global illumination. The first is an extension of Reflective Shadow Maps with an additional shadow test in order to handle occlusion. The second is a novel, bidirectional light-injection approach. Rays originating from the light source are traced through the scene and stored inside the shafts of the Linespace data structure; these shafts are a discretization of the possible spatial directions. The Linespaces are embedded in a uniform grid. When this pre-computed lighting information is retrieved, no traversal of data structures and no additional indirection is necessary in the best case. This reduces computation time and variance compared to path tracing. Areas that are lit mostly indirectly, as well as glass, profit the most from this. However, the result is only approximate and produces visible artifacts.
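The following sketch illustrates one plausible form of the direction discretization behind such shafts: unit directions are binned in spherical coordinates, light rays accumulate radiance into their bin, and a lookup is a single indexed read. The binning scheme and resolution are assumptions, not the thesis's exact construction.

```cpp
// Sketch of a spherical direction discretization for per-cell shafts.
#include <algorithm>
#include <cmath>
#include <vector>

constexpr float PI = 3.14159265358979f;
constexpr int N_THETA = 16, N_PHI = 32;   // assumed angular resolution

// Map a unit direction (dx, dy, dz) to a shaft (bin) index.
int shaftIndex(float dx, float dy, float dz) {
    float theta = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dz))); // [0, pi]
    float phi   = std::atan2(dy, dx) + PI;                          // [0, 2*pi]
    int tIdx = std::min(N_THETA - 1, int(theta / PI * N_THETA));
    int pIdx = std::min(N_PHI - 1, int(phi / (2.0f * PI) * N_PHI));
    return tIdx * N_PHI + pIdx;
}

// One grid cell holds a radiance value per shaft.
struct Cell { std::vector<float> radiance = std::vector<float>(N_THETA * N_PHI, 0.0f); };

// Offline: light rays accumulate radiance into the shaft they fall into.
void inject(Cell& c, float dx, float dy, float dz, float r) {
    c.radiance[shaftIndex(dx, dy, dz)] += r;
}

// Online: retrieving the stored lighting is a single indexed read,
// with no data structure traversal.
float lookup(const Cell& c, float dx, float dy, float dz) {
    return c.radiance[shaftIndex(dx, dy, dz)];
}
```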
Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
This thesis contributes to reactive algorithm research in two ways: on an abstract level, and through the introduction of novel algorithms:
For the first time, reactive algorithms are considered as a whole and as a research area in their own right. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond the analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
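For orientation, the sketch below shows the greedy stage of classical geographic routing (as in GFG/GPSR); its local-minimum fallback to face routing on a planar subgraph is exactly where planar spanners such as the PDT are needed. This is the textbook scheme, not the thesis's reactive algorithms.

```cpp
// Greedy stage of classical geographic routing; Node and the neighbor
// list are assumed types.
#include <cmath>
#include <vector>

struct Node { int id; float x, y; };

static float dist(const Node& a, const Node& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Forward to the neighbor strictly closest to the destination; returns -1
// at a local minimum, where face routing on a planar subgraph (such as the
// spanners constructed in the thesis) takes over to guarantee delivery.
int greedyNextHop(const Node& self, const std::vector<Node>& neighbors,
                  const Node& destination) {
    int best = -1;
    float bestDist = dist(self, destination);  // must strictly improve
    for (const Node& n : neighbors) {
        float d = dist(n, destination);
        if (d < bestDist) { bestDist = d; best = n.id; }
    }
    return best;
}
```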
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.