Doctoral Theses, Institut für Computervisualistik
This work examines how the existing model for the simulation of cables and hoses can be advanced. To this end, the main influences on the shape simulation were investigated and the effects of constraints and boundary conditions analyzed. To validate its accuracy, the simulation has to be compared with the behavior of real specimens. To obtain a precise digitization of the shape, a laser scanner was chosen that converts the point cloud into a VRML (.vrml) file, which can be imported into the simulation environment. The initial assumption was that the simulation method itself has the highest impact on the simulated shape, which is why the capabilities of the most sophisticated methods were analyzed. Contrary to expectation, the main criterion for the success of a simulation approach proved not to be accuracy: process integration and usability turned out to matter more for efficient use. Other factors such as pricing, functionality, and real-time capability were assessed as well. The analyzed methods are based on solving the equations of elasticity with different discretizations, finite elements and a spring-impulse system. Since the finite-element system takes several minutes to compute the shape and the spring-impulse system reacts to user manipulation with a delay, the competitiveness of these two approaches is low. The remaining methods differ more in real-time performance, data interfaces, and functionality than in accuracy. For the accuracy of a system, the consideration of other factors proved to be very important; the accurate assignment of material properties was identified as one of the main factors. Until the start of this work, only the finite-element approach dealt with this factor, but no documentation or validation is provided. In the publications on the other methods, the material properties are estimated so as to obtain a plausible simulation shape.
Therefore, the specific material values of real specimens were measured and assigned to the simulation. A comparison with the real shape showed that the accuracy is very high with the measured properties. Since these measurements are costly and time-consuming, a faster and cheaper way of obtaining the values was investigated. It was assumed that, with knowledge of the cross-section, the specimen behavior can be computed. Since the braid distribution varies from specimen to specimen, a more general way to obtain the required values had to be found. The program Composer was developed, into which only the numbers of the different braids and the taping are entered; it computes the stiffness, the density, and the final diameter of the bundle with very high precision. With the measured values and the fit to the real shape, it was shown that the simulation approach reflects the precise behavior of cables and hoses. Since the stiffness of the single braids is laborious to measure, a measurement setup was created in which the stiffness has a large impact on the shape; with known density, the stiffness of a specimen can then be reconstructed precisely. This yields a fast and economical way of obtaining the stiffness of a cable. The Poisson's ratio of cables and bundles cannot be measured with a tensile test, since the inner structure is very complex. For hoses, the variation of the inner diameter was measured during the tensile test as well; the resulting values were reasonable, but their accuracy could not be proven. For cables and hoses, an attempt was made to obtain the Poisson's ratio via computation of the cross-section, but the influence of individual variations and the interaction of the braids is very high. Therefore a setup was constructed in which the torsional stiffness can be measured. For cables and hoses, the individual cross-sections and taping lead to varying results.
For hoses, repeatable values for the Poisson's ratio were obtained as expected. It was also shown that the Poisson's ratio has little influence in the range between 0 and 0.5. Following the advice of [Old06] and our own experience, the Poisson's ratio for cables and bundles was therefore set to 0.25. With the knowledge of what can be measured and the capabilities of the developed program Composer, a procedure to obtain material properties for bundles has been designed:
1. Measure the braid density via pycnometer or via mass, length, and diameter.
2. Reconstruct the stiffness empirically with the designed setup.
3. Compose the bundle with the program Composer.
4. Add a factor for the taping and transfer the values to the simulation.
The model of the cable simulation has been improved as follows: the main influences in the simulation of cables and hoses are the simulation method, the material properties, and the geometric constraints. To obtain higher accuracy, an investigation of the correct material properties is indispensable. The scientific determination of material properties for the simulation of cables, bundles, and hoses has been performed for the first time, and the influence of geometric constraints has been analyzed and documented. The next steps are the analysis of pre-deformation and further investigations into the determination of the Poisson's ratio with a more precise torsion test. All analyses were made with the simulation approach fleXengine; a comparison with other simulation methods would be of high interest.
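The thesis does not specify the geometry of the stiffness-reconstruction setup; as a hedged illustration, one common arrangement is a horizontally clamped specimen sagging under its own weight, where classical Euler-Bernoulli beam theory relates bending stiffness EI and tip deflection. Function name and all numbers below are hypothetical:

```python
def bending_stiffness_from_deflection(mass_per_length, length, tip_deflection, g=9.81):
    """Reconstruct the bending stiffness EI of a cable from the tip deflection
    of a horizontally clamped specimen sagging under its own weight.

    Euler-Bernoulli theory for a uniformly loaded cantilever gives
        delta = w * L**4 / (8 * EI),  with w = mass_per_length * g,
    which is solved here for EI. All quantities in SI units.
    """
    w = mass_per_length * g                       # distributed load [N/m]
    return w * length**4 / (8.0 * tip_deflection)  # EI in N*m^2

# Hypothetical specimen: 50 g/m cable, 0.3 m free length, 25 mm sag.
EI = bending_stiffness_from_deflection(0.05, 0.3, 0.025)
print(f"EI = {EI:.4e} N*m^2")
```

With the density known from step 1 of the procedure above, such a deflection measurement fixes the remaining stiffness parameter.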
This thesis focuses on the utilization of modern graphics hardware (GPU) for visualization and computation purposes, especially of volumetric data from medical imaging. The considerable increase in raw computing power in recent years has turned commodity systems into high-performance workstations. In combination with the direct rendering capabilities of graphics hardware, "visual computing" and "computational steering" approaches on large data sets have become feasible. In this regard several example applications and concepts such as the "ray textures" have been developed and are discussed in detail. As the amount of data to be processed and visualized is steadily increasing, memory and bandwidth limitations require compact representations of the data. While the compression of image data has been investigated extensively in the past, the thesis addresses possibilities of performing computations directly on the compressed data. Therefore, different categories of algorithms are identified and represented in the wavelet domain. By using special variants of the compressed format, efficient implementations of essential image processing algorithms are possible and demonstrate the potential of the approach. From the technical perspective, the GPU-based framework "Cascada" has been developed in the course of this thesis. The introduction of object-oriented concepts to shader programming, as well as a hierarchical representation of computation and/or visualization procedures led to a simplified utilization of graphics hardware while maintaining competitive performance. This is shown with different implementations throughout the contributions, as well as two clinical projects in the field of diagnosis assistance. On the one hand the semi-automatic segmentation of low-resolution MRI data sets of the human liver is evaluated. On the other hand different possibilities in assessing abdominal aortic aneurysms are discussed; both projects make use of graphics hardware. 
In addition, "Cascada" provides extensions towards recent general-purpose programming architectures and a modular design for future developments.
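The idea of computing directly on compressed data can be illustrated with the simplest wavelet: for linear point operations, working on Haar coefficients is equivalent to working on the pixels themselves. This is only a generic one-level sketch, not code from the "Cascada" framework:

```python
import numpy as np

def haar_1d(x):
    # One level of the orthonormal Haar wavelet transform.
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return s, d

def ihaar_1d(s, d):
    # Inverse of haar_1d: perfect reconstruction of the signal.
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

signal = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 8.0, 6.0])
s, d = haar_1d(signal)

# A linear point operation (here: contrast scaling) applied to the
# coefficients, without reconstructing the signal first.
scaled = ihaar_1d(2.0 * s, 2.0 * d)
assert np.allclose(scaled, 2.0 * signal)
```

Nonlinear algorithms need more care, which is why the thesis distinguishes different categories of algorithms and special variants of the compressed format.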
The goal of this thesis is the development of methods for augmented image synthesis using 3D photo collections. 3D photo collections are representations of real scenes automatically generated from single photos and describe a scene as a set of images with known camera poses as well as a sparse point-based model of the scene geometry. The main goal is to perform a photo-realistic augmented image synthesis of real and virtual parts, where the real scene is provided as a 3D photo collection. Therefore, three main problems are addressed.
Since the photos may be represented in different device-specific RGB color spaces, a color characterization of the 3D photo collections is necessary to gain correct color information that is consistent with human perception. The proposed novel method automatically transforms all images into a common RGB color space and thereby simplifies color characterization of 3D photo collections.
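As a rough sketch of such a characterization (not the method proposed in the thesis), device RGB values can be linearized and mapped into a common space by a 3x3 matrix; here the standard sRGB-to-XYZ (D65) matrix stands in for a measured device matrix:

```python
import numpy as np

# Standard sRGB (D65) -> CIE XYZ matrix, standing in for a measured
# per-device characterization matrix (hypothetical usage).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_decode(c):
    """Undo the sRGB transfer function (c in [0, 1])."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def to_common_space(rgb):
    """Map device RGB pixels (shape (..., 3)) into the common space."""
    return srgb_decode(rgb) @ SRGB_TO_XYZ.T

white = to_common_space([1.0, 1.0, 1.0])
print(white)  # approx. [0.9505, 1.0, 1.089], the D65 white point
```

Once all photos share one such space, colors from different cameras become directly comparable.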
As a main problem for augmented image synthesis, all environmental lighting has to be known in order to apply illumination to virtual parts that is consistent with the real portions shown in the photos. To solve this problem, two novel methods were developed to reconstruct the lighting from 3D photo collections.
In order to perform image synthesis for arbitrary views of the scene, an image-based approach was developed that generates new views from a 3D photo collection by making direct use of its point cloud. The novel method creates new views in real time and allows free navigation.
In conclusion, the proposed novel methods show that 3D photo collections are a useful representation for real scenes in Augmented Reality and they can be used to perform a realistic image synthesis of real and virtual portions.
This thesis presents the analysis of game-based touristic applications. In tourism, actions can only be motivated intrinsically, so this thesis first investigates specific concepts of intrinsic motivation. It shows how game-based motivation can be produced deliberately and answers the question whether game-based motivation can be transferred to non-game-based applications.
Using these results, different touristic applications have been developed and evaluated.
All applications aimed to add value to the touristic experience. The applications are sorted by their mobility: this work covers completely mobile, completely stationary, and hybrid systems. Different ways of adding value are presented: game-based exploration, knowledge transfer, and social interaction between tourists.
Finally, an authoring tool for game-based touristic tours on smartphones is presented.
Technological advances in integrated semiconductor technology, which among other things have increased the performance of camera sensors, have so far focused primarily on sensor speed and resolution. This ongoing development, however, directly affects the physical behavior of a camera and thus has consequences for the achievable geometric accuracy of a photogrammetric 3D reconstruction. The latter has not been a focus of research so far and is the task this work takes on in the spirit of photogrammetry and metrology. Current investigations and experience from industrial projects show that the geometric-physical behavior of digital cameras is not yet modeled sufficiently for the highest photogrammetric demands. Direct statements about the achievable accuracy for given hardware have therefore been inadequate, and the imprecise modeling also reduces the reliability of the results. For the developer of precise camera-based measurement methods it follows that an optimal estimate of the geometric accuracy, and thus the full exploitation of the measuring camera, requires suitable mathematical models that describe the geometric-physical behavior as well as possible. This work describes how the achievable accuracy of a bundle adjustment can be approximated a priori with the help of the EMVA1288 standard. An important subtask in this context is the creation of an optimal measurement configuration, including investigations of the commonly used calibration bodies and the elimination of systematic errors before and after the bundle adjustment. To detect these systematic effects, a method based on statistical learning is described and investigated.
Only when all accuracy-reducing influences are accounted for does the sensor's contribution become visible in the measurement data, and thus mathematically parameterizable. The description of the sensor's influence on the achievable accuracy of the bundle adjustment proceeds in three steps. The first step describes the relationship between selected EMVA1288 parameters and the uncertainty of a gray value. The second step models this gray-value uncertainty as the center uncertainty of a target. To describe this uncertainty within the bundle adjustment, a stochastic model based on the EMVA1288 standard is proposed. Starting from the noise of the target center, the third step approximates the uncertainty in object space by way of example, using physically oriented simulations. The effectiveness of the proposed methods is demonstrated with real calibrations. Finally, the proposed methods and the results achieved are discussed, and an outlook on future investigations is given.
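The first step, relating EMVA1288 parameters to the uncertainty of a gray value, can be sketched with the linear camera model from the standard; the thesis's own stochastic model is not reproduced here, and the datasheet values used below are hypothetical:

```python
import math

def gray_value_std(mu_y, mu_y_dark, K, sigma_d, include_quantization=True):
    """Temporal standard deviation of a gray value according to the
    EMVA 1288 linear camera model:
        sigma_y^2 = K^2 * sigma_d^2 + sigma_q^2 + K * (mu_y - mu_y_dark)
    K        : overall system gain [DN/e-]
    sigma_d  : dark noise [e-]
    sigma_q^2: quantization noise, 1/12 DN^2
    """
    var = K**2 * sigma_d**2 + K * (mu_y - mu_y_dark)
    if include_quantization:
        var += 1.0 / 12.0
    return math.sqrt(var)

# Hypothetical datasheet values: K = 0.1 DN/e-, dark noise 10 e-.
print(gray_value_std(mu_y=200.0, mu_y_dark=20.0, K=0.1, sigma_d=10.0))
```

Propagating this gray-value uncertainty to the target center and into object space corresponds to steps two and three above.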
This thesis addresses the automated identification and localization of a time-varying number of objects in a stream of sensor data. The problem is challenging due to its combinatorial nature: if the number of objects is unknown, the number of possible object trajectories grows exponentially with the number of observations. Random finite sets are a relatively new theory developed to arrive at principled and efficient approximations. It is based on set-valued random variables that contain an unknown number of elements which appear in arbitrary order and are themselves random. While extensively studied in theory, random finite sets have not yet become a leading paradigm in practical computer vision and robotics applications. This thesis explores random finite sets in visual tracking applications. The first method developed in this thesis combines set-valued recursive filtering with global optimization. The problem is approached in a min-cost flow network formulation, which has become a standard inference framework for multiple object tracking due to its efficiency and optimality. A main limitation of this formulation is its restriction to unary and pairwise cost terms, which makes the integration of higher-order motion models challenging. The method developed in this thesis addresses this limitation by applying a Probability Hypothesis Density filter, the first practically implemented state estimator based on random finite sets. It circumvents the combinatorial nature of data association by propagating an object density measure that can be computed efficiently, without maintaining explicit trajectory hypotheses. In this work, the filter recursion is used to augment measurements with an additional hidden kinematic state, which is then used to construct more informed flow network cost terms, e.g., based on linear motion models.
The method is evaluated on public benchmarks, where a considerable improvement is achieved compared to network flow formulations based on static features alone, such as the distance between detections and appearance similarity. A second part of this thesis focuses on the related task of detecting and tracking a single robot operator in crowded environments. In contrast to the conventional multiple object tracking scenario, the tracked individual can leave the scene and reappear after a longer period of absence, so a re-identification component is required that picks up the track on re-entrance. Based on random finite sets, the Bernoulli filter is an optimal Bayes filter that provides a natural representation for this type of problem. In this work, it is shown how the Bernoulli filter can be combined with a Probability Hypothesis Density filter to track the operator and non-operators simultaneously. The method is evaluated on a publicly available multiple object tracking dataset as well as on custom sequences specific to the targeted application. Experiments show reliable tracking in crowded scenes and robust re-identification after long-term occlusion. Finally, a third part of this thesis focuses on appearance modeling as an essential aspect of any method applied to visual object tracking. To this end, a feature representation that is robust to pose variations and changing lighting conditions is learned offline, before the actual tracking application. This thesis proposes a joint classification and metric learning objective in which a deep convolutional neural network is trained to identify the individuals in the training set. At test time, the final classification layer can be stripped from the network and appearance similarity can be queried using the cosine distance in representation space.
This framework represents an alternative to direct metric learning objectives, which in the past have required sophisticated pair or triplet sampling strategies. The method is evaluated on two large-scale person re-identification datasets, where competitive results are achieved overall. In particular, the proposed method generalizes better to the test set than a network trained with the well-established triplet loss.
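Stripping the classification layer and querying appearance similarity with the cosine distance can be sketched as follows; the toy embeddings stand in for real CNN features:

```python
import numpy as np

def cosine_similarity_matrix(queries, gallery):
    """Pairwise cosine similarity between embeddings.

    queries : (n, d) array of query embeddings
    gallery : (m, d) array of gallery embeddings
    """
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return q @ g.T

# Toy embeddings standing in for CNN features with the classifier stripped.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))                # five known identities
query = gallery[2] + 0.01 * rng.normal(size=128)   # near-duplicate of identity 2
sims = cosine_similarity_matrix(query[None, :], gallery)
print(int(np.argmax(sims)))  # -> 2, the matching identity
```

Re-identification then reduces to a nearest-neighbor query in representation space.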
The cytological examination of bone marrow serves to clarify abnormalities found in blood smears. It is also used to clarify anemia, to exclude bone marrow involvement in lymphoma, and in cases of suspected leukemia. The morphological evaluation of hematopoietic cells is the basis for establishing the diagnosis and for decision support in further diagnostics. Even for experienced hematologists, the manual classification of hematopoietic cells is time-consuming, error-prone, and subjective. For this reason, new image processing and pattern recognition methods for automatic classification, including the necessary preprocessing steps, are developed for a computer-assisted microscopy system. These methods are evaluated by means of a large reference database. The proposed image analysis procedures comprise methods for the automated detection of smears, for the determination of relevant regions, for the localization and segmentation of single hematopoietic cells, as well as for feature extraction and classification. These methods provide the basis for the first system for the automated morphological analysis of bone marrow aspirates for leukemia diagnosis and are therefore a major contribution to better and more efficient patient care in the future.
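As a hedged illustration of one early pipeline stage (not the system's actual algorithms), a simple Otsu threshold can separate a dark cell from a bright background, after which elementary morphological features such as the area can be measured:

```python
import numpy as np

def otsu_threshold(image):
    """Pick the gray-value threshold that maximizes the between-class
    variance (Otsu's method) for an 8-bit image."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic patch: dark cell (intensity ~60) on a bright background (~200).
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 20:40] = 60
t = otsu_threshold(img)
mask = img < t            # segmented cell region
area = int(mask.sum())    # simple morphological feature: 20*20 = 400 pixels
print(t, area)
```

Real bone marrow smears require far more robust segmentation and a rich feature set, but the principle of turning pixels into per-cell features for a classifier is the same.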
The main goal of this work is to assure the quality of a pharmaceutical production process by verifying the volume of microscopic polymer rods with a highly accurate 3D measurement method. The polymer rods are manufactured for pharmaceutical applications. For quality-assurance reasons, the actual weight must be verified. Currently, the polymer rods are weighed on a sample basis with a high-precision balance. For the next generation of polymers, it is assumed that the product dimensions will be reduced further and the production tolerances lowered to 2.5%. The resulting accuracy requirements, however, exceed the capabilities of weighing technology. For homogeneous materials, the mass is proportional to the volume, so determining the volume is a viable alternative; this opens the door to optical measurement methods with their flexibility and accuracy potential. For the design of a measurement concept adapted to the problem, it is further relevant that the object must be measured without contact, with a cycle time of at most five seconds, and that the volume is approximated. The cross-sections of the polymer rods are approximately circular. Due to the way the fragments are produced, it cannot be assumed that the end faces are orthogonal to the object's axis of symmetry. It must therefore be analyzed how small deviations from circular cross-sections and the non-ideal end faces affect the result. The maximum standard deviation for the volume that should not be exceeded is 2.5%, which corresponds to a maximum deviation of the cross-sectional area of 1106 µm² (by error propagation). As an evaluation criterion, the correlation coefficient between the measured volumes and the masses is determined; an ideal result would be 100%, and the measurement aims at a coefficient of 98%.
To achieve this, a precise volume measurement method is required; based on the current state of the art, the existing optical measurement methods cannot be used. The polymer rod is observed by a camera in transmitted light, so the diameter and the length are visible. The object is rotated about its longitudinal axis by a mechanical fixture, so images can be acquired from all sides. The diameter and the length are computed by image processing. The new concept combines the advantages of the existing methods: it is insensitive to changes in color and brightness, an arbitrary number of images can be acquired, and acquisition and evaluation are considerably faster. A design and an implementation of a solution for the high-precision volume measurement of polymer rods with optical metrology and image processing are elaborated. This dedicated inline process solution is intended to guarantee 100% quality control during production. The cycle time of the system should not exceed five seconds per polymer rod. The boundary conditions for the process are given by the material properties of the object, the small object size (width = 199 µm, length = 935 µm to 1683 µm), and the undefined cross-sectional shape (caused by the drying process). In addition, the cost of the process should not be too high, and the measurement setup should be small and operate without safety precautions or shielding. The resulting system images the objects at different angular steps, evaluates the images by means of image processing, and approximates the volume. For 77 polymer rods weighing 37 µg to 80 µg, the correlation coefficient between volume and weight is 99.87%. The accuracy of the measurement can be determined with the help of a reference system; the standard deviation should be at most 2.5%.
The resulting system achieves a maximum volume deviation of 1.7%. The volume measurement thus fulfills all requirements and can be used as an alternative to the balance.
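The slice-wise volume approximation and the correlation criterion can be sketched as follows; the profile data and the density used here are hypothetical, not the production values:

```python
import numpy as np

def volume_from_profiles(diameters_um, step_um):
    """Approximate the rod volume from a diameter profile along its axis,
    assuming near-circular cross-sections: V = sum(pi * (d/2)^2 * dz)."""
    radii = np.asarray(diameters_um) / 2.0
    return float(np.sum(np.pi * radii**2) * step_um)   # volume in um^3

# Hypothetical profile: a 1000-um rod sampled every 10 um at ~199 um width.
diameters = np.full(100, 199.0)
v = volume_from_profiles(diameters, step_um=10.0)

# Evaluation criterion from the thesis: correlation between volume and mass.
volumes = np.array([1.00, 1.20, 1.35, 1.50, 1.80]) * v
masses = volumes * 1.2e-12          # homogeneous material: mass proportional to volume
r = np.corrcoef(volumes, masses)[0, 1]
print(f"{v:.3e} um^3, r = {r:.4f}")
```

For perfectly proportional toy data the coefficient is 1.0; the real system reaches 99.87% against the balance.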
Studies in recent years have shown that adolescents and young adults have a deficient data-protection competence; however, children and adolescents between the ages of ten and 13 were mostly not the focus of these studies. The guiding question of this work is therefore how data-protection competence develops in children and adolescents at a young age, in order to infer suitable educational concepts for this age group. At the beginning of the work, a data-protection competence model is derived from a media competence model, which serves as the basis for the subsequent field investigation. A survey carried out at general secondary schools in Rhineland-Palatinate shows that the respondents have a sufficiently developed Risk Assessment Competence, but are insufficiently developed in terms of knowledge, Selection and Usage Competence, and Implementation Competence. Recommendations for action are given in the last part of the work, containing learning-goal descriptions that could be implemented in an educational framework, in order to address this issue.
In the context of augmented reality, we define tracking as a collection of methods for obtaining the position and orientation (pose) of a user. By means of various display techniques, this ensures a correct visual overlay of graphical information onto the perceived reality. Precise results for the calculation of the camera pose are obtained by image processing methods that usually analyze the pixels of an image and extract features which can be recognized over the image sequence. However, these methods do not take the process of image synthesis into account, or only in a very simplified way. In contrast, the class of model-based methods assumes that a 3D model of the observed scene is given. Based on the model data, features can be identified to establish correspondences in the camera image, and from these feature correspondences the camera pose is calculated. An interesting approach is the strategy of analysis-by-synthesis, which incorporates the computer graphics rendering process to extend the knowledge about the model with information from image synthesis and other environment variables.
In this thesis, the components of a tracking system are identified, and it is analyzed to what extent information about the model, the rendering process, and the environment can contribute to these components in order to improve the tracking process using analysis-by-synthesis. In particular, by using knowledge such as topological information, lighting, or perspective, feature synthesis and correspondence finding should lead to visually unambiguous features that can be predicted and evaluated as suitable for stable tracking of the camera pose.
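The synthesis half of such a model-based loop, projecting known 3D model features into the camera image to establish correspondences, can be sketched with a pinhole camera model; the intrinsics and pose below are hypothetical:

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3D model points into the image with a pinhole camera:
    x ~ K (R X + t). Returns pixel coordinates, shape (n, 2)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

# Hypothetical setup: identity rotation, model 5 units in front of the camera,
# focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
model = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
uv = project_points(model, R, t, K)
print(uv)
```

Comparing such predicted feature positions with those extracted from the camera image is what closes the analysis-by-synthesis loop.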