Artificial neural networks are a popular field of research in artificial intelligence. The increasing size and complexity of huge models entail certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to choose efficient architectures for different tasks. It proves to be challenging to solve these problems, and with a lack of insightful representations of neural networks, this state of affairs becomes entrenched. With these difficulties in mind, a novel 3D visualization technique is introduced. Attributes for trained neural networks are estimated by utilizing established methods from the area of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the importance of different parts of the neural network. A combination of the importance values with various methods such as edge bundling, ray tracing, 3D impostors and a special transparency technique results in a 3D model representing a neural network. The validity of the extracted importance estimations is demonstrated and the potential of the developed visualization is explored.
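As an illustration of how per-part importance might be estimated from batch normalization, the following sketch reads the learned batch-norm scale (gamma) parameters of a PyTorch model and treats their magnitudes as per-channel importance scores. The model, the layer name and this particular reading are illustrative assumptions, not the thesis' actual procedure.

```python
import torch
import torchvision

# Hypothetical example: treat |gamma| of each BatchNorm layer as a
# per-channel importance proxy. Model and layer names are illustrative.
model = torchvision.models.resnet18(weights=None)

importance = {}
for name, module in model.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        # gamma (module.weight) scales each channel; a small |gamma| suggests
        # the channel contributes little to the layer's output.
        importance[name] = module.weight.detach().abs()

# Example: the five channels with the smallest scale in the first BN layer.
gamma = importance["bn1"]
print(torch.topk(gamma, k=5, largest=False))
```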
Leichte Sprache (LS, easy-to-read German) is a simplified variety of German. It is used to provide barrier-free texts for a broad spectrum of people, including low-literate individuals with learning difficulties, intellectual or developmental disabilities (IDD) and/or complex communication needs (CCN). In general, LS authors are proficient in standard German and do not belong to the aforementioned group of people. Our goal is to empower the latter to participate in written discourse themselves. This requires a special writing system whose linguistic support and ergonomic software design meet the target group's specific needs. We present EasyTalk, a system based on natural language processing (NLP) for assistive writing in an extended variant of LS (ELS). EasyTalk provides users with a personal vocabulary underpinned with customizable communication symbols and supports them in writing at their individual level of proficiency through interactive user guidance. The system minimizes the grammatical knowledge needed to produce correct and coherent complex content by letting users formulate linguistic decisions intuitively. It provides easy dialogs for selecting options from a natural-language paraphrase generator, which offers context-sensitive suggestions for sentence components and correctly inflected word forms. In addition, EasyTalk reminds users to add text elements that enhance text comprehensibility in terms of audience design (e.g., time and place of an event) and improve text coherence (e.g., explicit connectors to express discourse relations). To tailor the system to the needs of the target group, the development of EasyTalk followed the principles of human-centered design (HCD). Accordingly, we matured the system in iterative development cycles, combined with purposeful evaluations of specific aspects conducted with expert groups from the fields of CCN, LS, and IT, as well as L2 learners of the German language. In a final case study, members of the target audience tested the system in free writing sessions. The study confirmed that adults with IDD and/or CCN who have low reading, writing, and computer skills can write their own personal texts in ELS using EasyTalk. The positive feedback from all tests inspires future long-term studies with EasyTalk and further development of this prototypical system, such as the implementation of a so-called Schreibwerkstatt (writing workshop).
On the recognition of human activities and the evaluation of its imitation by robotic systems
(2023)
This thesis addresses the problem of action recognition through the analysis of human motion and the benchmarking of its imitation by robotic systems.
For our action recognition approaches, we focus on methods that generalize well across different sensor modalities. We transform multivariate signal streams from various sensors into a common image representation. The action recognition problem on sequential multivariate signal streams can then be reduced to an image classification task, for which we utilize recent advances in machine learning. We demonstrate the broad applicability of our approaches, formulated as a supervised classification task for action recognition, a semi-supervised classification task for one-shot action recognition, modality fusion, and temporal action segmentation.
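The following is a minimal sketch of the general idea of mapping a multivariate signal stream to an image so that an image classifier can be applied; the per-channel normalization and row layout are illustrative assumptions, not the exact encoding used in the thesis.

```python
import numpy as np
from PIL import Image

def signals_to_image(signals: np.ndarray, size=(224, 224)) -> Image.Image:
    """Map a (timesteps, channels) signal array to a grayscale image.

    Each channel becomes one image row; values are min-max normalized to
    [0, 255]. Illustrative encoding only, not the thesis' exact one.
    """
    x = signals.T.astype(np.float32)                     # (channels, timesteps)
    mins = x.min(axis=1, keepdims=True)
    spans = x.max(axis=1, keepdims=True) - mins + 1e-8
    x = (x - mins) / spans * 255.0
    return Image.fromarray(x.astype(np.uint8), mode="L").resize(size)

# Example: 6 IMU channels sampled over 300 timesteps.
img = signals_to_image(np.random.randn(300, 6))
img.save("motion_image.png")
```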
For action classification, we use an EfficientNet Convolutional Neural Network (CNN) model to classify the image representations of various data modalities. Further, we present approaches for filtering and fusing various modalities at the representation level. We extend the approach to semi-supervised classification and train a metric-learning model that encodes action similarity. During training, the encoder optimizes the distances in embedding space for self-, positive- and negative-pair similarities. The resulting encoder allows estimating action similarity by calculating distances in embedding space. At training time, no action classes from the test set are used.
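A minimal sketch of such a metric-learning setup, assuming PyTorch and a toy encoder in place of the EfficientNet backbone; the triplet margin loss stands in for whatever pair-similarity objective the thesis actually uses.

```python
import torch
import torch.nn as nn

# Toy encoder: embeds motion images into a space in which distance reflects
# action dissimilarity. Architecture and sizes are illustrative assumptions.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)

criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# anchor/positive share an action class, negative comes from a different one.
anchor, positive, negative = (torch.randn(8, 1, 224, 224) for _ in range(3))
loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()

# At test time, unseen actions are compared by embedding distance only.
dist = torch.cdist(encoder(anchor), encoder(negative))
```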
Graph Convolutional Networks (GCNs) generalize the concept of CNNs to non-Euclidean data structures and have shown great success for action recognition when operating directly on spatio-temporal sequences such as skeleton sequences. GCNs have recently shown state-of-the-art performance for skeleton-based action recognition but are currently widely neglected as the foundation for the fusion of various sensor modalities. We incorporate additional modalities, like inertial measurements or RGB features, into the skeleton graph by fusing on two different dimensionality levels. On the channel dimension, modalities are fused by introducing additional node attributes. On the spatial dimension, additional nodes are incorporated into the skeleton graph.
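A small NumPy sketch of the two fusion levels described above, with illustrative array shapes (25 skeleton joints, a 6-channel inertial stream); the exact graph construction in the thesis may differ.

```python
import numpy as np

# Illustrative shapes only: 25 joints with 3D coordinates over T frames,
# plus a 6-channel inertial measurement stream aligned to the same frames.
T, J, C = 64, 25, 3
skeleton = np.random.randn(T, J, C)      # (frames, joints, xyz)
imu = np.random.randn(T, 6)              # (frames, imu channels)

# Channel-dimension fusion: broadcast the IMU signal to every joint and
# append it as additional node attributes -> (T, J, C + 6).
channel_fused = np.concatenate(
    [skeleton, np.repeat(imu[:, None, :], J, axis=1)], axis=-1)

# Spatial-dimension fusion: add extra "virtual" nodes to the graph; here the
# six IMU channels become two extra nodes with three pseudo-coordinates each.
extra_nodes = imu.reshape(T, 2, 3)
spatial_fused = np.concatenate([skeleton, extra_nodes], axis=1)  # (T, J + 2, C)

print(channel_fused.shape, spatial_fused.shape)  # (64, 25, 9) (64, 27, 3)
```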
Transformer models have shown excellent performance in the analysis of sequential data. We formulate the temporal action segmentation task as an object detection task and use a detection transformer model on our proposed motion image representations. Experiments for our action recognition approaches are conducted on large-scale, publicly available datasets. Our approaches for action recognition with various modalities, action recognition by fusion of various modalities, and one-shot action recognition demonstrate state-of-the-art results on some datasets.
Finally, we present a hybrid imitation learning benchmark. The benchmark consists of a dataset, metrics, and a simulator integration. The dataset contains RGB-D image sequences of humans performing movements and executing manipulation tasks, as well as the corresponding ground truth. The RGB-D camera is calibrated against a motion-capturing system, and the resulting sequences serve as input for imitation learning approaches. The resulting policy is then executed in the simulated environment on different robots. We propose two metrics to assess the quality of the imitation. The trajectory metric gives insight into how close the execution was to the demonstration. The effect metric describes how closely the final state matches the one reached in the demonstration. The Simitate benchmark can improve the comparability of imitation learning approaches.
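A minimal sketch of how the two metrics could be computed as described, assuming time-aligned (T, 3) trajectories; the benchmark's exact definitions may differ.

```python
import numpy as np

def trajectory_metric(demo: np.ndarray, execution: np.ndarray) -> float:
    """Mean Euclidean distance between demonstrated and executed positions
    over time; assumes both trajectories are (T, 3) and time-aligned."""
    return float(np.linalg.norm(demo - execution, axis=1).mean())

def effect_metric(demo_final: np.ndarray, exec_final: np.ndarray) -> float:
    """Distance between the final states of demonstration and execution."""
    return float(np.linalg.norm(demo_final - exec_final))

demo = np.random.rand(100, 3)
execution = demo + 0.01 * np.random.randn(100, 3)
print(trajectory_metric(demo, execution), effect_metric(demo[-1], execution[-1]))
```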
This thesis explores a 3D object detection and pose estimation approach based on the point pair features method presented by Drost et al. [Dro+10]. While pose estimation methods have improved considerably, they still pose a crucial problem in the field of computer vision. In this work, we implemented a program that takes point cloud scenes as input and returns the detected objects with their estimated poses. The program covers the full object detection pipeline: 3D models are processed during an offline phase, their point pair features are extracted and combined into a global descriptor. During an online phase, the same features are extracted from a point cloud scene and matched to the model features. After a voting scheme, potential poses of the object are retrieved. The poses are then clustered and post-processed to deliver the final result. The program was tested using simulated and real data. We evaluate these tests and present the final results by discussing the achieved accuracy of the detections and the estimated poses.
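For reference, the point pair feature of Drost et al. for two oriented points is F(m1, m2) = (||d||, ∠(n1, d), ∠(n2, d), ∠(n1, n2)) with d = m2 - m1. The following is a minimal NumPy sketch of that feature, without the discretization step used when building the global model descriptor.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Point pair feature F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
    for two oriented points (position + normal), following Drost et al."""
    d = p2 - p1
    dist = np.linalg.norm(d)

    def angle(a, b):
        a = a / (np.linalg.norm(a) + 1e-12)
        b = b / (np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, angle(n1, d), angle(n2, d), angle(n1, n2)])

# Example with two oriented points.
f = point_pair_feature(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                       np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(f)
```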
Augmented reality (AR) applications typically extend the user's view of the real world with virtual objects.
In recent years, AR has gained increasing popularity and attention, which has led to improvements in the required technologies. AR has become available to almost everyone.
Researchers have made great progress towards the goal of believable AR, in which the real and virtual worlds are combined seamlessly.
They mainly focus on issues like tracking, display technologies and user interaction, and give little attention to visual and physical coherence when real and virtual objects are combined. For example, virtual objects should not only respond to the user's input; they should also interact with real objects. Generally, AR becomes more believable and realistic if virtual objects appear fixed or anchored in the real scene, appear indistinguishable from the real scene, and respond to any changes within it.
This thesis examines three challenges in the field of computer vision to meet the goal of a believable combined world in which virtual objects appear and behave like real objects.
Firstly, the thesis concentrates on the well-known tracking and registration problem. The tracking and registration challenge is discussed and an approach is presented to estimate the position and viewpoint of the user so that virtual objects appear fixed in the real world. Appearance-based line models, which keep only relevant edges for tracking purposes, enable absolute registration in the real world and provide robust tracking. On the one hand, there is no need to spend much time creating suitable models manually. On the other hand, the tracking can deal with changes within the object or the scene to be tracked. Experiments have shown that the use of appearance-based line models improves the robustness, accuracy and re-initialization speed of the tracking process.
Secondly, the thesis deals with the subject of reconstructing the surface of a real environment and presents an algorithm to optimize an ongoing surface reconstruction. A complete 3D surface reconstruction of the target scene
offers new possibilities for creating more realistic AR applications. Several interactions between real and virtual objects, such as collision and occlusions, can be handled with physical correctness. Whereas previous methods focused on improving surface reconstructions offline after a capturing step, the presented method de-noises, extends and fills holes during the capturing process. Thus, users can explore an unknown environment without any preparation tasks such as moving around and scanning the scene, and without having to deal with the underlying technology in advance. In experiments, the approach provided realistic results where known surfaces were extended and filled in plausibly for different surface types.
Finally, the thesis focuses on handling occlusions between the real and virtual worlds more realistically, by re-interpreting the occlusion challenge as an alpha matting problem. The presented method overcomes limitations in state-of-the-art methods by estimating a blending coefficient per pixel of the rendered virtual scene, instead of calculating only their visibility. In several experiments and comparisons with other methods, occlusion handling through alpha matting worked robustly and overcame limitations of low-cost sensor data; it also outperformed previous work in terms of quality, realism and practical applicability.
The method can deal with noisy depth data and yields realistic results in regions where foreground and background are not strictly separable (e.g. caused by fuzzy objects or motion blur).
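A minimal sketch of the underlying compositing step, assuming a per-pixel alpha map has already been estimated: this is generic alpha blending, not the thesis' matting algorithm. A binary alpha reproduces classical occlusion masking, while fractional values handle fuzzy boundaries such as hair or motion blur.

```python
import numpy as np

def composite(camera: np.ndarray, virtual: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend the rendered virtual scene over the camera image using a
    per-pixel blending coefficient alpha in [0, 1] (1 = virtual fully visible)."""
    a = alpha[..., None]                       # (H, W, 1) for broadcasting
    return (a * virtual + (1.0 - a) * camera).astype(camera.dtype)

# Example with random images; in practice alpha comes from the matting step.
cam = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
virt = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
alpha = np.random.rand(480, 640).astype(np.float32)
out = composite(cam, virt, alpha)
```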
Efficient cochlear implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. Registered and fused images help doctors by providing a more informative image that takes advantage of the different modalities. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a big challenge for the automated registration of the different image modalities. To obtain an automatic measurement of the cochlea length and the volume size, a segmentation method for cochlea medical images is needed. The goal of this dissertation is to introduce new practical and automatic algorithms for human cochlea multi-modal 3D image registration, fusion, segmentation and analysis. Two novel methods for automatic cochlea image registration (ACIR) and automatic cochlea analysis (ACA) are introduced. The proposed methods crop the input images to the cochlea part and then align the cropped images to obtain the optimal transformation. After that, this transformation is used to align the original images. ACIR and ACA use Mattes mutual information as similarity metric and the adaptive stochastic gradient descent (ASGD) or the stochastic limited memory Broyden-Fletcher-Goldfarb-Shanno (s-LBFGS) optimizer to estimate the parameters of a 3D rigid transform. A second, non-rigid registration stage estimates B-spline coefficients that are used in an atlas-model-based segmentation to extract the cochlea scalae and the relative measurements of the input image. The atlas image, for which a segmentation is available, is aligned to the input image to obtain the non-rigid transformation. After that, the segmentation of the atlas image, together with point models, is transformed to the input image. The detailed transformed segmentation provides the scala volume size. Using the transformed point models, the A-value, the central scala lengths, and the lateral and organ-of-Corti scala tympani lengths are computed. The methods have been tested using clinical 3D images of 67 patients in total: from Germany (41 patients) and Egypt (26 patients). The patients are of different ages and genders. The number of images used in the experiments is 217; these are multi-modal 3D clinical images from CT, CBCT, and MRI scanners. The proposed methods are compared to state-of-the-art optimizer-based medical image registration methods, e.g. fast adaptive stochastic gradient descent (FASGD) and efficient preconditioned stochastic gradient descent (EPSGD). The comparison used the root mean squared error (RMSE) between the ground truth landmarks and the resulting landmarks. The landmarks are located manually by two experts to represent the round window and the top of the cochlea. After obtaining the transformation using ACIR, the landmarks of the moving image are transformed using the resulting transformation, and the RMSE between the transformed landmarks and the fixed-image landmarks is computed. I also used the active length of the cochlear implant electrodes to compute the error caused by image artifacts and found an error ranging from 0.5 mm to 1.12 mm. The average RMSE of the ACIR method was 0.36 mm with a standard deviation (SD) of 0.17 mm. The average total time required for registration of an image pair using ACIR was 4.62 seconds with an SD of 1.19 seconds. All experiments were repeated three times for validation.
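For orientation, the following SimpleITK sketch sets up a generic 3D rigid registration with Mattes mutual information. It is not the ACIR implementation: plain gradient descent stands in for the ASGD and s-LBFGS optimizers, the cochlea cropping step is omitted, and the file names are placeholders.

```python
import SimpleITK as sitk

# Generic 3D rigid registration with Mattes mutual information (sketch).
fixed = sitk.ReadImage("ct_cochlea.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("mri_cochlea.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.05)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(resampled, "mri_registered_to_ct.nii.gz")
```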
Comparing the RMSE of ACIR2017 and ACIR2020 using a paired t-test shows no significant difference (p-value = 0.17). The total RMSE average of the ACA method was 0.61 mm with an SD of 0.22 mm. The average total time required for analysing an image was 5.21 seconds with an SD of 0.93 seconds. The statistical tests show that there is no difference between the results from the automatic A-value method and the manual A-value method (p-value = 0.42). There is also no difference between the length measurements of the left and the right ear sides (p-value > 0.16). Comparing the results from the German and the Egyptian datasets shows no difference when using the manual or the automatic A-value methods (p-value > 0.20). However, there is a significant difference between the German and the Egyptian results when using the ACA2020 method (p-value < 0.001). The average time to obtain the segmentation and all measurements was 5.21 seconds per image. The cochlea scala tympani volume size ranged from 38.98 mm³ to 57.67 mm³. The combined scala media and scala vestibuli volume size ranged from 34.98 mm³ to 49.3 mm³. The overall volume size of the cochlea should thus range from 73.96 mm³ to 106.97 mm³. The lateral wall length of the scala tympani ranged from 42.93 mm to 47.19 mm. The organ-of-Corti length of the scala tympani ranged from 31.11 mm to 34.08 mm. Using the A-value method, the lateral length of the scala tympani ranged from 36.69 mm to 45.91 mm and the organ-of-Corti length of the scala tympani from 29.12 mm to 39.05 mm. The length obtained with the ACA2020 method can be visualised and has well-defined endpoints. The ACA2020 method works on different modalities and different images regardless of the noise level or the resolution. The A-value method, on the other hand, works neither on MRI nor on noisy images. Hence, the ACA2020 method may provide more reliable and accurate measurements than the A-value method. The source code and the datasets are made publicly available to help reproduce and validate my results.
Human action recognition from video has received growing attention in computer vision and has made significant progress in recent years. Action recognition is the task of deciding which human actions appear in videos. The difficulties involved in distinguishing human actions are due to the high complexity of human behaviors as well as appearance variation, motion pattern variation, occlusions, etc. Many applications use human action recognition on video captured from cameras, for example video surveillance systems, health monitoring, human-computer interaction, and robotics. Action recognition based on RGB-D data has drawn increasing attention in recent years. RGB-D data contain color (Red, Green, and Blue (RGB)) and depth data that represent the distance from the sensor to every pixel in the object (object point). The main problem that this thesis deals with is how to automate the classification of specific human activities/actions from RGB-D data. The classification process of these activities utilizes the spatial and temporal structure of actions. Therefore, the goal of this work is to develop algorithms that can distinguish low-level and high-level activities of interest from one another. These algorithms are developed by introducing new features and methods using RGB-D data to enhance the detection and recognition of human activities. In this thesis, the most popular state-of-the-art techniques are reviewed, presented, and evaluated. From the literature review, these techniques are categorized into hand-crafted features and deep learning-based approaches. The proposed new action recognition framework is based on these two categories, which are improved in this work by embedding novel methods for human action recognition. These methods are based on features extracted from RGB-D data that are
evaluated using machine learning techniques. The work presented in this thesis improves human action recognition in two distinct parts. The first part focuses on improving current successful hand-crafted approaches. It contributes to two significant areas of the state of the art: executing the existing feature detectors and classifying the human action in the 3D spatio-temporal domain by testing new combinations of different feature representations. The contributions of this part are tested with machine learning techniques that include unsupervised and supervised learning to evaluate their suitability for the task of human action recognition. k-means clustering represents the unsupervised learning technique, while the supervised learning techniques are represented by Support Vector Machine, Random Forest, K-Nearest Neighbor, Naive Bayes, and Artificial Neural Network classifiers. The second part focuses on studying current deep-learning-based approaches and how to use them with RGB-D data for the human action recognition task. As the first step of each contribution, an input video is analyzed as a sequence of frames. Then, pre-processing steps such as filtering and smoothing methods are applied to the video frames to remove noisy data from each frame. Afterward, different motion detection and feature representation methods are used to extract the features present in each frame. The extracted features
are represented by local features, global features, and feature combinations, as well as deep learning methods, e.g., Convolutional Neural Networks. The feature combination achieves excellent accuracy and outperforms other methods on the same RGB-D datasets. All results of the proposed methods in this thesis are evaluated on publicly available datasets, which illustrates that using spatio-temporal features can improve the recognition accuracy. Competitive experimental results are achieved overall. In particular, the proposed methods perform better on the test sets than the state-of-the-art methods on the RGB-D datasets.
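As an illustration of combining the unsupervised and supervised techniques named above, the following scikit-learn sketch builds a bag-of-visual-words representation with k-means and classifies it with an SVM; the descriptors are random stand-ins, not features from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Stand-in data: each video yields a variable number of local descriptors
# (e.g. spatio-temporal interest point features); labels are action classes.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(rng.integers(50, 200), 64)) for _ in range(40)]
labels = rng.integers(0, 4, size=40)

# Unsupervised step: k-means builds a visual vocabulary over all descriptors.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0)
kmeans.fit(np.vstack(videos))

# Each video becomes a normalized histogram of visual-word occurrences.
def bow_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=32).astype(np.float64)
    return hist / hist.sum()

X = np.array([bow_histogram(v) for v in videos])

# Supervised step: an SVM classifies the histograms into action classes.
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```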
Point Rendering
(2021)
In this thesis, different methods for rendering point data are shown and compared with each other. The methods can be divided into two categories. On the one hand, visual methods are introduced that deal strictly with the display of point primitives. The main problem here lies in the depiction of surfaces, since point data, unlike traditional triangle meshes, do not contain any connectivity information. On the other hand, data structures are shown that enable real-time rendering of large point clouds. Point clouds often contain large amounts of data, since they are mostly generated through 3D scanning processes such as laser scanning and photogrammetry.
Worldwide, the competition for the best technologies to realize autonomous driving is in full swing.
Despite great efforts, however, autonomous navigation in structured and, above all, unstructured environments has not yet been solved.
A decisive building block in this field is the perception and analysis of the environment with suitable sensors and corresponding sensor data processing.
In particular, imaging methods covering the part of the spectrum visible to humans are widely used both in practice and in research.
However, this exploits only a fraction of the electromagnetic spectrum, and consequently a large part of the information available for environmental perception is ignored.
To make better use of the available spectrum, other research fields have employed so-called spectral sensors for decades; these analyze the electromagnetic spectrum much more finely and over a larger range than classical color cameras. Due to technical limitations, however, such systems could only capture static scenes. Recent developments in sensor technology, namely the so-called snapshot mosaic filter technique, now enable the spectral sampling of dynamic scenes.
This dissertation investigates the use and suitability of the snapshot mosaic technique for environmental perception and scene analysis in the context of autonomous navigation in structured and unstructured environments. It examines whether the captured spectral data offer an advantage over classical RGB or grayscale data with respect to semantic scene analysis and classification.
First, a suitable preprocessing pipeline is developed that computes spectral values from the raw sensor data. The construction of novel datasets with spectral data is then described. These datasets serve as the basis for evaluating various classifiers from the field of classical machine learning.
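A simplified illustration of such a preprocessing step, assuming a repeating 4x4 snapshot-mosaic filter pattern (16 bands); radiometric corrections such as dark-frame and white-reference normalization are omitted here.

```python
import numpy as np

def mosaic_to_cube(raw: np.ndarray, pattern: int = 4) -> np.ndarray:
    """Rearrange a raw snapshot-mosaic frame into a spectral cube.

    Assumes a repeating pattern x pattern filter mosaic (e.g. 4x4 = 16 bands):
    band (i, j) is sampled at every pixel with row % pattern == i and
    col % pattern == j. Radiometric corrections are omitted in this sketch.
    """
    h, w = raw.shape
    h, w = h - h % pattern, w - w % pattern              # crop to full mosaics
    cube = np.empty((h // pattern, w // pattern, pattern * pattern), raw.dtype)
    for i in range(pattern):
        for j in range(pattern):
            cube[:, :, i * pattern + j] = raw[i:h:pattern, j:w:pattern]
    return cube

cube = mosaic_to_cube(np.random.randint(0, 1023, (1088, 2048), dtype=np.uint16))
print(cube.shape)  # (272, 512, 16)
```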
Building on this, methods and architectures from the field of deep learning are presented. Selected architectures are used to investigate whether they can also be trained with spectral data. Furthermore, the use of deep learning methods for data compression is addressed. In a next step, the compressed data are used to train network architectures that were previously compatible only with RGB data. Finally, it is analyzed whether the high-dimensional spectral data offer advantages over RGB data in scene analysis.
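One common way to feed spectral cubes into architectures designed for RGB input is to replace the first convolution layer; the following PyTorch sketch shows this for a ResNet-18 with an assumed 16 spectral bands. It is an illustrative adaptation, not necessarily the approach evaluated in the dissertation.

```python
import torch
import torch.nn as nn
import torchvision

# Replace the RGB stem so the network accepts N spectral channels.
N_BANDS = 16
model = torchvision.models.resnet18(weights=None)
model.conv1 = nn.Conv2d(N_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 10)   # e.g. 10 semantic classes

x = torch.randn(2, N_BANDS, 224, 224)            # a batch of spectral cubes
logits = model(x)
print(logits.shape)                              # torch.Size([2, 10])
```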
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually.
The approach taken for optimization is then generalized into a recipe. Programmers can use it as a guide when approaching large and complex GPU programs. In addition, the per-warp job queue, a design pattern used for load balancing among the threads of a CUDA thread block, is discussed in detail.
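The following Numba CUDA sketch illustrates the per-warp job-queue pattern in a heavily simplified form (one warp per block, dummy jobs, a fixed-size shared-memory queue); it is not clsim's implementation, only a sketch of the load-balancing idea.

```python
import numpy as np
from numba import cuda, int32

WARP = 32          # one warp per block keeps this sketch simple
QUEUE_CAP = 256    # shared-memory queue capacity per block

@cuda.jit
def warp_queue_kernel(work_counts, results):
    # Each thread produces a variable number of jobs, pushes them into a
    # shared-memory queue, and the whole warp then drains the queue 32 jobs
    # at a time so no lane sits idle while its neighbours still have work.
    queue = cuda.shared.array(QUEUE_CAP, dtype=int32)
    qsize = cuda.shared.array(1, dtype=int32)

    tid = cuda.threadIdx.x
    gid = cuda.blockIdx.x * cuda.blockDim.x + tid
    if tid == 0:
        qsize[0] = 0
    cuda.syncthreads()

    # Produce: push this thread's jobs (here just dummy ids) into the queue.
    if gid < work_counts.shape[0]:
        for _ in range(work_counts[gid]):
            slot = cuda.atomic.add(qsize, 0, 1)
            if slot < QUEUE_CAP:
                queue[slot] = gid
    cuda.syncthreads()

    # Consume: all 32 lanes process the queued jobs together.
    total = min(qsize[0], QUEUE_CAP)
    for base in range(0, total, WARP):
        idx = base + tid
        if idx < total:
            job = queue[idx]
            cuda.atomic.add(results, job % results.shape[0], 1)  # stand-in work

counts = np.random.randint(0, 8, size=4096).astype(np.int32)
d_counts = cuda.to_device(counts)
d_results = cuda.to_device(np.zeros(128, dtype=np.int32))
blocks = (counts.size + WARP - 1) // WARP
warp_queue_kernel[blocks, WARP](d_counts, d_results)
print(int(d_results.copy_to_host().sum()), int(counts.sum()))  # totals match
```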