On the recognition of human activities and the evaluation of its imitation by robotic systems
(2023)
This thesis addresses the problem of action recognition through the analysis of human motion and the benchmarking of its imitation by robotic systems.
For our action-recognition approaches, we focus on methods that generalize well across different sensor modalities. We transform multivariate signal streams from various sensors to a common image representation. The action recognition problem on sequential multivariate signal streams can then be reduced to an image classification task, for which we utilize recent advances in machine learning. We demonstrate the broad applicability of our approaches formulated as a supervised classification task for action recognition, a semi-supervised classification task for one-shot action recognition, modality fusion, and temporal action segmentation.
For action classification, we use an EfficientNet Convolutional Neural Network (CNN) model to classify the image representations of various data modalities. Further, we present approaches for filtering and the fusion of various modalities on a representation level. We extend the approach to be applicable for semi-supervised classification and train a metric-learning model that encodes action similarity. During training, the encoder optimizes the distances in embedding space for self-, positive- and negative-pair similarities. The resulting encoder allows estimating action similarity by calculating distances in embedding space. At training time, no action classes from the test set are used.
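As an illustration of the metric-learning objective described above, the following is a minimal sketch of one common triplet-margin formulation in PyTorch; the margin value and the pairing strategy are assumptions for illustration, not the exact loss used in the thesis.

    import torch
    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Distances in embedding space: pull same-action pairs together,
        # push different-action pairs at least `margin` further apart.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

At test time, action similarity is then estimated exactly as the abstract states: by computing distances between embeddings, e.g. with a nearest-neighbour lookup.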
Graph Convolutional Networks (GCNs) generalize the concept of CNNs to non-Euclidean data structures and have shown great success for action recognition when operating directly on spatio-temporal sequences such as skeleton sequences. GCNs have recently shown state-of-the-art performance for skeleton-based action recognition but are currently widely neglected as the foundation for the fusion of various sensor modalities. We propose incorporating additional modalities, like inertial measurements or RGB features, into a skeleton graph by fusing on two different dimensionality levels. On the channel dimension, modalities are fused by introducing additional node attributes. On the spatial dimension, additional nodes are incorporated into the skeleton graph.
Transformer models have shown excellent performance in the analysis of sequential data. We formulate the temporal action segmentation task as an object detection task and use a detection transformer model on our proposed motion image representations. Experiments for our action-recognition approaches are executed on large-scale publicly available datasets. Our approaches for action recognition for various modalities, action recognition by fusion of various modalities, and one-shot action recognition demonstrate state-of-the-art results on some datasets.
Finally, we present a hybrid imitation learning benchmark. The benchmark consists of a dataset, metrics, and a simulator integration. The dataset contains RGB-D image sequences of humans performing movements and executing manipulation tasks, as well as the corresponding ground truth. The RGB-D camera is calibrated against a motion-capturing system, and the resulting sequences serve as input for imitation learning approaches. The resulting policy is then executed in the simulated environment on different robots. We propose two metrics to assess the quality of the imitation: the trajectory metric gives insights into how close the execution was to the demonstration, while the effect metric describes how closely the final state matched the demonstration. The Simitate benchmark can improve the comparability of imitation learning approaches.
This thesis explores a 3D object detection and pose estimation approach based on the point pair features method presented by Drost et al. [Dro+10]. While pose estimation methods have shown good improvements, they remain a crucial problem in the computer vision field. In this work, we implemented a program that takes point cloud scenes as input and returns the detected objects with their estimated poses. The program covers the full object detection pipeline: it processes 3D models during an offline phase, extracting their point pair features and building a global descriptor out of them. During an online phase, the same features are extracted from a point cloud scene and matched against the model features. After a voting scheme, potential poses of the object are retrieved. The poses are then clustered and post-processed to deliver the final result. The program was tested on simulated and real data. We evaluate these tests and present the final results, discussing the achieved accuracy of the detections and of the estimated poses.
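For context, the point pair feature of Drost et al. pairs two oriented points and describes them by a distance and three angles; a small sketch in Python (the quantization step and the hash-table construction of the full pipeline are omitted):

    import numpy as np

    def angle(a, b):
        # Angle between two vectors, clamped for numerical safety.
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(c, -1.0, 1.0))

    def point_pair_feature(p1, n1, p2, n2):
        # F(m1, m2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
        d = p2 - p1
        return (np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2))

In the offline phase these features are quantized and stored in a hash table (the global descriptor); in the online phase the same features, extracted from the scene, vote for candidate object poses.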
Augmented reality (AR) applications typically extend the user's view of the real world with virtual objects.
In recent years, AR has gained increasing popularity and attention, which has led to improvements in the required technologies. AR has become available to almost everyone.
Researchers have made great progress towards the goal of believable AR, in which the real and virtual worlds are combined seamlessly.
They mainly focus on issues like tracking, display technologies and user interaction, and give little attention to visual and physical coherence when real and virtual objects are combined. For example, virtual objects should not only respond to the user's input; they should also interact with real objects. Generally, AR becomes more believable and realistic if virtual objects appear fixed or anchored in the real scene, appear indistinguishable from the real scene, and respond to any changes within it.
This thesis examines three challenges in the field of computer vision towards the goal of a believable combined world in which virtual objects appear and behave like real objects.
Firstly, the thesis concentrates on the well-known tracking and registration problem. The tracking and registration challenge is discussed and an approach is presented to estimate the position and viewpoint of the user so that virtual objects appear fixed in the real world. Appearance-based line models, which keep only relevant edges for tracking purposes, enable absolute registration in the real world and provide robust tracking. On the one hand, there is no need to spend much time creating suitable models manually. On the other hand, the tracking can deal with changes within the object or the scene to be tracked. Experiments have shown that the use of appearance-based line models improves the robustness, accuracy and re-initialization speed of the tracking process.
Secondly, the thesis deals with the subject of reconstructing the surface of a real environment and presents an algorithm to optimize an ongoing surface reconstruction. A complete 3D surface reconstruction of the target scene offers new possibilities for creating more realistic AR applications. Several interactions between real and virtual objects, such as collisions and occlusions, can be handled with physical correctness. Whereas previous methods focused on improving surface reconstructions offline after a capturing step, the presented method de-noises, extends and fills holes during the capturing process. Thus, users can explore an unknown environment without preparation tasks, such as moving around and scanning the scene, and without having to deal with the underlying technology in advance. In experiments, the approach provided realistic results where known surfaces were extended and holes were filled in plausibly for different surface types.
Finally, the thesis focuses on handling occlusions between the real and virtual worlds more realistically, by re-interpreting the occlusion challenge as an alpha matting problem. The presented method overcomes limitations in state-of-the-art methods by estimating a blending coefficient per pixel of the rendered virtual scene, instead of calculating only their visibility. In several experiments and comparisons with other methods, occlusion handling through alpha matting worked robustly and overcame limitations of low-cost sensor data; it also outperformed previous work in terms of quality, realism and practical applicability.
The method can deal with noisy depth data and yields realistic results in regions where foreground and background are not strictly separable (e.g. caused by fuzzy objects or motion blur).
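The core idea of replacing binary visibility with a per-pixel blending coefficient can be stated compactly; below is a minimal numpy sketch (the estimation of alpha itself is the contribution of the thesis and is not reproduced here):

    import numpy as np

    def composite(virtual_rgb, real_rgb, alpha):
        # alpha = 1 where the virtual object is fully visible, 0 where it is
        # occluded by the real scene, and fractional along fuzzy boundaries
        # (hair, motion blur) where foreground and background mix.
        a = alpha[..., None]  # broadcast over the colour channels
        return a * virtual_rgb + (1.0 - a) * real_rgb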
Efficient cochlear implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. Registered and fused images help doctors by providing a more informative image that takes advantage of the different modalities. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a big challenge for the automated registration of the different image modalities. To obtain an automatic measurement of the cochlea length and volume, a segmentation method for cochlea medical images is needed. The goal of this dissertation is to introduce new practical and automatic algorithms for human cochlea multi-modal 3D image registration, fusion, segmentation and analysis. Two novel methods for automatic cochlea image registration (ACIR) and automatic cochlea analysis (ACA) are introduced. The proposed methods crop the input images to the cochlea part and then align the cropped images to obtain the optimal transformation; this transformation is then used to align the original images. ACIR and ACA use Mattes mutual information as the similarity metric and the adaptive stochastic gradient descent (ASGD) or the stochastic limited-memory Broyden–Fletcher–Goldfarb–Shanno (s-LBFGS) optimizer to estimate the parameters of a 3D rigid transform. A second, non-rigid registration stage estimates B-spline coefficients that are used in an atlas-model-based segmentation to extract the cochlea scalae and the relative measurements of the input image. The image which has a segmentation is aligned to the input image to obtain the non-rigid transformation; the segmentation of the first image and the point models are then transformed to the input image. The detailed transformed segmentation provides the scala volume size. Using the transformed point models, the A-value, the central scala lengths, and the lateral and organ-of-Corti scala tympani lengths are computed. The methods have been tested on clinical 3D images of 67 patients in total: from Germany (41 patients) and Egypt (26 patients). The patients are of different ages and genders. The number of images used in the experiments is 217; these are multi-modal 3D clinical images from CT, CBCT and MRI scanners. The proposed methods are compared to state-of-the-art optimizer-based medical image registration methods, e.g. fast adaptive stochastic gradient descent (FASGD) and efficient preconditioned stochastic gradient descent (EPSGD). The comparison uses the root mean squared error (RMSE) between ground-truth landmarks and the resulting landmarks. The landmarks were located manually by two experts to represent the round window and the top of the cochlea. After obtaining the transformation using ACIR, the landmarks of the moving image are transformed using the resulting transformation, and the RMSE with respect to the fixed image landmarks is computed. I also used the active length of the cochlear implant electrodes to compute the error caused by image artefacts, and found an error ranging from 0.5 mm to 1.12 mm. The average RMSE of the ACIR method was 0.36 mm with a standard deviation (SD) of 0.17 mm. The average total time required to register an image pair using ACIR was 4.62 seconds with an SD of 1.19 seconds. All experiments were repeated 3 times for justification.
Comparing the RMSE of ACIR2017 and ACIR2020 using a paired t-test shows no significant difference (p-value = 0.17). The total RMSE average of the ACA method was 0.61 mm with an SD of 0.22 mm. The average total time required to analyse an image was 5.21 seconds with an SD of 0.93 seconds. The statistical tests show no difference between the results from the automatic A-value method and the manual A-value method (p-value = 0.42). There is also no difference between the length measurements of the left and right ear sides (p-value > 0.16). Comparing the results from the German and Egyptian datasets shows no difference when using the manual or automatic A-value methods (p-value > 0.20). However, there is a significant difference between the German and Egyptian results when using the ACA2020 method (p-value < 0.001). The average time to obtain the segmentation and all measurements was 5.21 seconds per image. The cochlea scala tympani volume ranged from 38.98 mm³ to 57.67 mm³. The combined scala media and scala vestibuli volume ranged from 34.98 mm³ to 49.3 mm³. The overall volume of the cochlea should thus range from 73.96 mm³ to 106.97 mm³. The lateral wall length of the scala tympani ranged from 42.93 mm to 47.19 mm, and its organ-of-Corti length from 31.11 mm to 34.08 mm. Using the A-value method, the lateral length of the scala tympani ranged from 36.69 mm to 45.91 mm and the organ-of-Corti length from 29.12 mm to 39.05 mm. The length from the ACA2020 method can be visualised and has well-defined endpoints. The ACA2020 method works on different modalities and different images regardless of the noise level or resolution. The A-value method, on the other hand, works neither on MRI nor on noisy images. Hence, the ACA2020 method may provide more reliable and accurate measurements than the A-value method. The source code and the datasets are made publicly available to help with the reproduction and validation of my results.
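As a rough illustration of the registration setup described above, here is a sketch using SimpleITK with Mattes mutual information and a rigid 3D transform; it substitutes SimpleITK's generic gradient-descent optimizer for the ASGD/s-LBFGS optimizers of the thesis, and the file names are placeholders.

    import SimpleITK as sitk

    fixed = sitk.ReadImage("fixed_ct.nii.gz", sitk.sitkFloat32)     # placeholder path
    moving = sitk.ReadImage("moving_mri.nii.gz", sitk.sitkFloat32)  # placeholder path

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY),
        inPlace=False)

    transform = reg.Execute(fixed, moving)
    aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)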
Human action recognition from video has received growing attention in computer vision and has made significant progress in recent years. Action recognition is described as the requirement to decide which human actions appear in videos. The difficulties involved in distinguishing human actions are due to the high complexity of human behaviour as well as appearance variation, motion pattern variation, occlusions, etc. Many applications use human action recognition on video captured from cameras, resulting in video surveillance systems, health monitoring, human-computer interaction, and robotics. Action recognition based on RGB-D data has increasingly drawn attention in recent years. RGB-D data contain colour (Red, Green, and Blue (RGB)) and depth data that represent the distance from the sensor to every pixel in the object (object point). The main problem this thesis deals with is how to automate the classification of specific human activities/actions from RGB-D data. The classification process of these activities utilizes the spatial and temporal structure of actions. Therefore, the goal of this work is to develop algorithms that can distinguish these activities by recognizing low-level and high-level activities of interest from one another. These algorithms are developed by introducing new features and methods using RGB-D data to enhance the detection and recognition of human activities. In this thesis, the most popular state-of-the-art techniques are reviewed, presented, and evaluated. From the literature review, these techniques are categorized into hand-crafted features and deep-learning-based approaches. The proposed new action recognition framework is based on these two categories, which are validated in this work by embedding novel methods for human action recognition. These methods are based on features extracted from RGB-D data and evaluated using machine learning techniques. The presented work improves human action recognition in two distinct parts. The first part focuses on improving current successful hand-crafted approaches. It contributes to two significant areas of the state of the art: executing the existing feature detectors, and classifying the human action in the 3D spatio-temporal domain by testing a new combination of different feature representations. The contributions of this part are tested with machine learning techniques that include unsupervised and supervised learning to evaluate their suitability for the task of human action recognition. K-means clustering represents the unsupervised learning technique, while the supervised learning techniques are represented by Support Vector Machine, Random Forest, K-Nearest Neighbor, Naive Bayes, and Artificial Neural Network classifiers. The second part focuses on studying current deep-learning-based approaches and how to use them with RGB-D data for the human action recognition task. As the first step of each contribution, an input video is analyzed as a sequence of frames. Then, pre-processing steps are applied to the video frames, like filtering and smoothing methods, to remove noise from each frame. Afterwards, different motion detection and feature representation methods are used to extract the features present in each frame. The extracted features include local features, global features, and feature combinations, besides deep learning methods, e.g., Convolutional Neural Networks. The feature combination achieves an excellent accuracy performance that outperforms other methods on the same RGB-D datasets. All results from the proposed methods in this thesis are evaluated on publicly available datasets, which illustrates that using spatio-temporal features can improve the recognition accuracy. Competitive experimental results are achieved overall. In particular, the proposed methods generalize better to the test sets than the state-of-the-art methods on the same RGB-D datasets.
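To make the hand-crafted branch concrete, the following is a minimal bag-of-features baseline with k-means clustering and an SVM using scikit-learn; the descriptor extraction and dataset wiring are assumptions for illustration, not the thesis's exact pipeline.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def bag_of_features(descriptors_per_video, k=100):
        # Quantize local spatio-temporal descriptors into k visual words and
        # describe each video by its histogram of word occurrences.
        kmeans = KMeans(n_clusters=k, n_init=10)
        kmeans.fit(np.vstack(descriptors_per_video))
        hists = [np.bincount(kmeans.predict(d), minlength=k)
                 for d in descriptors_per_video]
        return np.asarray(hists, dtype=float), kmeans

    # hypothetical usage: one (n_i, d) descriptor array per training video
    # X, kmeans = bag_of_features(train_descriptors)
    # clf = SVC(kernel="rbf").fit(X, train_labels)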
Point Rendering
(2021)
In this thesis, different methods for rendering point data are shown and compared with each other. The methods can be divided into two categories. On the one hand, visual methods are introduced that deal strictly with the display of point primitives. The main problem here lies in the depiction of surfaces, since point data, unlike traditional triangle meshes, does not contain any connectivity information. On the other hand, data structures are shown that enable real-time rendering of large point clouds. Point clouds often contain large amounts of data, since they are mostly generated through 3D scanning processes such as laser scanning and photogrammetry.
The worldwide competition to develop the best technologies for autonomous driving is in full swing. Despite great efforts, however, autonomous navigation in structured and, above all, unstructured environments remains unsolved. A decisive building block in this problem area is environment perception and analysis by means of suitable sensors and corresponding sensor data evaluation. Imaging techniques in the part of the spectrum visible to humans, in particular, are widely used both in practice and in research. However, this exploits only a fraction of the electromagnetic spectrum and consequently ignores a large part of the information available for environment perception. To make better use of the available spectrum, other research fields have for decades employed so-called spectral sensors, which analyse the electromagnetic spectrum much more finely and over a larger range than classical colour cameras. Due to technical limitations, however, such systems could only capture static scenes. The latest developments in sensor technology, namely the snapshot mosaic filter technique, now make the spectral sampling of dynamic scenes possible.
This dissertation investigates the use and suitability of the snapshot mosaic technique for environment perception and scene analysis in autonomous navigation in structured and unstructured environments. It is examined whether the captured spectral data offer an advantage over classical RGB or greyscale data with respect to semantic scene analysis and classification.
First, a suitable pre-processing pipeline is developed that computes spectral values from the raw sensor data. Then, the construction of novel datasets with spectral data is described; these datasets serve as the basis for the evaluation of various classifiers from classical machine learning. Building on this, methods and architectures from deep learning are presented, and selected architectures are used to investigate whether they can also be trained with spectral data. Furthermore, the use of deep learning methods for data compression is addressed. In a next step, the compressed data are used to train network architectures that were previously only compatible with RGB data. Finally, it is analysed whether the high-dimensional spectral data offer advantages over RGB data for scene analysis.
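As an illustration of the pre-processing step mentioned above: a snapshot mosaic sensor tiles a small filter pattern across the chip, and each offset within a tile samples one spectral band. A minimal numpy sketch follows; the 4x4 pattern and the absence of band interpolation are assumptions.

    import numpy as np

    def mosaic_to_cube(raw, pattern=(4, 4)):
        # Rearrange a raw snapshot-mosaic frame into a spectral cube of shape
        # (H//s, W//t, s*t): one low-resolution image per spectral band.
        s, t = pattern
        h, w = raw.shape[0] // s * s, raw.shape[1] // t * t
        tiles = raw[:h, :w].reshape(h // s, s, w // t, t)
        return tiles.transpose(0, 2, 1, 3).reshape(h // s, w // t, s * t)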
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually. The approach taken for optimization is then generalized into a recipe that programmers can use as a guide when approaching large and complex GPU programs. In addition, the per-warp job queue, a design pattern used for load balancing among the threads in a CUDA thread block, is discussed in detail.
Social networks play a decisive role in the everyday lives of school students. In this Master's thesis, a concept was developed for displaying profile suggestions within the social network "InstaHub", a tool programmed specifically for computer science classes on the topic of databases. One hurdle was that the established social networks disclose little to no information about how profile or friendship suggestions are computed. Therefore, the nature of relationships between people in non-internet-based and internet-based social networks, as well as the reasons for relationships between people in these networks, was first set out. Based on observing suggestions in other social networks and on the user data stored in InstaHub, an algorithm for profile suggestions in InstaHub was designed and implemented together with a suitable visualization. The second part of the thesis is a teaching unit for upper secondary level on the dangers of generating and processing personal data. In the teaching unit, the profile suggestions in InstaHub, which are built from the data InstaHub collects about its users, serve as the entry point into the topic. The focus is then widened from social networks to other online services, and the processing and sharing of such data is discussed.
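A common baseline for such suggestions is the friends-of-friends heuristic sketched below; this is an assumption for illustration, not necessarily the ranking implemented in InstaHub.

    from collections import Counter

    def suggest_profiles(user, follows, k=5):
        # Rank accounts that are followed by the accounts this user follows,
        # excluding the user and anyone already followed.
        mine = follows.get(user, set())
        counts = Counter(candidate
                         for friend in mine
                         for candidate in follows.get(friend, set())
                         if candidate != user and candidate not in mine)
        return [name for name, _ in counts.most_common(k)]

    # usage sketch with a tiny follow graph
    follows = {"ana": {"ben"}, "ben": {"cem", "dia"}, "cem": {"dia"}}
    print(suggest_profiles("ana", follows))  # ['cem', 'dia']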
Ray tracing acceleration through dedicated data structures has long been an important topic in computer graphics. In general, two different approaches are proposed: spatial and directional acceleration structures. The thesis at hand presents an innovative combined approach of these two areas, which enables a further acceleration of the tracing process of rays. State-of-the-art spatial data structures are used as base structures and enhanced by precomputed directional visibility information based on a sophisticated abstraction concept of shafts within an original structure, the Line Space.
In the course of the work, novel approaches for the precomputed visibility information are proposed: a binary value that indicates whether a shaft is empty or non-empty, and a single representative candidate that approximates the actual surface. It is shown how the binary value is used in a simple but effective empty space skipping technique, which allows a performance gain in ray tracing of up to 40% compared to the pure base data structure, regardless of the spatial structure that is actually used. In addition, it is shown that this binary visibility information provides a fast technique for calculating soft shadows and ambient occlusion based on blocker approximations. Although the results contain a certain inaccuracy error, which is also presented and discussed, a further tracing acceleration of up to 300% compared to the base structure is achieved. As an extension of this approach, the representative candidate precomputation is demonstrated, which is used to accelerate the indirect lighting computation, resulting in a significant performance gain at the expense of image errors. Finally, techniques based on two-stage structures and a usage heuristic are proposed and evaluated. These reduce memory consumption and approximation errors while maintaining the performance gain, and also enable further possibilities with object instancing and rigid transformations.
All performance and memory values as well as the approximation errors are measured, presented and discussed. Overall, the Line Space is shown to result in a considerable improvement in ray tracing performance at the cost of higher memory consumption and possible approximation errors. The presented findings thus demonstrate the capability of the combined approach and enable further possibilities for future work.
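The use of the two kinds of precomputed visibility information can be summarized in a few lines; the sketch below uses hypothetical names (Shaft, exact_intersect) and elides how the enclosing spatial structure locates the shaft for a given ray.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Shaft:
        empty: bool                      # binary visibility: no geometry inside
        candidate: Optional[int] = None  # representative surface id, if any

    def trace(shaft: Shaft, exact_intersect):
        if shaft.empty:
            return None                  # empty space skipping: no traversal at all
        if shaft.candidate is not None:
            return shaft.candidate       # approximate hit (may introduce image error)
        return exact_intersect()         # fall back to the exact base structure

    # usage sketch
    assert trace(Shaft(empty=True), lambda: 7) is None
    assert trace(Shaft(empty=False, candidate=42), lambda: 7) == 42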
The main goal of this work is to assure the quality of a pharmaceutical production process by checking the volume of microscopic polymer rods with a highly accurate 3D measurement procedure. The polymer rods are manufactured for pharmaceutical applications. For quality-assurance reasons, their actual weight must be verified. Currently, samples of the polymer rods are weighed on a high-precision balance. For the next generation of polymers, it is assumed that the product dimensions will be reduced further and the production tolerances tightened to 2.5%. The resulting accuracy requirements, however, exceed the capabilities of weighing technology. For homogeneous materials, mass is proportional to volume, so determining the volume is a viable alternative. This opens up access to optical measurement techniques with their flexibility and accuracy potential. Further requirements for the design of a suitable measurement concept are that the object is measured contactlessly, with a cycle time of at most five seconds, and that the volume is approximated. The cross-sections of the polymer rods are roughly circular. Due to the way the fragments are manufactured, it cannot be assumed that the contact faces are orthogonal to the object's axis of symmetry. It must therefore be analysed how small deviations from circular cross-sections and the non-ideal contact faces affect the result. The maximum standard deviation for the volume that should not be exceeded is 2.5%, which corresponds to a maximum cross-sectional area deviation of 1106 µm² (error propagation). As an evaluation criterion, the correlation coefficient between the measured volumes and the masses is determined. An ideal result would be 100%; the measurement aims at a coefficient of 98%. Achieving this requires a precise volume measurement method. Based on the current state of the art, the existing optical measurement methods cannot be used. The polymer rod is observed by a camera in transmitted light, so its diameter and length are visible. A mechanical fixture rotates the object about its longitudinal axis, so images can be captured from all sides. Diameter and length are computed by image processing. The new concept combines the advantages of the existing methods: it is insensitive to changes in colour and brightness, and any number of images can be captured. Moreover, acquisition and evaluation are considerably faster. A design and implementation of a solution for the high-precision volume measurement of polymer rods with optical metrology and image processing is worked out. This dedicated inline process solution should guarantee 100% quality control during production. The cycle time of the system should not exceed five seconds per polymer rod. The process constraints are given by the material properties of the object, the small object size (width = 199 µm, length = 935 µm to 1683 µm) and the undefined cross-sectional shape (due to the drying process). In addition, the cost of the process should not be too high, and the measurement setup should be small and operate without safety precautions or shielding.
The resulting system captures the objects at different angular steps, evaluates the images using image processing, and approximates the volume. For 77 polymer rods weighing between 37 µg and 80 µg, the correlation coefficient between volume and weight is 99.87%. The accuracy of the measurement can be determined with the help of a reference system; the standard deviation should be at most 2.5%. The resulting system achieves a maximum volume deviation of 1.7%. The volume measurement meets all requirements and can therefore be used as an alternative to the balance.
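The volume approximation from transmitted-light silhouettes can be sketched as a sum of circular slices; below is a simplified numpy version, assuming the rod axis lies along the image columns and the cross-sections are circular:

    import numpy as np

    def volume_from_silhouettes(masks, um_per_px):
        # masks: binary silhouettes, one per rotation step, rod axis along width.
        # Each image column gives one diameter sample; every slice of thickness
        # um_per_px contributes pi * (d/2)^2 * um_per_px; average over angles.
        volumes = []
        for m in masks:
            d = m.sum(axis=0) * um_per_px              # diameter per column (um)
            volumes.append(np.sum(np.pi * (d / 2.0) ** 2) * um_per_px)
        return float(np.mean(volumes))                 # volume in um^3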
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics that is able to model materials that were previously very challenging to animate [1, 2]. Apart from simulating singular materials, the simulation of multiple materials that interact with each other introduces new challenges. This is the focus of this thesis. It will be shown that the self-collision capabilities of the MPM can naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It will furthermore be shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be achieved if multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward and simple way and takes advantage of the rasterization pipeline to resolve write conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
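At the heart of any MPM variant is the particle-to-grid transfer; a minimal 2D sketch with bilinear weights is shown below. In the multi-grid setting described above, one such grid would be kept per material, with interaction terms coupling the grids. Particles are assumed to lie strictly inside the grid.

    import numpy as np

    def p2g(positions, masses, velocities, grid_shape, dx):
        # Scatter particle mass and momentum to the background Eulerian grid
        # with bilinear (tent) weights.
        mass = np.zeros(grid_shape)
        momentum = np.zeros(grid_shape + (2,))
        for x, m, v in zip(positions, masses, velocities):
            base = np.floor(x / dx).astype(int)
            frac = x / dx - base
            for di in (0, 1):
                for dj in (0, 1):
                    w = (frac[0] if di else 1 - frac[0]) * \
                        (frac[1] if dj else 1 - frac[1])
                    mass[base[0] + di, base[1] + dj] += w * m
                    momentum[base[0] + di, base[1] + dj] += w * m * v
        return mass, momentum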
In the context of augmented reality, we define tracking as a collection of methods to obtain the position and orientation (pose) of a user. By means of various display techniques, this ensures a correct visual overlay of graphical information onto the perceived reality. Precise results for the calculation of the camera pose are obtained by methods of image processing, usually analyzing the pixels of an image and extracting features which can be recognized over the image sequence. However, these methods disregard the process of image synthesis, or consider it only in a very simplified way. In contrast, the class of model-based methods assumes a given 3D model of the observed scene. Based on the model data, features can be identified to establish correspondences in the camera image, and from these feature correspondences the camera pose is calculated. An interesting approach is the strategy of analysis-by-synthesis, which takes the computer graphics rendering process into account, extending the knowledge about the model with information from image synthesis and other environment variables.
In this thesis, the components of a tracking system are identified, and it is further analyzed to what extent information about the model, the rendering process and the environment can contribute to these components to improve the tracking process using analysis-by-synthesis. In particular, by using knowledge such as topological information, lighting or perspective, the feature synthesis and correspondence finding should lead to visually unambiguous features that can be predicted and evaluated to be suitable for stable tracking of the camera pose.
Studies in recent years have shown that adolescents and young adults have deficient data protection competence; however, children and adolescents between the ages of ten and 13 were mostly not the focus of these studies. The guiding question of this work is therefore how data protection competence develops in children and adolescents at a young age, in order to be able to infer suitable educational concepts for this age group. At the beginning of the work, a data protection competence model is derived from a media competence model, which serves as the basis for the subsequent field investigation. A survey was carried out at general secondary schools in Rhineland-Palatinate, which shows that the respondents have a sufficiently developed risk assessment competence, but are insufficiently developed in terms of knowledge, selection and usage competence, and implementation competence. Recommendations for action, containing learning goal descriptions that could be implemented in an educational framework, are given in the last part of the work in order to address this issue.
This thesis describes the design, implementation and evaluation of an augmented reality app. It was developed with the goal of measuring objects in real space using virtual tools, so that the application can replace a folding rule; in addition, the measured values are conveniently stored. The app was built with the Unity Engine and programmed in C#.
The focus of this work is the usability of the app, as well as the suitability of AR Foundation for the measuring tool.
The application is assessed against these criteria in a user test as part of a concluding evaluation.
The result was that the AR app is still at the prototype stage but is already considered generally user-friendly. Minor changes should and must still be made, also in order to simplify the handling of the AR tool.
Bio-medical data comes in various shapes and with different representations. Domain experts use such data for analysis or diagnosis, during research or clinical applications. As the opportunities to obtain or to simulate bio-medical data become more complex and productive, the experts face the problem of data overflow. Providing a reduced, uncluttered representation of data that maintains the data's features of interest falls into the area of Data Abstraction. Via abstraction, undesired features are filtered out to give space (concerning the cognitive and visual load of the viewer) to more interesting features, which are therefore accentuated. To address this challenge, the dissertation at hand will investigate methods that deal with Data Abstraction in the fields of liver vasculature, molecular and cardiac visualization. Advanced visualization techniques will be applied for this purpose. This usually requires some pre-processing of the data, which will also be covered by this work. Data Abstraction itself can be implemented in various ways. The morphology of a surface may be maintained, while abstracting its visual cues. Alternatively, the morphology may be changed to a more comprehensive and tangible representation. Further, spatial or temporal dimensions of a complex data set may be projected to a lower space in order to facilitate processing of the data. This thesis will tackle these challenges and therefore provide an overview of Data Abstraction in the bio-medical field, and the associated challenges, opportunities and solutions.
Molecular dynamics (MD), as a field of molecular modelling, has great potential to revolutionize our knowledge and understanding of complex macromolecular structures. Its field of application is huge, reaching from computational chemistry and biology over materials science to computer-aided drug design. On the one hand, this thesis provides insights into the underlying physical concepts of molecular dynamics simulations and how they are applied in the MD algorithm, and it also briefly illustrates different approaches, such as the molecular mechanics and molecular quantum mechanics approaches.
On the other hand, a custom all-atom MD algorithm is implemented, utilizing and simplifying a version of the molecular-mechanics-based AMBER force field published by Cornell et al. (1995). This simulation algorithm is then used to show, with oxytocin as an example, how the individual energy terms of a force field act. As a result, it was observed that applying the bond stretch forces alone caused the molecule to be compacted, first in certain regions and then as a whole, and that with more energy terms added, the molecule moved with increasing flexibility.
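For reference, the bond stretch term mentioned above is, in AMBER-style force fields, a harmonic potential; below is a small numpy sketch of its energy and the force it exerts (the parameter names are generic):

    import numpy as np

    def bond_stretch(p1, p2, k_b, r0):
        # E = k_b * (r - r0)^2; the force acts along the bond axis and
        # drives the bond length r back towards its equilibrium value r0.
        d = p2 - p1
        r = np.linalg.norm(d)
        energy = k_b * (r - r0) ** 2
        force_on_p2 = -2.0 * k_b * (r - r0) * d / r  # p1 receives the negation
        return energy, force_on_p2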
In computer graphics, the computation of reflections was long a problem. With steadily improving hardware and the introduction of new techniques, however, a realistic, real-time-capable (on average 60 FPS) computation of reflections has become possible. The following thesis presents various reflection techniques. All mathematical and physical foundations are provided so that the algorithms can be followed. Since a reflection always means sampling along a reflected vector, two different sampling techniques for view-dependent reflections are presented and then implemented. Finally, the techniques are compared in terms of quality and performance.
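Every technique compared in the thesis ultimately samples along a mirrored view vector; for reference, a minimal sketch of that reflection:

    import numpy as np

    def reflect(d, n):
        # Mirror the normalized view direction d about the surface normal n:
        # r = d - 2 (d . n) n
        return d - 2.0 * np.dot(d, n) * n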
Since the introduction of the U-Net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. It has been proven that such architectures can also be successfully applied in different domains, winning numerous competitions in recent years. The transfer learning technique has also created an opportunity to push state-of-the-art benchmarks to a higher level. Using this approach is beneficial for the medical domain, as collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with Deep Learning, make three main contributions, and release experimental results that have practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on a dataset of cervical spine MRI scans. Second, we use transfer learning from models trained on the ImageNet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on bias-field-corrected and raw MRI data. All code to reproduce the results is publicly available online.
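The transfer-learning comparison boils down to initializing the same encoder twice; a sketch with torchvision follows (the thesis's exact architectures and segmentation heads are not reproduced here):

    import torch
    import torchvision.models as models

    # ImageNet-pretrained encoder vs. the same architecture randomly initialized.
    pretrained = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    random_init = models.resnet34(weights=None)

    # Both produce identically shaped features on a dummy batch; only the
    # starting weights (and hence convergence and final accuracy) differ.
    x = torch.randn(1, 3, 256, 256)
    assert pretrained(x).shape == random_init(x).shape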
Constituent parsing attempts to extract syntactic structure from a sentence. Such parsing systems are helpful in many NLP applications such as grammar checking, question answering, and information extraction. This thesis is about implementing a constituent parser for the German language using neural networks. In the past, recurrent neural networks have been used for building parsers and for many other NLP applications. Here, self-attention network modules are used intensively to understand sentences effectively. With multi-layered self-attention networks, constituent parsing achieves an F1 score of 93.68%. This is improved even further by using both character and word embeddings as the input representation: an F1 score of 94.10% was the best achieved by the constituent parser using only the provided dataset. With the help of external datasets such as the German Wikipedia, pre-trained ELMo models are used along with self-attention networks, achieving an F1 score of 95.87%.
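The self-attention modules referred to above compute, per token, a weighted mixture of all other tokens; a single-head sketch in PyTorch (the dimensions are chosen arbitrarily):

    import torch

    def self_attention(x, w_q, w_k, w_v):
        # Scaled dot-product self-attention: every token attends to every
        # other token, relating distant constituents in one step.
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = (q @ k.transpose(-2, -1)) / (k.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1) @ v

    # usage sketch: a 7-token sentence with 64-dimensional embeddings
    x = torch.randn(7, 64)
    w_q, w_k, w_v = (torch.randn(64, 64) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([7, 64])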