Proceedings des FWS 2015
(2016)
The acquisition, processing, and analysis of color and multi-channel images has been gaining importance for years. This development is driven by improved technical capabilities and the steadily growing demands of the diverse application fields in industry, medicine, the environment, and the media. Following this trend, the Workshop Farbbildverarbeitung (color image processing workshop) was first held in Koblenz in 1995 and has since established itself as an annual event. As an event of the German Color Group, the workshop offers a discussion forum for researchers, developers, and practitioners that addresses problems of color theory, colorimetry, color image acquisition, and spectral image acquisition ("hyperspectral imaging"), as well as the development of new methods and algorithms for processing and analyzing color images and multi-channel (spectroscopic) image data. Questions of color-faithful image reproduction on various output media, along with the use of color image processing methods in industrial quality control, robotics, and automation, also receive due attention.
This thesis deals with an eye-tracking-supported study that uses the e-learning system Compass. An experiment was carried out to test whether participants recognize agreement errors in subject-verb congruence in German sentences; the idea came from a comparable study conducted for the English language. The e-learning system Compass was integrated in order to observe the participants' learning behaviour. The experiment was evaluated and analysed with the eye-tracker's software, which recorded all eye movements, such as fixations and regressions, and turned them into usable data.
The result was very clear: congruence errors between subject and verb were frequent, and the error rate increased in proportion to sentence length.
Virtueller Dirigent
(2016)
As part of a bachelor thesis, a virtual reality application was developed. The user takes on the role of a conductor and leads a virtual orchestra by means of hand gestures. An evaluation of the application was used to derive general requirements for virtual reality games.
In a world in which "the cloud" is now touted as the solution for almost everything, the question arises ever more frequently whether one wants to entrust one's personal data to a stranger or would rather keep it under one's own control. For proponents of the latter option, "ownCloud" is available as a free software package for setting up one's own cloud and sharing its contents with others.
To standardize, and thereby simplify, the sharing of learning tools, IMS GLOBAL developed the "Learning Tools Interoperability" specification, LTI for short. It is now supported by a growing number of learning management systems and learning resources. An interesting challenge is therefore to investigate whether and how ownCloud can be connected to various existing learning tools via LTI, and what benefit can be drawn from this.
The goal of this thesis is to design and develop a plugin for ownCloud that enables communication with learning tools via LTI. Both the consumer and the provider side are covered by a proof of concept in order to examine the possibilities and limits of these connections.
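At the protocol level, an LTI 1.x launch is a form POST from the consumer to the provider whose parameters are signed with OAuth 1.0 (HMAC-SHA1). The following is a minimal sketch of that signing step only; the URL, consumer key, secret, and resource id are hypothetical placeholders, not values from the thesis:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(s):
    # OAuth 1.0 uses RFC 3986 encoding; only unreserved characters stay raw.
    return quote(str(s), safe="~-._")


def oauth_hmac_sha1_signature(method, url, params, consumer_secret):
    # Signature base string: sorted, percent-encoded key=value pairs.
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(param_string)]
    )
    # Signing key: consumer secret plus (empty) token secret, joined by "&".
    key = percent_encode(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Hypothetical launch parameters (ids and keys are made up for illustration).
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "owncloud-file-42",
    "oauth_consumer_key": "owncloud-demo",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1450000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
launch["oauth_signature"] = oauth_hmac_sha1_signature(
    "POST", "https://lms.example.org/lti/launch", launch, "secret"
)
```

The provider verifies the launch by recomputing the signature from the received parameters (minus `oauth_signature`) with the shared secret and comparing the results.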
The natural withering of fruit is a complex process influenced by a wide variety of parameters. Its main components are the shrinking of the fruit volume due to water loss through transpiration, along with changes in the fruit's color and surface. Methods have already been developed that simulate these properties by means of parameterization and physically based approaches. The application created in this thesis simulates the fruit flesh with a tetrahedral mesh and the changing skin with dynamic texture adaptation. The developed algorithm runs in linear time, and its results are presented on self-made fruit models.
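The volume-shrinkage idea can be illustrated with a deliberately simplified sketch (not the thesis's physically based tetrahedral simulation): uniformly scaling all vertices toward their centroid by s = (1 - loss)^(1/3) reduces the enclosed volume by exactly the water-loss fraction, since volume scales with s³.

```python
import numpy as np


def shrink_step(vertices, water_loss_fraction):
    """Scale a vertex cloud toward its centroid so that the enclosed
    volume drops by `water_loss_fraction` (volume scales with s**3)."""
    s = (1.0 - water_loss_fraction) ** (1.0 / 3.0)
    centroid = vertices.mean(axis=0)
    return centroid + s * (vertices - centroid)


# Unit-cube corners as a stand-in for a tetrahedral mesh's vertices.
cube = np.array(
    [[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
)
shrunk = shrink_step(cube, 0.10)  # 10% volume loss per step
```

A real simulation would of course shrink inhomogeneously (drier regions contract more), which is where the tetrahedral discretization pays off.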
This thesis presents an analysis of game-based touristic applications. In tourism, actions can only be motivated intrinsically, so the thesis first examines specific concepts of intrinsic motivation. It shows how game-based motivation can be produced deliberately and answers the question of whether game-based motivation can be transferred to non-game-based applications.
Using these results, different touristic applications were developed and evaluated.
All applications aim to add value to the touristic experience and are classified by their mobility: this work covers fully mobile, fully stationary, and hybrid systems. Several ways of adding value are presented: game-based exploration, knowledge transfer, and social interaction between tourists.
Finally, an authoring tool for gamebased touristic tours on smartphones is presented.
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially in medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts. Unfortunately, acquiring such data is a time-consuming, error-prone, and subjective process. Because of this effort, most SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinical daily routine. In this work, methods are proposed for automatically constructing a reliable SSM without the need for manual image interpretation by experts. The training data is thus assumed to be the result of an arbitrary segmentation algorithm or to originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. To account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM; these probabilities are estimated with image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for this reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more variations of shape examples. To assess the quality of the resulting models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data.
The results show that WRPCA in particular is a powerful tool for handling corrupted parts and yields reasonable models of higher quality than the initial segmentations.
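At its core, any SSM is a principal component analysis of aligned, flattened training shapes. The following is a minimal sketch using plain PCA only; the thesis's imputation and WRPCA steps, which additionally down-weight unreliable boundary parts, are not reproduced here. The toy data is made up for illustration:

```python
import numpy as np


def build_ssm(shapes):
    """Fit a linear statistical shape model x ~ mean + b @ modes
    from aligned training shapes (one flattened shape per row)."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # Principal modes of variation from the SVD of the centered data.
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = s**2 / (len(X) - 1)  # variance captured per mode
    return mean, vt, variances


def reconstruct(mean, modes, coeffs):
    """Generate a shape from the first len(coeffs) modes."""
    return mean + coeffs @ modes[: len(coeffs)]


# Toy training set: four "shapes" (two 2D landmarks each, flattened)
# varying along a single direction, so one mode explains everything.
shapes = np.array(
    [[0.0, 0.0, 1.0, 1.0],
     [1.0, 1.0, 2.0, 2.0],
     [2.0, 2.0, 3.0, 3.0],
     [3.0, 3.0, 4.0, 4.0]]
)
mean, modes, variances = build_ssm(shapes)
```

Segmenting a new image then amounts to searching for the mode coefficients whose reconstructed shape best matches the image evidence.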
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment is introduced into the model. This is especially relevant for the risk to the protection of personal data and privacy. The paper applies the method of IT risk analysis in order to select those dimensions of the Six/Gimmler media competence model that are appropriate for describing privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will allow measuring the existing privacy awareness in secondary schools, which, in turn, can serve as the basis for a didactically well-reasoned design of Informatics modules in secondary schools. This paper provides the privacy-extended competence model, while empirical measurement and module design are planned as further research activities.
This work covers techniques for interactive, physically-based rendering of hair for computer-generated imagery (CGI). To this end, techniques for simulating and approximating the interaction of light with hair are derived and presented. Furthermore, it is described how hair can be rendered interactively despite such computationally expensive algorithms. Techniques for computing shadowing in hair as well as approaches for rendering hair as transparent geometry are also presented. A main focus of this work is the DBK-Buffer, which was conceived, implemented, and evaluated. Using the DBK-Buffer, it is possible to render thousands of hairs as transparent geometry without depending on either the newest GPU hardware generation or a large amount of video memory. Moreover, a comprehensive evaluation of all the described techniques was conducted with respect to visual quality, performance, and memory requirements. It revealed that hair can be rendered physically-based at interactive or even real-time frame rates.
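The thesis derives its own light-interaction models; as a general point of reference (and not necessarily the model used in the thesis), the classic Kajiya-Kay approximation for hair shading can be sketched as follows. It treats a hair as an infinitely thin cylinder, so both the diffuse and specular terms depend only on the angles between the hair tangent T and the light and view directions:

```python
import numpy as np


def kajiya_kay(tangent, light, view, shininess=40.0):
    """Classic Kajiya-Kay hair shading terms.

    diffuse  = sin(T, L)
    specular = (cos(T, L) * cos(T, V) + sin(T, L) * sin(T, V)) ** shininess
    """
    t = tangent / np.linalg.norm(tangent)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    tl, tv = np.dot(t, l), np.dot(t, v)
    sin_tl = np.sqrt(max(0.0, 1.0 - tl * tl))
    sin_tv = np.sqrt(max(0.0, 1.0 - tv * tv))
    diffuse = sin_tl
    specular = max(0.0, tl * tv + sin_tl * sin_tv) ** shininess
    return diffuse, specular
```

In a real renderer this would run per fragment in a shader; the Python form just makes the geometry of the model explicit.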
This thesis presents an approach to optimizing the computation of soft shadows from area lights. The light source is sampled uniformly, and shadow rays are traversed as packets through an N-tree. For every node, this data structure additionally stores a line space, which holds precomputed visibility information about the geometry inside shafts between the sides of the node; this information is used to terminate rays early. In addition, the graphics processing unit (GPU) is used to speed up the computation through parallelism. The scene is rendered with OpenGL, and the shadow value is computed on the GPU for each pixel. Evaluating the implementation shows a performance gain of 86% for the GPU implementation compared to the CPU. Using the line space instead of triangle intersections also increases performance. The implementation scales well with an increasing number of triangles and shows no visual disadvantages when many rays are used.
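The underlying estimate — the shadow value as the fraction of uniformly placed light samples visible from a surface point — can be sketched without the N-tree and line-space acceleration. This is a naive CPU reference that tests each shadow ray against every occluder triangle (Möller-Trumbore); the scene below is a made-up toy setup:

```python
import numpy as np


def ray_hits_triangle(origin, target, tri, eps=1e-9):
    """Moller-Trumbore test for the segment from origin to target."""
    v0, v1, v2 = tri
    d = target - origin
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(d, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:        # segment parallel to triangle plane
        return False
    tvec = origin - v0
    u = np.dot(tvec, pvec) / det
    if u < 0.0 or u > 1.0:
        return False
    qvec = np.cross(tvec, e1)
    v = np.dot(d, qvec) / det
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, qvec) / det
    return eps < t < 1.0 - eps  # hit strictly between point and light


def soft_shadow(point, light_corner, light_u, light_v, occluders, n=4):
    """Fraction of an n*n uniform grid of light samples visible from
    `point` (1.0 = fully lit, 0.0 = fully shadowed)."""
    visible = 0
    for i in range(n):
        for j in range(n):
            sample = (light_corner
                      + (i + 0.5) / n * light_u
                      + (j + 0.5) / n * light_v)
            if not any(ray_hits_triangle(point, sample, tri) for tri in occluders):
                visible += 1
    return visible / (n * n)


# Toy setup: a 2x2 area light at z = 2 above a point at the origin.
point = np.array([0.0, 0.0, 0.0])
lit = soft_shadow(point, np.array([-1.0, -1.0, 2.0]),
                  np.array([2.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]),
                  occluders=[])
```

The line space of the thesis replaces the inner `any(...)` loop with a precomputed per-shaft visibility lookup, which is where the reported speedup comes from.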