The aim of this work was to present accident and traffic modeling for Oman (the city of Muscat) under theoretical assumptions by means of agent-based simulation. TRASS served as the basis for the simulation environment. In addition, possible countermeasures to the high accident rate were to be proposed, derived from different simulation variants.
Whether the theoretical assumptions behind the simulation should actually be applied and implemented in reality was not part of this work.
It was also necessary to check whether TRASS could represent three different simulation variants distinctly. The three variants were: a simulation run with traffic light control, one without traffic light control, and finally one with traffic light control that agents ignore with a probability of 70%. All three variants produced different results, and each was shown to have advantages and disadvantages with respect to traffic flow and accident rate.
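TRASS's actual API is not documented in this abstract; as a minimal, hypothetical sketch, the probabilistic rule of the third variant (agents ignoring a red light with 70% probability) could be modeled like this:

```python
import random

def agent_obeys_signal(ignore_probability: float, rng: random.Random) -> bool:
    """Decide whether a simulated driver obeys a red light.

    With the given ignore_probability (0.7 in the third variant),
    the agent disregards the signal; otherwise it stops.
    This is an illustrative stand-in, not TRASS code.
    """
    return rng.random() >= ignore_probability

# Estimate the obedience rate over many simulated decisions (variant 3).
rng = random.Random(42)
decisions = [agent_obeys_signal(0.7, rng) for _ in range(10_000)]
obedience_rate = sum(decisions) / len(decisions)  # close to 0.3
```

Aggregating such per-agent decisions over a simulation run is what allows the three variants to be compared on traffic flow and accident rate.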
It should not be overlooked that this work was modeled, analyzed, and evaluated exclusively with the TRASS framework; all of its potentials and deficits thus flow into the overall results. On critical reflection, the conclusions are therefore drawn from a single perspective: the theoretical assumptions about Oman and the modeling limits of TRASS. To make a reliable statement about the actual implementation of the proposals, further comprehensive analyses and simulations across a much wider range of traffic variants for Oman are necessary.
Learning or doing one's homework isn't among the first things students like to do. They would rather play video games, a situation every teacher knows. One way to address this problem is to combine learning and video games.
Scientists discuss game-based learning and gamification in this context. One purpose of this thesis is to shed some light on the meaning of these terms, especially gamification. Gamification does not mean covering boring learning content with some sort of game mechanics, or rewarding a right answer with a minigame that has no connection to the teaching and learning content at all. Rather, gamification is the inclusion of game elements in learning applications to motivate students to use these applications to learn.
Language learning benefits especially from computer-assisted learning. Using an online learning environment enables students to get in touch with each other and exchange experiences. Certain game elements can be used to let students take part in a kind of competition in which they compare their knowledge in a game-based manner.
In the context of this thesis, a prototype is developed that adds game elements to a learning application for "German as a Foreign Language". The purpose of this prototype is to show teachers possibilities for motivating their students. A usability test is conducted, the collected data is evaluated, and improvements are proposed. As a result of this thesis, it is shown that the prototype includes promising approaches and was evaluated successfully. Nonetheless, the prototype needs to be tested more thoroughly and the proposed improvements need to be implemented. Furthermore, evaluation methods need to be developed that take the peculiarities and special characteristics of games into account.
Animated children's films can be seen as an integral part of Western children's culture and are very popular among their young target group. Accordingly, a careful scientific examination of these films' contents seems useful. Previous quantitative content analyses have mainly been concerned with "classic" Disney animated films; there is less research on more modern computer-animated films from different studios. The aim of the quantitative content analysis in this study is therefore to contribute to this research field by examining portrayals in 20 very popular computer-animated films from 2000 to 2013. The focus is on the relations between demographic, physical, linguistic, personality-based, and behavioral features of the film characters and their "goodness" or "evilness". In conjunction with analyses of extreme groups, the features of particularly "good" or particularly "evil" film characters are also taken into account.
In addition, the features of film characters who undergo a fundamental change of their "goodness" or "evilness" during the plot are examined. This should yield a comprehensive picture of depiction tendencies regarding the "goodness" and "evilness" of animated film characters and of the extent to which they may be associated with certain stereotypes. The theoretical part of this study sums up important findings from the current state of research on typical depictions in animated children's films; then theories and empirical findings about possible effects on children who watch such films are described. Subsequently, the empirical part explains the methodological approach of the content analysis, and the corresponding results are presented and discussed with regard to possible effects and pedagogical conclusions. The findings indicate that the portrayal of "good" and "evil" in the films examined can be considered stereotypical only in some respects: for example, "good" film characters are significantly more attractive and achieve better outcomes at the end of a film than "evil" characters. Overall, however, the findings of this content analysis can be viewed rather positively, also with regard to potential effects on children. Nevertheless, further research on this topic still seems warranted.
In a world where "the cloud" is by now touted as the solution to nearly everything, the question increasingly arises whether one wants to entrust one's personal data to a stranger or would rather keep it under one's own control. For proponents of the latter option, "ownCloud" is available as a free software package for setting up one's own cloud and sharing its contents with others.
To standardize and thus simplify the sharing of learning tools, IMS GLOBAL developed the "Learning Tools Interoperability" specification, LTI for short. It is now supported by a growing number of learning management systems and learning resources. An interesting challenge is therefore to investigate whether and how ownCloud can be connected to various existing learning tools via LTI, and what benefits can be drawn from this.
The goal of this thesis is to design and develop a plugin for ownCloud that enables communication with learning tools via LTI. Both the consumer and the provider side are to be covered with a proof of concept, in order to examine the possibilities and limits of these connections in each case.
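The consumer side of such a connection rests on the LTI 1.1 launch, an OAuth 1.0a-signed form POST that the tool provider verifies. As a minimal sketch (in Python rather than ownCloud's PHP, with hypothetical key, nonce, and resource values), the signature could be computed like this:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_lti_launch(url: str, params: dict, consumer_secret: str) -> str:
    """Compute the OAuth 1.0 HMAC-SHA1 signature for an LTI 1.1 launch POST."""
    # 1. Percent-encode all parameters and sort them (RFC 5849, sec. 3.4.1.3.2).
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)

    # 2. Build the signature base string: METHOD&URL&PARAMS, each encoded again.
    base_string = "&".join(
        urllib.parse.quote(part, safe="") for part in ("POST", url, param_string)
    )

    # 3. Sign with HMAC-SHA1; the key is "consumer_secret&" (no token secret in LTI).
    key = urllib.parse.quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical launch parameters; a real plugin would generate nonce/timestamp.
launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "owncloud-file-42",
    "oauth_consumer_key": "owncloud",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1450000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
signature = sign_lti_launch(
    "https://tool.example.org/launch", launch_params, "secret"
)
```

The provider side performs the same computation on the received parameters and compares the result with the transmitted `oauth_signature` to decide whether the launch came from a trusted consumer.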
As an image synthesis method, ray tracing is relevant to many application areas because it can simulate aspects of light transport in a physically correct way. Due to its high computational cost, acceleration data structures and parallel processing are necessary. GPUs have become highly parallel, programmable processors with numerous cores and, thanks to their high performance, are well suited to solving demanding, parallelizable problems. This thesis is about GPU ray tracing accelerated by bounding volume hierarchies (BVHs). Based on relevant publications on the construction and traversal of BVHs and on mapping the ray tracing process to the GPU architecture, a GPU ray tracer is designed and developed. While BVH construction takes place beforehand on the host, the entire ray tracing process is executed on the GPU by several kernels. The kernels are implemented as OpenGL compute shader programs, and splitting the ray tracer across several kernels is motivated by the GPU architecture and the SIMT execution model. Two memory layouts for the binary BVHs are considered, classic and MBVH, with the MBVH layout proving more efficient. In addition, several variants of stackless and stack-based traversal are implemented and evaluated. The GPU ray tracer structured into multiple kernels is also compared with a single-kernel version. The best results are achieved by stackless traversal with a while-while loop and MBVH within the multi-kernel GPU ray tracer.
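The core of BVH-accelerated ray tracing is the traversal loop. As an illustrative sketch, the stack-based variant for a binary BVH can be written out (in Python rather than compute shader code, with a hypothetical node layout; the stackless while-while and MBVH variants from the thesis reorganize this same loop):

```python
import math

class Node:
    """Minimal binary BVH node (hypothetical layout, not the thesis code):
    leaves store a triangle index, inner nodes store two children."""
    def __init__(self, bounds, left=None, right=None, tri=None):
        self.bounds = bounds  # ((min_x, min_y, min_z), (max_x, max_y, max_z))
        self.left, self.right, self.tri = left, right, tri

def ray_aabb(origin, inv_dir, bounds):
    """Slab test: does the ray intersect the axis-aligned bounding box?"""
    t_near, t_far = 0.0, math.inf
    for axis in range(3):
        t1 = (bounds[0][axis] - origin[axis]) * inv_dir[axis]
        t2 = (bounds[1][axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def traverse(root, origin, direction):
    """Stack-based traversal: collect indices of leaf triangles whose
    bounding boxes the ray hits (the exact triangle test would follow)."""
    inv_dir = tuple(1.0 / d if d != 0.0 else math.inf for d in direction)
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if not ray_aabb(origin, inv_dir, node.bounds):
            continue  # prune this subtree entirely
        if node.tri is not None:
            hits.append(node.tri)  # leaf: candidate triangle
        else:
            stack.append(node.left)
            stack.append(node.right)
    return hits

# Two leaf boxes; a ray along +x at y = z = 0.5 hits only the first.
leaf_a = Node(((0, 0, 0), (1, 1, 1)), tri=0)
leaf_b = Node(((0, 5, 0), (1, 6, 1)), tri=1)
root = Node(((0, 0, 0), (1, 6, 1)), left=leaf_a, right=leaf_b)
hits = traverse(root, (-1.0, 0.5, 0.5), (1.0, 0.0, 0.0))
```

On a GPU, a per-thread stack like this causes register and memory pressure, which is one motivation for the stackless traversal variants the thesis evaluates.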