Today, augmented reality is becoming more and more important in several areas, such as industry, medicine, and tourism. This gain in importance is easily explained by its powerful extension of real-world content: augmented reality has become a way to explain and enhance real-world information. Yet, to create a system that can enhance a scene with additional information, the relation between the system and the real world must be known. A commonly used method to establish this relationship is optical tracking, where the system calculates its relation to the real world from camera images. To do so, a known reference is needed in the scene to serve as an orientation. Today, this is mostly a 2D marker or a 2D texture placed in the real-world scenery to serve as a reference. However, this is an intrusion into the scene, which is why it is desirable that the system work without such an additional aid. A strategy that does not manipulate the scene is object tracking, in which any object from the scene can be used as a reference for the system. As an object is far more complex than a marker, it is harder for the system to establish its relationship with the real world. That is why most methods for 3D object tracking reduce the object by not using the whole object as a reference. The focus of this thesis is to research how a whole object can be used as a reference, such that either the system or the camera can be moved around the object at any angle of a full 360 degrees without losing the relation to the real world. The augmented reality framework VisionLib is used as a basis. Extensions to this system for 360-degree tracking are implemented in different ways, analyzed in the scope of this work, and compared to each other. The best results were achieved by improving the reinitialization process: current camera images of the scene are given to the system, and with the help of these images the system can recover the relation to the real world faster in case the relation is lost.
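As a hedged illustration of the reinitialization idea described above (not VisionLib's actual API), the following Python sketch recovers a camera pose by matching the current frame against keyframes stored while tracking was still healthy; all names, data layouts, and thresholds are assumptions.

```python
# Hypothetical sketch of keyframe-based reinitialization; the abstract does
# not specify VisionLib internals, so this layout is an assumption.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def reinitialize(frame, keyframes, K):
    """Try to recover the camera pose from stored keyframes.

    keyframes: list of (descriptors, points_3d) captured while tracking was
               healthy; K: 3x3 camera intrinsics.
    Returns (rvec, tvec) or None if no keyframe matches well enough.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return None
    for kf_desc, kf_pts3d in keyframes:
        matches = bf.match(kf_desc, desc)
        if len(matches) < 20:              # too few correspondences
            continue
        obj = np.float32([kf_pts3d[m.queryIdx] for m in matches])
        img = np.float32([kp[m.trainIdx].pt for m in matches])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
        if ok and inliers is not None and len(inliers) >= 15:
            return rvec, tvec              # pose recovered
    return None
```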
Real-time graphics applications tend to become more realistic, and approximating real-world illumination becomes more feasible as graphics hardware improves. Using a wide variety of algorithms and ideas, graphics processing units (GPUs) can simulate complex lighting situations, rendering computer-generated imagery with complicated effects such as shadows, refraction, and reflection of light. Reflections in particular improve realism, because they make shiny materials, e.g. brushed metals, wet surfaces like puddles, or polished floors, appear more realistic and reveal information about their properties such as roughness and reflectance. Moreover, reflections can become more complex depending on the view: a wet surface such as a street during rain, for example, will reflect lights depending on the distance of the viewer, resulting in a streakier reflection that looks more stretched the farther the viewer is located from the light source. This bachelor thesis aims to give an overview of the state of the art in rendering reflections. Understanding light is a basic requirement for understanding reflections, and therefore a physical model of light and its reflection is covered in section 2, followed by the motivational section 2.2, which gives visually appealing examples of reflections from the real world and the media. Coming to rendering techniques, the main principle is first explained in section 3, followed by a short overview of the wide variety of approaches that try to generate correct reflections in section 4. This thesis describes the implementation of three major algorithms that produce plausible local reflections. To that end, the developed framework is described in section 5; then three major algorithms are covered that are common methods in most current game and graphics engines: screen-space reflections (SSR), parallax-corrected cube mapping (PCCM), and billboard reflections (BBR). After a description of their functional principles, they are analyzed with respect to their visual quality and the possibilities of their real-time application. Finally, they are compared to each other to investigate their respective advantages and disadvantages. In conclusion, the gained experience is summarized by listing the advantages and disadvantages of each technique and giving suggestions for improvements. A short outlook is given on upcoming real-time rendering techniques for the creation of reflections as specular effects.
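To make the SSR principle concrete, here is a minimal CPU-side sketch of the core ray-marching loop, assuming a depth buffer and a projection helper are available; step size and thickness threshold are illustrative choices, not the thesis implementation.

```python
# Illustrative sketch of the core SSR loop: march a reflected ray through a
# depth buffer and stop at the first intersection. Buffer layout, step size,
# and thickness are assumptions.
import numpy as np

def ssr_trace(pos, normal, view_dir, depth_buffer, project,
              max_steps=64, step=0.1, thickness=0.05):
    """pos/normal/view_dir are view-space vectors; project() maps a view-space
    point to integer pixel coordinates and a depth (x, y, d).
    Returns the hit pixel or None."""
    # Reflect the view direction about the surface normal: r = d - 2(d.n)n
    r = view_dir - 2.0 * np.dot(view_dir, normal) * normal
    p = pos.copy()
    for _ in range(max_steps):
        p = p + r * step                  # advance along the reflected ray
        x, y, d = project(p)
        if not (0 <= x < depth_buffer.shape[1] and 0 <= y < depth_buffer.shape[0]):
            return None                   # ray left the screen
        scene_d = depth_buffer[y, x]
        if scene_d < d <= scene_d + thickness:
            return (x, y)                 # ray passed just behind visible geometry: hit
    return None
```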
The present work introduces a rigid-body physics engine, focusing on GPU-based collision detection. The increasing performance and accessibility of modern graphics cards means that they can also be used for algorithms beyond rendering. This advantage is used to implement an efficient particle-based collision detection. The performance differences between CPU and GPU are evaluated in a test environment.
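The abstract does not detail the particle scheme, but a common formulation bins particles into a uniform spatial grid and tests only neighbouring cells; a minimal CPU sketch under these assumptions follows.

```python
# Minimal CPU sketch of particle-based collision detection with a uniform
# spatial hash, a scheme commonly used for GPU particle collisions; cell size
# and the pairwise test are assumptions, not details from the thesis.
import numpy as np
from collections import defaultdict

def find_collisions(positions, radius):
    """positions: (N, 3) array of particle centers sharing one radius.
    Returns index pairs of overlapping particles."""
    cell = 2.0 * radius                  # one cell spans a particle diameter
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // cell).astype(int))].append(i)
    pairs = []
    for (cx, cy, cz), members in grid.items():
        # gather this cell and all 26 neighbours so no pair is missed
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    candidates += grid.get((cx + dx, cy + dy, cz + dz), [])
        for i in members:
            for j in candidates:
                if i < j and np.linalg.norm(positions[i] - positions[j]) < 2 * radius:
                    pairs.append((i, j))
    return pairs
```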
A fundamental understanding of the attachment of engineered nanoparticles to environmental surfaces is essential for the prediction of nanoparticle fate and transport in the environment.
The present work investigates the attachment of non-coated silver nanoparticles and citrate-coated silver nanoparticles to different model surfaces and environmental surfaces in the presence and absence of humic acid. Batch sorption experiments were used for this investigation.
The objective of this thesis was to investigate how silver nanoparticles interact with surfaces having different chemical functional groups. The effect of the presence of humic acid on the particle-surface interactions was also investigated. In the absence of humic acid, nanoparticle-surface attachment was influenced by the chemical nature of the interacting surfaces. In the presence of humic acid, on the other hand, nanoparticle-surface attachment was influenced by the specific surface area of the sorbent surfaces. The sorption of non-coated silver nanoparticles and citrate-coated nanoparticles to all surfaces was nonlinear and best described by a Langmuir isotherm, indicating monolayer sorption of nanoparticles onto the surfaces. This can be explained by the blocking effect generated by particle-particle repulsion. In the presence of humic acid, sorption of nanoparticles to the surfaces was linear. When humic acid was present in the interacting medium, both the nanoparticles and the surfaces became coated with humic acid, which masks the chemical functionalities of the surfaces and thus changes the particle-surface interactions. For silver nanoparticle sorption from an unstable suspension, the sorption isotherms did not follow any classical sorption model, suggesting an interplay between aggregation and sorption. Citrate-coated silver nanoparticles and humic acid-coated silver nanoparticles showed a depression in sorption compared to non-coated silver nanoparticles. In the case of citrate-coated silver nanoparticles, the decrease in sorption can be explained by their more negative zeta potential compared to non-coated ones. For humic acid-coated nanoparticles, the sorption depression can be due to steric hindrance caused by free humic acid molecules, which may coat the sorbent surface, or due to competition for sorption sites between the nanoparticles and the free humic acid molecules present in the suspension. Thus, nanoparticle surface chemistry is an important factor that determines the attachment of nanoparticles to surfaces, and it makes the characterization of the nanoparticle surface an essential step in the study of their fate in the environment.
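For reference, the Langmuir isotherm mentioned above has the standard form below (with the conventional symbols, not those of the thesis); a linear isotherm corresponds to q_e = K_d C_e by comparison.

```latex
% q_e: sorbed amount at equilibrium, q_max: monolayer capacity,
% K_L: Langmuir constant, C_e: equilibrium concentration.
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
```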
Another aim of this study was to introduce the potential of chemical force microscopy for nanoparticle surface characterization. With this technique, it was possible to distinguish between bare silver nanoparticles, citrate-coated silver nanoparticles, and humic acid-coated silver nanoparticles by measuring the adhesion forces between the nanoparticles and five AFM probes with different chemical functionalizations.
Central tasks of higher education institutions are the assessment, the explanation of causes, and the promotion of academic performance (Heublein & Wolter, 2011, p. 215). In this context, achievement motivation is, alongside intellectual abilities, considered a significant predictor of academic success (e.g. Schmidt-Atzert, 2005, p. 132; Steinmayr & Spinath, 2009, p. 80). The present study therefore focuses on the motivational processes of 332 first-year students at the Hochschule der Bundesagentur für Arbeit and on the factors that have a beneficial effect on their learning outcomes. With a response rate of 89%, the collected data are representative of the population. Based on an ex-post-facto design in the form of a quantitative predictor-criterion approach (a special variant of a longitudinal design) with different survey methods, such as a standardized self-report questionnaire, achievement tests, and official documents/records, the following research hypotheses were posited: the strength of achievement motivation depends both on expectancy components (academic self-concept, self-esteem, subjective grade expectation, hope of success, and fear of failure) and on incentive components (object-, activity-, and outcome-related incentives), which in turn, mediated by achievement-motivated behavior, influence academic performance. It was further postulated that motivational variables still exert a significant effect on academic performance even when other performance predictors, such as school-leaving grade, intelligence, emotional stability, and conscientiousness, are controlled for.
The publication of freely available and machine-readable information has increased significantly in recent years. Especially the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF); anybody can simply publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected with links. Linked Data entities are identified by URIs, which are dereferenceable to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints, it is possible to create a federation of distributed RDF data sources which acts like one big data store.
In contrast to the federation of classical relational database systems, there are some differences for federated RDF data. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory which can be used to discover RDF data sources or to ask for sources offering specific data. The federation of distributed and linked RDF data sources has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on query details and the information that is available about the data sources. Furthermore, minimizing query execution time requires optimization techniques that take into account the execution cost of query operators and the network communication overhead for contacting individual data sources. In this thesis, solutions for these problems are discussed. Moreover, SPLENDID is presented, a new federation infrastructure for distributed RDF data sources which uses optimization techniques based on statistical information.
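As a small illustration of the endpoint access described above (an example client query, not part of SPLENDID), a SPARQL endpoint can be queried as follows; a federation engine would decompose such a query and route sub-patterns to the sources able to answer them.

```python
# Example of querying a public SPARQL endpoint; the endpoint URL and query
# are illustrative, not taken from the thesis.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```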
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as the primary sensor for perceiving the surroundings of the robot. First of all, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context-sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information about where other traffic participants move and are going to move is used to perform inference in regions where the terrain surface is partially or completely invisible to the sensors. The algorithms are tested and validated on different autonomous robot platforms, and the evaluation is carried out with human-annotated ground-truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
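To illustrate the optimization step, the following toy Python sketch performs one Gibbs sweep over a grid-structured Markov random field with a Potts-style pairwise term; the energies and the coupling weight beta are illustrative assumptions, not the thesis parameters.

```python
# Toy Gibbs sampling sweep on a grid MRF for label smoothing; not the thesis
# implementation, which is parallelized on CPU and GPU.
import numpy as np

def gibbs_sweep(labels, unary, beta=1.0, rng=None):
    """labels: (H, W) int grid; unary: (H, W, K) per-cell energy per class.
    Performs one in-place Gibbs sweep and returns the grid."""
    rng = rng or np.random.default_rng()
    H, W, K = unary.shape
    for y in range(H):
        for x in range(W):
            energy = unary[y, x].copy()
            # Potts pairwise term: penalize disagreement with 4-neighbours
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    energy += beta * (np.arange(K) != labels[ny, nx])
            p = np.exp(-(energy - energy.min()))   # stabilized Boltzmann weights
            labels[y, x] = rng.choice(K, p=p / p.sum())
    return labels
```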
A systematic literature review is intended to cover the most important aspects of the crowdsourcing phenomenon. Since the set of research questions is relatively broad, the thesis focuses on the following questions: What exactly is to be understood by the term crowdsourcing? How can the phenomenon of crowdsourcing be distinguished from other adjacent concepts? Where are the commonalities and essential differences between the individual concepts? Which forms of crowdsourcing can be found in theory and practice? In which areas is crowdsourcing used? Which companies implement crowdsourcing successfully? Which platforms exist to support crowdsourcing? Which goals and results are to be achieved through the use of crowdsourcing? How does the crowdsourcing process proceed, and into which phases can it be divided? What does value creation through crowdsourcing look like (a) in general and (b) specifically for companies? Which opportunities and potentials as well as risks and limits arise for companies? What can still be improved in the field of crowdsourcing in the future, i.e., in which areas is there still a need for research?
Fachbereich 4 (Informatik), the department of computer science, consists of twenty-five working groups led by professors, who collaborate on research and teaching in six institutes.
In each annual report, the working groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific output achieved. The following chapters list individual parameters that describe the department in quantitative terms with respect to third-party funding, teaching coverage, graduates, and publications.