Traditional Driver Assistance Systems (DAS) such as Lane Departure Warning or the well-known Electronic Stability Program have in common that their system and software architecture is static: neither the number and topology of Electronic Control Units (ECUs) nor the presence and functionality of software modules changes after the vehicles leave the factory.
However, some future DAS do face changes at runtime. This is true, for example, for truck-and-trailer DAS, as their hardware components and software entities are spread over both parts of the combination. These new requirements cannot be met by state-of-the-art approaches to automotive software systems. Instead, a different technique for designing such Distributed Driver Assistance Systems (DDAS) needs to be developed. The main contribution of this thesis is the development of a novel software and system architecture for dynamically changing DAS, using the example of driving assistance for truck and trailer. This architecture has to be able to autonomously detect and handle changes within the topology. To do so, the system decides which degree of assistance and which types of HMI can be offered every time a trailer is connected or disconnected. This involves an analysis of the available software and hardware components, a determination of the possible assistance functionality, and a re-configuration of the system. Such adaptation can be provided by the principles of Service-oriented Architecture (SOA). In this architectural style all functionality is encapsulated in self-contained units, so-called Services. These Services offer their functionality through well-defined interfaces whose behavior is described in contracts. Using these Services, large-scale applications can be built and adapted at runtime. This thesis describes the research conducted in achieving these goals by introducing Service-oriented Architectures into the automotive domain. SOA deals with the high degree of distribution, the demand for re-usability and the heterogeneity of the needed components.
It also applies automatic re-configuration in the event of a system change. Instead of adapting one of the available frameworks to this scenario, the main principles of Service-orientation are picked up and tailored. This leads to the development of the Service-oriented Driver Assistance (SODA) framework, which implements the benefits of Service-orientation while ensuring compatibility and compliance with automotive requirements, best practices and standards. Within this thesis several state-of-the-art Service-oriented frameworks are analyzed and compared. Furthermore, the SODA framework and all its different aspects regarding the automotive software domain are described in detail. These aspects include a well-defined reference model that introduces and relates terms and concepts and defines an architectural blueprint. Some of the modules of this blueprint, such as the re-configuration module and the Communication Model, are presented in full detail. In order to prove the compliance of the framework with state-of-the-art automotive software systems, a development process respecting today's best practices in automotive design procedures as well as the integration of SODA into the AUTOSAR standard are discussed. Finally, the SODA framework is used to build a full-scale demonstrator in order to evaluate its performance and efficiency.
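The re-configuration principle described above can be illustrated with a small sketch. The following Python example is a hypothetical illustration only: the service names, the registry and its behaviour are invented for this purpose and do not reflect the actual SODA framework or its API.

# Hypothetical sketch of SOA-style re-configuration for truck/trailer DAS.
# All service names and interfaces are invented for illustration.

class Service:
    def __init__(self, name, provides, requires=()):
        self.name = name          # unique service name
        self.provides = provides  # capability offered through the interface
        self.requires = requires  # capabilities this service depends on

class Registry:
    """Keeps track of the currently available services (the topology)."""
    def __init__(self):
        self.services = {}

    def connect(self, service):      # e.g. trailer plugged in
        self.services[service.name] = service
        self.reconfigure()

    def disconnect(self, name):      # e.g. trailer removed
        self.services.pop(name, None)
        self.reconfigure()

    def reconfigure(self):
        """Determine which assistance functions can currently be offered:
        a function is available iff all required capabilities are present."""
        available = {s.provides for s in self.services.values()}
        self.active = [s.name for s in self.services.values()
                       if all(r in available for r in s.requires)]
        print("active assistance:", self.active)

registry = Registry()
registry.connect(Service("TruckCamera", "rear_view"))
registry.connect(Service("ReverseAssist", "reverse_guidance",
                         requires=("rear_view", "trailer_angle")))
registry.connect(Service("TrailerAngleSensor", "trailer_angle"))  # full assistance
registry.disconnect("TrailerAngleSensor")                         # degraded assistance

The key design point illustrated here is that no assistance function is hard-wired: each time the topology changes, the set of offered functions is re-derived from the contracts of the currently available Services.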
101worker is the modular knowledge engineering component of the 101companies project. It has developed maintainability and performance problems due to growing organically, rather than following best software design practices. This thesis lays out these problems, drafts a set of requirements for refactoring the system and then describes and analyzes the resulting implementation. The solution involves collation of scattered and redundant information, setup of unit and functional test suites and incrementalization of the bus architecture of 101worker.
This thesis deals with the development of an interactive Android card game, using the Hebrew game Yaniv as an example. The focus is on working out the required background components and their implementation in the application. The required game processes are examined and a possible solution is identified.
Taxonomy and Systematics of Spiny-Backed Treefrogs, Genus Osteocephalus (Amphibia: Anura: Hylidae)
(2015)
The pan-Amazonian treefrog genus Osteocephalus is poorly understood on both the taxonomic and the phylogenetic level. The status of several frogs already or not yet referred to the genus is unclear, and the relationships within the genus and with respect to related genera are not understood. In this work O. cabrerai (Cochran and Goin, 1970) from Colombia and Peru is redescribed and O. festae (Peracca, 1904) from the foothills of the Andes in Ecuador is revalidated. Hyla inframaculata Boulenger, 1882, from the lower Amazon in Brazil, is reallocated to Osteocephalus, and O. elkejungingerae (Henle, 1981) from the Andean foothills in Peru is shown to be a synonym of Hyla mimetica (Melin, 1941), the valid name being O. mimeticus. Hyla vilarsi Melin, 1941 is considered a valid species in the genus Osteocephalus and revalidated from the synonymies of several other frogs. Three new species, O. castaneicola from northern Bolivia and southern Peru, O. duellmani from a sub-Andean mountain range in southern Ecuador, and O. camufatus from central Amazonian Brazil, are described. A phylogenetic analysis based on up to nine mitochondrial genes and one nuclear gene reveals the paraphyly of the genus as previously understood with respect to the genus Tepuihyla. A new taxonomy is proposed, securing the monophyly of Osteocephalus and Tepuihyla by rearranging and redefining the content of both genera. A new genus, Dryaderces, is erected for the sister group of Osteocephalus. The colouration of newly metamorphosed individuals is proposed as a morphological synapomorphy for Osteocephalus. Five monophyletic species groups within Osteocephalus are recognized, three species of Osteocephalus (O. germani, O. phasmatus, O. vilmae) and three species of Tepuihyla (T. celsae, T. galani, T. talbergae) are synonymized, and three species are reallocated (Hyla helenae to Osteocephalus, O. exophthalmus to Tepuihyla and O. pearsoni to Dryaderces gen. n.). Furthermore, nine putative new species are flagged (an increase to 138% of the current diversity), an indication that species numbers are largely underestimated, with most hidden diversity centred on widespread and polymorphic nominal species. The evolutionary origin of breeding strategies within Osteocephalus is discussed in the light of this new phylogeny, and a novel type of amplexus (gular amplexus) is described.
Animations can be used in instructional contexts to convey knowledge about subject matter that involves processes or sequences of events. Dynamic content can thus be depicted explicitly, so that learners do not have to construct it mentally themselves but only have to follow it in the animation. This should have a positive effect on knowledge acquisition. At the same time, animations, with their characteristic depiction of temporal sequences, place particular demands on the learner. The human information processing system is subject to certain limitations regarding the perception of speeds: speeds that are too fast or too slow, for example, are difficult to perceive and accordingly cannot be processed cognitively. The resulting aim of this work was a systematic investigation of the effect of different presentation speeds on the perception and comprehension of dynamic subject matter presented in an animation.
To answer the research questions, four experimental studies were conducted. The pilot study aimed to evaluate both the learning material and the newly developed knowledge test. Study 1 addressed the influence of presentation speed on knowledge acquisition when learning with an interactive animation.
Studies 2 and 3 examined the influence of different orderings of speeds on knowledge acquisition. The goal was a systematic assessment of the perceptual and cognitive processing of dynamic information at two different speeds, using eye tracking (Study 2) and repeated testing of knowledge acquisition between the individual learning phases (Study 3).
The results of the studies suggest that at slow speed, knowledge about events at a subordinate temporal level was acquired, and that the faster an animation was viewed, the greater the proportion of knowledge acquired at a superordinate temporal level (Study 1); however, the results do not permit definite conclusions about the influence of speed on knowledge acquisition at different temporal hierarchy levels. With regard to how conducive different ways of sequencing speeds are to learning, the results were likewise inconclusive. The analysis of the eye tracking data suggests, however, that the order "slow - fast" accommodates the learners' conditions better than the order "fast - slow".
Heat exchangers are used for thickening various products or for the desalination of salt water; they also serve as cooling units in industry. In these applications the stainless steel heat-transferring elements come into contact with media containing microorganisms, such as river water or salt water, and corrode. After at least two years of use the material is covered with a bacterial slime called biofilm. This process, called biofouling, causes losses in efficiency and creates huge costs depending on the cleaning technique and its efficiency. Cleaning a heat exchanger is very expensive and time consuming, and it can only be done while the device is out of service.
Changing the surface properties of materials is the best and easiest way to lengthen the initial phase of biofilm formation. This leads to less biofouling (Mogha et al. 2014).
Thin polymer films, as novel materials, are cheaper to produce than stainless steel and easy to handle. Furthermore, they can be functionalized easily and can be bought in different sizes all over the world. They can therefore reduce the costs of cleaning and keep the heat exchanger in a state of high efficiency for longer. If the efficiency of the heat exchanger decreases, the thin polymer films can be replaced.
For a successful investigation of the microbial and process-engineering challenges, a cooperation between the Technical University of Kaiserslautern (Chair of Separation Science and Technology) and the University of Koblenz-Landau (Working Group Microbiology) was established.
The aim of this work was the design and construction of a reactor for investigating the biofouling that takes place on thin polymer films and on stainless steel; furthermore, an experimental design had to be established. Several requirements had to be met for these tasks. A real heat exchanger was therefore scaled down so that the process parameters remain at least comparable. Many commercial flow cell kits are available; reducing the costs through self-assembly increased the number of samples, providing a basis for statistical analysis. In addition, fast and minimally invasive online in-situ microscopy and Raman spectroscopy can be performed. By creating laminar flow and using a weir, we achieved a homogeneous inflow to the reactors. Reproducible data on biomass and cell numbers were generated.
The assessment of biomass and cell numbers is well established in drinking water analysis. Epifluorescence microscopy and gravimetric determination are the basic techniques of this work, too. Differences in cell number and biomass between surface modifications and materials were quantified and statistically analysed.
The wild-type strain Escherichia coli K12 and an inoculum of 500 ml of fresh water were used to characterize the biofouling of the films. We thereby generated data both with a natural bacterial community in media of unknown properties and with media of well-known properties, ensuring the technical relevance of the data.
Free surface energy and surface roughness are the first attachment hurdles for bacteria. These parameters were measured according to DIN 55660 and DIN EN ISO 4287. The materials science data were correlated with cell numbers and biomass. This correlation provides a basic link between biofouling, as a biologically induced parameter, and the material properties. Material properties that reduce biofouling can thus be proposed.
Using Raman spectroscopy as a cutting-edge method, future investigations could be shortened: if biomass or cell number can be linked to the spectra, new functional materials can be investigated in a short time.
This thesis examines the influence of forest and forestry roads on runoff generation and soil erosion rates within a forested catchment in the Laacher See nature reserve. To this end, existing erosion and accumulation features were mapped in the field, and erosion simulations were carried out using a small-scale rainfall simulator. Finally, the erosion potential was modelled on the basis of the simulation results.
The analysis of existing erosion and accumulation features in the field indicated soil erosion rates from road surfaces of between 27.3 and 93.5 t ha⁻¹ a⁻¹, which is of the same order of magnitude as erosion rates under intensive arable land use.
The simulation runs showed that permanent forest roads exhibit markedly altered infiltration behaviour. On natural forest soils, an average of 96% of the precipitation infiltrated. On forest roads, this share dropped to an average of 14% to 7%. The results on skid trails were particularly striking, demonstrating a considerable influence of soil compaction caused by vehicle traffic: the share of infiltrated precipitation fell to 31% in the wheel tracks, while 76% still infiltrated between the tracks.
During the simulation runs, maximum sediment amounts of 446 g m⁻² were eroded, corresponding to a mean soil erosion rate of 4.96 g m⁻² min⁻¹. These high erosion rates were measured on permanent roads with little surfacing. Skid trails showed the lowest erosion values: at most 37 g m⁻² was eroded, equivalent to an erosion rate of 0.41 g m⁻² min⁻¹. The eroded sediment amounts averaged 167 to 319 g m⁻² for roads and 17 g m⁻² for skid trails. Comparative measurements on forest sites, for which a mean soil loss of about 5 g m⁻² was determined, confirmed an increased erodibility for every type of road construction.
The models were calibrated on the basis of the erosion rates measured in the field. For the study area, the ABAG / DIN 19708 results showed a mean annual soil erosion risk of 2.4 - 5.8 t ha⁻¹ a⁻¹ for permanent roads and 0.5 t ha⁻¹ a⁻¹ for skid trails. Compared with the mean value of 0.1 t ha⁻¹ a⁻¹ for largely unaffected forest areas in the study area, this again indicates an increased erosion potential. The physically based modelling of the rainfall simulations using WEPP yielded satisfactory results in estimating runoff behaviour, with deviations of at most -5% for permanent roads. In contrast, the runoff modelling on skid trails, as well as the modelling of soil erosion during the rainfall simulations in general, still proved error-prone, which can be attributed to the comparatively shallow depth of input data for a physically based model.
It was demonstrated that forest roads have a considerable influence on the water balance and on soil erosion processes. The retention of precipitation is reduced and soil erosion processes are intensified. Poorly surfaced roads showed strongly increased soil loss, which can lead to subsequent ecological damage; the soil loss can likewise impair trafficability. These consequences make clear the relevance of examining runoff and soil erosion processes on forest and forestry roads. This thesis represents the first study in which runoff and soil erosion processes were investigated for forest access networks in Central Europe.
Ray tracing enables a close-to-reality rendering of a modelled scene. Owing to the way it works, it can display optical phenomena and complex lighting, but numerous computations have to be done per pixel. In practice, implementations cannot achieve computer graphics' aim of real-time rendering at close to 60 frames per second. Current Graphics Processing Units (GPUs) allow highly parallel execution of general-purpose computations. Using the graphics API OpenGL, this parallelism can be exploited, and it is possible to design and realize a ray tracer that operates entirely on the GPU. The developed approach is extended by a Uniform Grid, a ray-tracing acceleration structure, from which a speed-up is expected.
The purpose of this thesis is the implementation of a ray tracer that operates completely on the GPU, and its extension with a Uniform Grid. Afterwards, the maximum achievable performance is evaluated, and possible problems regarding GPU programming are identified and analysed.
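To make the role of the Uniform Grid concrete, the following sketch shows the classic 3D-DDA cell traversal (in the style of Amanatides and Woo) that such acceleration structures rely on: only the cells pierced by the ray are visited, and only the triangles stored there need intersection tests. This is a generic Python illustration, not code from the thesis; the grid layout and all names are invented.

# Minimal sketch of uniform-grid traversal (3D-DDA, Amanatides & Woo style).
# Grid cells hold lists of triangle ids; only the cells pierced by the ray
# are tested, which is the speed-up expected from the Uniform Grid.
import math

def traverse(origin, direction, grid, cell_size, res):
    """Yield the triangle lists of grid cells along a ray.
    The origin is assumed to lie inside the grid."""
    cell = [int(origin[i] // cell_size) for i in range(3)]
    step, t_max, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
    for i in range(3):
        if direction[i] > 0:
            step[i] = 1
            t_max[i] = ((cell[i] + 1) * cell_size - origin[i]) / direction[i]
            t_delta[i] = cell_size / direction[i]
        elif direction[i] < 0:
            step[i] = -1
            t_max[i] = (cell[i] * cell_size - origin[i]) / direction[i]
            t_delta[i] = -cell_size / direction[i]
    while all(0 <= cell[i] < res[i] for i in range(3)):
        yield grid.get(tuple(cell), [])   # triangle ids to intersect
        axis = t_max.index(min(t_max))    # advance across the closest boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# Example: a 4x4x4 grid with one occupied cell.
grid = {(2, 1, 0): ["tri_7"]}
for tris in traverse((0.1, 0.1, 0.1), (1.0, 0.5, 0.0), grid, 1.0, (4, 4, 4)):
    if tris:
        print("test triangles:", tris)
        break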
Proceedings of the 9th Open German-Russian Workshop on Pattern Recognition and Image Understanding
(2015)
The Proceedings of the 9th Open German-Russian Workshop on Pattern Recognition and Image Understanding include publications (extended abstracts) that cover, but are not limited to, the following topics:
- Mathematical Theory of Pattern Recognition, Image and Speech Processing, Analysis, Recognition and Understanding
- Cognitive Technologies, Information Technologies, Automated Systems and Software for Pattern Recognition, Image, Speech and Signal Processing, Analysis and Understanding
- Databases, Knowledge Bases, and Linguistic Tools
- Special-Purpose Architectures, Software and Hardware Tools
- Vision and Sensor Data Interpretation for Robotics
- Industrial, Medical, Multimedia and Other Applications
- Algorithms, Software, Automated Systems and Information Technologies in Bioinformatics and Medical Informatics
The workshop took place from December 1st-5th, 2014, at the University of Koblenz-Landau in Koblenz, Germany.
The mitral valve is one of the four valves in the human heart. It is located in the left heart chamber and its function is to control the blood flow from the left atrium to the left ventricle. Pathologies can lead to malfunctions of the valve so that blood can flow back into the atrium. Patients with a faulty mitral valve function may suffer from fatigue and chest pain. The functionality can be restored surgically, which is often a long and exhausting intervention. Thorough planning is necessary to ensure a safe and effective surgery. This can be supported by creating pre-operative segmentations of the mitral valve, and a post-operative analysis can determine the success of an intervention. This work combines existing and new ideas into a new approach for (semi-)automatically creating such valve models. The manual part guarantees a high-quality model and reliability, whereas the automatic part contributes to saving valuable labour time.
The main contributions of the automatic algorithm are an estimated semantic separation of the two leaflets of the mitral valve and an optimization process that is capable of finding a coaptation line and area between the leaflets. The segmentation method can perform a fully automatic segmentation of the mitral leaflets if the annulus ring is already given. The intermediate steps of this process are integrated into a manual segmentation method so that a user can guide the whole procedure. The quality of the valve models generated by the proposed method is measured by comparing them to completely manually segmented models. This shows that commonly used methods for measuring the quality of a segmentation are too general and do not suffice to reflect the real quality of a model. Consequently, this work introduces a set of measurements that can qualify a mitral valve segmentation in more detail and with respect to anatomical landmarks. Besides the intra-operative support for a surgeon, a segmented mitral valve provides additional benefits: the ability to obtain and objectively describe the valve anatomy patient-specifically may be the basis for future medical research in this field, and automation allows large data sets to be processed with reduced expert dependency. Further, simulation methods that use the segmented models as input may predict the outcome of a surgery.
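As an example of the kind of commonly used, general segmentation measure that the thesis argues is too coarse for valve models, the sketch below computes the Dice overlap of two binary masks. It is a generic illustration with invented toy masks; the thesis' own anatomical measurements are not reproduced here.

# Generic segmentation overlap measure (Dice coefficient), illustrating the
# kind of general quality metric discussed above. Illustrative only.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.zeros((64, 64), bool); auto[20:40, 20:40] = True    # automatic model
manual = np.zeros((64, 64), bool); manual[22:42, 20:40] = True  # manual reference
print(f"Dice overlap: {dice(auto, manual):.3f}")  # high score despite local error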
In this thesis, an interactive application is developed for Android OS. The application is a virtual-reality game in the genre of first-person shooters, set in a space scenario. By using a stereo renderer, the game can be played with virtual-reality glasses.
For the reliable isolation and classification of important features in 3D multi-attribute volume data, multidimensional transfer functions are indispensable. Yet, when using multiple dimensions, comprehending the data and interacting with it become a challenge, because neither the control of the versatile input parameters nor the visualization in a higher-dimensional space is straightforward.
The goal of this thesis is the implementation of a transfer function editor that supports the creation of a multidimensional transfer function. To this end, different visualization and interaction techniques, such as Parallel Coordinates, are used. Furthermore, it is possible to choose and combine the used dimensions interactively, and the rendered volume is adapted to the user interaction in real time.
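As a minimal illustration of what a multidimensional transfer function does, the following hypothetical Python sketch classifies voxels by two attributes (e.g. intensity and gradient magnitude) using user-defined rectangular regions; an editor like the one described would let the user place and combine such regions interactively. All names and values are invented.

# Hypothetical sketch of a 2D transfer function: each voxel carries two
# attributes, and rectangular regions in that 2D domain assign colour and
# opacity (RGBA). Values are invented for illustration.
regions = [  # (lo0, hi0, lo1, hi1, rgba)
    (0.2, 0.4, 0.0, 0.3, (1.0, 0.0, 0.0, 0.1)),  # soft tissue: faint red
    (0.6, 0.9, 0.5, 1.0, (1.0, 1.0, 1.0, 0.8)),  # strong edges: opaque white
]

def classify(attr0: float, attr1: float):
    """Return the RGBA of the first region containing the attribute pair."""
    for lo0, hi0, lo1, hi1, rgba in regions:
        if lo0 <= attr0 <= hi0 and lo1 <= attr1 <= hi1:
            return rgba
    return (0.0, 0.0, 0.0, 0.0)  # fully transparent otherwise

print(classify(0.7, 0.8))  # -> (1.0, 1.0, 1.0, 0.8)
print(classify(0.5, 0.5))  # -> transparent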
Immersion into narrative worlds - theoretical and empirical approaches to audience experience
(2015)
This dissertation is devoted to the phenomenon of audience experience during the reception of audiovisual narrative entertainment media content. This experience is first situated within reception and media effects research and then operationalized for the subsequent studies: transportation and narrative engagement are currently the two most important concepts in media psychology concerning the use and effects of stories.
Three research questions are then addressed. Until now, researchers have faced the problem of manipulating the reception experience. This work therefore proposes two procedures and tests them in four experimental studies. The use of reviews proved suitable for economically manipulating the reception experience for all narrative entertainment texts. Furthermore, there has so far been no established procedure for measuring the reception experience continuously during reception.
In this work, a procedure was developed that combines Real Time Response Measurement (RTR), Secondary Task Reaction Times (STRT), and the recording of eye blink frequency. RTR in particular was able to capture the emotional processes occurring in connection with the reception experience. The concern that these concurrent measurement methods might prevent the reception experience was largely dispelled in a further experimental study. Finally, the process of summarizing the reception experience into a post-receptive judgment was addressed. After a framework model of how post-receptive scales are answered had been developed, a further study examined the importance of various time-course parameters for the post-receptive judgment. Four selected parameters together explain the post-receptive judgment better than the mean of the time course. The thesis closes with a discussion that, among other things, relates the dynamic and the post-receptive measurement of the reception experience to each other and critically appraises their significance.
Web application testing is an active research area. Garousi et al. conducted a systematic mapping study and classified 79 papers published between 2000 and 2011. However, there seems to be a lack of information exchange between the scientific community and tool developers.
This thesis systematically analyzes the field of functional, system-level web application testing tools. 194 candidate tools were collected in the tool search and screened, with 23 tools selected as the foundation of this thesis. These 23 tools were systematically used to generate a feature model of the domain. The supporting methodology is an additional contribution of this thesis: it processes the end-user documentation of tools belonging to an examined domain and creates a feature model. The feature model gives an overview of the existing features, their alternatives and their distribution. It can be used to identify trends and problems, spot extraordinary features, support tool purchase decisions, or guide scientists in focusing their research.
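The following sketch illustrates, with invented data, the kind of information such a feature model captures: features, their alternatives, and their distribution across the 23 examined tools. It is not the thesis' actual model; all feature names and counts are made up.

# Illustrative sketch of a feature model with alternatives and distribution.
# All feature names and counts are invented.
feature_model = {
    "test recording": {
        "alternatives": ["capture & replay", "scripted"],
        "tools": 17,  # invented: how many of the 23 tools offer it
    },
    "element locators": {
        "alternatives": ["XPath", "CSS selector", "image-based"],
        "tools": 21,
    },
}

# A simple use of the model: report how widespread each feature is.
for feature, info in feature_model.items():
    share = info["tools"] / 23
    print(f"{feature}: {share:.0%} of tools, "
          f"variants: {', '.join(info['alternatives'])}")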
The subject of this work is three-dimensional image warping for diffuse and reflective surfaces. The warping method for the reflective case has only existed since 2014. This new algorithm produces artefacts as soon as an image is to be computed for an alternative viewpoint onto a very uneven surface.
This work covers the path from a ray tracer that generates the input textures, through the warping method for both kinds of surfaces, to the optimization of the reflective warping method. Finally, the results of the optimization are evaluated and placed in the context of the current and future state of the art.
Engineered nanoparticles are emerging pollutants. Their increasing use in commercial products suggests a similar increase of their concentrations in the environment. Studying the fate of engineered colloids in the environment is highly challenging due to the complexity of their possible interactions with the main actors present in aquatic systems. Solution chemistry is one of the most central aspects. In particular, the interactions with dissolved organic matter (DOM) and with natural colloids are still poorly understood.
The aim of this work was to further develop the dedicated analytical methods required for investigating the fate of engineered colloids in environmental media as influenced by DOM. Reviewing the literature on DOM interactions with inorganic colloids revealed that a systematic characterization of both colloids and DOM, although essential, is lacking in most studies, and that further investigations of the fractionation of DOM on the surface of engineered colloids are needed. Another knowledge gap concerns the effects of DOM on the dynamic structure of colloid agglomerates. For this question, analytical techniques dedicated to the characterization of agglomerates in environmental media at low concentrations are required. Such techniques should remain accurate at low concentrations, be specific, widely matrix-independent and free of artefacts due to sample preparation. Unfortunately, none of the currently available techniques (microscopy, light-scattering-based methods, separation techniques etc.) fulfills these requirements.
However, a compromise was found with hydrodynamic chromatography coupled to inductively coupled plasma mass spectrometry (HDC-ICP-MS). This method has the potential to size inorganic particles in complex media in concentration ranges below ppb and is element-specific; however, its limitations had not been systematically explored. In this work, the potential of the method has been explored further. The simple size separation mechanism ensures a high flexibility of the elution parameters, and universal calibration can be accurately applied to particles of different compositions and surface chemistries. The most important limitations of the method are its low size resolution and the effect of particle shape on the retention factor. The implementation of HDC coupled to single particle ICP-MS (HDC-SP-ICP-MS) offers new possibilities for the recognition of particle shape and hence the differentiation between primary particles and homoagglomerates. This coupling technique is therefore highly attractive for monitoring the effects of DOM on the stability of colloids in complex media. The versatility of HDC-ICP-MS is demonstrated by its successful application to diverse samples. In particular, it has been used to investigate the stability of citrate-stabilized silver colloids in reconstituted natural water in the presence of different types of natural organic matter. These particles were stable for at least one hour independently of the type of DOM used and the pH, in accordance with a coauthored publication addressing the stability of silver colloids in the River Rhine. Direct monitoring of DOM adsorption on colloids was not possible using UV and fluorescence detectors. Preliminary attempts to investigate the adsorption mechanism of humic acids on silver colloids using fluorescence spectroscopy suggest that fluorescent molecules are not adsorbed on silver particles. Several solutions for overcoming the difficulties encountered in the analysis of DOM interactions are proposed, and the numerous perspectives offered by further developments and applications of HDC-(SP-)ICP-MS in environmental sciences are discussed in detail.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also serve as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of their appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers to relevant e-commerce variables, and they test the data in a nomological network to assess differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study's results, point to limitations, and provide avenues for further research.
Fünfte Ordnung zur Änderung der Prüfungsordnung für die Prüfung im lehramtsbezogenen Bachelorstudiengang Berufsbildende Schulen an der Universität Koblenz-Landau und der Hochschule Koblenz
Vierte Ordnung zur Änderung der Ordnung für die Prüfung im Masterstudiengang für das Lehramt an berufsbildenden Schulen an der Universität Koblenz-Landau und der Hochschule Koblenz
Promotionsordnung des Fachbereichs 3: Mathematik/Naturwissenschaften der Universität Koblenz-Landau, Campus Koblenz
Zweite Ordnung zur Änderung der Masterprüfungsordnung für den Weiterbildenden Fernstudiengang "Energiemanagement" an der Universität Koblenz-Landau
This research examines information audit methodologies and information capturing methods for enterprise social software, which are an elementary part of the audit process. Information auditing lacks a standardized definition and methodology because the scope of the audit process is diversified and depends on the organization undertaking the audit. The benefits of information auditing, and the potential challenges of Enterprise 2.0 that an audit can overcome, are comprehensive and provide a major incentive for managers to conduct one. Information asset registers, as a starting point for information auditing, do not specifically focus on social software assets. This research project therefore combines asset registers from different areas to create a new register suited to the requirements of Enterprise 2.0. The necessary adaptations caused by the new character of the assets are minor. The case study applying the asset register for the first time, however, reveals several areas that are problematic for information auditors completing the register. Rounding off the thesis, a template is developed for setting up new work spaces on enterprise social software systems with appropriate metadata, taking into account the meaningful metadata discovered in the asset register.
Ordnung für die Eignungsprüfung Bildende Kunst der Universität Koblenz-Landau, Campus Landau
Beitragsordnung der Studierendenschaft der Universität Koblenz-Landau, Campus Landau
Vierte Satzung zur Änderung der Satzung der Universität Koblenz-Landau über das Auswahlverfahren in zulassungsbeschränkten Studiengängen
Ordnung zur Änderung der Einschreibeordnung für die Universität Koblenz-Landau
Elfte Ordnung zur Änderung der Prüfungsordnung für die Prüfung im lehramtsbezogenen Bachelorstudiengang an der Universität Koblenz-Landau
Zehnte Ordnung zur Änderung der Prüfungsordnung für die Prüfung in den Masterstudiengängen für das Lehramt an Grundschulen, das Lehramt an Realschulen plus, das Lehramt an Förderschulen sowie das Lehramt an Gymnasien an der Universität Koblenz-Landau
Neunte Ordnung zur Änderung der Ordnung für die Prüfung im lehramtsbezogenen Zertifikatsstudiengang (Erweiterungsprüfung) an der Universität Koblenz-Landau
Achte Ordnung zur Änderung der Prüfungsordnung für die Prüfung im Zwei-Fach-Bachelorstudiengang an der Universität Koblenz-Landau
Erste Ordnung zur Änderung der Gemeinsamen Prüfungsordnung für die Bachelor- und Masterstudiengänge des Fachbereichs Informatik an der Universität Koblenz-Landau
One task of executives and project managers in IT companies or departments is to hire suitable developers and to assign them to suitable problems. In this paper, we propose a new technique that directly leverages previous work experience of developers in a systematic manner. Existing evidence for developer expertise based on the version history of existing projects is analyzed. More specifically, we analyze the commits to a repository in terms of affected API usage. On these grounds, we associate APIs with developers and thus we assess API experience of developers. In transitive closure, we also assess programming domain experience.
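A minimal sketch of this idea, assuming a Git repository and using added import lines as a crude proxy for API usage, might look as follows. The paper's actual analysis of affected API usage is more precise; the command invocation and heuristics below are illustrative only.

# Hypothetical sketch: walk the commits of a repository, detect which APIs
# each commit touches (crudely, via added Java import lines), and aggregate
# per-developer API experience.
import subprocess, re
from collections import Counter, defaultdict

def api_experience(repo="."):
    log = subprocess.run(
        ["git", "log", "--format=COMMIT %ae", "-p", "--", "*.java"],
        cwd=repo, capture_output=True, text=True).stdout
    experience = defaultdict(Counter)
    author = None
    for line in log.splitlines():
        if line.startswith("COMMIT "):
            author = line.split(" ", 1)[1]
        elif line.startswith("+import "):  # an added usage of an API package
            m = re.match(r"\+import\s+([\w.]+)\.\w+;", line)
            if m and author:
                experience[author][m.group(1)] += 1
    return experience

for dev, apis in api_experience().items():
    print(dev, apis.most_common(3))  # a developer's strongest APIs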
The intention of this thesis was to characterise the effect of naturally occurring multivalent cations like Calcium and Aluminium on the structure of Soil Organic Matter (SOM) as well as on the sorption behaviour of SOM for heavy metals such as lead.
The first part of this thesis describes the results of experiments in which the Al and Ca cation content was changed for various samples originating from soils and peats of different regions in Germany. The second part focuses on SOM-metal cation precipitates in order to study rigidity as a function of cation content. In the third part, the effects of various cation contents in SOM on the binding strength of Pb cations were characterised using a cation exchange resin as the desorption method.
It was found for the soil and peat samples as well as for the precipitates that matrix rigidity was affected by both the type and the content of the cation. The influence of Ca on rigidity was less pronounced than the influence of Al and of Pb used in the precipitation experiments. For each sample one cation content was identified at which matrix rigidity was most pronounced. This specific cation content is below the cation saturation expected from the cation exchange capacity. These findings resulted in a model describing the relation between cation type, cation content and the degree of networking in SOM. For all treated soil and precipitate samples a step transition similar to a glass transition was observed, characterized by the step transition temperature T*. It is known from the literature that this type of step transition is due to bridges between water molecules and organic functional groups in SOM. In contrast to the glass transition temperature, this thermal event reverses slowly over days or weeks, depending on the re-conformation of the water molecules. Therefore, changes of T* with different cation compositions in the samples are explained by the formation of water-molecule-cation bridges between SOM functional groups. No influence of the different cation compositions on the desorption kinetics of lead was observed in the soil samples. It can therefore be assumed that the observed changes in matrix rigidity are highly reversible by changing the water status or pH, or by introducing agitation energy through shaking.
Geographic cluster-based routing in ad-hoc wireless sensor networks is a current field of research. Various algorithms for routing in wireless ad-hoc networks based on position information already exist, among them algorithms that use the traditional beaconing approach as well as algorithms that work beaconless (requiring no information about the environment beyond one's own position and the destination). Geographic cluster-based routing with guaranteed message delivery can be carried out on overlay graphs as well. Until now, however, the required planar overlay graphs have not been constructed reactively.
This thesis proposes a reactive algorithm, the Beaconless Cluster Based Planarization (BCBP) algorithm, which constructs a planar overlay graph and noticeably reduces the number of messages required to do so. Based on an algorithm for cluster-based planarization, it beaconlessly constructs a planar overlay graph in a unit disk graph (UDG). A UDG is a model of a wireless network in which every participant has the same sending radius. Evaluation shows the algorithm to be more efficient than the non-beaconless variant. Another result of this thesis is the Beaconless LLRAP (BLLRAP) algorithm, for which planarity, but not continued connectivity, could be proven.
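For background, the sketch below shows a classic planarization test often used for overlay graphs on unit disk graphs, the Gabriel graph criterion: an edge is kept only if no third node lies inside the circle whose diameter is that edge. BCBP itself is a different, beaconless and cluster-based construction; this generic sketch only illustrates the kind of planar overlay such algorithms compute.

# Gabriel graph planarization of a unit disk graph (classic technique,
# shown for background; not the BCBP algorithm itself).
def gabriel_edges(nodes, radius=1.0):
    """nodes: list of (x, y). Returns the planar edge set of the UDG."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edges = []
    for i, u in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            v = nodes[j]
            if d2(u, v) > radius * radius:
                continue  # not a UDG edge at all
            mid = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)
            # Gabriel condition: no witness inside the diameter circle
            if all(d2(w, mid) * 4 >= d2(u, v)
                   for k, w in enumerate(nodes) if k not in (i, j)):
                edges.append((i, j))
    return edges

# The witness node (0.45, 0.1) removes the long edge between nodes 0 and 1.
print(gabriel_edges([(0, 0), (0.9, 0), (0.45, 0.1)]))  # -> [(0, 2), (1, 2)]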
Nutzung von Big Data im Marketing : theoretische Grundlagen, Anwendungsfelder und Best-Practices
(2015)
The increasing digitalization of everyday life and the omnipresent generation of data that accompanies it offer companies, and marketing departments in particular, the chance to obtain information about their customers and products in previously unknown abundance. The extraction of such information from huge amounts of data, made possible by new technologies, has become established under the term Big Data.
This thesis analyzes this development with regard to its potential for the business discipline of marketing. To this end, the theoretical foundations of the use of Big Data in marketing are identified, and fields of application and best-practice solutions are derived from them. The investigation is based on a literature analysis on the topic of Big Data marketing, which includes various studies and surveys as well as expert opinions and future forecasts. The literature is first analyzed with respect to the theoretical foundations of the Big Data construct.
Subsequently, the suitability of Big Data solutions for use in companies is examined before the application in the field of marketing is specified and analyzed. It was found that, based on its theoretical aspects, Big Data is highly suitable for use in marketing. This suitability is characterized above all by the detailed information about customers' behaviour patterns and their purchase decisions. Furthermore, potential fields of application were identified, particularly in the areas of customer orientation and market research. With regard to best-practice solutions, a rough guideline for integrating Big Data into the organizational structure of a company was developed. In conclusion, it was found that Big Data is highly relevant to marketing and will play a decisive role in shaping it in the future.
This thesis conducts a text and network analysis of criminological files, with a specific focus on the field of money laundering. The analysis revealed the most important concepts present in the text, which were classified into eleven different classes. The relationships between those concepts were analysed using ego networks, key entity identification and clustering. Some of the statements made about money laundering could be validated by the findings of this analysis and their interpretation. Specific concepts such as banks and organizations as well as foreign subsidiaries were identified. Aggregating these concepts with the statements in chapter 1.4.3 on the circular process of money laundering, it can be stated that different organizations and individuals present in the criminological files were placing money through different banks, organizations and investments into the legal financial market. Finally, this thesis tries to validate the benefits of the tools used for this kind of research process; an assessment of ORA's and AutoMap's applicability for such research is given at the end.
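One of the analysis steps mentioned above, extracting the ego network of a concept and identifying key entities, can be sketched as follows using networkx. The toy graph stands in for the concept network mined from the files; all node names are invented.

# Sketch: ego network extraction and key entity identification on a toy
# concept graph (stand-in for the network mined from the criminological files).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Bank A", "Org X"), ("Bank A", "Person 1"), ("Org X", "Subsidiary F"),
    ("Person 1", "Org X"), ("Bank B", "Person 1"), ("Subsidiary F", "Bank B"),
])

ego = nx.ego_graph(G, "Org X", radius=1)   # ego network of one concept
centrality = nx.degree_centrality(G)       # simple key entity identification
print(sorted(ego.nodes()))
print(max(centrality, key=centrality.get))  # most connected concept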
In this thesis we present an approach to track an RGB-D camera in 6 DOF and construct 3D maps. We first acquire, register and synchronize RGB and depth images. After preprocessing, we extract FAST features and match them between two consecutive frames. By depth projection we regain the z-value for the inlier correspondences. Afterwards we estimate the camera motion by least-squares 3D point set alignment between the correspondence sets. This local motion estimate is incrementally applied to a global transformation. Additionally, we present methods to build maps based on point cloud data acquired by an RGB-D camera. For map creation we use the OctoMap framework and optionally create a colored point cloud map. The system is evaluated with the widely used RGB-D benchmark.
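The core of the described motion estimation, least-squares rigid alignment of two matched 3D point sets, is commonly solved with the SVD-based Kabsch/Umeyama method. The sketch below assumes the correspondences are already given (in the thesis they come from FAST feature matches with projected depth); it illustrates the standard technique and is not code from the thesis.

# Least-squares rigid alignment of matched 3D point sets (Kabsch method).
import numpy as np

def align(P, Q):
    """Find R, t minimizing sum ||R @ p + t - q||^2 (P, Q: Nx3 arrays)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic check: recover a known rotation and translation.
P = np.random.rand(10, 3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.5, 0.0, -0.2])
R, t = align(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))  # recovers the camera motion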
In this work a framework is developed and used to create a scheme for the evaluation of text processing tools. The evaluation scheme is developed using a model-dependent software evaluation approach; the focus of the model-dependent part is the text-processing process, which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data a German court document is used, containing two incidents of extortion racketeering which happened in 2011 and 2012. The evaluation of six different tools shows that one tool offers excellent results for the given dataset when compared to manual results: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also offer good results, with minor drawbacks. The biggest drawback of some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be enhanced by self-written code, which allows users with development experience to apply additional methods.
In current research on autonomous mobile robots, path planning is still a very important issue.
This master's thesis deals with various path planning algorithms for the navigation of such mobile systems. The task is not only to determine a collision-free trajectory from one point to another: the path should also be optimal and comply with all vehicle-given constraints. Autonomous driving in an unknown and dynamic environment in particular poses a major challenge, because closed-loop control is necessary and thus a certain responsiveness is demanded of the planner.
Two types of algorithms are presented: first, the path planners based on the common graph search algorithm A* (A*, Anytime Repairing A*, Lifelong Planning A*, D* Lite, Field D*, hybrid A*); second, the algorithms based on the probabilistic planning algorithm Rapidly-exploring Random Tree (RRT, RRT*, Lifelong Planning RRT*), as well as some extensions and heuristics. In addition, methods for collision avoidance and path smoothing are presented. Finally, these different algorithms are evaluated and compared with each other.
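As common background for the graph-based planners listed above, the following minimal A* on a 4-connected grid illustrates the basic scheme (a priority queue ordered by g + h with an admissible heuristic) that A*, ARA*, LPA*, D* Lite, Field D* and hybrid A* all build on. It is a generic sketch, not code from the thesis.

# Minimal A* on a 4-connected grid: 0 = free cell, 1 = obstacle.
import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    parents, g_best = {}, {start: 0}
    while open_set:
        f, g, node, parent = heapq.heappop(open_set)
        if node in parents:
            continue                          # already expanded with better g
        parents[node] = parent
        if node == goal:                      # reconstruct path back to start
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) \
               and grid[nx][ny] == 0 and g + 1 < g_best.get((nx, ny), 1e18):
                g_best[(nx, ny)] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h((nx, ny)), g + 1, (nx, ny), node))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the obstacle row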
Code package managers like Cabal track dependencies between packages. But packages rarely use all of the functionality that their dependencies provide. This leads to unnecessary compilation of unused parts and to speculative conflicts between package versions where there are none. In two case studies we show how relevant these two problems are. We then describe how we could avoid them by tracking dependencies not between packages but between individual code fragments.
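A toy sketch of the proposed fragment-level tracking follows, with invented package and fragment names: a version conflict only matters if a fragment that is actually used is affected, which a package-level check cannot see.

# Sketch: record dependencies between individual code fragments instead of
# whole packages. All package and fragment names are invented.
fragment_deps = {
    # using fragment         -> fragments it really uses
    ("app", "Main.parse"):    [("parsec", "Parsec.parse")],
    ("app", "Main.render"):   [("text", "Text.pack")],
}
broken_in_upgrade = {("parsec", "Parsec.token")}  # fragments changed upstream

used = {d for deps in fragment_deps.values() for d in deps}
real_conflicts = used & broken_in_upgrade
print("conflict" if real_conflicts else
      "upgrade is safe: no used fragment affected")  # safe here, although a
                                                     # package-level check
                                                     # would report a conflict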
This master's thesis investigates the topic of intercultural web design. Two websites from different countries are compared as examples. On the basis of cultural dimensions, cultural differences are identified on each respective website. The analysis particularly focuses on how closely the respective website designers and operators attend to their users' cultural differences and to the creation of a cross-cultural web design. The analysis illustrates which cultural, and particularly intercultural, aspects of countries were taken into consideration in the design of the websites. The investigation led to the conclusion that such considerations were not consistently implemented on all websites. Hence, this thesis offers suggestions for improving the aspects that are most important in intercultural web design.
The lasting hype around the mobile internet and the related technology of mobile applications shows no signs of dropping off. The immense economic potential of this market leads businesses and ventures to continuously seek new ways of monetization. The underlying causes of this phenomenon are rarely challenged: scientific research in the field of "ubiquitous mobile" has not yet developed a clear overall picture of the cause-and-effect chains, and attempts to derive causes from studies of related mass media such as the computer or the internet have been discussed controversially. By combining the research streams of media usage motives and customer retention, this paper presents a new research model. Based on a quantitative survey in the German-speaking region, the data obtained show the motives of mobility, information gathering and entertainment to be the most important drivers of customer satisfaction with mobile applications. The paper also highlights a significant correlation between customer satisfaction and the other determinants of customer retention.
The increasing, anthropogenic demand for chemicals has created large environmental problems with repercussions for the health of the environment, especially aquatic ecosystems. As a result, the awareness of the public and decision makers on the risks from chemical pollution has increased over the past half-century, prompting a large number of studies in the field of ecological toxicology (ecotoxicology). However, the majority of ecotoxicological studies are laboratory based, and the few studies extrapolating toxicological effects in the field are limited to local and regional levels. Chemical risk assessment on large spatial scales remains largely unexplored, and therefore, the potential large-scale effects of chemicals may be overlooked.
To answer ecotoxicological questions, multidisciplinary approaches that transcend classical chemical and toxicological concepts are required. For instance, the current models for toxicity predictions - which are mainly based on the prediction of toxicity for a single compound and species - can be expanded to simultaneously predict the toxicity for different species and compounds. This can be done by integrating chemical concepts such as the physicochemical properties of the compounds with evolutionary concepts such as the similarity of species. This thesis introduces new, multidisciplinary tools for chemical risk assessments, and presents for the first time a chemical risk assessment on the continental scale.
After a brief introduction of the main concepts and objectives of the studies, this thesis starts by presenting a new method for assessing the physiological sensitivity of macroinvertebrate species to heavy metals (Chapter 2). To compare the sensitivity of species to different heavy metals, toxicity data were standardized to account for the different laboratory conditions. These rankings were not significantly different for different heavy metals, allowing the aggregation of physiological sensitivity into a single ranking.
Furthermore, the toxicological data for macroinvertebrates were used as input data to develop and validate prediction models for heavy metal toxicity, which are currently lacking for a wide array of species (Chapter 3). Apart from the toxicity data, the phylogenetic information of species (evolutionary relationships among species) and the physicochemical parameters for heavy metals were used. The constructed models had a good explanatory power for the acute sensitivity of species to heavy metals with the majority of the explained variance attributed to phylogeny. Therefore, the integration of evolutionary concepts (relatedness and similarity of species) with the chemical parameters used in ecotoxicology improved prediction models for species lacking experimental toxicity data. The ultimate goal of the prediction models developed in this thesis is to provide accurate predictions of toxicity for a wide range of species and chemicals, which is a crucial prerequisite for conducting chemical risk assessment.
The latter was conducted for the first time on the continental scale (Chapter 4), by making use of a dataset of 4,000 sites distributed throughout 27 European countries and 91 respective river basins. Organic chemicals were likely to exert acute risks for one in seven sites analyzed, while chronic risk was prominent for almost half of the sites. The calculated risks are potentially underestimated by the limited number of chemicals that are routinely analyzed in monitoring programmes, and a series of other uncertainties related with the limit of quantification, the presence of mixtures, or the potential for sublethal effects not covered by direct toxicity.
Furthermore, chemical risk was related to agricultural and urban areas in the upstream catchments. The analysis of ecological data indicated chemical impacts on the ecological status of the river systems; however, it is difficult to separate the effects of chemical pollution from those of the other stressors to which river systems are exposed. To test the multiple-stressor hypothesis and investigate the relative importance of organic toxicants, a dataset for German streams is used in Chapter 5. In that study, the risks from abiotic stressors (habitat degradation, organic chemicals, and nutrient enrichment) and a biotic stressor (invasive species) were investigated. The results indicated that almost all sites were influenced by more than one stressor. Stream size and ecoregions influenced the distribution of risks: the risks from habitat degradation, organic chemicals, and invasive species increased with stream size, whereas organic chemicals and nutrients were more likely to influence lowland streams. To successfully mitigate the effects of pollutants in river systems, the co-occurrence of stressors has to be considered. Overall, successful integrated water management requires a framework covering multiple environmental stressors on large spatial scales, and addressing the current research needs of ecotoxicology requires a multidisciplinary approach integrating fields such as toxicology, ecology, chemistry, and evolutionary biology.
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines. Both approaches have advantages and disadvantages. In previous work, we and our collaborators proposed an approach that combines both to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating essential metadata to enable a propagate operator, which implements a form of change propagation. For this we have developed a system to annotate code similarities which were extracted throughout the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or have to be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
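To make the metadata concrete, the following hypothetical Python sketch shows one possible shape of a similarity annotation and of a propagate step that executes automatic tasks and defers the rest to the user. All names are invented for illustration and are not taken from the thesis implementation:

```python
# Illustrative sketch only (hypothetical names, not the thesis code):
# a metadata record for a code similarity and a simple propagate step.
from dataclasses import dataclass

@dataclass
class SimilarityAnnotation:
    source_file: str   # variant the change originates from
    target_file: str   # similar fragment in another variant
    task: str          # "auto-merge" or "manual-review"

def propagate(annotations, apply_patch):
    """Execute automatic maintenance tasks; collect the rest for the user."""
    pending = []
    for ann in annotations:
        if ann.task == "auto-merge":
            apply_patch(ann.source_file, ann.target_file)
        else:
            pending.append(ann)   # must be resolved manually
    return pending
```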
Today, augmented reality is becoming more and more important in areas such as industry, medicine, and tourism. This gain in importance is easily explained by its powerful extension of real-world content: augmented reality is a way to explain and enhance real-world information. To create a system that can enhance a scene with additional information, however, the relation between the system and the real world must be known. A commonly used method to establish this relationship is optical tracking, in which the system calculates its relation to the real world from camera images. This requires a known reference in the scene to serve as an orientation, today mostly a 2D marker or a 2D texture placed in the real-world scenery. Such references are an intrusion into the scene, so it is desirable for the system to work without this additional aid. A strategy that does not manipulate the scene is object tracking, in which any object from the scene can be used as a reference. As an object is far more complex than a marker, it is harder for the system to establish its relationship to the real world; most methods for 3D object tracking therefore simplify the problem by not using the whole object as a reference. The focus of this thesis is to research how a whole object can be used as a reference, such that either the system or the camera can be moved around the object through a full 360 degrees without losing the relation to the real world. The augmented reality framework VisionLib is used as a basis; extensions to this system for 360 degree tracking are implemented in different ways, analyzed, and compared in the scope of this work. The best results were achieved by improving the reinitialization process: current camera images of the scene are given to the system, and with the help of these images the system can recover the relation to the real world faster whenever it is lost.
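The reinitialization idea can be sketched as keyframe matching: store reference images with known camera poses and, on tracking loss, match the current frame against them to obtain a starting pose hypothesis. The following Python/OpenCV snippet is a generic illustration of this idea, not VisionLib code:

```python
# Hedged sketch of keyframe-based reinitialization (not the VisionLib API):
# match the current camera image against stored reference images and pick
# the best match as the starting pose for the model-based tracker.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_keyframe(current_gray, keyframes):
    """keyframes: list of (gray_image, known_camera_pose) tuples."""
    _, desc_cur = orb.detectAndCompute(current_gray, None)
    best_pose, best_score = None, 0
    for gray, pose in keyframes:
        _, desc_kf = orb.detectAndCompute(gray, None)
        if desc_cur is None or desc_kf is None:
            continue
        matches = matcher.match(desc_cur, desc_kf)
        if len(matches) > best_score:
            best_score, best_pose = len(matches), pose
    return best_pose   # pose hypothesis used to reinitialize tracking
```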
Real-time graphics applications are tending to become more realistic, and approximating real-world illumination becomes more feasible as graphics hardware improves. Using a wide variety of algorithms and ideas, graphics processing units (GPUs) can simulate complex lighting situations, rendering computer-generated imagery with complicated effects such as shadows, refraction, and reflection of light. Reflections in particular improve realism, because they make shiny materials, e.g., brushed metals, wet surfaces like puddles, or polished floors, appear more realistic and reveal properties such as roughness and reflectance. Moreover, reflections can become more complex depending on the view: a wet surface such as a street during rain, for example, will reflect lights depending on the distance of the viewer, resulting in streakier reflections that look more stretched the farther the viewer is located from the light source. This bachelor thesis aims to give an overview of the state of the art in rendering reflections. Understanding light is a prerequisite for understanding reflections; a physical model of light and its reflection is therefore covered in section 2, followed by the motivational section 2.2, which gives visually appealing examples of reflections from the real world and the media. Turning to rendering techniques, the main principle is explained in section 3, followed in section 4 by a short overview of the wide variety of approaches that try to generate correct reflections. The thesis then describes the implementation of three major algorithms that produce plausible local reflections and are common in most current game and graphics engines: screen-space reflections (SSR), parallax-corrected cube mapping (PCCM), and billboard reflections (BBR). The developed framework is described in section 5. After describing the functional principle of each algorithm, their visual quality and suitability for real-time application are analyzed, and the algorithms are compared to investigate their advantages and disadvantages over each other. In conclusion, the gained experience is summarized by listing the advantages and disadvantages of each technique and giving suggestions for improvements, together with a short perspective on upcoming real-time rendering techniques for creating reflections as specular effects.
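As a taste of the first of these techniques: screen-space reflection amounts to marching a reflected ray through the depth buffer until it passes just behind a surface. The following CPU-side Python sketch (a stand-in for the GLSL shader one would actually write) assumes a linear depth buffer and a ray already projected into pixel space; all parameters are illustrative:

```python
# Minimal screen-space reflection march (illustrative CPU sketch, not a
# shader): step along a reflected ray in screen space and report the hit.
import numpy as np

def ssr_march(depth, origin_px, dir_px, origin_z, dz_per_step,
              max_steps=64, thickness=0.05):
    """depth: HxW linear depth buffer; returns hit pixel (x, y) or None."""
    pos = np.asarray(origin_px, dtype=float)
    step = np.asarray(dir_px, dtype=float)
    z = origin_z
    for _ in range(max_steps):
        pos += step
        z += dz_per_step
        x, y = int(pos[0]), int(pos[1])
        if not (0 <= x < depth.shape[1] and 0 <= y < depth.shape[0]):
            return None                    # ray left the screen
        if 0.0 < z - depth[y, x] < thickness:
            return (x, y)                  # ray passed behind a surface: hit
    return None
```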
The present work introduces a rigid-body physics engine, focusing on collision detection on the GPU. The increasing performance and accessibility of modern graphics cards means that they can also be used for algorithms beyond image synthesis. This advantage is exploited to implement an efficient particle-based collision detection. The performance differences between CPU and GPU are presented using a test environment.
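The core of particle-based collision detection is a uniform grid that restricts pair tests to neighboring cells. The following sequential Python sketch shows the principle that a GPU version parallelizes; sizes and names are illustrative, not from this work:

```python
# Sketch of particle collision detection with a uniform spatial grid
# (the sequential analogue of the GPU approach described above).
from collections import defaultdict
import numpy as np

def find_contacts(positions, radius):
    """positions: (N, 3) array; returns index pairs closer than 2*radius."""
    cell_size = 2.0 * radius
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // cell_size).astype(int))].append(i)

    contacts = []
    for cell, members in grid.items():
        # Check this cell and its 26 neighbors to catch pairs across borders.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    neighbor = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
                    for i in members:
                        for j in grid.get(neighbor, ()):
                            # i < j avoids reporting the same pair twice.
                            if i < j and np.linalg.norm(
                                    positions[i] - positions[j]) < 2 * radius:
                                contacts.append((i, j))
    return contacts
```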
A fundamental understanding of attachment of engineered nanoparticles to environmental surfaces is essential for the prediction of nanoparticle fate and transport in the environment.
The present work investigates the attachment of non-coated and citrate-coated silver nanoparticles to different model surfaces and environmental surfaces in the presence and absence of humic acid, using batch sorption experiments.
The objective of this thesis was to investigate how silver nanoparticles interact with surfaces bearing different chemical functional groups, and how the presence of humic acid affects particle-surface interactions. In the absence of humic acid, nanoparticle-surface attachment was influenced by the chemical nature of the interacting surfaces. In the presence of humic acid, by contrast, attachment was influenced by the specific surface area of the sorbent surfaces. The sorption of non-coated and citrate-coated silver nanoparticles to all surfaces was nonlinear and best described by a Langmuir isotherm, indicating monolayer sorption of nanoparticles onto the surfaces; this can be explained by the blocking effect generated by particle-particle repulsion. In the presence of humic acid, sorption of nanoparticles to the surfaces was linear: both the nanoparticles and the surfaces become coated with humic acid, which masks the chemical functionalities of the surfaces and thereby changes the particle-surface interactions. For silver nanoparticle sorption from an unstable suspension, the sorption isotherms did not follow any classical sorption model, suggesting an interplay between aggregation and sorption. Citrate-coated and humic acid-coated silver nanoparticles showed depressed sorption compared with non-coated silver nanoparticles. For citrate-coated silver nanoparticles, the decrease can be explained by their more negative zeta potential compared with non-coated particles. For humic acid-coated nanoparticles, the sorption depression may be due to steric hindrance caused by free humic acid molecules coating the sorbent surface, or to competition for sorption sites between the nanoparticles and the free humic acid molecules in the suspension. Nanoparticle surface chemistry is thus an important factor determining the attachment of nanoparticles to surfaces, making the characterization of nanoparticle surfaces an essential step in studying their fate in the environment.
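For reference, the Langmuir isotherm mentioned above is q = (q_max * K * C) / (1 + K * C), whose plateau at q_max reflects monolayer saturation. The following sketch fits it to invented data with SciPy; none of the numbers come from this study:

```python
# Illustrative fit of the Langmuir isotherm to hypothetical batch-sorption
# data (invented values, not the data of this study).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C = np.array([0.1, 0.5, 1.0, 2.0, 5.0])   # equilibrium concentration
q = np.array([0.9, 3.2, 4.8, 6.1, 7.0])   # sorbed amount (made up)

(q_max, K), _ = curve_fit(langmuir, C, q, p0=(8.0, 1.0))
# Saturation at q_max is the signature of monolayer sorption; a linear
# isotherm (q = Kd * C), as observed with humic acid, has no such plateau.
print(q_max, K)
```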
Another aim of this study was to demonstrate the potential of chemical force microscopy for nanoparticle surface characterization. With this technique it was possible to distinguish between bare, citrate-coated, and humic acid-coated silver nanoparticles by measuring the adhesion forces between the nanoparticles and five AFM probes with different chemical functionalizations.
Central tasks of a university include the assessment, the explanation, and the promotion of academic performance (Heublein & Wolter, 2011, p. 215). In this context, achievement motivation is, alongside intellectual ability, considered a significant predictor of academic success (e.g., Schmidt-Atzert, 2005, p. 132; Steinmayr & Spinath, 2009, p. 80). This study therefore focuses on the motivational processes of 332 first-year students at the Hochschule der Bundesagentur für Arbeit and on the factors that foster their learning outcomes. With a response rate of 89%, the data are representative of the population. Using an ex-post-facto design in the form of a quantitative predictor-criterion approach (a special variant of a longitudinal design) with different data collection methods, such as a standardized self-report questionnaire, achievement tests, and official documents/records, the following research hypotheses were examined: the strength of achievement motivation depends both on expectancy components (academic self-concept, self-esteem, subjective grade expectation, hope of success, and fear of failure) and on incentive components (object-, activity-, and outcome-related incentives), and in turn influences academic performance via achievement-motivated behavior. It was further postulated that motivational variables exert a significant effect on academic performance even when other performance predictors, such as school-leaving grade, intelligence, emotional stability, and conscientiousness, are controlled for.
The publication of freely available and machine-readable information has increased significantly in recent years. In particular, the Linked Data initiative has been receiving a lot of attention. Linked Data is based on the Resource Description Framework (RDF): anybody can publish their data in RDF and link it to other datasets. The structure is similar to the World Wide Web, where individual HTML documents are connected by links. Linked Data entities are identified by URIs, which can be dereferenced to retrieve information describing the entity. Additionally, so-called SPARQL endpoints can be used to access the data with an algebraic query language (SPARQL) similar to SQL. By integrating multiple SPARQL endpoints, it is possible to create a federation of distributed RDF data sources which acts like one big data store.
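As a generic illustration (not code from this thesis), dereferencing a Linked Data URI with the Python rdflib library returns RDF statements describing the entity:

```python
# Hedged example of Linked Data dereferencing with rdflib; the URI is a
# public DBpedia resource chosen for illustration.
from rdflib import Graph

g = Graph()
# Dereferencing the URI retrieves RDF triples about the entity itself.
g.parse("http://dbpedia.org/resource/Berlin")

for predicate, obj in g.predicate_objects():
    print(predicate, obj)   # e.g. labels, types, links to other datasets
```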
In contrast to the federation of classical relational database systems, federated RDF data poses some particular challenges. RDF stores are accessed either via SPARQL endpoints or by resolving URIs. There is no coordination between RDF data sources, and machine-readable metadata about a source's data is commonly limited or not available at all. Moreover, there is no common directory for discovering RDF data sources or for asking which sources offer specific data. The federation of distributed and linked RDF data sources therefore has to deal with various challenges. In order to distribute queries automatically, suitable data sources have to be selected based on the query details and the information available about the sources. Furthermore, minimizing query execution time requires optimization techniques that take into account the execution cost of query operators and the network communication overhead of contacting individual data sources. This thesis discusses solutions for these problems and presents SPLENDID, a new federation infrastructure for distributed RDF data sources that uses optimization techniques based on statistical information.
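Manually combining results from individual endpoints hints at what a federation layer automates. The sketch below queries one public endpoint with the SPARQLWrapper library; the endpoint and query are generic examples, and SPLENDID itself works differently, selecting sources from statistics and optimizing the query plan:

```python
# Hedged sketch: querying a public SPARQL endpoint from a client, the manual
# analogue of what a federation infrastructure automates.
from SPARQLWrapper import SPARQLWrapper, JSON

def ask(endpoint, query):
    client = SPARQLWrapper(endpoint)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

cities = ask("https://dbpedia.org/sparql", """
    SELECT ?city WHERE { ?city a <http://dbpedia.org/ontology/City> } LIMIT 5
""")
# A federator would instead select suitable sources, push subqueries to each
# endpoint, and join the partial results itself.
print(cities)
```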
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as the primary sensor for perceiving the surroundings of the robot. First, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or further laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field for context sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches, and the resulting information about where other traffic participants move, and are going to move to, is used to perform inference in regions where the terrain surface is partially or completely invisible to the sensors. The algorithms are tested and validated on different autonomous robot platforms, and the evaluation is carried out with human-annotated ground-truth maps of millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled robots in natural outdoor environments.
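To make the MRF optimization concrete, the following toy Gibbs sampler smooths a binary obstacle/passable grid with a Potts prior. It is a sequential, much simplified stand-in for the parallel CPU/GPU sampler described above, with invented energies:

```python
# Toy Gibbs sampler for a binary terrain grid (obstacle vs. passable) with
# a smoothness prior; illustrative only, not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(labels, unary, beta=1.0):
    """labels: HxW array in {0,1}; unary[y,x,k]: cost of label k from features."""
    H, W = labels.shape
    for y in range(H):
        for x in range(W):
            energy = np.array([unary[y, x, 0], unary[y, x, 1]], dtype=float)
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    # Potts penalty: disagreeing with a neighbor costs beta.
                    energy += beta * (labels[ny, nx] != np.array([0, 1]))
            p = np.exp(-energy)
            p /= p.sum()
            labels[y, x] = rng.choice(2, p=p)   # resample from conditional
    return labels
```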
A systematic literature review is intended to cover the most important aspects of the crowdsourcing phenomenon. Since the range of possible research questions is relatively broad, the thesis focuses on the following questions: What precisely is meant by the term crowdsourcing? How can the phenomenon of crowdsourcing be distinguished from related concepts? Where are the commonalities and essential differences between the individual concepts? Which forms of crowdsourcing can be found in theory and practice? In which areas is crowdsourcing used? Which companies use crowdsourcing successfully? Which platforms exist to support crowdsourcing? Which goals and results are to be achieved by using crowdsourcing? How does the crowdsourcing process proceed, and into which phases can it be divided? What does value creation through crowdsourcing look like (a) in general and (b) specifically for companies? Which opportunities and potentials, as well as risks and limits, arise for companies? What can still be improved in the field of crowdsourcing in the future, i.e., in which areas is there still a need for research?
Fachbereich 4 (Computer Science) consists of twenty-five research groups led by professors, which collaborate on research and teaching in six institutes.
In each annual report, the research groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific output achieved. The following chapters list individual parameters that describe the department in quantitative terms with regard to third-party funding, teaching coverage, graduates, and publications.
The aim of this work is to develop a methodology that translates English, keyword-based queries into SPARQL and evaluates the resulting queries. From all generated SPARQL queries, the most relevant ones are to be identified and a favorite determined. The result is to be assessed in a user evaluation.
The subject of this thesis is the involvement of classical creativity techniques and IT tools in the different phases of the innovation process, as well as the integration of Design Thinking and TRIZ into that process. The aim was to define a specific innovation process based on existing innovation process models from the literature, to serve as a basis for analyzing the integration of creativity techniques, IT tools, Design Thinking, and TRIZ. In summary, the application of creativity techniques and IT tools is admissible and useful in every phase of the innovation process. This work shows that the Design Thinking method can be integrated into the early stages of the innovation process, and that the TRIZ process model, which differs from traditional innovation processes, can be combined with classical innovation processes.
Influence of an alignment tool on the usability of unsupervised eye-tracking systems
Eye gaze trackers are devices that estimate the direction of a person's gaze. Beyond usability testing, eye tracking also allows persons with reduced limb mobility to control or interact with a computer. The quality and availability of eye-tracking equipment have been increasing while costs have been decreasing. This development opens new markets for eye tracking as an additional input dimension for a variety of applications. Up to now, eye tracking has been supervised by qualified experts, who ensured that important conditions, such as the position in front of the eye-tracking device, the calibration, and the lighting conditions, were maintained during use.
This thesis examines an adjustment tool that helps the user take the correct position in front of the eye tracker and keep this position during the experiment. Furthermore, the accuracy under head movement was analyzed. In the experiment, a remote eye gaze tracker was used to control a game character in the video game 'Schau Genau!'. The goal was to determine whether the game is playable without the barriers of adjustment and calibration. The results show that adjusting in front of an eye tracker is not a problem, but keeping this position is: small changes of the head position after the calibration process lead to a loss of accuracy. Omitting the calibration and using someone else's calibration showed far larger deviations, and additional head movement increased the error rate and made controlling the character more difficult.
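A common way to quantify such deviations is to convert on-screen gaze error into degrees of visual angle. The small sketch below does this with assumed screen and distance values, not the study's actual numbers:

```python
# Illustrative conversion of on-screen gaze error (pixels) into degrees of
# visual angle; all input values are assumptions, not study data.
import math

def gaze_error_deg(error_px, px_per_cm, viewer_distance_cm):
    error_cm = error_px / px_per_cm
    return math.degrees(math.atan2(error_cm, viewer_distance_cm))

# e.g. a 40 px error on a ~38 px/cm screen viewed from 60 cm:
print(f"{gaze_error_deg(40, 38, 60):.2f} degrees of visual angle")
```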
This thesis uses a literature review of academic articles to identify the prevailing thematic characteristics and focal points of the topic of "BMI" (business model innovation), or "GMI" (Geschäftsmodellinnovation), worldwide. The identified relationships and connections are visualized and localized in order to establish a global view of the topic. Among other things, the articles included in the final set are examined for a correlation between BMI and controlling or management. As a final step, possible research gaps are derived.
This thesis investigates how model errors affect positioning accuracy and drivability when maneuvering with a driver assistance system. Particular emphasis is placed on determining error bounds. The question pursued is how large the input error may be for the assistance system to still exhibit sufficient quality in terms of precision and robustness. First, the errors are examined quantitatively on the basis of the kinematic model. Then, a qualitative examination is carried out by means of systematic experiments. To this end, a controller is developed that simulates a maneuver using the visual information provided by the assistance system.
Then, a method is presented for evaluating the maneuver against defined error bounds. To search a large space of possible error combinations efficiently, the probabilistic annealed particle filter is used. Finally, systematic experiments are carried out in a test environment. For further evaluation of the assistance system in a controlled environment, the system was ported to the RODOS simulation environment in cooperation with the Fraunhofer ITWM in Kaiserslautern.
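Schematically, an annealed particle filter refines a particle cloud over several layers with progressively sharpened weights and shrinking diffusion noise. The following condensed Python sketch conveys the idea only and is not the thesis implementation:

```python
# Schematic annealed-particle-filter search over model-error combinations
# (hypothetical, greatly condensed).
import numpy as np

rng = np.random.default_rng(1)

def anneal_search(score, dim, n=200, layers=5, sigma0=0.5):
    """score(X) -> (n,) array: how strongly each error vector X[i]
    violates the quality bounds; higher means a worse violation."""
    particles = rng.uniform(-1, 1, size=(n, dim))
    for layer in range(layers):
        w = np.exp(score(particles) * (layer + 1))   # sharpen weights per layer
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)             # resample proportionally
        sigma = sigma0 * 0.5 ** layer                # shrink diffusion noise
        particles = particles[idx] + rng.normal(0, sigma, (n, dim))
    # Return the error combination that violates the bounds the most.
    return particles[np.argmax(score(particles))]
```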
While the 1960s and 1970s still spoke of permanent education (Council of Europe), recurrent education (OECD), and lifelong education (UNESCO), over the past 20 years lifelong learning has become the single emblem for reforms in (pre-)primary, higher, and adult education systems and for international debates on education. Both highly industrialized and less industrialized countries embrace the concept as a response to the most diverse economic, social, and demographic challenges, in many cases motivated by international organizations (IOs).
Yet the literature on the nature of this influence, on the diffusion of the concept among IOs, and on their understanding of it is scant and usually focuses on a small set of actors. Based on longitudinal data and a large set of education documents, this work identifies a rapid diffusion of the concept across a heterogeneous, expansive, and dynamic international field of 88 IOs in the period 1990-2013, which is difficult to explain with functionalist accounts.
Based on the premises of world polity theory, this work argues that what diffuses resembles less the bundle of systemic reforms usually associated with the concept in the literature and more a surprisingly detailed model of a new actor: the lifelong learner.