Code package managers like Cabal track dependencies between packages. But packages rarely use all of the functionality that their dependencies provide. This leads to unnecessary compilation of unused parts and to spurious conflicts between package versions where no real conflict exists. In two case studies we show how relevant these two problems are. We then describe how we could avoid them by tracking dependencies not between packages but between individual code fragments.
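The gap between declared and actually used dependencies can be illustrated with a small sketch. This is not the thesis's method, only a hedged illustration: the function `unused_dependencies` and the package-to-module mapping are hypothetical, assuming we know which modules each package exposes and which modules the source code imports.

```python
def unused_dependencies(declared, provides, imported):
    """Return declared packages none of whose exposed modules are ever imported.

    declared: set of package names listed as dependencies
    provides: mapping from package name to the set of modules it exposes
    imported: set of modules actually imported by the source code
    """
    return {pkg for pkg in declared
            if provides.get(pkg, set()).isdisjoint(imported)}

# Hypothetical example: a package declares three dependencies ...
declared = {"containers", "bytestring", "text"}
provides = {
    "containers": {"Data.Map", "Data.Set"},
    "bytestring": {"Data.ByteString"},
    "text": {"Data.Text"},
}
# ... but its code only ever imports one module.
imported = {"Data.Map"}

print(sorted(unused_dependencies(declared, provides, imported)))
# → ['bytestring', 'text']
```

A fragment-level dependency tracker would refine this further, recording which definitions inside `Data.Map` are actually referenced rather than treating whole packages as units.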
Placing questions before or after the material constitutes different reading situations, and readers may adapt to them by applying appropriate reading strategies. The reading strategies induced by question location have been intensively explored in the context of text comprehension. (1) However, it is still unclear whether text plays the same role as pictures when readers apply different reading strategies. To answer this research question, three reading strategies were experimentally manipulated by displaying a question before or after blended text-and-picture materials: (a) unguided processing of text and pictures without any question; (b) information gathering to answer a question after prior experience with the text and pictures; (c) comprehending text and pictures to solve a question known in advance. (2) Moreover, it is debatable whether readers prefer text or pictures when the questions differ in difficulty. (3) Furthermore, it is still uncertain whether students from the higher school tier (Gymnasium) rely more on text or on pictures than students from the lower school tier (Realschule). (4) Finally, it has rarely been examined whether higher graders are better able than lower graders to apply reading strategies to text processing and picture processing.
Two experiments were conducted to investigate the use of text and pictures from the perspectives of task orientation, question difficulty, school tier, and grade. For a 2x2(x2x2x2) mixed design using eye tracking, participants were recruited from grade 5 (N = 72) and grade 8 (N = 72). In Experiment 1, thirty-six 5th graders came from the higher tier (Gymnasium) and thirty-six from the lower tier (Realschule); in Experiment 2, thirty-six 8th graders came from the higher tier and thirty-six from the lower tier. Participants were asked to comprehend materials combining text and pictures and to answer questions, while a Tobii XL60 eye tracker recorded their eye movements and their answers. Eye tracking indicators such as accumulated fixation duration, time to first fixation, and transitions between different Areas of Interest were analyzed and reported. The results reveal that students process text differently from pictures when they follow different reading strategies. (1) Consistent with Hypothesis 1, students mainly use text to construct their mental model in unguided spontaneous processing of text and pictures. They seem to rely mainly on the pictures as external representations when trying to answer questions after prior experience with the material, and they attend to both text and pictures when questions are presented before the material. (2) Inconsistent with Hypothesis 2, students attend more to both text and pictures as question difficulty increases; however, the increase in focus on pictures is larger than the increase for text when the presented question is difficult. (3) Contrary to Hypothesis 3, the current study finds that higher tier students did not differ from lower tier students in text processing; instead, students from the higher tier attend more to pictures than students from the lower tier. (4) Contrary to Hypothesis 4, 8th graders outperform 5th graders mainly in text processing.
Only a subtle difference is found between 5th graders and 8th graders in picture processing.
To sum up, text processing differs from picture processing when different reading strategies are applied. In line with the Integrative Model of Text and Picture Comprehension by Schnotz (2014), text is likely to play the major part in guiding the processing of meaning or general reading, whereas pictures serve as external representations for information retrieval or selective reading. When a question is difficult, pictures are emphasized because of their advantage in visualizing the internal structure of information. Compared with lower tier students (poorer problem solvers), higher tier students (better problem solvers) show their advantage in comprehending pictures rather than text, and 8th graders are more efficient than 5th graders in text processing rather than in picture processing. This suggests that in designing school curricula, more attention should be paid to students' competence in picture comprehension and text-picture integration.
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines, each with its own advantages and disadvantages. In previous work, we and our collaborators proposed an approach that combines the two to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating the metadata needed to enable a propagate operator, which implements a form of change propagation. To this end, we developed a system for annotating code similarities extracted from the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or must be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
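As an illustration only (the data model and the `propagate` function below are hypothetical simplifications, not the thesis implementation), change propagation over annotated similarities might look like this: each similarity link carries a maintenance-task annotation, and only links annotated `"auto"` are executed automatically, while the rest are returned for manual handling.

```python
def propagate(variants, links, change):
    """Propagate a fragment change along similarity links annotated "auto".

    variants: {variant_name: {fragment_id: source_text}}
    links:    list of ((variant, fragment), (variant, fragment), task),
              where task is "auto" or "manual"
    change:   (variant, fragment, new_text)
    Returns the targets left for manual maintenance.
    """
    var, frag, new_text = change
    variants[var][frag] = new_text       # apply the change itself
    manual_todo = []
    for a, b, task in links:
        if (var, frag) not in (a, b):
            continue                     # link does not involve the changed fragment
        target = b if a == (var, frag) else a
        if task == "auto":
            tv, tf = target
            variants[tv][tf] = new_text  # automatic propagation to the clone
        else:
            manual_todo.append(target)   # annotation says: user must handle this
    return manual_todo

# Hypothetical example: three variants share a cloned fragment "f".
variants = {"v1": {"f": "old"}, "v2": {"f": "old"}, "v3": {"f": "old"}}
links = [(("v1", "f"), ("v2", "f"), "auto"),
         (("v1", "f"), ("v3", "f"), "manual")]
todo = propagate(variants, links, ("v1", "f", "new"))
# variants["v2"]["f"] is now "new"; ("v3", "f") is returned in todo
```

In the actual system, the links and task annotations would be derived from the similarities mined from the repository history rather than written by hand.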
In this work, a framework for constructing an evaluation scheme for text-processing tools is developed. The evaluation scheme follows a model-dependent software evaluation approach; the model-dependent part focuses on the text-processing process, which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data, a German court document is used that covers two incidents of extortion racketeering from 2011 and 2012. The evaluation of six different tools shows that one tool produces very good results for the given dataset when compared with the manual analysis: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also produce good results with minor drawbacks. The biggest drawback for some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be extended with self-written code, which allows users with development experience to apply additional methods.
In this thesis we present an approach to track an RGB-D camera in 6DOF and construct 3D maps. We first acquire, register, and synchronize RGB and depth images. After preprocessing, we extract FAST features and match them between two consecutive frames. By depth projection we regain the z-value for the inlier correspondences. We then estimate the camera motion by least-squares 3D point set alignment of the correspondence set. This local motion estimate is incrementally applied to a global transformation. Additionally, we present methods to build maps from point cloud data acquired by an RGB-D camera. For map creation we use the OctoMap framework and optionally create a colored point cloud map. The system is evaluated with the widespread RGB-D benchmark.
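The least-squares point set alignment step can be sketched with the standard Kabsch/Umeyama procedure. This is a generic NumPy sketch of that well-known algorithm, not the thesis code; the function name and the synthetic usage data are illustrative.

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares rigid alignment: find R, t such that Q ≈ P @ R.T + t.

    P, Q: (N, 3) arrays of corresponding 3D points (the inliers after matching).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical usage: recover a known rotation about the z-axis.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.rand(20, 3)
Q = P @ R_true.T + t_true
R, t = estimate_rigid_motion(P, Q)
```

Chaining each such local estimate onto the running global transformation yields the incremental trajectory described above.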