Refine
Year of publication
- 2012 (36)
Document Type
- Doctoral Thesis (11)
- Diploma Thesis (8)
- Part of Periodical (8)
- Bachelor Thesis (4)
- Master's Thesis (4)
- Conference Proceedings (1)
Language
- English (36)
Keywords
- Petri-Netze (2)
- probability propagation nets (2)
- 101companies (1)
- API Analysis (1)
- API Migration (1)
- Africa (1)
- Afrika (1)
- Archivierung (1)
- Auditing (1)
- Ausbreitung (1)
- Automatisches Beweisverfahren (1)
- Benutzerverhalten (1)
- Beschreibungslogik (1)
- Bewertungskriterien (1)
- Bioassay (1)
- Bärlappe (1)
- Cloud Computing (1)
- Computerspiel (1)
- Content Management (1)
- Core Ontology on Multimedia (COMM) (1)
- Criteria Matrix (1)
- Daphnia longispina (1)
- Daphnia longispina complex (1)
- Daphnia longispina-Komplex (1)
- Design Pattern (1)
- Destiny (1)
- Distributed Environments (1)
- E-Hyper Tableau (1)
- E-KRHyper (1)
- ECMS 2012 (1)
- Einstellung (1)
- Einstellungen gegenüber bestimmten Filmeigenschaften (1)
- Ekel (1)
- Enterprise 2.0 (1)
- Enterprise Information Management (1)
- Entwurfsmuster (1)
- European Conference on Modelling and Simulation (1)
- Farnpflanzen (1)
- Ferns (1)
- Filmbewertung (1)
- GPGPU (1)
- Galerucinae (1)
- Genetische Variabilität (1)
- Genetischer Fingerabdruck (1)
- Germany (1)
- Hedonic (1)
- Hedonisch (1)
- Hypertableaux (1)
- IAT (1)
- Implicit Association Test (1)
- Java <Programmiersprache> (1)
- Justification (1)
- Konsistenz <Psychologie> (1)
- Kriterium (1)
- Linked Open Data (1)
- Logik (1)
- Lycophytes (1)
- Maschinelles Sehen (1)
- Mikrosatelliten-DNA (1)
- Monolepta (1)
- Movie evaluation criteria (1)
- Natural Feature Tracking (1)
- Nutzererleben (1)
- ODRL (1)
- Ontology alignment (1)
- Oriental region (1)
- Petrinetz (1)
- Pharmakokinetik (1)
- Policy Language (1)
- Populationsgenetik (1)
- Pragmatic (1)
- Pragmatisch (1)
- Proceedings (1)
- Product choice (1)
- Produktbewertung (1)
- Produktentscheidung (1)
- Produktwahl (1)
- Provenance (1)
- Prozedurale Synthese (1)
- Pteris (1)
- Rechtfertigung (1)
- Rechtfertigung <Philosophie> (1)
- Registratur (1)
- Revision (1)
- Rezeptionsforschung (1)
- Satelliten-DNS (1)
- Schema Information (1)
- Schlussfolgern (1)
- Selbstbeobachtung (1)
- Selbsteinschätzung (1)
- Skalenkonstruktion (1)
- Skalenvalidierung (1)
- Softwareentwicklung (1)
- Softwarewartung (1)
- Systematik (1)
- Taxonomy (1)
- Tokens (1)
- Tool Evaluation (1)
- Toxikologische Bewertung (1)
- University (1)
- User experience (1)
- Web Analytics (1)
- Web Analytics Framework (1)
- Web Mining (1)
- Web-application framework (1)
- Web-programming technologies (1)
- Wikipedia (1)
- Wildtiere (1)
- aquatic ecotoxicology (1)
- archiving (1)
- attitudes towards specific movie features (1)
- automatic behavioral cues (1)
- clonal diversity (1)
- description logic (1)
- design thinking (1)
- deutsche Hochschulen (1)
- disgust sensitivity (1)
- distinct object identifiers (1)
- effect assessment (1)
- entrepreneurial design thinking (1)
- entrepreneurial thinking (1)
- entrepreneurship education (1)
- expansion (1)
- faceted search (1)
- genotyping error (1)
- hazard prediction (1)
- hypertableaux (1)
- implicit-explicit consistency (1)
- klonale Diversität (1)
- leaf beetles (1)
- metadata formats (1)
- metadata standards (1)
- microsatellite DNA (1)
- microsatellite analysis (1)
- mobile devices (1)
- multimedia metadata (1)
- population genetics (1)
- priority effects (1)
- priority-Effekte (1)
- procedural content generation (1)
- scale construction (1)
- scale validation (1)
- shiq (1)
- social media data (1)
- systematics (1)
- teams (1)
- tool-integration (1)
- toxicokinetics (1)
- unique name assumption (1)
- video games (1)
- virtual goods (1)
- wildlife management (1)
Institute
- Fachbereich 4 (25)
- Institut für Informatik (4)
- Fachbereich 7 (3)
- Fachbereich 8 (3)
- Institut für Integrierte Naturwissenschaften, Abt. Biologie (3)
- Institute for Web Science and Technologies (3)
- Institut für Management (1)
- Institut für Wirtschafts- und Verwaltungsinformatik (1)
- Mathematisches Institut (1)
A taxonomic revision of the genus Pteris in tropical Africa revealed 26 species. An identification key to the species is provided. Morphological characters were used to prepare a cladistic analysis of the relevant taxa. Each species was evaluated against the IUCN Red List criteria: only Pteris mkomaziensis was assessed as Near Threatened; all other species were assessed as Least Concern. An inventory of the ferns of Kakamega Forest (Kenya) and Budongo Forest (Uganda) revealed 85 species in Kakamega and 66 in Budongo. Life-form spectra were analysed and the ferns were assessed for their value as bioindicators.
Dualizing marked Petri nets yields tokens on transitions (t-tokens). A transition marked with a t-token is strictly not enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. Conversely, t-tokens are moved by the firing of places. This permits flows of t-tokens that describe sequences of non-events. Their benefit for simulation is the possibility to model (and observe) causes and effects of non-events, e.g. the breakdown of some component.
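To make the dual firing rule concrete, here is a minimal sketch (my own illustration under the description above, not the authors' formalism) in which places fire and move t-tokens between transitions:

```python
# Minimal sketch of a dual marked net: transitions carry t-tokens,
# and a place fires by consuming a t-token from each transition in
# its preset and producing one on each transition in its postset.
class DualNet:
    def __init__(self, pre, post, marking):
        self.pre = pre          # pre[p]: transitions before place p
        self.post = post        # post[p]: transitions after place p
        self.m = dict(marking)  # t-tokens per transition

    def enabled(self, p):
        return all(self.m.get(t, 0) > 0 for t in self.pre[p])

    def fire(self, p):
        assert self.enabled(p), f"place {p} is not enabled"
        for t in self.pre[p]:
            self.m[t] -= 1
        for t in self.post[p]:
            self.m[t] = self.m.get(t, 0) + 1

# A t-token on transition "repair" flows to "resume" when place
# "machine_down" fires, tracing a sequence of non-events.
net = DualNet(pre={"machine_down": ["repair"]},
              post={"machine_down": ["resume"]},
              marking={"repair": 1})
net.fire("machine_down")
print(net.m)  # {'repair': 0, 'resume': 1}
```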
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, taken from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), probability propagation nets are constructed by joining a dependency net with (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that no algorithm is needed apart from the firing rule for Petri net transitions. Besides the structural importance of the Petri net duality there is also a semantic aspect: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
The first group revised within my study is Ochralea Clark, 1865 (Hazmi & Wagner 2010a). I have checked the type specimens of most species originally described in Ochralea, and there is no doubt that this genus is clearly distinct from Monolepta. Weise (1924) synonymised Galeruca nigripes (Olivier, 1808) with O. nigricornis Clark, 1865, and the valid name of the species is O. nigripes (Olivier, 1808). Of the ten species originally described in this genus, only this species remains valid, and O. pectoralis is a new synonym of O. nigripes. Additionally, Monolepta wangkliana Mohamedsaid, 2000 is very closely related to O. nigripes and needs to be transferred to Ochralea. The second genus whose revision is already published is Arcastes Baly, 1865 (Hazmi & Wagner 2010b). I have checked the genitalic characters of A. biplagiata and most of the type species of the other Arcastes. Arcastes biplagiata possesses a peculiar shape of the median lobe and asymmetrically arranged endophallic structures. These peculiar characters are very useful for delimiting this genus from the others. Therefore, only three valid species remain in Arcastes, while two new synonyms were found and four other species need to be transferred to other genera. When checking the genitalic characteristics of the type species Arcastes sanguinea, the median lobe as well as the spermatheca of this species showed strong differences to A. biplagiata. The species was redescribed and transferred to a monotypic new genus, Rubrarcastes Hazmi & Wagner, 2010c. The fourth genus that has already been revised is Neolepta Jacoby, 1884. It was originally described on the basis of only two species, N. biplagiata and N. fulvipennis. Jacoby did not designate a type species of the genus; Maulik (1936) did so later, designating N. biplagiata. In his original description Jacoby only commented that Neolepta is very close and similar to Monolepta Chevrolat, 1837 and Candezea Chapuis, 1879. Subsequent authors described a further eight species and transferred one species from Luperodes, bringing the total to eleven described species in Neolepta. I have checked the genitalic characters of the type, N. biplagiata, and found that the median lobe is not incised apically and that strongly sclerotised ventral carinae with an apical hook close to the apex occur. Of all described species, only two are closely related to the generotype, N. sumatrensis (Jacoby, 1884), new combination, and N. quadriplagiata Jacoby, 1886, which will remain in this group after the revision. All other species need to be transferred to other genera, including the newly described Paraneolepta and Orthoneolepta. The last paper of this thesis presents the results on Monolepta Chevrolat, 1837. Revising Monolepta from the entire Oriental Region, with about 260 described species names, is a longer-term project and not practicable within a PhD study; thus, in this work I have focused on the species of Monolepta known from the Sundaland area. A comprehensive revision, including the study of the primary types of the described species, has never been done for Monolepta from this sub-region, while new species have also been described in the last decade (e.g. Mohamedsaid 1993, 1997, 1998, 1999, 2000a,b, 2001, 2002, 2005).
On the basis of the most current species lists of Mohamedsaid (2001, 2004, 2005) and Kimoto (1990), the number of valid species described from this region is about 72. After my revision, only thirteen valid species remain in Monolepta in the sense of the generotype M. bioculata (Wagner 2007), while seven species were found to be new synonyms, three have already been transferred to other genera, and a further 49 species need to be transferred to other genera.
Wild boars are among the most widespread ungulates in the world. They are characterized by a remarkable adaptation to their environment, mainly owing to their omnivorous diet. The wild boar population in Germany has increased over the past three decades. Nowadays their high density causes problems in agricultural areas due to crop damage, and wild boars play a significant role as vectors of diseases such as classical swine fever. For effective population management, information on population size is of crucial importance. Various traditional methods exist to estimate population sizes, such as direct sightings, faecal counts or hunting harvests, but these provide only relative estimates and population trends. Absolute population sizes can be obtained with a capture-mark-recapture (CMR) approach. However, capturing wild boars is difficult to realize and costly in terms of personnel and field effort.
Furthermore, capture probabilities are heterogeneous due to the variable behaviour of individuals, influenced by the age, sex, and experience of the animals. Non-invasive genetic methods are a promising complement to the traditional methods for population size estimation, particularly for wild boar. These methods reduce stress and capture bias and increase the number of recaptures. Faeces proved to be a suitable DNA source for wild boar genotyping, owing to an almost equal capture probability across individuals. However, working with faeces entails difficulties such as low DNA quality and quantity and genotyping errors such as allelic dropout and false alleles.
The main aim of the present study was to develop a reliable, cost-efficient, reproducible and practicable method for wild boar genotyping. This method should provide a reliable dataset of genotypes obtained from the collected faeces samples. Individual identification forms the basis for an improved mark-recapture approach. As there is no sound method for absolute population counts in free-living wild boar, reference values for the validation of this new approach are missing. Therefore, different routines to reduce and to assess genotyping errors were compared within this thesis. To maximize the amplification rate, the storage, the extraction methods and the PCR procedure were optimised. A step-by-step procedure was evaluated to determine the minimum number of microsatellites (MS) required for reliable individual identification, including a test with family groups (female and embryo tissue) to distinguish even close relatives. A multiple-tubes approach, post-amplification checking and different correction procedures were applied to reduce genotyping errors. In order to quantify the real genotyping error rates (GER) of datasets derived from sampling in the Palatinate Forest in western Germany, different methods for GER determination were compared with each other, yielding GERs between 0% and 57.5%. As a consequence, stricter criteria for the multiple-tubes approach and an increased number of repetitions for homozygous samples were used. As an additional method validation, a blind test was implemented to verify the reliability of the genotyping and error-checking procedure. Finally, a strict and practicable protocol for the lab procedure was developed, beginning with faecal sample collection and ending with a reliable dataset of genotypes for each sample.
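As an illustration of the multiple-tubes principle, the following sketch shows one possible consensus rule over replicate PCRs; the thresholds (het_min, hom_min) are illustrative assumptions, not the exact criteria used in the thesis:

```python
# Hedged sketch of a multiple-tubes consensus rule: an allele is only
# accepted if it recurs across independent PCR replicates, guarding
# against allelic dropout and false alleles.
from collections import Counter

def consensus_genotype(replicates, het_min=2, hom_min=3):
    """replicates: one (allele_a, allele_b) call per independent PCR."""
    counts = Counter()
    for a, b in replicates:
        counts.update({a, b})            # count each allele once per tube
    top = [allele for allele, _ in counts.most_common(2)]
    if len(top) == 2 and all(counts[x] >= het_min for x in top):
        return tuple(sorted(top))        # confirmed heterozygote
    if top and counts[top[0]] >= hom_min and (
            len(top) == 1 or counts[top[1]] < het_min):
        return (top[0], top[0])          # confirmed homozygote
    return None                          # ambiguous: repeat the PCR

# e.g. three tubes, one showing allelic dropout of allele "B":
print(consensus_genotype([("A", "B"), ("A", "A"), ("A", "B")]))  # ('A', 'B')
```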
The results of the presented method were derived from two sampling periods in a 4000 ha area in the Palatinate Forest in Rhineland-Palatinate in December 2006 and 2007. Both yielded imprecise estimates with wide confidence intervals (CI) (e.g. for 2006 the population size estimate was 215 with a 95% CI of 156-314, and for 2007 it was 415 with a 95% CI of 318-561) due to small sample sizes (2006: n = 141; 2007: n = 326), numbers of successfully analysed samples (2006: n = 89; 2007: n = 156) and recapture numbers (2006: n = 12; 2007: n = 24). Furthermore, the population estimates, even at their lowest values, were considerably higher than previously assumed from hunting statistics, which indicates an ineffective hunting regime in the study area. To obtain more precise population size estimates in the future, sample sizes must be increased, because absolute and reliable estimates are highly desirable for wildlife management and the control of disease transmission. Nevertheless, the method for individual genotyping of wild boars evaluated in this thesis was successfully established, resulting in reliable datasets for population estimation modelling with a sufficiently low GER.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results but invited PhD students to present and discuss their ongoing research work. Within the traditional international group of virtual goods and ODRL researchers we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management on the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Given the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system), chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors do address the industry's concerns: both tools provide functionality for archiving data arising from online collaboration as well as business records management capabilities, but the integration of the two topics is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy, serving as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
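An illustrative sketch of the core idea (my own, not the thesis' actual code): generation is driven by a seed so that the same fictional galaxy can be re-created on demand instead of being stored.

```python
# Seed-deterministic procedural generation of one star system per
# galaxy sector; all names and distributions here are assumptions.
import random

def generate_system(galaxy_seed, sector_x, sector_y):
    # One reproducible RNG stream per sector of the galaxy.
    rng = random.Random(hash((galaxy_seed, sector_x, sector_y)))
    planets = []
    for i in range(rng.randint(0, 8)):
        planets.append({
            "orbit_au": round(0.4 + 0.3 * 2 ** i, 2),  # Titius-Bode-like spacing
            "has_life": rng.random() < 0.001,          # intelligent life is rare
        })
    return {"spectral_class": rng.choice("OBAFGKM"), "planets": planets}

print(generate_system(galaxy_seed=42, sector_x=3, sector_y=7))
```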
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
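The following sketch shows, under my own simplified formulation, how such entropy and mutual information measures can be computed from per-resource type sets and property sets:

```python
# Entropy of type sets, property sets, and their mutual information,
# computed from co-occurrence counts per resource (a sketch of the
# kind of measure described above, not the paper's exact pipeline).
import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def mutual_information(resources):
    """resources: iterable of (type_set, property_set), one per resource."""
    types, props, joint = Counter(), Counter(), Counter()
    for ts, ps in resources:
        types[frozenset(ts)] += 1
        props[frozenset(ps)] += 1
        joint[(frozenset(ts), frozenset(ps))] += 1
    # I(T;P) = H(T) + H(P) - H(T,P)
    return entropy(types) + entropy(props) - entropy(joint)
```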
Magnetic resonance (MR) tomography is an imaging method used to reveal the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique which enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo, and allows for making predictions about the structure and integrity of those connections. In clinical routine this modality finds application in the planning phase of neurosurgical operations, such as tumor resections. This is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. At the current resolution of diffusion-weighted data, single white matter axons generally cannot be resolved; the captured signal rather describes whole fiber bundles. Besides this, different complex fiber configurations often occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method which uses local information to reconstruct global connectivities, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods and their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme based on diffusion tensor metrics. The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along the potential tracking directions and is able to reproduce splitting fiber tracts.
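As a rough sketch of the maxima-extraction step (my own minimal formulation, not the thesis' algorithm), local maxima of a sphere-sampled ODF can be found by comparing each sample against its angular neighbourhood; the angular radius and relative threshold below are assumed parameters.

```python
# Find local maxima of an ODF sampled on n unit directions; a
# direction is a maximum if its value exceeds all neighbours within
# an angular radius and a fraction of the global maximum.
import numpy as np

def odf_maxima(dirs, odf, angle_deg=25.0, rel_thresh=0.5):
    dirs = np.asarray(dirs, dtype=float)   # (n, 3) unit vectors
    odf = np.asarray(odf, dtype=float)     # (n,) ODF values
    cos_r = np.cos(np.radians(angle_deg))
    maxima = []
    for i in range(len(dirs)):
        # ODFs are antipodally symmetric, so |dot| measures closeness
        neigh = np.abs(dirs @ dirs[i]) >= cos_r
        neigh[i] = False
        if odf[i] >= odf[neigh].max(initial=0.0) and odf[i] >= rel_thresh * odf.max():
            maxima.append(dirs[i])
    return maxima   # candidate fiber directions for tracking
```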
The objective of this contribution is to conceptually analyze the potential of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting the respective action fields of entrepreneurial learning. In addition, we show that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experience gained during the integration process.
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is several times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved using continuous depth and color data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. The accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
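For reference, the following is a textbook particle swarm optimizer in its generic form (not the modified variant by Khan et al. that the thesis uses); the inertia and acceleration coefficients are conventional default values.

```python
# Generic PSO: each particle tracks its personal best, and the swarm
# tracks a global best; velocities blend inertia with attraction to
# both bests, scaled by random factors.
import random

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]            # personal bests
    g = min(P, key=f)[:]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

# Example: minimize the sphere function in 6 dimensions (cf. 6DOF).
best = pso(lambda x: sum(v * v for v in x), dim=6)
```

In the pose-estimation setting, f would score a candidate 6DOF pose by how well it aligns the current depth/color frame with the model.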
The natural and the artificial environment of mankind is of enormous complexity, and our means of understanding this complex environment are restricted unless we make use of simplified (but not oversimplified) dynamical models, with the help of which we can explicate and communicate what we have understood in order to discuss among ourselves how to re-shape reality according to what our simulation models make us believe to be possible. Being both a science and an art, modelling and simulation is still one of the core tools of extended thought experiments, and its use is still spreading into new application areas, particularly as the increasing availability of massive computational resources allows for simulating more and more complex target systems.
In the early summer of 2012, the 26th European Conference on Modelling and Simulation (ECMS) once again brings together the best experts and scientists in the field to present their ideas and research, and to discuss new challenges and directions for the field.
The 2012 edition of ECMS includes three new tracks, namely Simulation-Based Business Research, Policy Modelling, and Social Dynamics and Collective Behaviour, and extends the classical Finance and Economics track with Social Science. It attracted more than 110 papers and 125 participants from 21 countries, with backgrounds ranging from electrical engineering to sociology.
This book was inspired by the event and was prepared to compile the most recent concepts, advances, challenges and ideas associated with modelling and computer simulation. It contains all papers carefully selected from the large number of submissions by the programme committee for presentation during the conference and is organised according to the still growing number of tracks which shaped the event. The book is complemented by two invited pieces from other experts that discuss an emerging approach to modelling and a specialised application. We hope these proceedings will serve as a reference for researchers and practitioners in this ever-growing field as well as an inspiration to newcomers to the area of modelling and computer simulation. The editors are honoured and proud to present you with this carefully compiled selection of topics and publications in the field.
In summary, this study revealed the widespread occurrence of antiviral drugs in the aquatic environment. Furthermore, it could be shown that the elimination of pharmaceuticals in both biological and oxidative treatment does not necessarily result in their mineralization but rather leads to the formation of a variety of transformation and oxidation products.
This is one of the first studies in which the fate, and in particular the transformation, of pharmaceuticals has been comprehensively investigated across almost the complete water cycle, from biological wastewater treatment to advanced oxidation processes with ozone. It was shown that the transformation of pharmaceuticals in the urban water cycle can ultimately result in the formation of toxic transformation products.
Audiences' movie evaluations have often been explored as effects of experiencing movies. However, little attention has been paid to the evaluative process itself and its determinants before, during, and after movie exposure. Moreover, until recently, research on the subjective assessment of specific film features (e.g., story, photography) has played a less important role. Adding to this research, this dissertation introduces the idea of subjective movie evaluation criteria (SMEC) and describes the construction of scales for their measurement and the corresponding validation process. Drawing on social cognition theories, SMEC can be defined as standards that viewers use for assessing the features of films, and conceptualized as mental representations of - or attitudes towards - specific movie features that guide cognitive and affective information processing of movies and the corresponding evaluative responses. Studies were conducted in five phases to develop and validate scales for measuring and examining the structure of SMEC. In Phase I, open-ended data were categorized and content-validated via a modified structure formation technique, and items were developed. Subsequently, in Phase II, participants completed an online questionnaire including revised and pilot-tested items. Exploratory factor analyses were applied iteratively to explore the latent structure and to select items. The resulting 8-factor model was cross-validated with different samples in Phase III using confirmatory factor analyses, which yielded good fit indices, thereby supporting structural validity. In Phase IV, latent state-trait analyses were carried out to examine the reliability, occasion specificity, common consistency, and method specificity of the eight dimensions. All factors - Story Verisimilitude, Story Innovation, Cinematography, Special Effects, Recommendation, Innocuousness, Light-heartedness, and Cognitive Stimulation - are reliable and largely determined by stable individual differences, albeit some of them also show substantial systematic but unstable effects due to the situation or interaction. These results provide evidence for the substantive validity of the SMEC scales. Finally, in Phase V, the nomological network of SMEC was explored (external validity, by examining correlations with related constructs such as film genre preferences and personality traits). Taken together, whereas the SMEC concept - compatible with contemporary social cognition theories - provides a framework to theorize and address research questions about the role of movie evaluation criteria and evaluative processes, the SMEC scales are the proper tool for investigating the role of these criteria and the processes they are involved in.
The purpose of this master's thesis is to enable the robot Lisa to process complex commands and extract the necessary information in order to perform a complex task as a sequence of smaller tasks. This is to be achieved by improving Lisa's understanding of her environment through the addition of semantics to the maps she builds. The complex command itself is expected to be already parsed; therefore, the way the input is processed to become a parsed command is out of the scope of this work. The maps Lisa builds will be improved by the addition of semantic annotations that can include any kind of information that might be useful for the performance of generic tasks, including (but not necessarily limited to) hierarchical classifications of locations, objects and surfaces. The processing of the command, together with some information about the environment, shall trigger the performance of a sequence of actions. These actions are expected to be among Lisa's currently implemented tasks and will rely on the existing modules that perform them.
Nevertheless, the aim of this work is not only to be able to use currently implemented tasks in a more complex sequence of actions but also to make it easier to add new tasks to the complex commands that Lisa can perform.
Population genetic structure in European Hyalodaphnia species: Monopolization versus gene flow
(2012)
Cyclic parthenogens display an alternation of asexual and sexual reproduction, which has consequences for the genetic structure of these organisms. The clonal diversity of cyclic parthenogenetic zooplankton populations is influenced by the size of the dormant egg bank, i.e., the amount of sexually produced dormant eggs that accumulate in the sediment, as these dormant eggs contribute new genetic variants to the populations. Further, clonal diversity is affected by clonal erosion over time, which reduces the number of different clones through stochastic and selective processes. Although freshwater invertebrates are good dispersers through their dormant stages, the influence of gene flow is assumed to be negligible, as the local population successfully monopolizes the available resources. As these populations quickly reach carrying capacity due to asexual reproduction, the first colonizing individuals are able to establish successfully in the habitat, resulting in a priority effect which hinders the invasion of new genotypes. Due to clonal selection and sexual reproduction, a population will locally adapt over time and will establish a dormant egg bank which facilitates fast re-colonization after a hostile period. This thesis evaluates the processes altering the population genetic structure of cyclic parthenogenetic zooplankton, with a special focus on the concept of monopolization as well as the counteracting effects of gene flow, using large-lake Daphnia species. Thirty-two variable microsatellite DNA markers were developed, and a subset of twelve markers was evaluated regarding their suitability for species assignment and hybrid class detection. With this marker set and an additional mitochondrial DNA marker, forty-four natural European populations of the species D. cucullata, D. galeata and D. longispina were studied. In D. galeata, most populations were characterized by low clonal diversities, which suggests a high influence of clonal erosion over the growing season and a low contribution from the dormant egg bank. Further, recent expansions as well as gene flow were detected, probably caused by the anthropogenic alteration of freshwater habitats, in particular the eutrophication of many European lakes. D. longispina and D. cucullata revealed a different genetic structure compared to D. galeata, with high genetic differentiation among populations. This indicates low levels of effective gene flow, which is in line with the predictions of monopolization. Further, high clonal diversities were found in populations of the two taxa, suggesting a high contribution from the dormant egg bank, while clonal erosion was often not detectable. In D. longispina, mitochondrial data revealed an ancient expansion which was probably initiated by the formation of glacial lakes after the last ice age.
In addition, in D. longispina not only clonal diversity but also genetic diversity was high, indicating that during the build-up of the studied populations the influence of gene flow was probably high. To better understand the processes that act on early populations, the population build-up was studied experimentally with regard to the temporal advantage of clones during invasion succession; the experiments revealed that priority effects shape the population structure of Daphnia species. However, in certain cases highly superior clones drove inferior clones to extinction independent of the temporal advantage the single clones had.
This clearly shows that not only the time of succession is important but also competitive strength. In conclusion, the results obtained show that the population genetic structure of cyclic parthenogenetic zooplankton species is affected by various processes. In addition to earlier studies, which mainly focus on local adaptation, clonal erosion and the size of the dormant egg bank to understand population genetic structure, this thesis shows that gene flow may be effective as well. During population build-up, the advantage of early-arriving individuals does not necessarily predict the outcome of population assembly, as additional genotypes may contribute to the population. Finally, the genetic structure of established populations may be severely affected by effective gene flow if severe environmental changes alter the habitat of the locally adapted population.
The World Wide Web (WWW) has become a very important communication channel, and its usage has grown steadily. Website owners' interest in identifying user behaviour has existed since Tim Berners-Lee developed the first web browser in 1990. But as the influence of the online channel today eclipses all other media, the interest in monitoring website usage and user activities has intensified as well. Gathering and analysing data about the usage of websites can help to understand customer behaviour, improve services and potentially increase profit.
It is further essential for ensuring effective website design and management, efficient mass customization and effective marketing. Web Analytics (WA) is the area addressing these considerations. However, changing technologies and evolving Web Analytics methods and processes present a challenge to organisations starting Web Analytics programmes. Because of a lack of resources in different areas, especially small and medium-sized enterprises (SME) as well as non-profit organisations and operators of other types of websites struggle to operate WA in an effective manner.
This research project aims to identify the existing gap between theory, tool possibilities and business needs when undertaking Web Analytics programmes. Therefore, the topic was approached in three different ways: through the academic literature, Web Analytics tools and an interpretative case study. The researcher utilized an action research approach to investigate Web Analytics, presenting a holistic overview and identifying the gaps that exist. The outcome of this research project is an overall framework which provides guidance for SMEs operating information websites on how to proceed in a Web Analytics programme.
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks based on typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
To assess the effect of organic compounds on the aquatic environment, organisms are typically exposed to toxicant solutions and the adverse effects observed are linked to the concentration in the surrounding media. As compounds generally need to be taken up into the organism and distributed to the respective target sites for the induction of effects, the internal exposure is postulated to best represent the observed effects.
The aim of this work is to contribute to an improved effect assessment of organic compounds by describing experimental and modelling methods to obtain information on the internal exposure of contaminants in organisms.
Chapter 2 details a protocol for the determination of the bioconcentration parameters for uptake (k1) and elimination (k2) of organic compounds in zebrafish (Danio rerio) eggs. This enables the simulation of the internal exposure in zebrafish eggs from an ambient exposure concentration over time. The accumulated contaminant amount in zebrafish eggs was also determined using a biomimetic extraction method. Different bioconcentration estimation models for the determination of the internal steady-state concentration of pharmaceutical compounds in fish from an environmental exposure are presented in Chapter 3. Bioconcentration factors were estimated from the compounds' octanol:water partition coefficient (KOW) to determine the internal exposure resulting from an ambient concentration.
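The uptake and elimination parameters correspond to the standard one-compartment toxicokinetic model (generic textbook form, assuming a constant water concentration C_w and zero initial internal concentration; the thesis' exact formulation may differ):

```latex
\frac{\mathrm{d}C_{\mathrm{int}}(t)}{\mathrm{d}t} = k_1\,C_w - k_2\,C_{\mathrm{int}}(t)
\quad\Longrightarrow\quad
C_{\mathrm{int}}(t) = \frac{k_1}{k_2}\,C_w\,\bigl(1 - e^{-k_2 t}\bigr),
\qquad
\mathrm{BCF} = \frac{C_{\mathrm{int}}(\infty)}{C_w} = \frac{k_1}{k_2}.
```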
To assess the integral bioavailable fraction of environmental contaminants from the water and sediment phases for rooted aquatic plants, the internal exposure in river-living Myriophyllum aquaticum plants was determined over time, as presented in Chapter 4. The plants were collected at different time points, and the accumulated organic contaminants were determined using a liquid extraction method.
In Chapter 5 a protocol was established to enable the non-invasive observation of effects in M. aquaticum plants exposed to contaminated sediments over time. Since toxicant effects are the result of all uptake and distribution processes to the target site and of the toxicodynamic processes leading to an observed effect during static exposure, information on the internal exposure could thus be gained from the temporal expression of effects.
Based on dual-process models of information processing, the present research addressed how explicit disgust sensitivity is re-adapted according to implicit disgust sensitivity via self-perception of automatic behavioral cues. Contrary to preceding studies (Hofmann, Gschwendner, & Schmitt, 2009), which concluded that there was a "blind spot" for self- but not for observer perception of automatic behavioral cues, in the present research a re-adaption process was found for self-perceivers and observers. In Study 1 (N = 75), the predictive validity of an indirect disgust sensitivity measure was tested with a double-dissociation strategy. Study 2 (N = 117) reinvestigated the hypothesis that self-perception of automatic behavioral cues, predicted by an indirect disgust sensitivity measure, leads to a re-adaption of explicit disgust sensitivity measures. Departing from the approach of Hofmann et al. (2009), the self-perception procedure was modified by (a) feeding back the behavior several times while a small number of cues had to be rated for each feedback condition, (b) using disgust sensitivity as a domain with clearly unequivocal cues of automatic behavior (facial expression, body movements) and describing these cues unambiguously, and (c) using a specific explicit disgust sensitivity measure in addition to a general explicit disgust sensitivity measure. In Study 3 (N = 130), the findings of Study 2 were replicated, and display rules and need for closure were additionally investigated as moderators of predictive validity and cue utilization. The moderator effects suggest that both displaying a disgusted facial expression and self-perception of one's own disgusted facial expression are subject to a self-serving bias, indicating that facial expression may not be an automatic behavior. Practical implications and implications for future research are discussed.
Web programming is a huge field of different technologies and concepts. Each technology implements a web-application requirement such as content generation or client-server communication. Different technologies within one application are organized by concepts, for example architectural patterns. This thesis describes an approach for creating a taxonomy of these web-programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web-application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web-application frameworks with the help of Wikipedia. Finally, the implementations created concerning web programming are documented.
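A hypothetical sketch of how raw classification data for such a taxonomy could be harvested from Wikipedia via the public MediaWiki API (the thesis' actual tooling is not shown in the abstract):

```python
# Fetch the Wikipedia categories of a framework's article; category
# memberships are one possible raw input for building a taxonomy.
import json
import urllib.parse
import urllib.request

def categories(title):
    url = ("https://en.wikipedia.org/w/api.php"
           "?action=query&prop=categories&format=json&titles="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    page = next(iter(data["query"]["pages"].values()))
    return [c["title"] for c in page.get("categories", [])]

print(categories("Django (web framework)"))
```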
Technical products have become more than practical tools to us. Mobile phones, for example, are a constant companion in daily life. Besides purely pragmatic tasks, they fulfill psychological needs such as relatedness, stimulation, competence, popularity, or security. Their potential for the mediation of positive experience makes interactive products a rich source of pleasure. Research acknowledged this: in parallel to the hedonic/utilitarian model in consumer research, Human-Computer Interaction (HCI) researchers broadened their focus from mere task-fulfillment (i.e., the pragmatic) to a holistic view, encompassing a product's ability for need-fulfillment and positive experience (i.e., the hedonic). Accordingly, many theoretical models of User Experience (UX) acknowledge both dimensions as equally important determinants of a product's appeal: pragmatic attributes (e.g., usability) as well as hedonic attributes (e.g., beauty). In choice situations, however, people often overemphasize the pragmatic, and fail to acknowledge the hedonic. This phenomenon may be explained by justification. Due to their need for justification, people attend to the justifiability of hedonic and pragmatic attributes rather than to their impact on experience. Given that pragmatic attributes directly contribute to task-fulfillment, they are far easier to justify than hedonic attributes. People may then choose the pragmatic over the hedonic, despite a true preference for the hedonic. This can be considered a dilemma, since people choose what is easy to justify and not what they enjoy the most. The present thesis presents a systematic exploration of the notion of a hedonic dilemma in the context of interactive products.
A first set of four studies explored the assumed phenomenon. Study 1 (N = 422) revealed a reluctance to pay for a hedonic attribute compared to a pragmatic attribute. Study 2 (N = 134) demonstrated that people (secretly) prefer a more hedonic product, but justify their choice by spurious pragmatic advantages. Study 3 (N = 118) confronted participants with a trade-off between hedonic and pragmatic quality. Even though the prospect of receiving a hedonic product was related to more positive affect, participants predominantly chose the pragmatic, especially those with a high need for justification. This correlation between product choice and perceived need for justification lent further support to the notion that justification lies at the heart of the dilemma. Study 4 (N = 125) explored affective consequences and justifications provided for hedonic and pragmatic choice. Data on positive affect suggested a true preference for the hedonic - even among those who chose the pragmatic product.
A second set of three studies tested different ways to reduce the dilemma by manipulating justification. Manipulations referred to the justifiability of attributes as well as the general need for justification. Study 5 (N = 129) enhanced the respective justifiability of hedonic and pragmatic choice by ambiguous product information, which could be interpreted according to latent preferences. As expected, enhanced justifiability led to an increase in hedonic but not in pragmatic choice. Study 6 (N = 178) manipulated the justifiability of hedonic choice through product information provided by a "test report", which suggested hedonic attributes as legitimate. Again, hedonic choice increased with increased justifiability. Study 7 (N = 133) reduced the general need for justification by framing a purchase as gratification. A significant positive effect of the gratification frame on purchase rates occurred for a hedonic but not for a pragmatic product.
Altogether, the present studies revealed a desire for hedonic attributes, even in interactive products, which often are still understood as purely pragmatic "tools". But precisely because of this predominance of pragmatic quality, people may hesitate to give in to their desire for hedonic quality in interactive products - at least, as long as they feel a need for justification. The present findings provide an enhanced understanding of the complex consequences of hedonic and pragmatic attributes, and indicate a general necessity to expand the scope of User Experience research to the moment of product choice. Limitations of the present studies, implications for future research as well as practical implications for design and marketing are discussed.
The goal of this bachelor's thesis is to implement and evaluate the "Simulating of Collective Misbelief" model in the NetLogo programming language. To this end, the model requirements have to be specified and implemented in the NetLogo environment. Further tool-related requirements have to be specified to enable the model to work in NetLogo. After implementation, several simulations will be conducted to answer the research question stated above.
The Nandi forests (South and North Nandi forests) are situated in the Rift Valley Province of Kenya, very close to Kakamega forest. Previous documents show that the Kakamega and Nandi forests were connected to each other, forming one big "U"-shaped forest block, until the beginning of the 1900s. Due to human pressure, there are currently three different forests where there was previously one block. Although they were once one forest, information on the Nandi forests is very scanty compared to that on Kakamega forest. The species composition and diversity as well as the plant communities and population structure of the Nandi forests have not been studied. Information is not available on the similarity of the South and North Nandi forests. Furthermore, the natural regeneration potential (seedling bank) of these forests is not well studied and documented. Hence this study aims to fill these gaps.
In this study a total of 76 quadrats (49 from South Nandi and 27 from North Nandi) were used to collect data. In the South Nandi forest, 27 of the quadrats were laid in the better-preserved part of the forest (at Kobujoi) and the remaining 22 in the heavily disturbed part (Bonjoge). The quadrats were arranged on transects of one to one and a half km running parallel to the slope. The distance between quadrats was 100 m, and transects were 500 m apart. The size of the main quadrat was 400 m2 (20 x 20 m), with five small plots (3 x 3 m) distributed over the four corners and the center. Every woody plant (climbers, shrubs and trees) taller than one meter and with more than two centimeters diameter at breast height (dbh) was measured and recorded. Seedlings and herbaceous plants were sampled in the smaller plots. Individual plants were identified at species level; when identification in the field was not possible, voucher specimens were prepared and later identified at the East African Herbarium, National Museum of Kenya, Nairobi. Clustering and ordination were performed using the ecological software packages PC-ORD and CANOCO, respectively. For both clustering and ordination, species abundance data were used. The Shannon diversity index and evenness were computed using PC-ORD, while similarity indices, Fisher's alpha, rarefaction and species richness estimation (nonparametric species richness estimators) were conducted using EstimateS. Indicator species analysis was undertaken using PC-ORD. Basal area and height class distributions at forest level or site level (Bonjoge and Kobujoi) and diameter (dbh) class distributions for selected tree species were computed to evaluate population structure.
Furthermore, the importance value (IV) of woody plant species was calculated. SPSS version 16 was used to undertake both parametric (when data were normally distributed) and nonparametric (when data were not normally distributed) comparisons of means, correlation and regression analysis.
In this study a total of 321 vascular plant species comprising 92 families and 243 genera were identified in the Nandi forests (both South and North Nandi). In South Nandi forest, 253 plant species from 82 families and 201 genera were recorded, while in North Nandi 181 species comprising 67 families and 155 genera were recorded. Second-order jackknife estimators gave the highest species richness estimates for both South and North Nandi forests, i.e. 284 and 209, respectively. In the highly disturbed and less disturbed parts of South Nandi forest, 138 and 172 vascular plant species were recorded, respectively. Asteraceae, Rubiaceae and Euphorbiaceae are the three most species-rich families of the Nandi forests. In terms of different diversity measures (i.e. alpha and beta diversity, Fisher's alpha, Shannon diversity and evenness indices), South Nandi is more diverse than North Nandi forest. Sörensen and Jaccard (classic) indices as well as their respective abundance-based variants showed that there is low species similarity between the South and North Nandi forests. The cluster analysis resulted in three different plant communities, and this result is supported by the ordination.
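For reference, the diversity and similarity indices named above follow standard formulas (PC-ORD and EstimateS compute these internally); a minimal sketch:

```python
# Standard ecological indices: Shannon diversity, Pielou's evenness,
# and the classic incidence-based Jaccard and Sörensen similarities.
import math

def shannon(abundances):
    total = sum(abundances)
    ps = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in ps)

def evenness(abundances):
    s = sum(1 for n in abundances if n > 0)   # species richness
    return shannon(abundances) / math.log(s) if s > 1 else 0.0

def jaccard(species_a, species_b):
    a, b = set(species_a), set(species_b)
    return len(a & b) / len(a | b)

def sorensen(species_a, species_b):
    a, b = set(species_a), set(species_b)
    return 2 * len(a & b) / (len(a) + len(b))
```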
Both South and North Nandi forest have an inverted-"J" height class distribution, showing that the larger proportion of woody plant individuals is found in the lower height classes. A similar pattern is observed when the diameters of all woody plants are considered together. However, different diameter class distributions (seven types) were identified when selected tree species were analyzed separately. It was observed that the basal area of South Nandi forest is significantly lower than that of North Nandi forest (Mann-Whitney U = 358, p < 0.001). Similarly, Bonjoge has a significantly lower basal area (t = 3.77, p < 0.01) than Kobujoi. The number of woody plant seedlings in South Nandi forest is significantly higher than in North Nandi (Mann-Whitney U = 362.5, p < 0.001). In the same way, Bonjoge has a significantly smaller number of seedlings than Kobujoi (t = 4.24, p < 0.001). Most species in both forests are able to resprout from stumps after physical damage; this aids the regeneration of the forests in addition to the seedling banks. This study fills some of the information gaps about the Nandi forests, especially regarding floristic composition, population structure, natural regeneration and human impacts on this ecosystem.
Software projects typically rely on several external libraries. The interface provided by such a library is called an API (application programming interface). APIs often evolve over time, thereby implying the need to adapt applications that use them. There are also reasons which may call for the replacement of one library by another, which likewise results in a need to adapt the applications where the library is replaced. The process of adapting applications to use a different API is called API migration. Doing API migration manually is a cumbersome task, and automated API migration is an active research field. A related field of research is API analysis, which can also provide data for developing API migration tools.
The following thesis investigates techniques and technologies for API analysis and API migration frameworks. To this end, design patterns are leveraged. These patterns are based on experience with API analysis and migration within the Software Languages Team.
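As a small, hedged illustration of what API migration amounts to in code (the names are hypothetical, not from the thesis), a wrapper can preserve old call sites while delegating to a replacement library:

```python
# Hypothetical adapter: call sites written against an old logging
# API keep working while the calls are delegated to a new library.
class LegacyLoggerAdapter:
    """Exposes the old interface log(level, msg) on top of a new logger."""

    def __init__(self, new_logger):
        self._new = new_logger   # object from the replacement library

    def log(self, level, msg):
        # Map old severity strings onto the new API's methods.
        handler = {"info": self._new.info,
                   "warn": self._new.warning,
                   "error": self._new.error}.get(level, self._new.info)
        handler(msg)

# Existing application code can stay unchanged:
import logging
logger = LegacyLoggerAdapter(logging.getLogger("app"))
logger.log("info", "migrated without touching call sites")
```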
In automated theorem proving, some problems require information about the inequality of certain constants. In most cases this information is provided by adding facts which explicitly state that two constants are unequal. Depending on the number of constants, a huge amount of such facts can clutter the knowledge base and distract the author and readers of the problem from its actual proposition. In most cases it is safe to assume that a larger knowledge base reduces the performance of a theorem prover, which is another drawback of explicit inequality facts. Using the unique name assumption in these reasoning tasks renders the introduction of inequality facts obsolete, as the unique name assumption states that two constants are identical iff their interpretations are identical. Implicit handling of non-identical constants makes the problems easier to comprehend and reduces the execution time of reasoning. In this thesis we show how to integrate the unique name assumption into the E-hyper tableau calculus and prove that the modified calculus is sound and complete. The calculus is implemented in the E-KRHyper theorem prover, and we show by empirical evaluation that the modified implementation, which is able to use the unique name assumption, is superior to the traditional version of E-KRHyper.
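To see why explicit inequality facts clutter the knowledge base: n constants require n(n-1)/2 pairwise facts, all of which the unique name assumption makes unnecessary. A small illustration (the neq(...) clause syntax is hypothetical):

```python
# n constants need n*(n-1)/2 explicit inequality facts; under the
# unique name assumption, none are needed.
from itertools import combinations

def inequality_facts(constants):
    return [f"neq({a}, {b})" for a, b in combinations(constants, 2)]

facts = inequality_facts([f"c{i}" for i in range(100)])
print(len(facts))  # 4950 facts for just 100 constants
```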
In this report I describe the results of my investigations into extending the LogAnswer system with user-specific profile information. LogAnswer is a natural-language open-domain question answering system; that is, it answers questions on arbitrary topics and returns concrete (preferably concise and correct) answers. The system is developed in a joint project of the Artificial Intelligence Research Group of Professor Ulrich Furbach at the University of Koblenz-Landau and the Intelligent Information and Communication Systems (IICS) group of Professor Hermann Helbig at the FernUniversität in Hagen. The motivation for my work was the idea that the process of finding an answer can be optimized if the topic area a question aims at can be determined in advance. To this end, I attempted to determine users' areas of interest based on profile information. The Semantic Desktop system NEPOMUK was used to obtain this profile information. NEPOMUK is used to structure all the data, documents and information a user has on his computer; for this purpose the system employs a so-called Personal Information Model (PIMO) in the form of an ontology. Among other things, this ontology contains a class "Topic", which formed the most important basis for building the user profiles used in my work. Concretely, the RDF query language SPARQL was used to filter a list of all topics relevant to the user from the ontology. The central idea of my work was then to employ this profile information to optimize the ranking of answer candidates. For every question posed, LogAnswer extracts up to 200 potentially relevant text passages from the German Wikipedia. These passages are ranked on the basis of features (such as lexical overlap between question and passage), since not all candidates can be processed within the available time limit.
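A minimal sketch of how the topic list mentioned above could be filtered from a PIMO-style ontology with SPARQL (hedged: the file name is invented and the query is illustrative, not the one used in the work):

    # Hedged sketch: querying a PIMO ontology for the user's topics with
    # rdflib. The ontology file is a hypothetical export of the user's PIMO.
    from rdflib import Graph

    g = Graph()
    g.parse("user_pimo.rdf")  # hypothetical PIMO export

    query = """
    PREFIX pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?topic a pimo:Topic ;
               rdfs:label ?label .
    }
    """

    profile_topics = [str(row.label) for row in g.query(query)]
    print(profile_topics)  # e.g. ['Football', 'Chess', ...]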
My approach pursued the goal of extending this algorithm with user profiles so that answer candidates containing information relevant to the user are ranked higher. To implement this idea, a method had to be found to determine whether an answer candidate matches the profile. Since the information contained in a passage in most cases refers to the overarching topic of the article without explicitly mentioning the article's name, my implementation considered the article name in order to determine which topic area a passage provides information about. As an additional aid, the DBpedia ontology was used, which contains the information of Wikipedia in structured RDF format. With the help of this ontology it was possible to assign each article to categories, which were then compared with the keywords contained in the profile. To examine the effects of the approach on the ranking procedure, several test runs with 200 test questions each were carried out. The first test set consisted of randomly selected questions, which were tested with my own user profile. This test run yielded hardly any usable results, since for only 29 of the tested questions could an answer candidate be related to the profile at all. Moreover, a potential improvement of the results could be observed for only one of these 29 questions, which led to the conclusion that the use of profile data is not suitable for use cases in which the questions show no correlation with the profile used.
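The category lookup can be illustrated as follows (a hedged sketch against the public DBpedia SPARQL endpoint; the example article and profile keywords are invented, and the actual matching procedure of the work is not reproduced here):

    # Hedged sketch: fetching a Wikipedia article's DBpedia categories and
    # matching them against profile keywords. Article and keywords are
    # invented for illustration.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?cat WHERE {
            <http://dbpedia.org/resource/Chess> dcterms:subject ?cat .
        }
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    categories = [b["cat"]["value"] for b in bindings]

    profile_keywords = ["chess", "football"]  # assumed profile entries
    match = any(kw in cat.lower()
                for cat in categories for kw in profile_keywords)
    print(match)  # True if the article falls into a profile topic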
Since the basic assumption of my work was that users primarily ask questions about the areas of interest that can be derived from their profile, the further test runs were meant to examine exactly this case. For this purpose, 200 test questions from the domain of sports were selected and tested with a profile containing keywords on various kinds of sports. The tests with the sports questions were considerably more meaningful. Here, too, the results indicated that the approach has no great potential for improving the ranking. A closer examination of some selected examples showed, however, that the integration of profile data can indeed improve the results for certain use cases, such as open questions for which there is more than one correct answer. Furthermore, it was found that many of the poor results are due to inconsistencies in the DBpedia ontology and to fundamental problems in dealing with natural-language knowledge bases.
The conclusion of my work is that the approach to integrating profile information presented here is not suitable for the current use case of LogAnswer, since mostly factual knowledge from very diverse domains is queried and open questions account for only a small share.
Robotics research today is primarily about enabling autonomous, mobile robots to seamlessly interact with arbitrary, previously unknown environments. One of the most basic problems to be solved in this context is the question of where the robot is and what the world around it and in previously visited places looks like: the so-called simultaneous localization and mapping (SLAM) problem. We present a GraphSLAM system, a graph-based approach to this problem. The system consists of a frontend and a backend: the frontend's task is to incrementally construct a graph from the sensor data that models the spatial relationship between measurements. These measurements may be contradictory, and therefore the graph is in general inconsistent. The backend is responsible for optimizing this graph, i.e., for finding a configuration of the nodes that is least contradictory. The nodes represent poses, which do not form a regular vector space due to the rotations they contain. We respect this fact by treating them as what they really are mathematically: manifolds. This leads to a very efficient and elegant optimization algorithm.
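A minimal sketch of the manifold idea for 2D poses (illustrative only, not the thesis's implementation): local optimization updates are applied with a "box-plus" operator that keeps the rotational component on its manifold, here by wrapping the angle:

    # Illustrative sketch: treating 2D poses (x, y, theta) as a manifold.
    # Updates are applied via a box-plus operator that wraps the rotation
    # angle instead of treating it as an ordinary vector component.
    import math

    def boxplus(pose, delta):
        """Apply a local update delta = (dx, dy, dtheta) to a pose."""
        x, y, theta = pose
        dx, dy, dtheta = delta
        theta_new = theta + dtheta
        # Wrap the angle back into (-pi, pi] so it stays on the circle.
        theta_new = math.atan2(math.sin(theta_new), math.cos(theta_new))
        return (x + dx, y + dy, theta_new)

    # A naive vector-space update would let theta grow without bound; the
    # manifold update keeps the rotation well-defined:
    print(boxplus((1.0, 2.0, 3.0), (0.1, -0.2, 0.5)))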
Augmented Reality means extending a real environment with virtual, mostly graphical, content. Often, however, the virtual content is merely an overlay on the scene and does not interact with the scene's real components. This results in an authenticity problem for Augmented Reality applications. This thesis considers Augmented Reality in a special environment that makes a more authentic presentation possible. The goal of this thesis was to create a system that augments drawings with virtual content using Augmented Reality techniques. By building an internal representation, the application is meant to be able to let the virtual scene elements interact with the drawing. To this end, various methods from the fields of pose tracking and sketch recognition were discussed and selected for implementation in a prototype system. An Android smartphone serves as the target hardware. The context of the drawings is a dungeon map as found in role-playing games. The virtual content takes the form of inhabitants of the dungeon, which are managed by an agent simulation. The agent simulation is the subject of a separate diploma thesis [18]. For pose tracking, ARToolkitPlus was used, an optical tracking system that works on the basis of markers. The sketch recognition is responsible for detecting and interpreting the contents of the drawing. For this, a custom approach was implemented that combines techniques from different sketch recognition systems. The evaluation focuses on the technical aspects of the system that are important for an authentic augmentation of the drawing with virtual content.
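To illustrate the principle behind marker-based pose tracking (a hedged sketch: it uses OpenCV's solvePnP rather than ARToolkitPlus, and the marker size, corner positions and camera intrinsics are invented):

    # Hedged illustration of marker-based pose estimation, the principle
    # underlying systems such as ARToolkitPlus. Not the thesis's code.
    import numpy as np
    import cv2

    marker_size = 0.05  # assumed marker edge length in meters
    # 3D corner coordinates of a square marker in its own frame:
    object_points = np.array([
        [-marker_size / 2,  marker_size / 2, 0],
        [ marker_size / 2,  marker_size / 2, 0],
        [ marker_size / 2, -marker_size / 2, 0],
        [-marker_size / 2, -marker_size / 2, 0],
    ], dtype=np.float32)

    # 2D pixel positions of the detected marker corners (made-up values):
    image_points = np.array(
        [[310, 220], [370, 225], [365, 285], [305, 280]], dtype=np.float32)

    # Assumed pinhole camera intrinsics:
    camera_matrix = np.array(
        [[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist_coeffs = np.zeros(5)  # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    print(ok, rvec.ravel(), tvec.ravel())  # camera pose w.r.t. the marker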
Modern Internet and intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data and provide improved flexibility. The gain in flexibility, however, also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperation between organizations, accounting for the processing of data and restricting the actions performed on the data may be legally and contractually required. In the Internet and intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution that supports a data-centric view on distributed data processing. We also present a solution for provenance-aware policies and a formal specification of obligations offering a procedure to decide whether obligatory processing steps can be met in the future.
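The flavor of such a feasibility decision can be conveyed by a toy sketch (heavily hedged: the policy structure and time model below are invented for illustration and do not reflect the formalism of the thesis):

    # Toy sketch of an obligation-feasibility check: an obligatory
    # processing step can still be met if it fits before its deadline.
    # The data structure and time model are invented for illustration.
    from datetime import datetime, timedelta

    obligation = {
        "action": "delete_data",
        "deadline": datetime(2012, 12, 31),
        "estimated_duration": timedelta(days=2),
    }

    def can_still_be_met(obligation, now):
        return now + obligation["estimated_duration"] <= obligation["deadline"]

    print(can_still_be_met(obligation, datetime(2012, 12, 1)))   # True
    print(can_still_be_met(obligation, datetime(2012, 12, 30)))  # False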
Only little information is available about the diffusion of cloud computing in German higher educational institutions. A better understanding of the state of the art in this field would support the modernization of higher educational institutions in Germany and allow the development of more adequate cloud products and more appropriate business models for this niche. For this purpose, a literature review on cloud computing and IT diffusion will be conducted, and an empirical investigation with an online questionnaire addressed to higher educational institutions in Germany will be performed to illustrate the state of the art of cloud computing in German higher educational institutions, as well as the threats and opportunities that employees of the institutions' data centers perceive in connection with the usage of the cloud.
In addition, different experts from universities and businesses will be interviewed to complement the knowledge and information collected through the online questionnaire and during the research phase. The expected results will serve to create a recommendation for higher educational institutions in Germany on whether or not they should migrate to the cloud, and to introduce a list of guiding questions on critical issues to consider before using cloud computing technologies.