Robotics research today is primarily about enabling autonomous, mobile robots to seamlessly interact with arbitrary, previously unknown environments. One of the most basic problems to be solved in this context is the question of where the robot is and what the world around it, and in previously visited places, looks like: the so-called simultaneous localization and mapping (SLAM) problem. We present a GraphSLAM system, a graph-based approach to this problem. The system consists of a frontend and a backend: the frontend's task is to incrementally construct a graph from the sensor data that models the spatial relationships between measurements. These measurements may contradict each other, so the graph is in general inconsistent. The backend is responsible for optimizing this graph, i.e. for finding a configuration of the nodes that is least contradictory. The nodes represent poses, which do not form a regular vector space because of the rotations they contain. We respect this fact by treating them as what they really are mathematically: manifolds. This leads to a very efficient and elegant optimization algorithm.
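The following is a minimal Python sketch of the manifold idea described above for 2D poses: increments are applied with a "boxplus" operator and residuals are computed with a "boxminus" operator, so the contained rotation never leaves its valid range. It illustrates the concept only and is not the optimization backend of the presented system.

```python
import numpy as np

def wrap_angle(theta):
    """Map an angle to the range [-pi, pi)."""
    return (theta + np.pi) % (2.0 * np.pi) - np.pi

def boxplus(pose, delta):
    """Apply a local increment delta = [dx, dy, dtheta] to an SE(2) pose [x, y, theta]."""
    return np.array([pose[0] + delta[0],
                     pose[1] + delta[1],
                     wrap_angle(pose[2] + delta[2])])

def boxminus(pose_a, pose_b):
    """Local difference of two poses; the rotational part is wrapped, not subtracted naively."""
    diff = np.asarray(pose_a, dtype=float) - np.asarray(pose_b, dtype=float)
    diff[2] = wrap_angle(diff[2])
    return diff

# An optimizer that updates nodes via boxplus and evaluates edge errors via boxminus
# never produces angles outside [-pi, pi), which is the point of the manifold treatment.
print(boxplus([0.0, 0.0, 3.0], [0.1, 0.0, 0.3]))   # rotation wraps to about -2.98 rad
```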
In summary, this study revealed the widespread occurrence of antiviral drugs in the aquatic environment. Furthermore, it could be shown that the elimination of pharmaceuticals in both biological and oxidative treatment does not necessarily result in their mineralization but rather leads to the formation of a variety of transformation and oxidation products.
This is one of the first studies in which the fate and in particular the transformation of pharmaceuticals has been comprehensively investigated in almost the complete water cycle, from biological wastewater treatment to advanced oxidation processes via ozone. It was shown that the transformation of pharmaceuticals in the urban water cycle can ultimately result in the formation of toxic transformation products.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
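As a hedged illustration of the conditioning technique mentioned above (not of the PPN formalism itself), the following Python sketch resolves a small loop A -> {B, C} -> D by instantiating the cutset variable A, doing inference in the then singly connected remainder, and reweighting by A's prior. All probability values are made up for the example.

```python
# Made-up conditional probability tables for a loopy network A -> {B, C} -> D.
P_A = 0.3                                        # P(A = True)
P_B_given_A = {True: 0.8, False: 0.1}            # P(B = True | A)
P_C_given_A = {True: 0.6, False: 0.2}            # P(C = True | A)
P_D_given_BC = {(True, True): 0.99, (True, False): 0.7,
                (False, True): 0.65, (False, False): 0.05}

def pr(p_true, value):
    return p_true if value else 1.0 - p_true

def p_d_given_a(a, d=True):
    """With A instantiated, the remaining network is singly connected and easy to evaluate."""
    return sum(pr(P_B_given_A[a], b) * pr(P_C_given_A[a], c) * pr(P_D_given_BC[(b, c)], d)
               for b in (True, False) for c in (True, False))

# Conditioning: combine the per-instantiation results, weighted by the prior of A,
# and renormalize to obtain the posterior P(A | D = True).
weights = {a: pr(P_A, a) * p_d_given_a(a) for a in (True, False)}
z = sum(weights.values())
print({a: w / z for a, w in weights.items()})
```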
Little information is available about the diffusion of cloud computing in German higher educational institutions. A better understanding of the state of the art in this field would support the modernization of higher educational institutions in Germany and allow the development of more adequate cloud products and more appropriate business models for this niche. For this purpose, a literature review on Cloud Computing and IT diffusion will be conducted, and an empirical investigation with an online questionnaire addressed to higher educational institutions in Germany will be performed to illustrate the state of the art of Cloud Computing in German higher educational institutions as well as the threats and opportunities that employees of higher educational institutions' data centers perceive in connection with the usage of the cloud.
In addition, different experts from universities and businesses will be interviewed to complement the knowledge and information collected through the online questionnaire and during the research phase. The expected results will serve to create a recommendation for higher educational institutions in Germany on whether or not they should migrate to the cloud, and to provide a list of guiding questions on critical issues to consider before using cloud computing technologies.
Augmented reality means extending a real environment with virtual, usually graphical, content. Often, however, the virtual content is only an overlay on the scene and does not interact with its real parts, which poses an authenticity problem for augmented reality applications. This thesis considers augmented reality in a special environment that makes a more authentic presentation possible. The goal of this thesis was to create a system that augments drawings with virtual content using augmented reality techniques. By building an internal representation, the application should be able to let the virtual scene elements interact with the drawing. To this end, various methods from the fields of pose tracking and sketch recognition were discussed and selected for implementation in a prototype system. The target hardware is an Android smartphone. The context of the drawings is a dungeon map as it appears in role-playing games. The virtual content takes the form of inhabitants of the dungeon, which are managed by an agent simulation. The agent simulation is the subject of a separate diploma thesis [18]. For pose tracking, ARToolkitPlus was used, an optical tracking system that works on the basis of markers. The sketch recognition is responsible for detecting and interpreting the contents of the drawing; for this, a custom approach was implemented that combines techniques from several sketch recognition systems. The evaluation focuses on the technical aspects of the system that are important for an authentic augmentation of the drawing with virtual content.
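A minimal Python sketch of the underlying placement step: a marker tracker such as ARToolkitPlus delivers a pose matrix from marker to camera coordinates, and virtual content defined relative to the marker is transformed and projected with it. The intrinsics and values below are hypothetical; this is not the code of the prototype.

```python
import numpy as np

def project_virtual_point(point_marker, marker_pose, camera_matrix):
    """
    Place a virtual point defined in marker coordinates into the camera image.
    marker_pose: 4x4 matrix (marker -> camera) as delivered by a marker tracker.
    camera_matrix: 3x3 pinhole intrinsics (assumed here, not taken from the thesis).
    """
    p = marker_pose @ np.append(point_marker, 1.0)   # marker -> camera coordinates
    uvw = camera_matrix @ p[:3]                       # perspective projection
    return uvw[:2] / uvw[2]                           # pixel coordinates

# Hypothetical values: a marker 50 cm in front of the camera and simple intrinsics.
pose = np.eye(4); pose[2, 3] = 0.5
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
print(project_virtual_point(np.array([0.02, 0.0, 0.0]), pose, K))
```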
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
Modern Internet and Intranet techniques, such as Web services and virtualization, facilitate the distributed processing of data and provide improved flexibility. This gain in flexibility, however, also incurs disadvantages. Integrated workflows forward and distribute data between departments and across organizations. The data may be affected by privacy laws, contracts, or intellectual property rights. Under such circumstances of flexible cooperation between organizations, accounting for the processing of data and restricting the actions performed on the data may be legally and contractually required. In the Internet and Intranet, monitoring mechanisms provide means for observing and auditing the processing of data, while policy languages constitute a mechanism for specifying restrictions and obligations.
In this thesis, we present our contributions to these fields by providing improvements for auditing and restricting the data processing in distributed environments. We define formal qualities of auditing methods used in distributed environments. Based on these qualities, we provide a novel monitoring solution supporting a data-centric view on the distributed data processing. We present a solution for provenance-aware policies and a formal specification of obligations offering a procedure to decide whether obligatory processing steps can be met in the future.
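To make the idea of provenance-aware policies concrete, here is a deliberately simplified Python sketch, not the policy language or monitoring solution developed in the thesis: the processing history of a data item is kept as provenance records, and a hypothetical restriction permits forwarding only after an anonymization step appears in that history.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    actor: str    # organization or service that processed the data
    action: str   # e.g. "collect", "anonymize", "forward"

def may_forward(history):
    """Hypothetical restriction: forwarding is allowed only after anonymization was recorded."""
    return any(record.action == "anonymize" for record in history)

history = [ProvenanceRecord("hospital", "collect"),
           ProvenanceRecord("analytics-dept", "anonymize")]
print(may_forward(history))      # True: the provenance shows an anonymization step
print(may_forward(history[:1]))  # False: forwarding right after collection would violate the policy
```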
Software projects typically rely on several external libraries. The interface provided by such a library is called an API (application programming interface). APIs often evolve over time, implying the need to adapt applications that use them. There are also reasons that may call for the replacement of one library by another, which likewise results in a need to adapt the applications in which the library is replaced. The process of adapting applications to use a different API is called API migration. Performing API migration manually is a cumbersome task, and automated API migration is an active research field. A related field of research is API analysis, which can also provide data for developing API migration tools.
The following thesis investigates techniques and technologies for API analysis and API migration frameworks. To this end, design patterns are leveraged. These patterns are based on experience with API analysis and migration within the Software Languages Team.
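As a small, hedged illustration of what one automated API migration step can look like (a sketch, not the frameworks or design patterns developed in the thesis), the following Python fragment rewrites calls of a hypothetical old API to a hypothetical replacement using a syntax-tree transformation; the module and function names are invented.

```python
import ast

# Hypothetical mapping from calls of an old API to its replacement.
CALL_MAPPING = {"old_lib.parse": "new_lib.load", "old_lib.dump": "new_lib.save"}

class CallRewriter(ast.NodeTransformer):
    """Rewrite fully qualified attribute accesses according to CALL_MAPPING."""
    def visit_Attribute(self, node):
        self.generic_visit(node)
        if isinstance(node.value, ast.Name):
            qualified = f"{node.value.id}.{node.attr}"
            if qualified in CALL_MAPPING:
                new_module, new_name = CALL_MAPPING[qualified].split(".")
                return ast.copy_location(
                    ast.Attribute(value=ast.Name(id=new_module, ctx=ast.Load()),
                                  attr=new_name, ctx=node.ctx), node)
        return node

source = "data = old_lib.parse(path)\nold_lib.dump(data, path)\n"
tree = ast.parse(source)
print(ast.unparse(CallRewriter().visit(tree)))  # calls now use new_lib.load / new_lib.save
```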
The World Wide Web (WWW) has become a very important communication channel. Its usage has grown steadily over the past years. Interest by website owners in identifying user behaviour has existed since Tim Berners-Lee developed the first web browser in 1990. But as the influence of the online channel today eclipses all other media, the interest in monitoring website usage and user activities has intensified as well. Gathering and analysing data about the usage of websites can help to understand customer behaviour, improve services and potentially increase profit.
It is further essential for ensuring effective website design and management, efficient mass customization and effective marketing. Web Analytics (WA) is the area addressing these considerations. However, changing technologies and evolving Web Analytics methods and processes present a challenge to organisations starting Web Analytics programmes. Because of a lack of resources in different areas, small and medium-sized enterprises (SMEs), non-profit organisations and operators of other types of websites in particular struggle to operate WA in an effective manner.
This research project aims to identify the existing gap between theory, tool capabilities and business needs when undertaking Web Analytics programmes. The topic was therefore approached from three different angles: the academic literature, Web Analytics tools and an interpretative case study. The researcher utilized an action research approach to investigate Web Analytics, to present a holistic overview and to identify the gaps that exist. The outcome of this research project is an overall framework which provides guidance for SMEs that operate information websites on how to proceed in a Web Analytics programme.
Magnetic resonance (MR) tomography is an imaging method that is used to expose the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique which enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo. It allows predictions to be made about the structure and integrity of those connections. In clinical routine this modality finds application in the planning phase of neurosurgical operations, such as tumor resections. This is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. Generally, at the current resolution of diffusion-weighted data, single white matter axons cannot be resolved; the captured signal rather describes whole fiber bundles. Besides this, different complex fiber configurations often occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method which uses local information to reconstruct global connectivities, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods as well as their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme based on diffusion tensor metrics. The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along the potential tracking directions and is able to reproduce splitting fiber tracts.
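A minimal Python sketch of one of the steps described above, extracting the local maxima of an ODF sampled on a set of unit directions; the thresholds and the neighbourhood definition are illustrative choices, not the parameters used in the thesis.

```python
import numpy as np

def odf_local_maxima(directions, odf_values, neighbor_angle_deg=25.0, rel_threshold=0.5):
    """
    Return indices of local maxima of an ODF sampled on unit directions.
    directions: (N, 3) array of unit vectors; odf_values: (N,) array.
    A direction is a maximum if its value dominates all samples within the given
    angular neighbourhood and exceeds a fraction of the global maximum.
    """
    directions = np.asarray(directions, dtype=float)
    odf_values = np.asarray(odf_values, dtype=float)
    cos_thresh = np.cos(np.radians(neighbor_angle_deg))
    # The ODF is antipodally symmetric, so compare absolute dot products.
    cos_angles = np.abs(directions @ directions.T)
    maxima = []
    for i in range(len(directions)):
        neighbours = np.where(cos_angles[i] >= cos_thresh)[0]
        if odf_values[i] >= odf_values[neighbours].max() and \
           odf_values[i] >= rel_threshold * odf_values.max():
            maxima.append(i)
    return maxima

dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.7071, 0.7071, 0.0]])
vals = np.array([1.0, 0.9, 0.2, 0.5])
print(odf_local_maxima(dirs, vals, neighbor_angle_deg=50.0))  # [0, 1]: two dominant directions
```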
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, and the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
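The following Python sketch illustrates the core idea of such a procedural approach: deterministic generation of a star system from a seed, so the galaxy never has to be stored. The distributions and parameters are invented for illustration and are not those of the developed simulation.

```python
import random

def generate_star_system(galaxy_seed, star_index):
    """Deterministically generate one star and its planets from a seed and an index."""
    rng = random.Random(galaxy_seed * 1_000_003 + star_index)  # reproducible per-star RNG
    star = {
        "id": star_index,
        # Rough relative frequencies of spectral classes, illustrative only.
        "spectral_class": rng.choices("OBAFGKM", weights=[1, 2, 6, 30, 76, 121, 765])[0],
        "position_ly": (rng.uniform(-5e4, 5e4), rng.uniform(-5e4, 5e4)),
        "planets": [],
    }
    for p in range(rng.randint(0, 8)):
        star["planets"].append({
            "orbit_au": round(0.3 * 1.8 ** p * rng.uniform(0.8, 1.2), 2),  # Titius-Bode-like spacing
            "mass_earths": round(rng.lognormvariate(0, 1.5), 3),
            "has_life": rng.random() < 0.01,
        })
    return star

# The same seed always reproduces the same system, so only the seed needs to be stored.
print(generate_star_system(galaxy_seed=42, star_index=1001))
```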
Wild boars are among the most widespread ungulates in the world. They are characterized by a well-developed adaptation to their environment, mainly due to their omnivorous diet. The wild boar population in Germany has increased during the past three decades. Nowadays their high density leads to problems in agricultural areas due to crop damage, and wild boars play a significant role as a vector for diseases such as classical swine fever. For effective population management, information on population size is of crucial importance. Different traditional methods exist to estimate population sizes, such as direct sightings, faecal dropping counts or hunting harvest, but these provide only relative estimates and population trends. Absolute population sizes can be obtained with a Capture-Mark-Recapture (CMR) approach. However, capturing wild boars is difficult to realize and costly in terms of personnel and field effort.
Furthermore, the capture probabilities are heterogeneous due to the variable behaviour of individuals, which is influenced by the age, sex and experience of the animals. Non-invasive genetic methods are a promising complement to the traditional methods of population size estimation, particularly for wild boar. These methods reduce stress and capture bias and increase the number of recaptures. Faeces proved to be a suitable DNA source for wild boar genotyping due to an almost equal capture probability. However, working with faeces entails difficulties such as low DNA quality and quantity and genotyping errors such as allelic dropout and false alleles.
The main aim of the present study was to develop a reliable, cost-efficient, reproducible and practicable method for wild boar genotyping. This method should provide a reliable dataset of genotypes obtained from the collected faeces samples. Individual identification forms the basis for an improved mark-recapture approach. As there is no sound method for absolute population counts in free-living wild boar, reference values for the validation of this new approach are missing. Therefore, different routines to reduce and to assess genotyping errors were compared within this thesis. To maximise the amplification rate, the storage, the extraction methods and the PCR procedure were optimised. A step-by-step procedure was evaluated in order to determine the minimum number of microsatellites (MS) required for reliable individual identification, including a test with family groups (female and embryo tissue) to distinguish even between close relatives. A multiple-tubes approach, post-amplification checking and different correction procedures were applied to reduce genotyping errors. In order to quantify the real genotyping error rates (GER) of datasets derived from sampling in the Palatinate Forest in western Germany, different methods for GER determination were compared with each other, yielding GERs between 0% and 57.5%. As a consequence, stricter criteria for the multiple-tubes approach and an increased number of repetitions of homozygous samples were used. As an additional method validation, a blind test was implemented to verify the reliability of the genotyping and error-checking procedure. Finally, a strict and practicable protocol for the lab procedure was developed, beginning with faecal sample collection and ending with a reliable dataset containing the genotype of each sample.
The results of the presented method were derived from two sampling periods in a 4000 ha area in the Palatinate Forest in Rhineland-Palatinate in December 2006 and 2007. Both yielded wide confidence intervals (CI), implying imprecise estimates (e.g. for 2006 the population size amounted to 215 with a 95% CI of 156-314, and for 2007 to 415 with a 95% CI of 318-561), due to low sample sizes (2006: n = 141, 2007: n = 326), numbers of successfully analysed samples (2006: n = 89, 2007: n = 156) and recapture numbers (2006: n = 12, 2007: n = 24). Furthermore, the population estimates, even at their lowest values, were considerably higher than previously assumed from hunting statistics, which points to an ineffective hunting regime in the study area. To obtain more precise population size estimates in the future, an increase in sample sizes is indispensable, because absolute and reliable estimates are highly desirable for wildlife management and the control of disease transmission. Nevertheless, the method for individual genotyping of wild boars evaluated in this thesis could be successfully established, resulting in reliable datasets for population estimation modelling with a sufficiently low GER.
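For readers unfamiliar with the CMR arithmetic, the following Python sketch shows one standard closed-population estimator, the Chapman-corrected Lincoln-Petersen estimator, with a normal-approximation confidence interval. It is given for illustration only, with purely hypothetical counts, and is not necessarily the estimation model applied in the thesis.

```python
import math

def chapman_estimate(n1, n2, m):
    """
    Chapman-corrected Lincoln-Petersen estimator for a closed population.
    n1: individuals 'marked' in the first sample (here: genotypes found in sampling 1),
    n2: individuals in the second sample, m: individuals detected in both samples.
    Returns the point estimate and an approximate 95% confidence interval.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    variance = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half_width = 1.96 * math.sqrt(variance)
    return n_hat, (n_hat - half_width, n_hat + half_width)

# Purely hypothetical counts, not the thesis data:
print(chapman_estimate(n1=60, n2=70, m=15))
```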
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p.57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p.13). Enterprise Content Management (ECM) is the research field that deals with these challenges. It is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system), chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors actually address the industry's concerns: both tools provide functionality for the archiving of data arising from online collaboration as well as business records management capabilities, but the integration of these two areas is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
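The following Python sketch shows, on toy counts, how entropy and mutual information of the two schema sources can be computed from type-property co-occurrence counts; the figures are invented and the exact measures used in the paper may differ.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as absolute counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def mutual_information(joint_counts):
    """I(T;P) in bits from co-occurrence counts of (type, property) pairs."""
    total = sum(joint_counts.values())
    type_counts, prop_counts = Counter(), Counter()
    for (t, p), c in joint_counts.items():
        type_counts[t] += c
        prop_counts[p] += c
    mi = 0.0
    for (t, p), c in joint_counts.items():
        p_tp = c / total
        mi += p_tp * math.log2(p_tp / ((type_counts[t] / total) * (prop_counts[p] / total)))
    return mi

# Toy counts of how often resources of a given rdf:type use a given property:
joint = {("foaf:Person", "foaf:name"): 40, ("foaf:Person", "foaf:knows"): 25,
         ("foaf:Document", "dc:title"): 30, ("foaf:Document", "foaf:name"): 5}
type_counts = Counter()
for (t, _), c in joint.items():
    type_counts[t] += c
print(entropy(type_counts), mutual_information(joint))
```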
In automated theorem proving, there are some problems that need information about the inequality of certain constants. In most cases this information is provided by adding facts which explicitly state that two constants are unequal. Depending on the number of constants, a huge number of such facts can clutter the knowledge base and distract the author and readers of the problem from its actual proposition. For most cases it is safe to assume that a larger knowledge base reduces the performance of a theorem prover, which is another drawback of explicit inequality facts. Using the unique name assumption in such reasoning tasks renders the introduction of inequality facts obsolete, as the unique name assumption states that two constants are identical iff their interpretations are identical. Implicit handling of non-identical constants makes the problems easier to comprehend and reduces the execution time of reasoning. In this thesis we will show how to integrate the unique name assumption into the E-hyper tableau calculus and that the modified calculus is sound and complete. The calculus will be implemented in the E-KRHyper theorem prover, and we will show by empirical evaluation that the changed implementation, which is able to use the unique name assumption, is superior to the traditional version of E-KRHyper.
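The quadratic blow-up of explicit inequality facts can be made concrete with a small sketch (Python is used here purely for illustration; the actual problems are first-order knowledge bases).

```python
from itertools import combinations

def explicit_inequality_facts(constants):
    """The facts a knowledge base must contain when the unique name assumption is not built in."""
    return [f"{a} != {b}" for a, b in combinations(constants, 2)]

constants = [f"c{i}" for i in range(1, 6)]
facts = explicit_inequality_facts(constants)
print(len(facts), facts[:3])  # 10 facts for 5 constants; in general n*(n-1)/2 facts for n constants
# With the unique name assumption built into the calculus, none of these facts are needed.
```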
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is several times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt and modify the SURF algorithm to the limitations of a mobile GPU are presented. Furthermore, an outlook on a GPGPU-based tracking pipeline on a mobile device is provided.
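As background for why such descriptors map well to parallel hardware, here is a plain Python/numpy sketch of the integral image and the constant-time box sums that SURF's box filters are built on; the actual thesis implementation performs comparable computations in GLSL shaders, which is not shown here.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[0:y, 0:x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y, x, h, w):
    """Sum of the h-by-w box with top-left corner (y, x), in O(1) via the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3), img[1:4, 1:4].sum())  # both print 126.0
```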
The purpose of this master thesis is to enable the robot Lisa to process complex commands and to extract the necessary information in order to perform a complex task as a sequence of smaller tasks. This is to be achieved by improving Lisa's understanding of her environment through the addition of semantics to the maps that she builds. The complex command itself is expected to be already parsed; therefore, the way the input is processed to become a parsed command is out of the scope of this work. The maps that Lisa builds will be improved by the addition of semantic annotations that can include any kind of information that might be useful for the performance of generic tasks. This can include (but is not necessarily limited to) hierarchical classifications of locations, objects and surfaces. The processing of the command, in addition to some information about the environment, shall trigger the performance of a sequence of actions. These actions are expected to be among Lisa's currently implemented tasks and will rely on the existing modules that perform them.
Nevertheless, the aim of this work is not only to be able to use currently implemented tasks in a more complex sequence of actions but also to make it easier to add new tasks to the complex commands that Lisa can perform.
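A minimal Python sketch of the kind of semantic annotation and command decomposition described above; the map entries, class hierarchy and primitive tasks are hypothetical and do not reflect Lisa's actual modules.

```python
# Hypothetical semantic annotations: each place in the metric map gets a pose and a class hierarchy.
SEMANTIC_MAP = {
    "kitchen_table": {"pose": (2.1, 0.4, 0.0),  "is_a": ["table", "surface", "furniture"]},
    "fridge":        {"pose": (3.0, 1.2, 1.57), "is_a": ["appliance", "container"]},
    "sofa":          {"pose": (-1.5, 2.0, 3.14), "is_a": ["seat", "furniture"]},
}

def plan(parsed_command):
    """Expand a parsed complex command into a sequence of hypothetical primitive tasks."""
    actions = []
    for step in parsed_command:            # e.g. ("bring", "cup", "kitchen_table", "sofa")
        verb, obj, source, target = step
        if verb == "bring":
            actions += [("navigate", SEMANTIC_MAP[source]["pose"]),
                        ("grasp", obj),
                        ("navigate", SEMANTIC_MAP[target]["pose"]),
                        ("hand_over", obj)]
    return actions

print(plan([("bring", "cup", "kitchen_table", "sofa")]))
```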
The goal of this Bachelor thesis is to implement and evaluate the "Simulating of Collective Misbelief" model in the NetLogo programming language. To this end, the model requirements have to be specified and implemented in the NetLogo environment. Furthermore, tool-related requirements have to be specified to enable the model to work in NetLogo. After the implementation, several simulations will be conducted to answer the research question stated above.
To assess the effect of organic compounds on the aquatic environment, organisms are typically exposed to toxicant solutions and the adverse effects observed are linked to the concentration in the surrounding media. As compounds generally need to be taken up into the organism and distributed to the respective target sites for the induction of effects, the internal exposure is postulated to best represent the observed effects.
The aim of this work is to contribute to an improved effect assessment of organic compounds by describing experimental and modelling methods to obtain information on the internal exposure of contaminants in organisms.
Chapter 2 details a protocol for the determination of the bioconcentration parameters for uptake (k1) and elimination (k2) of organic compounds in zebrafish (Danio rerio) eggs. This enables the simulation of the internal exposure in zebrafish eggs resulting from an ambient exposure concentration over time. The accumulated contaminant amount in zebrafish eggs was also determined using a biomimetic extraction method. Different bioconcentration estimation models for the determination of the internal steady-state concentration of pharmaceutical compounds in fish resulting from an environmental exposure are presented in Chapter 3. Bioconcentration factors were estimated from the compounds' octanol-water partition coefficient (KOW) to determine the internal exposure resulting from an ambient concentration.
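The kinetic parameters k1 and k2 mentioned for Chapter 2 describe the standard first-order one-compartment bioconcentration model. The following Python sketch simulates the internal exposure over time from an ambient concentration under that model, with hypothetical parameter values that are not taken from the thesis.

```python
import numpy as np

def internal_concentration(t, c_water, k1, k2):
    """
    First-order one-compartment bioconcentration model:
        dC_int/dt = k1 * C_water - k2 * C_int,  with C_int(0) = 0,
    giving C_int(t) = C_water * (k1 / k2) * (1 - exp(-k2 * t)).
    The steady-state ratio k1 / k2 is the bioconcentration factor (BCF).
    """
    return c_water * (k1 / k2) * (1.0 - np.exp(-k2 * np.asarray(t, dtype=float)))

# Hypothetical parameters (not values from the thesis):
t_hours = np.array([0, 6, 12, 24, 48, 96])
print(internal_concentration(t_hours, c_water=1.0, k1=0.5, k2=0.05))
```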
To assess the integral bioavailable fraction of environmental contaminants from the water and sediment phase for rooted aquatic plants, the internal exposure in river-living Myriophyllum aquaticum plants was determined over time, as presented in Chapter 4. The plants were collected at different time points, and the accumulated organic contaminants were determined using a liquid extraction method.
In Chapter 5, a protocol was established to enable the non-invasive observation of effects in M. aquaticum plants exposed to contaminated sediments over time. Since the toxicant effects are the result of all uptake and distribution processes to the target site and of the toxicodynamic processes leading to an observed effect during static exposure, information on the internal exposure could thus be gained from the temporal expression of effects.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experience gained during the integration process.