This paper introduces Vocville, a casual online game for learning vocabulary. I am creating this application for my master's thesis in Computational Visualistics ("Computervisualistik") at the University of Koblenz-Landau. The application is an online browser game based on the idea of the highly successful Facebook game FarmVille. The application is separated into two parts: a Grails application manages a database which holds the game objects such as vocabulary, while a Flex/Flash application generates the actual game from these data. The user can create his own home with everything in it. To create an object, the user has to give the correct translation of the object he wants to create several times. After every query he has to wait a certain amount of time before being queried again. Once the correct answer has been given sufficiently often, the object is built. After building one object the user is allowed to build others. After building enough objects in one area (i.e. a room, a street, etc.), the user can activate other areas by translating all the vocabulary of the previous area. Users can also interact with other users by adding them as neighbors and then visiting their homes or sending them gifts, for which they have to fill in the correct word in a given sentence.
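The build-by-repetition mechanic described above can be sketched as follows; the class and parameter names (`required_correct`, `cooldown`) are hypothetical illustrations, not taken from the Vocville implementation:

```python
import time

class VocabularyObject:
    """Sketch of a build-by-repetition mechanic: an object is built once
    its word has been translated correctly a required number of times,
    with a cooldown between queries."""

    def __init__(self, word, translation, required_correct=3, cooldown=60):
        self.word = word
        self.translation = translation
        self.required_correct = required_correct
        self.cooldown = cooldown          # seconds to wait between queries
        self.correct_count = 0
        self.last_query = None

    def can_query(self, now=None):
        now = time.time() if now is None else now
        return self.last_query is None or now - self.last_query >= self.cooldown

    def answer(self, attempt, now=None):
        """Accept a translation attempt; return False while cooling down."""
        now = time.time() if now is None else now
        if not self.can_query(now):
            return False
        self.last_query = now
        if attempt == self.translation:
            self.correct_count += 1
        return True

    @property
    def built(self):
        return self.correct_count >= self.required_correct
```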
Tractography on HARDI data
(2011)
Diffusion weighted imaging is an important modality in clinical imaging and the only possibility to gain insight into the human brain noninvasively and in vivo. The applications of this imaging technique are diverse. It is used to study the brain, its structure, its development and the functionality of its different areas. Further important fields of application are neurosurgical planning, the examination of pathologies, and the investigation of Alzheimer's disease, stroke and multiple sclerosis. This thesis gives a brief introduction to MRI and diffusion MRI. Based on this, the most widely used data representation in clinical diffusion MRI, the diffusion tensor, is introduced. As the diffusion tensor suffers from severe limitations, newer techniques subsumed under the term HARDI (high angular resolution diffusion imaging) are introduced and discussed in detail. Further, an extensive introduction to tractography, i.e. approaches that aim at reconstructing neuronal fibers, is given. Based on the knowledge from the theoretical part, established tractography algorithms are redesigned to handle HARDI data and thus improve the reconstruction of neuronal fibers. Among these algorithms, a novel approach is presented that successfully reconstructs fibers on phantom data as well as on human brain data. Further, a novel global classification approach is presented to cluster voxels according to their diffusion properties.
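To illustrate the diffusion tensor representation the thesis builds on: a common quantity derived from the tensor is the fractional anisotropy (FA), which is high inside coherent fiber bundles and low in isotropic tissue. The standard FA formula is sketched below; this is an illustration, not code from the thesis:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Compute FA from a 3x3 symmetric diffusion tensor: 0 for isotropic
    diffusion, approaching 1 for diffusion along a single axis."""
    evals = np.linalg.eigvalsh(tensor)        # eigenvalues of the tensor
    mean = evals.mean()
    num = np.sqrt(((evals - mean) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0
```

The limitation motivating HARDI is visible here: a single tensor yields a single principal direction per voxel, so crossing fibers cannot be represented.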
Towards Improving the Understanding of Image Semantics by Gaze-based Tag-to-Region Assignments
(2011)
Eye-trackers have been used in the past to identify visual foci in images, find task-related image regions, or localize affective regions in images. However, they have not been used to identify specific objects in images. In this paper, we investigate whether it is possible to assign image regions showing specific objects the tags describing these objects by analyzing the users' gaze paths. To this end, we have conducted an experiment with 20 subjects, each viewing 50 image-tag pairs. We have compared the tag-to-region assignments for nine existing and four new fixation measures. In addition, we have investigated the impact of extending region boundaries, weighting small image regions, and the number of subjects viewing the images. The paper shows that a tag-to-region assignment with an accuracy of 67% can be achieved by using gaze information. In addition, we show that multiple regions on the same image can be differentiated with an accuracy of 38%.
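As an illustration of the simplest kind of fixation measure compared above, a fixation-count assignment could look like the sketch below; the data layout (bounding boxes per region) is an assumption, and the paper's measures also consider properties such as fixation duration:

```python
def assign_tag_to_region(fixations, regions):
    """Assign the tag to the region containing the most fixation points.
    `fixations` is a list of (x, y) points; `regions` maps region names
    to (x0, y0, x1, y1) bounding boxes."""
    def contains(box, point):
        x0, y0, x1, y1 = box
        x, y = point
        return x0 <= x <= x1 and y0 <= y <= y1

    counts = {name: sum(contains(box, f) for f in fixations)
              for name, box in regions.items()}
    return max(counts, key=counts.get)
```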
The paper is devoted to solving the problem of assessing the quality of medical electronic services. A variety of quality dimensions and factors, as well as methods and models applied in different fields of activity for assessing service quality, are reviewed. The basic aspects, requirements and peculiarities of implementing medical electronic services are investigated. The results of the analysis, together with the set of information models developed for this paper to describe the processes of assessing the quality of the electronic service "Booking an appointment with a physician", allowed us to describe the methodology and to state the problem of assessing the quality of this service.
With the ongoing process of building business networks in today's economy, business-to-business integration (B2B integration) has become a strategic tool for utilizing and optimizing information exchange between business partners. Industry and academia have made remarkable progress in implementing and conceptualizing different kinds of electronic inter-company relationships in recent years. Nevertheless, academic findings generally focus exclusively on certain aspects of the research object, e.g. document standards, process integration or other descriptive criteria. Without a common framework these results stay unrelated, and their mutual impact on each other remains largely unexplained. In this paper we explore motivational factors of B2B integration in practice. In a research project using a uniform taxonomy (the eXperience methodology), we classified real-world B2B integration projects from a pool of over 400 case studies using a pre-developed framework for integration scenarios. The result of our partly exploratory research shows the influence of the role of a company in the supply chain and its motive to invest in a B2B solution.
This paper describes results of the simulation of a social object: the dependence of schoolchildren's professional abilities on their personal characteristics. The simulation tool is artificial neural network (ANN) technology. Results of a comparison of the time expense for training the ANN and for calculating the weight coefficients with serial and parallel algorithms, respectively, are presented.
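The gradient training of weight coefficients that the paper parallelizes can be sketched, in serial form, roughly as follows; the network size (a single linear neuron), learning rate and data layout are illustrative assumptions, not the paper's configuration:

```python
def train_neuron(samples, lr=0.1, epochs=100):
    """Stochastic gradient descent on squared error for one linear neuron.
    Each sample is (feature_list, target); returns weights and bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - t
            # gradient step: move weights against the error gradient
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b
```

The parallel variant the paper compares against would distribute these per-sample gradient computations across workers.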
Distance vector routing protocols are interior gateway protocols in which every router sets up a routing table with the help of the information it receives from its neighboring routers. The routing table contains the next hops and associated distances on the shortest paths to every other router in the network. Security mechanisms implemented in distance vector routing protocols are insufficient. It is rather assumed that the environment is trustworthy. However, routers can be malicious for several reasons and manipulate routing by injecting false routing updates. Authenticity and integrity of transmitted routing updates have to be guaranteed and at the same time performance and benefits should be well-balanced.
In this paper, several approaches that aim at meeting the above-mentioned conditions are examined, and their advantages and disadvantages are compared.
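The core distance-vector update that these protocols share, and that makes forged updates dangerous, since any received table is trusted, is essentially a distributed Bellman-Ford step. A minimal sketch, with the table layout (destination mapped to next hop and distance) as an assumption and the security mechanisms omitted:

```python
INF = float("inf")

def dv_update(own_table, neighbor, neighbor_table, link_cost):
    """On receiving a neighbor's routing table, adopt any route that is
    shorter via that neighbor. Tables map destination -> (next_hop, distance).
    Returns True if the own table changed (and would be re-advertised)."""
    changed = False
    for dest, (_, dist) in neighbor_table.items():
        candidate = link_cost + dist
        if candidate < own_table.get(dest, (None, INF))[1]:
            own_table[dest] = (neighbor, candidate)
            changed = True
    return changed
```

Note that nothing in this step verifies the advertised distances, which is exactly the gap the examined authentication and integrity mechanisms address.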
The paper is devoted to solving the problem of developing websites for Russian municipal polyclinics and provides a selection of elements which should be posted on such a website. These elements are necessary to provide citizens with correct and ergonomic e-services. The insufficient development of the infrastructure of institutions of public and municipal administration (particularly healthcare institutions) in Russia made it necessary to analyze web resources used in different countries at different levels of providing medical services. The information resources of medical treatment facilities of the United Kingdom, the United States of America and the Federal Republic of Germany were researched separately for the three existing economic models of healthcare. A set of criteria for the assessment of medical web resources was developed.
API migration refers to changing the API used by a program to a different API. A special case is called wrapper-based API migration: the API is changed without touching the program itself; instead, the old API is reimplemented by means of the new one. The resulting wrapper has the interface of the reimplemented old API but uses the implementation of the new one.
This is an interesting approach because the wrapper can be reused in every program that uses the old API.
To make this approach reproducible, we study a ranking-based method for implementing a wrapper, in which we always implement the method with the highest priority, as determined by failing test cases. Thus, we can reconstruct each implementation step.
We first develop an infrastructure to run and log test suites of Java projects that use an API, which we want to change.
We then build a wrapper for a given API using the ranking-based approach.
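The wrapper idea can be illustrated in miniature, sketched here in Python for brevity although the thesis targets Java projects; `NewApi`, `OldApiWrapper` and their methods are hypothetical stand-ins, not the APIs studied in the thesis:

```python
class NewApi:
    """Stand-in for the new API the program should migrate to."""
    def fetch(self, key):
        return {"id": key, "payload": key.upper()}

class OldApiWrapper:
    """Wrapper-based migration: the old API's interface is kept, but every
    method is reimplemented on top of the new API, so client code written
    against the old interface runs unchanged."""
    def __init__(self, new_api):
        self._new = new_api

    # old API method signature, now delegating to the new implementation
    def get_record(self, key):
        result = self._new.fetch(key)
        return result["payload"]        # the old API returned the bare payload
```

In the ranking-based method, each such delegating method would be implemented in the order dictated by failing test cases of the logged test suites.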
This bachelor thesis deals with the similarity-based comparison of WiFi patterns recorded while tracing a path through the streets of a large city. Both comparison based on MAC addresses alone and the incorporation of RSSI values were investigated, and the localization accuracy was evaluated. Methods for detecting different types and combinations of loops in the path are demonstrated, as is an attempt to estimate the degree of urban development in the user's environment by assessing the received signal strength and signal-to-noise ratio of GPS satellites and GSM cell towers.
In order to observe a user's proximity to a certain spot on a large public square, the absorption of WiFi signals by the human body has been taken into account. Finally, the results of a comparison of the computing performance of a modern smartphone with the alternative of remote calculation on a server, including data transmission via the cellular data network, are presented.
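A similarity measure over two WiFi fingerprints, combining MAC-address overlap with RSSI differences as described above, might be sketched like this; the exact metric and the `rssi_range` normalization are assumptions, not the thesis's formula:

```python
def fingerprint_similarity(fp_a, fp_b, rssi_range=100.0):
    """Compare two WiFi fingerprints given as dicts MAC -> RSSI (dBm).
    Shared access points contribute a score that shrinks with the RSSI
    difference; the result is normalized by the size of the MAC union,
    so identical fingerprints score 1.0 and disjoint ones score 0.0."""
    shared = set(fp_a) & set(fp_b)
    union = set(fp_a) | set(fp_b)
    if not union:
        return 0.0
    score = sum(1.0 - abs(fp_a[m] - fp_b[m]) / rssi_range for m in shared)
    return score / len(union)
```

Loop detection along a path then reduces to finding earlier recording positions whose fingerprints exceed a similarity threshold.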
In this diploma thesis a skeleton-based matching technique for 2D shapes is introduced. First, current approaches for the matching of shapes will be presented. The basics of skeleton-based matchings will be introduced. In the context of this thesis, a skeleton-based matching approach was implemented as presented in the original paper. This implementation is evaluated by performing a similarity search in three shape databases. Strengths and limitations of the approach are pointed out. In addition, the introduced algorithm will be examined with respect to extending it towards matching of 3D objects. In particular, the approach is applied to medical data sets: Pre- and postoperative CT images of the abdominal aorta of one patient will be compared. Problems and approaches for matching of 3D objects in general and blood vessels in particular will be presented.
In this thesis, I study the spectral characteristics of large dynamic networks and formulate the spectral evolution model. The spectral evolution model applies to networks that evolve over time, and describes their spectral decompositions such as the eigenvalue and singular value decomposition. The spectral evolution model states that over time, the eigenvalues of a network change while its eigenvectors stay approximately constant.
I validate the spectral evolution model empirically on over a hundred network datasets, and theoretically by showing that it generalizes a certain number of known link prediction functions, including graph kernels, path counting methods, rank reduction and triangle closing. The collection of datasets I use contains 118 distinct network datasets. One dataset, the signed social network of the Slashdot Zoo, was specifically extracted during work on this thesis. I also show that the spectral evolution model can be understood as a generalization of the preferential attachment model, if we consider growth in latent dimensions of a network individually. As applications of the spectral evolution model, I introduce two new link prediction algorithms that can be used for recommender systems, search engines, collaborative filtering, rating prediction, link sign prediction and more.
The first link prediction algorithm reduces to a one-dimensional curve fitting problem from which a spectral transformation is learned. The second method uses extrapolation of eigenvalues to predict future eigenvalues. As special cases, I show that the spectral evolution model applies to directed, undirected, weighted, unweighted, signed and bipartite networks. For signed graphs, I introduce new applications of the Laplacian matrix for graph drawing, spectral clustering, and describe new Laplacian graph kernels. I also define the algebraic conflict, a measure of the conflict present in a signed graph based on the signed graph Laplacian. I describe the problem of link sign prediction spectrally, and introduce the signed resistance distance. For bipartite and directed graphs, I introduce the hyperbolic sine and odd Neumann kernels, which generalize the exponential and Neumann kernels for undirected unipartite graphs. I show that the problem of directed and bipartite link prediction are related by the fact that both can be solved by considering spectral evolution in the singular value decomposition.
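The second link prediction method, extrapolation of eigenvalues under approximately constant eigenvectors, can be sketched as follows for undirected graphs; the linear extrapolation and the snapshot handling are simplifications of the thesis's method, intended only to show the shape of the computation:

```python
import numpy as np

def predict_links_by_eigenvalue_extrapolation(snapshots):
    """Given a time series of symmetric adjacency matrices, decompose each
    snapshot, linearly extrapolate each eigenvalue to the next time step,
    and reconstruct with the (approximately constant) latest eigenvectors.
    Returns a matrix of link prediction scores."""
    decomps = [np.linalg.eigh(a) for a in snapshots]
    evals = np.array([d[0] for d in decomps])           # shape (time, n)
    t = np.arange(len(snapshots))
    # fit a line to each eigenvalue's trajectory and evaluate one step ahead
    next_evals = np.array([np.polyval(np.polyfit(t, evals[:, i], 1),
                                      len(snapshots))
                           for i in range(evals.shape[1])])
    _, vecs = decomps[-1]
    return vecs @ np.diag(next_evals) @ vecs.T
```

High-scoring entries absent from the last snapshot are the predicted new links.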
More than 10,000 organic chemicals such as pharmaceuticals, ingredients of personal care products and biocides are ubiquitously used in everyday life. After their application, many of these chemicals enter the domestic sewer. Research has shown that conventional biological wastewater treatment in municipal wastewater treatment plants (WWTPs) is an insufficient barrier against the release of most of these anthropogenic chemicals into the receiving waters.
This bears unforeseen risks for aquatic wildlife and drinking water resources. Especially for recently introduced and/or detected compounds (so called emerging micropollutants), there is a growing need to investigate the occurrence and fate in WWTPs. In order to get a comprehensive picture on the behavior in municipal wastewater treatment, the following groups of emerging organic micropollutants, spanning a broad range of applications and physico-chemical properties, were selected as target compounds: pharmaceuticals (beta blockers, psycho-active drugs), UV-filters, vulcanization accelerators (benzothiazoles), biocides (anti-dandruffs, preservatives, disinfectants) and pesticides (phenylurea and triazine herbicides).
We present the user-centered, iterative design of Mobile Facets, a mobile application for the faceted search and exploration of a large, multi-dimensional data set of social media on a touchscreen mobile phone. Mobile Facets provides retrieval of resources such as places, persons, organizations, and events from an integration of different open social media sources and professional content sources, namely Wikipedia, Eventful, Upcoming, geo-located Flickr photos, and GeoNames. The data is queried live from the data sources. Thus, in contrast to other approaches we do not know in advance the number and type of facets and data items the Mobile Facets application receives in a specific contextual situation. While developing Mobile Facets, we have continuously evaluated it with a small group of five users. We have conducted a task-based, formative evaluation of the final prototype with 12 subjects to show the applicability and usability of our approach for faceted search and exploration on a touchscreen mobile phone.
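Because the facets are not known in advance, they must be derived from whatever items the live sources return. A minimal sketch of that step, with the item layout (attribute dicts) as an assumption:

```python
from collections import Counter

def compute_facets(items, facet_keys):
    """Derive facets at query time: count the distinct values of each
    faceted attribute over the returned items. Items are dicts;
    `facet_keys` lists the attributes to facet on."""
    facets = {}
    for key in facet_keys:
        counts = Counter(item[key] for item in items if key in item)
        facets[key] = dict(counts)
    return facets
```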
Recent EU frameworks enforce the implementation of risk mitigation measures for nonpoint-source pesticide pollution in surface waters. Vegetated surface flow treatment systems (VTS) can be a way to mitigate the risk of adverse effects in aquatic ecosystems following unavoidable pollution after rainfall-related runoff events. Studies in experimental wetland cells and vegetated ditch mesocosms with common fungicides, herbicides and insecticides were performed to assess the efficiency of VTS. Comprehensive monitoring of fungicide exposure after rainfall-related runoff events and of the reduction of pesticide concentrations within partially optimised VTS was performed from 2006 to 2009 at five vegetated detention ponds and two vegetated ditches in the wine-growing region of the Southern Palatinate (SW Germany).
The influence of plant density, size-related parameters and pesticide properties on the performance of the experimental devices and the monitored systems was the focus of the analysis. A spatial tool for predicting pesticide pollution of surface waters after rainfall-related runoff events was programmed in a geographic information system (GIS). A sophisticated, high-resolution database on the European scale was built for the simulation. With the results of the experiments, the monitoring campaign and further results of the EU Life project ArtWET, mitigation measures were implemented in a georeferenced spatial decision support system. The database for the GIS tools was built with open data. The REXTOX (ratio of exposure to toxicity) risk indicator, proposed by the OECD (Organisation for Economic Co-operation and Development), was extended and used for modeling the risk of rainfall-related runoff exposure to pesticides for all agricultural waterbodies on the European scale. Results show good performance of VTS. The vegetated ditches and wetland cells of the experimental systems showed a very high reduction, of more than 90%, of pesticide concentrations and potential adverse effects. Vegetated ditches and wetland cells performed significantly better than devices without vegetation. Plant density and sorptivity of the pesticide were the variables with the highest explanatory power regarding the response variable, reduction of concentrations. In the experimental vegetated ditches, 65% of the reduction of peak concentrations was explained by plant density and KOC. The monitoring campaign showed that concentrations of the fungicides and potential adverse effects of the mixtures were reduced significantly within the vegetated ditch (median 56%) and detention pond (median 38%) systems. Regression analysis with data from the monitoring campaign identified plant density and size-related properties as explanatory variables for mitigation efficiency (DP: R²=0.57, p<0.001; VD: R²=0.19, p<0.001). Results of the risk model runs are the input for the second tool, which simulates three risk mitigation measures. VTS as risk mitigation measures are implemented using the results on plant-density- and size-related performance from the experimental and monitoring studies, supported by additional data from the ArtWET project. Based on the risk tool, simulations can be performed for single crops, selected regions, different pesticide compounds and rainfall events. Costs for implementing the mitigation measures are estimated. The experiments and monitoring, with a focus on the whole range of pesticides, provide novel information on VTS for pesticide pollution. The monitoring campaign also shows that fungicide pollution may affect surface waters. The tools developed for this study are easy to use and are not only a good basis for further spatial analysis but also useful as decision support for the non-scientific community. On a large scale, the tools can on the one hand help to compute the external costs of pesticide use by simulating mitigation costs on three levels; on the other hand, feasible measures mitigating or remediating the effects of nonpoint-source pollution can be identified for implementation. Further study of the risk of adverse effects caused by fungicide pollution and of the long-term performance of optimised VTS is needed.
In this study, the author evaluates four visualization methods (FlexViz, the Jambalaya applet, Experimental jOWL TouchGraph, and Plone ontology) from the pool of currently available web-based visualization methods.
Further, this study attempts to answer the following research questions: 1. What are the requirements for ontology visualization? 2. How is a comparative analysis and evaluation carried out? 3. How can a chosen method be tested? Based on the results of the evaluation, the visualization method FlexViz was found to be the most suitable for the given scenario. The deployment of FlexViz was carried out and integrated within the Plone CMS, and logical parts of the VCD ontology were tested. Finally, FlexViz was analyzed from different stakeholders' perspectives. Results showed that FlexViz is a reliable tool for visualizing, understanding and analyzing a developed ontology because of its user-friendly and interactive interface, but that it performs poorly with regard to editing, extension and the development process.
The semantic web and model-driven engineering are changing the enterprise computing paradigm. By introducing technologies like ontologies, metadata and logic, the semantic web improves drastically how companies manage knowledge. In turn, model-driven engineering relies on the principle of using models to provide abstraction, enabling developers to concentrate on the system functionality rather than on technical platforms. The next enterprise computing era will rely on the synergy between both technologies. On the one hand, ontology technologies organize system knowledge in conceptual domains according to its meaning. This addresses enterprise computing needs by identifying, abstracting and rationalizing commonalities, and checking for inconsistencies across system specifications. On the other hand, model-driven engineering is closing the gap among business requirements, designs and executables by using domain-specific languages with custom-built syntax and semantics. In this scenario, the research question that arises is: What are the scientific and technical results around ontology technologies that can be used in model-driven engineering and vice versa? The objective is to analyze approaches available in the literature that involve both ontologies and model-driven engineering. Therefore, we conducted a literature review that resulted in a feature model for classifying state-of-the-art approaches. The results show that the usage of ontologies and model-driven engineering together serves multiple purposes: validation, visual notation, expressiveness and interoperability. While approaches involving both paradigms exist, an integrated approach for UML class-based modeling and ontology modeling is lacking so far. Therefore, we investigate the techniques and languages for designing integrated models. The objective is to provide an approach to support the design of integrated solutions.
Thus, we develop a conceptual framework involving the structure and the notations of a solution to represent and query software artifacts using a combination of ontologies and class-based modeling. As proof of concept, we have implemented our approach as a set of open source plug-ins -- the TwoUse Toolkit. The hypothesis is that a combination of both paradigms yields improvements in both fields, ontology engineering and model-driven engineering. For MDE, we investigate the impact of using features of the Web Ontology Language in software modeling. The results are patterns and guidelines for designing ontology-based information systems and for supporting software engineers in modeling software. The results include alternative ways of describing classes and objects and querying software models and metamodels. Applications show improvements on changeability and extensibility. In the ontology engineering domain, we investigate the application of techniques used in model-driven engineering to fill the abstraction gap between ontology specification languages and programming languages. The objective is to provide a model-driven platform for supporting activities in the ontology engineering life cycle. Therefore, we study the development of core ontologies in our department, namely the core ontology for multimedia (COMM) and the multimedia metadata ontology. The results are domain-specific languages that allow ontology engineers to abstract from implementation issues and concentrate on the ontology engineering task. It results in increasing productivity by filling the gap between domain models and source code.
MapReduce with Deltas
(2011)
The MapReduce programming model is extended slightly in order to use deltas. Because many MapReduce jobs are being re-executed over slightly changing input, processing only those changes promises significant improvements. Reduced execution time allows for more frequent execution of tasks, yielding more up-to-date results in practical applications. In the context of compound MapReduce jobs, benefits even add up over the individual jobs, as each job gains from processing less input data. The individual steps necessary in working with deltas are being analyzed and examined for efficiency. Several use cases have been implemented and tested on top of Hadoop. The correctness of the extended programming model relies on a simple correctness criterion.
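The delta idea can be illustrated on word count, a job whose result is self-maintainable because counts over removed input can simply be subtracted from the previous result. This sketch shows the principle only; it is not the Hadoop-based implementation from the paper:

```python
from collections import Counter

def wordcount(docs):
    """Plain MapReduce-style word count over a document collection."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

def wordcount_with_delta(previous_counts, added_docs, removed_docs):
    """Instead of re-running the job over the full input, combine the
    previous result with counts over the delta: added documents contribute
    positively, removed documents negatively."""
    result = Counter(previous_counts)
    result.update(wordcount(added_docs))
    result.subtract(wordcount(removed_docs))
    return +result          # unary + drops zero and negative entries
```

For a compound pipeline, each downstream job would consume the (typically much smaller) delta of its predecessor's output, which is where the benefits add up.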