Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logic-based concepts of the ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., structure, functionalities, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping only produces concept representations but not an API at the desired level of granularity expected by an application developer. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology but also leverages additional information the ontology provides. This can be the inheritance structure of the ontology concepts, the scope of relevance of an ontology concept, or design patterns defined in the ontology.
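As a hedged illustration of the naive one-to-one mapping criticized above (the concept table and all names are invented, not taken from the paper's toolchain), a minimal generator in Python might look like this:

```python
# Illustrative sketch only: turn each ontology concept into exactly one
# Python class, mirroring the inheritance structure. The CONCEPTS dict is
# a hypothetical stand-in for a parsed ontology.
CONCEPTS = {
    "Person":  {"parent": None,     "properties": ["name", "knows"]},
    "Student": {"parent": "Person", "properties": ["enrolledIn"]},
}

def generate_api(concepts):
    """Map every concept to one class; properties become plain attributes."""
    classes = {name: None for name in concepts}
    def build(name):
        if classes[name] is not None:
            return classes[name]
        spec = concepts[name]
        base = build(spec["parent"]) if spec["parent"] else object
        cls = type(name, (base,), {p: None for p in spec["properties"]})
        classes[name] = cls
        return cls
    for name in concepts:
        build(name)
    return classes

api = generate_api(CONCEPTS)
s = api["Student"]()
s.name, s.enrolledIn = "Ada", "CS101"   # inherited and own properties
```

The sketch shows exactly what the abstract points out: such a generator yields bare concept representations, not an API with the granularity and behavior an application developer expects.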
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, making decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there is no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies that range from reusing popular vocabularies, minimizing the number of vocabularies, and staying within one domain vocabulary. A very interesting insight is that popularity, in the sense of how frequently a vocabulary is used in a data source, is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding how data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
Social networking platforms as creativity fostering systems: research model and exploratory study
(2008)
Social networking platforms are enabling users to create their own content, share this content with anyone they invite and organize connections with existing or new online contacts. Within these electronic environments users voluntarily add comments on virtual boards, distribute their search results or add information about their expertise areas to their social networking profiles and thereby share it with acquaintances, friends and increasingly even with colleagues in the corporate world. As a result, it is most likely that the underlying knowledge sharing processes result in many new and creative ideas. The objective of our research therefore is to understand if and how social networking platforms can foster creativity. In addition, we look at how these processes could be embedded within the organizational structures that influence innovative knowledge sharing behavior. The basis for our research is a framework which focuses on the relations between intrinsic motivation, creativity and social networking platforms. First results of our empirical investigation of a social software platform called "StudiVZ.net" confirmed that our two propositions are valid.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of their appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers in relation to relevant e-commerce variables and test the data in a nomological network for assessing differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study results, point to limitations, and provide avenues for further research.
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language allowing for specifying conditions based on the history of data processing. We base the model and language on XACML.
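As a hedged sketch of what a history-dependent policy condition could look like (the event format and the concrete policy below are invented for illustration; the paper's language is XACML-based):

```python
# Hypothetical example: a condition over the history of data processing.
# Permit a transfer only if the data was anonymized at some earlier step
# and has never been transferred before.
from dataclasses import dataclass

@dataclass
class Event:
    action: str      # e.g. "collect", "anonymize", "transfer"
    actor: str

def may_transfer(history: list[Event]) -> bool:
    anonymized = any(e.action == "anonymize" for e in history)
    transferred = any(e.action == "transfer" for e in history)
    return anonymized and not transferred

log = [Event("collect", "hospital"), Event("anonymize", "hospital")]
assert may_transfer(log)
log.append(Event("transfer", "research-lab"))
assert not may_transfer(log)   # a second transfer is denied
```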
We present a non-linear camera pose estimator which is able to handle a combined input of point and line feature correspondences. For three or more correspondences, the estimator works on any arbitrary number and choice of the feature types, allowing the pose to be estimated from a preferably small and flexible set of 2D-3D correspondences. We also give an analysis of different minimization techniques, parametrizations of the pose data, and error measurements between 2D and 3D data. These are tested for point features, lines, and their combination. The results identify the most stable and fastest working non-linear parameter set for pose estimation in model-based tracking.
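A minimal sketch of the general idea, assuming numpy/scipy: nonlinear minimization of combined point and line reprojection residuals over a six-parameter pose. The axis-angle parametrization and the error measures here are illustrative choices, not the paper's evaluated set:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

X_pts = np.array([[0., 0., 5.], [1., 0., 5.], [0., 1., 4.]])   # 3D points
x_pts = np.array([[0.00, 0.00], [0.21, 0.01], [0.02, 0.26]])   # observed 2D
X_line = np.array([[0., 0., 5.], [1., 1., 5.]])  # 3D points on a line
l_2d = np.array([1., -1., 0.])                   # image line a*u + b*v + c = 0
l_2d /= np.linalg.norm(l_2d[:2])

def residuals(pose):
    rot, t = Rotation.from_rotvec(pose[:3]), pose[3:]
    def project(X):                              # pinhole, focal length 1
        Xc = rot.apply(X) + t
        return Xc[:, :2] / Xc[:, 2:3]
    r_pts = (project(X_pts) - x_pts).ravel()     # point reprojection error
    r_line = project(X_line) @ l_2d[:2] + l_2d[2]  # point-to-line distance
    return np.concatenate([r_pts, r_line])

# Pose as axis-angle rotation plus translation, refined jointly over
# point and line residuals.
sol = least_squares(residuals, x0=np.zeros(6))
print(sol.x)
```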
We introduce linear expressions for unrestricted dags (directed acyclic graphs) and finite deterministic and nondeterministic automata operating on them. Those dag automata are a conservative extension of the Tu,u-automata of Courcelle on unranked, unordered trees and forests. Several examples of dag languages acceptable and not acceptable by dag automata and some closure properties are given.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specify classes using an expressive logical language with highly flexible, dynamic and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models, combining the benefits of UML models and OWL ontologies.
Hybrid systems are the result of merging the two most commonly used models for dynamical systems, namely continuous dynamical systems defined by differential equations and discrete-event systems defined by automata. One can view hybrid systems as constrained systems, where the constraints describe the possible process flows, invariants within states, and transitions on the one hand, and characterize certain parts of the state space (e.g., the set of initial states, or the set of unsafe states) on the other hand. Therefore, it is advantageous to use constraint logic programming (CLP) as an approach to model hybrid systems. In this paper, we provide CLP implementations that model hybrid systems comprising several concurrent hybrid automata, whose size is directly proportional to the size of the given system description. Furthermore, we allow different levels of abstraction by making use of hierarchies as in UML statecharts. In consequence, the CLP model can be used for analyzing and testing the absence or existence of (un)wanted behaviors in hybrid systems. Thus, in summary, we get a procedure for the formal verification of hybrid systems by model checking, employing logic programming with constraints.
Hybrid automata are used as a standard means for the specification and analysis of dynamical systems. Several researchers have used them to formally specify reactive multi-agent systems situated in a physical environment, where the agents react continuously to their environment. The specified systems, in turn, are formally checked with the help of existing hybrid automata verification tools. However, when dealing with multi-agent systems, two problems may arise. The first problem is a state space problem raised by the composition process, where the agents have to be parallel composed into a single agent capturing all possible behaviors of the multi-agent system prior to the verification phase. The second problem concerns the expressiveness of verification tools when modeling and verifying certain behaviors. Therefore, this paper tackles these problems by showing how multi-agent systems, specified as hybrid automata, can be modeled and verified using constraint logic programming (CLP). In particular, a CLP framework is presented to show how the composition of multi-agent behaviors can be captured dynamically during the verification phase. This can alleviate the state space complexity that may occur as a result of the composition process. Additionally, the expressiveness of the CLP model flexibly allows not only to model multi-agent systems, but also to check various properties by means of reachability analysis. Experiments show the feasibility of our approach and yield promising results.
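The paper's model is formulated in CLP; purely as a rough illustration of on-the-fly composition during reachability analysis, a Python sketch over a discrete abstraction (the two agents and their states are invented) could look like this:

```python
# Toy illustration: explore the product state space of two agents lazily,
# as the CLP model does dynamically during verification, instead of
# composing the full product automaton up front.
from collections import deque
from itertools import product

AGENT_A = {"idle": ["move"], "move": ["idle", "fail"], "fail": []}
AGENT_B = {"wait": ["act"], "act": ["wait"]}

def reachable(start, unsafe):
    seen, queue = {start}, deque([start])
    while queue:
        a, b = queue.popleft()
        if (a, b) in unsafe:
            return True
        # Each agent may step or stutter; only visited pairs are stored.
        for na, nb in product(AGENT_A[a] + [a], AGENT_B[b] + [b]):
            if (na, nb) not in seen:
                seen.add((na, nb))
                queue.append((na, nb))
    return False

print(reachable(("idle", "wait"), unsafe={("fail", "act")}))  # True
```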
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2017 in Nagoya, Japan. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface via android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2016 in Leipzig, Germany. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface via android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
CAMPUS NEWS - artificial intelligence methods combined for an intelligent information network
(2008)
In this paper we describe a network for distributing personalised information using artificial intelligence methods. Reception of this information should be possible with everyday mobile equipment. Intelligent filtering and spam protection aim at integrating this technology into our environment. Information on the system architecture and usage of the installation is also presented.
Conventional security infrastructures in the Internet cannot be directly adopted to ambient systems, especially if based on short-range communication channels: Personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a Trust Center, certified data has to be verified locally. Security techniques have to be adjusted to the special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiability, and user or device authentication. It supports three certificate levels with a different balance between authenticity and anonymity. This PKI is currently under implementation as part of the iCity project.
An empirical study to evaluate the location of advertisement panels by using a mobile marketing tool
(2009)
The efficiency of marketing campaigns is a precondition for business success. This paper discusses a technique to transfer advertisement content via Bluetooth technology and to collect market research information at the same time. Conventional advertisement media were enhanced by devices to automatically measure the number, distance, frequency and exposure time of passersby, making information available to evaluate both the wireless media as well as the location in general. This paper presents a study analyzing these data. A cryptographic one-way function protects privacy during data acquisition.
The paper gives a focused introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information which follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles which are resolved by the conditioning technique [3].
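A worked toy calculation in the spirit of the medical-diagnosis example (numbers invented; this is plain Bayes rather than the PPN formalism itself) shows the dual flow of probabilities and likelihoods:

```python
# Forward flow: probabilities; backward flow: evidence-based likelihoods.
p_d = 0.01                      # prior: patient has the disease
p_s_given_d = 0.9               # symptom likelihood if diseased
p_s_given_not_d = 0.05          # symptom likelihood if healthy

# Forward propagation: marginal probability of observing the symptom.
p_s = p_s_given_d * p_d + p_s_given_not_d * (1 - p_d)

# Backward propagation of the evidence "symptom observed".
p_d_given_s = p_s_given_d * p_d / p_s
print(f"P(disease | symptom) = {p_d_given_s:.3f}")   # ~0.154
```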
Probability propagation nets
(2007)
A class of high level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent the probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). Strictly speaking, a marked transition cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit for simulation is the possibility of modeling (and observing) causes and effects of non-events, e.g., when something has broken down.
The paper is devoted to the development of websites for Russian municipal polyclinics and provides a selection of elements which should be posted on such a website. These elements are necessary to provide citizens with correct and ergonomic e-services. The insufficient development of the infrastructure of institutions of public and municipal administration (particularly, healthcare institutions) in Russia made it necessary to analyze web resources used in different countries at different levels of providing medical services. The information resources of medical treatment facilities of the United Kingdom, of the United States of America and of the Federal Republic of Germany were researched separately for the three existing economic models of healthcare. A set of criteria for the assessment of medical web resources was developed.
This paper describes results of the simulation of social objects, namely the dependence of schoolchildren's professional abilities on their personal characteristics. The simulation tool is the artificial neural network (ANN) technology. Results of a comparison of the time expense for training the ANN and for calculating the weight coefficients with serial and parallel algorithms, respectively, are presented.
An estimation of the number of multiplication and addition operations for training artificial neural networks by means of consecutive and parallel algorithms on a computer cluster is carried out. The evaluation of the efficiency of these algorithms is developed. The multilayer perceptron, the Volterra network and the cascade-correlation network are used as structures of artificial neural networks. Different methods of non-linear programming such as gradient and non-gradient methods are used for the calculation of the weight coefficients.
In order to enhance the company's appeal for potential employees and improve the satisfaction of already salaried workers, it is necessary to offer a variety of work-life balance measures. But as their implementation causes time and financial costs, a prioritization of measures is needed. To express a recommendation for companies, this study is guided by the following questions: whether there are work-life balance measures which have more impact on employee satisfaction than others, how big the relative impact of work-life balance measures on job satisfaction is in comparison to other work and private life variables, whether there is a relation between the effectiveness of measures and their use, and whether there is a difference between the measures which are most important from the employees' perspective and the companies' offers.
These questions are formulated in eight research hypotheses which are examined in a quantitative research design with online survey data from 289 employees of fifteen different German companies. The formation of a hierarchy of the effectiveness of measures towards job satisfaction as well as the investigation of the relative impact in comparison to other variables is performed using a multiple regression analysis, whilst the differences between employees’ expectations and the availability of offers are examined with t-tests.
Support in childcare, support in voluntary activities and teambuilding events have a significantly higher impact on job satisfaction than other work-life balance measures, and their potential use is higher than their actual use, which leads to the conclusion that there is still potential for companies to improve their employees' satisfaction by implementing these measures. In addition, flexible work hours, flexible work locations and free time and overtime accounts are the most important measures from the employees' point of view and are already widely offered by the surveyed companies. In general, the overall use of the available measures and the quantity of offered measures are more important with regard to job satisfaction than the specific kind of measure. In addition, work-life balance measures are more important for the job satisfaction of younger people.
The objective of this contribution is to conceptually analyze the potentials of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting respective action fields of entrepreneurial learning. In addition we reveal that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Despite widespread plans of big companies like Amazon and Google to develop unmanned delivery drones, scholarly research in this field is scarce, especially in the information systems field. From technical and legal perspectives, drone delivery in last-mile scenarios is in a quite mature state. However, estimates of user acceptance vary between high skepticism and exaggerated optimism. This research follows a mixed-methods approach consisting of both qualitative and quantitative research to identify and test determinants of consumer delivery drone service adoption. The qualitative part rests on ten interviews among average consumers who use delivery services on a regular basis. Insights gained from the qualitative part were used to develop an online survey and to assess the influence of associated risks on adoption intentions. The quantitative results show that especially financial and physical risks impede drone delivery service adoption. Delivery companies who are currently thinking about providing a delivery drone service may find these results useful when evaluating usage behaviors in the future market for delivery drones.
We present the user-centered, iterative design of Mobile Facets, a mobile application for the faceted search and exploration of a large, multi-dimensional data set of social media on a touchscreen mobile phone. Mobile Facets provides retrieval of resources such as places, persons, organizations, and events from an integration of different open social media sources and professional content sources, namely Wikipedia, Eventful, Upcoming, geo-located Flickr photos, and GeoNames. The data is queried live from the data sources. Thus, in contrast to other approaches we do not know in advance the number and type of facets and data items the Mobile Facets application receives in a specific contextual situation. While developing Mobile Facets, we have continuously evaluated it with a small group of five users. We have conducted a task-based, formative evaluation of the final prototype with 12 subjects to show the applicability and usability of our approach for faceted search and exploration on a touchscreen mobile phone.
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
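A hedged sketch of the general canonicalize-hash-sign pipeline such a framework builds on (assuming rdflib ≥ 6 is installed; the naive sorting-based canonicalization is only sound for graphs without blank nodes, and an HMAC stands in for a real public-key signature):

```python
import hashlib, hmac
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
"""

def graph_digest(g: Graph) -> bytes:
    # Serialize to N-Triples and sort the lines as a naive canonical form
    # (sufficient here because the example graph has no blank nodes).
    nt = g.serialize(format="nt")
    canonical = "\n".join(sorted(line for line in nt.splitlines() if line))
    return hashlib.sha256(canonical.encode("utf-8")).digest()

def sign(g: Graph, key: bytes) -> str:
    # HMAC as a placeholder for the signature function over the digest.
    return hmac.new(key, graph_digest(g), hashlib.sha256).hexdigest()

g = Graph().parse(data=TTL, format="turtle")
print(sign(g, key=b"shared-secret"))
```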
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment will be introduced into the model. This is especially relevant for risks to the protection of personal data and privacy. This paper will apply the method of IT risk analysis in order to select those dimensions of the Six/Gimmler media competence model that are appropriate for describing privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will lead to a measurement of the existing privacy awareness in secondary schools, which, in turn, can serve as the basis for a didactically well-reasoned design of informatics modules in secondary schools. This paper will provide the privacy-extended competence model, while empirical measurement and module design are planned for further research activities.
The aim of this paper is to identify and understand the risks and issues companies are experiencing from the business use of social media and to develop a framework for describing and categorising those social media risks. The goal is to contribute to the evolving theorisation of social media risk and to provide a foundation for the further development of social media risk management strategies and processes. The study findings identify thirty risk types organised into five categories (technical, human, content, compliance and reputational). A risk-chain is used to illustrate the complex interrelated, multi-stakeholder nature of these risks and directions for future work are identified.
Ontologies play an important role in knowledge representation for sharing information and collaboratively developing knowledge bases. They are changed, adapted and reused in different applications and domains, resulting in multiple versions of an ontology. The comparison of different versions and the analysis of changes at a higher level of abstraction may be insightful for understanding the changes that were applied to an ontology. While there is existing work on detecting (syntactical) differences and changes in ontologies, there is still a need for analyzing ontology changes at a higher level of abstraction, such as ontology evolution or refactoring patterns. In our approach, we start from a classification of model refactoring patterns found in software engineering and identify such refactoring patterns in OWL ontologies using DL reasoning.
SOA-Security
(2007)
This paper is a part of the ASG project (Adaptive Services Grid) and addresses some IT security issues of service oriented architectures. It defines a service-oriented security concept, it explores the SOA security challenge, it describes the existing WS-Security standard, and it undertakes a first step into a survey on best practice examples. In particular, the ASG middleware platform technology (JBossWS) is analyzed with respect to its ability to handle security functions.
This paper describes the development of security requirements for non-political Internet voting. The practical background is our experience with the Internet voting within the Gesellschaft für Informatik (GI - Informatics Society) in 2004 and 2005. The theoretical background is the international state of the art of requirements for electronic voting, especially in the US and in Europe. A focus of this paper is on the user-community-driven standardization of security requirements by means of a Protection Profile of the international Common Criteria standard.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results, but invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency, as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights, but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are a virtualization of rule-governed behavior themselves.
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect in the guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features for an ontology engineer to model a Linked Data dataset, discuss existing tools and services with respect to these support features, and propose LOVER: a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporates all types of supporting features and illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. Hereby, the recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g. based on a best string match or schema information on other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it along a real-life example from the social sciences domain.
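As a hedged toy version of the string-match information source alone (the vocabulary list, popularity figures, and weights below are invented; LOVER's actual multimodal ranking combines further sources):

```python
# Hypothetical sketch: rank candidate vocabulary terms by blending string
# similarity to the query with a vocabulary-level popularity score.
import difflib

# (term, vocabulary, popularity) - popularity as "datasets using the vocab".
VOCAB_TERMS = [
    ("foaf:Person", "foaf", 900),
    ("dbo:Person", "dbo", 400),
    ("schema:Person", "schema", 700),
    ("foaf:Organization", "foaf", 900),
]

def recommend(query, terms, k=3):
    def score(entry):
        term, _, popularity = entry
        local = term.split(":")[1].lower()
        string_sim = difflib.SequenceMatcher(None, query.lower(), local).ratio()
        return 0.7 * string_sim + 0.3 * popularity / 1000  # weighted blend
    return sorted(terms, key=score, reverse=True)[:k]

print(recommend("person", VOCAB_TERMS))   # foaf:Person ranks first
```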
This paper presents a method for the evolution of SHI ABoxes which is based on a compilation technique for the knowledge base. For this, the ABox is regarded as an interpretation of the TBox which is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, insertion of assertions as well as repair of inconsistent ABoxes is addressed. For the computation of the necessary actions for deletion, insertion and repair, the E-KRHyper theorem prover is used.
Knowledge compilation is a common technique for propositional logic knowledge bases. The idea is to transform a given knowledge base into a special normal form ([MR03],[DH05]), for which queries can be answered efficiently. This precompilation step is very expensive but it only has to be performed once. We propose to apply this technique to knowledge bases defined in Description Logics. For this, we introduce a normal form, called linkless concept descriptions, for ALC concepts. Further we present an algorithm, based on path dissolution, which can be used to transform a given concept description into an equivalent linkless concept description. Finally we discuss a linear satisfiability test as well as a subsumption test for linkless concept descriptions.
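To illustrate the general compile-once/query-fast trade-off (not the linkless normal form from the paper), here is a propositional toy example: the expensive step of enumerating the models of a knowledge base is performed once, after which entailment queries reduce to a linear scan over the cached models.

```python
from itertools import product

VARS = ["a", "b", "c"]
def kb(m):       # KB: (a -> b) and (b -> c)
    return (not m["a"] or m["b"]) and (not m["b"] or m["c"])

# Expensive precompilation step, performed once: enumerate all models.
MODELS = [dict(zip(VARS, bits))
          for bits in product([False, True], repeat=len(VARS))
          if kb(dict(zip(VARS, bits)))]

def entails(query) -> bool:
    """KB |= query iff the query holds in every model of the KB."""
    return all(query(m) for m in MODELS)

print(entails(lambda m: not m["a"] or m["c"]))   # True: a -> c follows
print(entails(lambda m: m["c"]))                 # False
```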
This paper offers an informal overview and discussion of first-order predicate logic reasoning systems together with a description of applications which are carried out in the Artificial Intelligence Research Group of the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
This paper shows how multiagent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification on different levels of abstraction on the one hand, and expressing real-time system behavior with continuous variables on the other hand. It is not only shown how multi-robot systems can be modeled by a combination of hybrid automata and hierarchical state machines, but also how model checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronization to take time and avoids state explosion to a certain extent.
In this paper we describe a series of projects on location-based and personalised information systems. We start with a basic research project and show how, with the help of two further, more application-oriented projects, we arrived at a product. This product was developed by a consortium of enterprises and is already in use in the city of Koblenz.
In this paper we describe a network for distributing personalized information within a pervasive university. We discuss the system architecture of our Bluetooth-based CampusNews system, both from the administrator and the user viewpoint. We furthermore present first statistical data about the usage of the partial installation at the Koblenz campus together with an outlook on future work.
In this paper we describe a network for distributing personalized information in a metropolitan area. We discuss the system architecture of our Bluetooth-based information system as well as the reasoning process that matches users' needs with potential messages. We furthermore present our findings on parallelizing Bluetooth connection setup and performance.
Semantic desktop environments aim at improving the effectiveness and efficiency of users carrying out daily tasks within their personal information management (PIM) infrastructure. They support the user by transferring and exploiting the explicit semantics of data items across different PIM applications. Whether such an approach does indeed reach its aim of facilitating users' lives and, if so, to what extent, however, remains an open question that we address in this paper with the first summative evaluation of a semantic desktop approach. We approach the research question exploiting our own semantic desktop infrastructure, X-COSIM. As data corpus, we have used over 100 emails and 50 documents extracted from the organizers of a conference-like event at our university. The evaluation has been carried out with 18 subjects. We have developed a test environment to evaluate COSIMail and COSIFile, two semantic PIM applications based on X-COSIM. As a result, we have found a significant improvement for typical PIM tasks compared to a standard desktop environment.
The STOR project aims at the development of a scientific component system employing models and knowledge for object recognition in images. This interim report elaborates on the requirements for such a component system, structures the application area by identifying a large set of basic operations, and shows how a set of appropriate data structures and components can be derived. A small case study exemplifies the approach.
Expert-driven business process management is an established means for improving the efficiency of organizational knowledge work. Implicit procedural knowledge in the organization is made explicit by defining processes. This approach is not applicable to individual knowledge work due to its high complexity and variability. However, without explicitly described processes there is no analysis and efficient communication of best practices of individual knowledge work within the organization. In addition, the activities of individual knowledge work cannot be synchronized with the activities in the organizational knowledge work.

A solution to this problem is the semantic integration of individual knowledge work and organizational knowledge work by means of the pattern-based core ontology strukt. The ontology allows for defining and managing the dynamic tasks of individual knowledge work in a formal way and for synchronizing them with organizational business processes. Using the strukt ontology, we have implemented a prototype application for knowledge workers and have evaluated it in the use case of an architectural firm conducting construction projects.
With the Multimedia Metadata Ontology (M3O), we have developed a sophisticated model for representing, among other things, the annotation, decomposition, and provenance of multimedia metadata. The goal of the M3O is to integrate the existing metadata standards and metadata formats rather than replacing them. To this end, the M3O provides a scaffold needed to represent multimedia metadata. Being an abstract model for multimedia metadata, it is not straightforward how to use and specialize the M3O for concrete application requirements and existing metadata formats and metadata standards. In this paper, we present a step-by-step alignment method describing how to integrate and leverage existing multimedia metadata standards and metadata formats in the M3O in order to use them in a concrete application. We demonstrate our approach by integrating three existing metadata models: the Core Ontology on Multimedia (COMM), which is a formalization of the multimedia metadata standard MPEG-7, the Ontology for Media Resource of the W3C, and the widely known industry standard EXIF for image metadata.
The term "Augmented Reality (AR)" denotes the superposition of additional virtual objects and supplementary information over real images. The joint project Enhanced Reality (ER)1 aims at a generic AR-system. The ER-project is a cooperation of six different research groups of the Department of Computer Science at the University of Koblenz-Landau. According to Ronald Azuma an AR-system combines real and virtual environments, where the real and virtual objects are registered in 3-D, and it provides interactivity in real time [Azu97]. Enhanced Reality extends Augmented Reality by requiring the virtual objects to be seamlessly embedded into the real world as photo-realistic objects according to the exact lighting conditions. Furthermore, additional information supplying value-added services may be displayed and interaction of the user may even be immersive. The short-term goal of the ER-project is the exploration of ER-fundamentals using some specific research scenarios; the long-term goal is the development of a component-based ER-framework for the creation of ER-applications for arbitrary application areas. ER-applications are developed as single-user applications for users who are moving in a real environment and are wearing some kind of visual output device like see-through glasses and some mobile end device. By these devices the user is able to see reality as it is, but he can also see the virtual objects and the additional information about some value-added service. Furthermore he might have additional devices whereby he can interact with the available virtual objects. The development of a generic framework for ER-applications requires the definition of generic components which are customizable and composable to build concrete applications and it requires a homogeneous data model which supports all components equally well. The workgroup "Software Technology"2 is responsible for this subproject. This report gives some preliminary results concerning the derivation of a component-based view of ER. There are several augmented reality frameworks like ARVIKA, AMIRE, DWARF, MORGAN, Studierstube and others which offer some support for the development of AR-applications. All of them ease the use of existing subsystems like AR-Toolkit, OpenGL and others and leverage the generation process for realistic systems by making efficient use of those subsystems. Consequently, they highly rely on them.
The novel mobile application csxPOI (short for: collaborative, semantic, and context-aware points-of-interest) enables its users to collaboratively create, share, and modify semantic points of interest (POI). Semantic POIs describe geographic places with explicit semantic properties of a collaboratively created ontology. As the ontology includes multiple subclassifications and instantiations and as it links to DBpedia, the richness of annotation goes far beyond mere textual annotations such as tags. With the intuitive interface of csxPOI, users can easily create, delete, and modify their POIs and those shared by others. Thereby, the users adapt the structure of the ontology underlying the semantic annotations of the POIs. Data mining techniques are employed to cluster and thus improve the quality of the collaboratively created POIs. The semantic POIs and collaborative POI ontology are published as Linked Open Data.
The University of Koblenz-Landau would like to apply for participation in the RoboCup Mixed Reality League in Suzhou, China 2008. Our team is composed of ten team members and two supervisors. All members are graduate students of Computational Visualistics. Our supervisors are Ph.D. candidates currently researching in the working groups of artificial intelligence and computer graphics.
Avoidance of routing loops
(2009)
We introduce a new routing algorithm which can detect routing loops by evaluating routing updates more thoroughly. Our new algorithm is called Routing with Metric-based Topology Investigation (RMTI), which is based on the simple Routing Information Protocol (RIP) and is compatible with all RIP versions. In case of a link failure, a network can reorganize itself if there are redundant links available. Redundant links are only available in a network system like the Internet if the topology contains loops. Therefore, it is necessary to recognize and to prevent routing loops. A routing loop can be seen as a circular trace of a routing update information which returns to the same router, either directly from the neighbor router or via a loop topology. Routing loops can consume a large amount of network bandwidth and can impact the end-to-end performance of the network. Our RMTI approach is capable of improving the efficiency of distance vector routing.
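The classic count-to-infinity behavior that motivates such loop detection can be reproduced in a few lines (toy three-node chain A-B-C with C as destination; RMTI's metric-based checks themselves are not reproduced here):

```python
INF = 16                      # RIP treats metric 16 as "unreachable"
dist = {"A": 2, "B": 1}       # hop counts to destination C on chain A-B-C

# The B-C link fails; B's only remaining neighbor is A, whose stale
# advertisement still promises a route to C, creating a two-node loop.
for step in range(20):
    dist["B"] = min(dist["A"] + 1, INF)   # B now routes to C via A
    dist["A"] = min(dist["B"] + 1, INF)   # A still routes to C via B
    print(step, dist)                      # metrics creep upward: 3/4, 5/6, ...
    if dist["A"] == INF and dist["B"] == INF:
        break
```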
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically with the number of distinct constants (n pairwise distinct constants require n(n-1)/2 inequality facts), the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing a native handling of inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including: compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. We then present the developed ontology and discuss it with respect to our requirements.
This volume contains those research papers presented at the Second International Conference on Tests and Proofs (TAP 2008) that were not included in the main conference proceedings. TAP was the second conference devoted to the convergence of proofs and tests. It combines ideas from both areas for the advancement of software quality.

To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. On the surface, the two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of their incompatibility and taking instead the best of what each of these software engineering domains has to offer.

The first TAP conference (held at ETH Zurich in February 2007) was an attempt to provide a forum for the cross-fertilization of ideas and approaches from the testing and proving communities. For the 2008 edition we found the Monash University Prato Centre near Florence to be an ideal place providing a stimulating environment.

We wish to sincerely thank all the authors who submitted their work for consideration. And we would like to thank the Program Committee members as well as additional referees for their great effort and professional work in the review and selection process. Their names are listed on the following pages.

In addition to the contributed papers, the program included three excellent keynote talks. We are grateful to Michael Hennell (LDRA Ltd., Cheshire, UK), Orna Kupferman (Hebrew University, Israel), and Elaine Weyuker (AT&T Labs Inc., USA) for accepting the invitation to address the conference. Two very interesting tutorials were part of TAP 2008: "Parameterized Unit Testing with Pex" (J. de Halleux, N. Tillmann) and "Integrating Verification and Testing of Object-Oriented Software" (C. Engel, C. Gladisch, V. Klebanov, and P. Rümmer). We would like to express our thanks to the tutorial presenters for their contribution.

It was a team effort that made the conference so successful. We are grateful to the Conference Chair and the Steering Committee members for their support. And we particularly thank Christoph Gladisch, Beate Körner, and Philipp Rümmer for their hard work and help in making the conference a success. In addition, we gratefully acknowledge the generous support of Microsoft Research Redmond, who financed an invited speaker.
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2019 in Sydney, Australia. We ended up first at RoboCup@Home 2019 in the Open Platform League and won the competition in our league now three times in a row (four times in total), which makes our team the most successful in RoboCup@Home. We demonstrated approaches for learning from demonstration, touch enforcing manipulation and autonomous semantic exploration in the finals. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface, gesture recognition and imitation learning. The packages are available (and new packages will be released) on http://homer.uni-koblenz.de.
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2018 in Montreal, Canada. Furthermore, this paper serves as qualification material for the RoboCup@Home participation in 2018. A special focus is put on novel system components and the open source contributions of our team. This year the team from Koblenz won the biggest annual scientific robot competition in Montreal in the RoboCup@Home Open Platform track for the third time and also won the RoboCup@Home German Open for the second time. As a research highlight, a novel symbolic imitation learning approach was demonstrated during the finals. The TIAGo robotic research platform was used for the first time by the team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface via android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg. Further information can be found on our project page http://homer.uni-koblenz.de.