The way information is presented to users in online community platforms influences the way users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms, and other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this model of user behaviour, we demonstrate its ability to evoke macro phenomena comparable to those observed in real-world data.
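A minimal simulation sketch of this idea, assuming a hypothetical "most active first" presentation policy and a preferential-attachment-style choice rule; both are illustrative stand-ins, not the paper's actual generative model:

```python
import random

def simulate(policy, n_users=1000, n_items=50, shown=5, seed=42):
    """Simulate users who react mainly to the items a policy presents."""
    rng = random.Random(seed)
    activity = [0] * n_items  # contributions accumulated per item
    for _ in range(n_users):
        if policy == "top":   # present the currently most active items
            presented = sorted(range(n_items), key=lambda i: -activity[i])[:shown]
        else:                 # present a uniform random selection
            presented = rng.sample(range(n_items), shown)
        # the user contributes to one presented item, biased toward active ones
        weights = [activity[i] + 1 for i in presented]
        target = rng.choices(presented, weights=weights, k=1)[0]
        activity[target] += 1
    return sorted(activity, reverse=True)

# A "top" policy concentrates activity (rich get richer); "random" spreads it.
print(simulate("top")[:5], simulate("random")[:5])
```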
Cloud Computing is a topic that has gained momentum in recent years. Current studies show that an increasing number of companies are evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form, for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
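The compile-once, query-often idea behind knowledge compilation can be illustrated with a deliberately naive propositional sketch that enumerates all models up front; the paper's linkless normal form is a far more compact target, so this only shows the general technique:

```python
from itertools import product

def compile_kb(kb, atoms):
    """Naive knowledge compilation: enumerate all models of the KB once."""
    models = []
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(clause(world) for clause in kb):
            models.append(world)
    return models  # expensive, but computed only once

def entails(models, query):
    """After compilation, queries are answered by a simple scan of the models."""
    return all(query(w) for w in models)

atoms = ["rain", "wet", "slippery"]
kb = [lambda w: not w["rain"] or w["wet"],       # rain -> wet
      lambda w: not w["wet"] or w["slippery"]]   # wet -> slippery
models = compile_kb(kb, atoms)
print(entails(models, lambda w: not w["rain"] or w["slippery"]))  # True
```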
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
We propose a new approach for the mobile visualization of and interaction with temporal information by integrating support for time with today's most prevalent visualization of spatial information, the map. Our approach allows for an easy and precise selection of the time that is of interest and provides immediate feedback to the users when interacting with it. It has been developed in an evolutionary process, gaining formative feedback from end users.
Designing Core Ontologies
(2011)
One of the key factors that hinders integration of distributed, heterogeneous information systems is the lack of a formal basis for modeling the complex, structured knowledge that is to be exchanged. To alleviate this situation, we present an approach based on core ontologies. Core ontologies are characterized by a high degree of axiomatization and formal precision. This is achieved by building on a foundational ontology. In addition, core ontologies should follow a pattern-oriented design approach. By this, they are modular and extensible. Core ontologies allow for reusing the structured knowledge they define as well as integrating existing domain knowledge. The structured knowledge of the core ontologies is clearly separated from the domain-specific knowledge. Such core ontologies allow both for formally conceptualizing their particular fields and for being flexibly combined to cover the needs of concrete, complex application domains. Over the last years, we have developed three independent core ontologies for events and objects, multimedia annotations, and personal information management. In this paper, we present the simultaneous use and integration of our core ontologies using the example of a complex, distributed socio-technical system of emergency response. We describe our design approach for core ontologies and discuss the lessons learned in designing them. Finally, we elaborate on the beauty aspects of our core ontologies.
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
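A sketch of how such entropy and mutual-information figures can be computed by counting, with a toy stand-in for the extracted per-resource type and property sets:

```python
from collections import Counter
from math import log2

def entropy(counter):
    """Shannon entropy of an empirical distribution given as a Counter."""
    total = sum(counter.values())
    return -sum(c / total * log2(c / total) for c in counter.values())

# Each resource is described by its set of RDF types and its set of properties.
resources = [({"foaf:Person"}, {"foaf:name", "foaf:knows"}),
             ({"foaf:Person"}, {"foaf:name"}),
             ({"foaf:Document"}, {"dc:title"})]

types = Counter(frozenset(t) for t, _ in resources)
props = Counter(frozenset(p) for _, p in resources)
joint = Counter((frozenset(t), frozenset(p)) for t, p in resources)

h_t, h_p, h_tp = entropy(types), entropy(props), entropy(joint)
mutual_information = h_t + h_p - h_tp  # I(T;P) = H(T) + H(P) - H(T,P)
print(h_t, h_p, mutual_information)
```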
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, modeling of event causality and correlation, and allows for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can be easily extended by domain-specific ontologies.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experiences made during the integration process.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or multiple other graphs. By this extension it becomes possible to define a graph including a view onto other graphs and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying by the set of constraints imposed on the underlying SPARQL queries, is investigated. We demonstrate the capabilities of networked graphs by a simple use case.
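The view-definition idea can be sketched with the rdflib package, materializing a derived graph from a SPARQL CONSTRUCT query over a base graph; this simplification ignores the logic-program semantics and the constraint levels discussed in the paper:

```python
from rdflib import Graph

base = Graph()
base.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob ex:knows ex:carol .
""", format="turtle")

# The "view" graph is defined by a SPARQL CONSTRUCT over the base graph,
# here materializing the symmetric closure of ex:knows.
view = Graph()
result = base.query("""
PREFIX ex: <http://example.org/>
CONSTRUCT { ?b ex:knows ?a } WHERE { ?a ex:knows ?b }
""")
for triple in result:
    view.add(triple)

print(len(view))  # 2 derived triples
```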
Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logics-based concepts of the ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., structure, functionalities, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping only produces concept representations but not an API at the desired level of granularity expected by an application developer. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology but also leverages additional information the ontology provides. This can be the inheritance structure of the ontology concepts, the scope of relevance of an ontology concept, or design patterns defined in the ontology.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, making decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there has been no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies, minimizing the number of vocabularies, and staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used in a data source is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding how data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
The following report provides an overview of existing solutions for interaction in augmented reality. Based on three fundamental perspectives, different interaction concepts and implementations are presented from both the technical and the conceptual side. In addition to questions of visualization, different types of user interfaces are introduced. The largest part is devoted to the three typical interaction tasks of selection and manipulation, navigation, and system control, and the interaction techniques associated with them. The contents of this report are restricted to the use of interaction elements in augmented reality environments, in contrast to research on interaction techniques in virtual reality environments (fully immersive or desktop-oriented). Although many interaction techniques from VR have served and still serve as models in AR, new techniques and concepts have evolved specifically in the AR field. Consequently, VR techniques are considered only where they have been applied in AR applications or where their application appears sensible.
Social networking platforms as creativity fostering systems: research model and exploratory study
(2008)
Social networking platforms are enabling users to create their own content, share this content with anyone they invite, and organize connections with existing or new online contacts. Within these electronic environments users voluntarily add comments on virtual boards, distribute their search results, or add information about their expertise areas to their social networking profiles and thereby share it with acquaintances, friends, and increasingly even with colleagues in the corporate world. As a result, it is most likely that the underlying knowledge sharing processes result in many new and creative ideas. The objective of our research therefore is to understand if and how social networking platforms can foster creativity. In addition, we look at how these processes could be embedded within the organizational structures that influence innovative knowledge sharing behavior. The basis for our research is a framework which focuses on the relations between intrinsic motivation, creativity, and social networking platforms. First results of our empirical investigation of a social software platform called "StudiVZ.net" support the validity of our two propositions.
The impact of innovation on securing a company's existence is increasing. Especially in the field of information and communication systems, modified conditions, reduced product life cycles (in some markets, e.g. the software market, down to a few months), and increased competition force many companies to adapt their innovation speed. Long-term profit will only be obtained by those companies that manage to consolidate or further enlarge their competitive position through technological innovation. The increasing relevance of new developments requires a separate treatment of technological innovations and demands a dedicated technology and innovation management. Not every innovation is the result of a single, spontaneous idea; many innovations evolve from the integration or combination of already existing technologies. Especially in markets with insignificant growth rates, this kind of innovation is quite important. The purpose of this article is a case-oriented identification of existing deficiencies within established recommendations for strategic technology and innovation management and a discussion of the role of modularity and technology integration as an alternative way to innovate.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of its appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers in relation to relevant e-commerce variables and test the data in a nomological network for assessing differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation, and commitment to the focal company. The authors discuss the study's results, point to limitations, and provide avenues for further research.
This thesis presents a data model for shared annotations. An existing data model for annotations is extended with the ability to model shared annotations. Subsequently, functions of shared annotations that become possible through the new annotation model are explained.
This thesis presents a multi-user annotation system called myAnnotations, which uses so-called "shared in-situ problem solving" annotations on arbitrary websites to support collaborative learning and working scenarios. Particular attention is paid to the use of "shared in-situ problem solving" annotations in the collaborative editing of a text and in collaborative assessments of learning outcomes.
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language allowing for specifying conditions based on the history of data processing. We base the model and language on XACML.
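A minimal sketch of a history-dependent condition, using a hypothetical predicate-style policy rather than XACML syntax:

```python
from dataclasses import dataclass

@dataclass
class Event:
    actor: str
    action: str

def may_process(history, actor, action, policy):
    """Grant a request only if the data's processing history satisfies
    the policy's history condition (a predicate over past events)."""
    return policy(history, actor, action)

# Hypothetical policy: data may be published only if it was anonymized before.
anonymized_first = lambda hist, actor, action: (
    action != "publish" or any(e.action == "anonymize" for e in hist)
)

history = [Event("clinic", "collect"), Event("clinic", "anonymize")]
print(may_process(history, "partner", "publish", anonymized_first))  # True
```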
We present a non-linear camera pose estimator that handles a combined input of point and line feature correspondences. For three or more correspondences, the estimator works with an arbitrary number and mixture of the two feature types, which allows estimating the pose from a preferably small and flexible set of 2D-3D correspondences. We also give an analysis of different minimization techniques, parametrizations of the pose data, and error measurements between 2D and 3D data. These are tested for point features, lines, and their combination. The results show the most stable and fastest working non-linear parameter set for pose estimation in model-based tracking.
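A sketch of the point-correspondence part as non-linear least squares over a rotation-vector/translation parametrization; the intrinsics, synthetic data, and use of SciPy are illustrative assumptions, and the paper's estimator additionally handles line correspondences:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics

def residuals(pose, pts3d, pts2d):
    """Point reprojection error; pose = (rotation vector, translation)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts3d @ R.T + pose[3:]        # transform points into the camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]   # perspective division
    return (proj - pts2d).ravel()

pts3d = np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5], [1, 1, 6.0]])
true_pose = np.array([0.1, -0.05, 0.02, 0.2, -0.1, 0.3])
# synthetic observations: project the 3D points with the true pose
pts2d = residuals(true_pose, pts3d, np.zeros((4, 2))).reshape(-1, 2)

fit = least_squares(residuals, x0=np.zeros(6), args=(pts3d, pts2d))
print(fit.x.round(3))  # should approximately recover true_pose
```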
The goal of the joint project 3D-RETISEG is to develop an optimized, real-time capable software and hardware package for 2D/3D segmentation and classification based on hierarchical island structures, and for fast volume visualization of very large volume data from compressed data sets. The developments are to be integrated as a market-ready module into the software package VGStudioMax for concrete use cases in research and industry. In medicine, especially in the segmentation of volume data sets generated by different tomography techniques (e.g. CT and MRI), two-dimensional methods often deliver insufficient quality due to their anisotropic treatment of spatial information. Three-dimensional methods developed so far, apart from the poor availability of their implementations, are often dedicated to specific problems and therefore hard to adapt to other use cases. Furthermore, methods that deliver good results usually require long runtimes. This is a major drawback particularly in medical applications, where the quality of the results is ultimately judged by an expert. For the result to be quickly available for user-defined optimization through changes to the method parameters, a method should require only very short computation times. A method with a low runtime can, moreover, be used intra-operatively. The 3D color structure code used in this project promises to enable a better and more generic method.
In enhanced reality environments, the viewer perceives the surroundings together with additional information displayed in semi-transparent data glasses. The calibration problem of the data glasses is the task of aligning the displayed information with the correct real background. Today's data glasses are comparatively bulky and heavy, so the glasses frequently slip slightly. If this slipping is not incorporated into the position of the overlay in the glasses, the overlay no longer matches the real background. This is illustrated by example in Figure 1.1. After an initial calibration of the semi-transparent data glasses to the viewer's eye, a re-calibration should therefore take place automatically in real time whenever the relative position of the data glasses to the eye changes through slight slipping. We are the first to investigate automatic re-calibration under slipping.
We introduce linear expressions for unrestricted dags (directed acyclic graphs) and finite deterministic and nondeterministic automata operating on them. Those dag automata are a conservative extension of the Tu,u-automata of Courcelle on unranked, unordered trees and forests. Several examples of dag languages acceptable and not acceptable by dag automata and some closure properties are given.
Pose-Tracking
(2009)
The image-based automatic determination of the pose, i.e. the position and viewing direction, of a camera in the world is a relevant but still insufficiently solved task in computer vision. In the project underlying this report, higher-level salient features in the images are localized more reliably, and their semantic significance is determined by the computer. Pose estimation is achieved by registering the elementary components of these salient features in the image with features in the 3D model. To this end, new algorithms for the detection, localization, and registration of the salient features are developed, and existing algorithms are further improved. Models such as those resulting from multi-view reconstruction are enriched with further semantics. As an application scenario, pose estimation on the campus grounds using images and a semantic CG model of the campus is chosen. The general methods are tested on this example and their viability is demonstrated in experiments. The modular system yields problem-independent building blocks for the detection of salient features and for 3D reconstruction and pose estimation from features, points, and lines. This demonstrates that a robust detection of salient features is possible and can be used for effective reconstruction and pose estimation even in partly unstructured outdoor areas.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specify classes using an expressive logical language with highly flexible, dynamic, and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models that combines the benefits of UML models and OWL ontologies.
Hybrid systems are the result of merging the two most commonly used models for dynamical systems, namely continuous dynamical systems defined by differential equations and discrete-event systems defined by automata. One can view hybrid systems as constrained systems, where the constraints on the one hand describe the possible process flows, invariants within states, and transitions, and on the other hand characterize certain parts of the state space (e.g. the set of initial states, or the set of unsafe states). Therefore, it is advantageous to use constraint logic programming (CLP) as an approach to model hybrid systems. In this paper, we provide CLP implementations that model hybrid systems comprising several concurrent hybrid automata, whose size is only directly proportional to the size of the given system description. Furthermore, we allow different levels of abstraction by making use of hierarchies as in UML statecharts. In consequence, the CLP model can be used for analyzing and testing the absence or existence of (un)wanted behaviors in hybrid systems. Thus, in summary, we get a procedure for the formal verification of hybrid systems by model checking, employing logic programming with constraints.
Hybrid automata are used as a standard means for the specification and analysis of dynamical systems. Several researchers have used them to formally specify reactive multi-agent systems situated in a physical environment, where the agents react continuously to their environment. The specified systems, in turn, are formally checked with the help of existing hybrid automata verification tools. However, when dealing with multi-agent systems, two problems may arise. The first is a state space problem caused by the composition process, where the agents have to be composed in parallel into one agent capturing all possible behaviors of the multi-agent system prior to the verification phase. The second concerns the expressiveness of verification tools when modeling and verifying certain behaviors. This paper tackles these problems by showing how multi-agent systems specified as hybrid automata can be modeled and verified using constraint logic programming (CLP). In particular, a CLP framework is presented to show how the composition of multi-agent behaviors can be captured dynamically during the verification phase. This can relieve the state space complexity that may occur as a result of the composition process. Additionally, the expressiveness of the CLP model flexibly allows not only modeling multi-agent systems, but also checking various properties by means of reachability analysis. Experiments show the feasibility of our approach.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for its participation in the RoboCup@Home 2017 in Nagoya, Japan. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface via Android, and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for its participation in the RoboCup@Home 2016 in Leipzig, Germany. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface via Android, and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
So-called risk management information systems (RMIS) support companies in carrying out their risk management across the enterprise. In this contribution, we use the scoring method to analyze the content requirements for an RMIS and show how this leads to a well-founded decision on purchasing an RMIS. To this end, we first discuss which requirements are placed on an RMIS, second, we classify the RMIS available on the market with respect to their application goals, and third, we create a catalogue of requirements and a three-stage procedure for handling the selection process. This is explained and evaluated in a successfully conducted case study, carried out as part of a master's thesis [11], in a specific corporation offering telecommunications and Internet access (United Internet AG, Montabaur).
IT risk management is an area that has been intensely discussed and has changed considerably over the last two years. The triggers can be found in the terrorist attacks and their consequences for companies as well as in natural disasters, bad investments, and the opaque dealings of some companies. Guidelines and laws have therefore been drafted that oblige executive boards to protect their own companies against concrete risks. The level of detail of these recommendations varies by area. But are companies willing to take on the high investments associated with such extensive projects in order to follow these guidelines? Where do they see the advantages of an IT risk management strategy, and where the challenges? Which developments or changes do they wish for? These questions were examined in the present study by surveying experts in the field of IT risk management using the Delphi method, in order to be able to influence the future of IT risk management. This report summarizes the most important results.
Over the past years, the typical set of critical success factors for companies has changed, and the factor knowledge has gained growing importance. In this respect, knowledge can today be regarded as a fourth production factor, superseding labor, capital, and land as the most important factors of a company (cf. Keller & Yeaple 2013, p. 2; Kogut & Zander 1993, p. 631). The reason is that active measures to support knowledge transfer in companies lead to higher profits and market shares as well as better survivability compared with competitors without such measures (cf. Argote 1999, p. 28; Szulanski 1996, p. 27; Osterloh & Frey 2000, p. 538). The main advantage of knowledge-based developments lies in their sustainability, since their immaterial structure (cf. Inkpen & Dinur 1998, p. 456; Spender 1996a, pp. 65 f.; Spender 1996b, p. 49; Nelson & Winter 1982, pp. 76 ff.) makes imitation by competitors difficult (cf. Wernerfelt 1984, p. 173; Barney 1991, p. 102).
CAMPUS NEWS - artificial intelligence methods combined for an intelligent information network
(2008)
In this paper we describe a network for distributing personalised information using artificial intelligence methods. Reception of this information should be possible with everyday mobile equipment. Intelligent filtering and spam protection aim at integrating this technology into our environment. Information on the system architecture and usage of the installation is also presented.
This document proposes a concept for a personal key infrastructure in iCity. Certificates issued via a trust center (TC) guarantee a secure key exchange with verifiable authentication of the communication partner, protection against eavesdropping, and integrity and non-repudiation of the messages. The jointly trusted TC does not have to be reachable during communication; it only receives public information. The concept introduces several security levels that weight secure identification and anonymity differently.
Conventional security infrastructures in the Internet cannot be directly adopted to ambient systems, especially if based on short-range communication channels: personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a trust center, certified data has to be verified locally. Security techniques have to be adjusted to this special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiability, and user or device authentication. It supports three certificate levels with a different balance between authenticity and anonymity. This PKI is currently under implementation as part of the iCity project.
An empirical study to evaluate the location of advertisement panels by using a mobile marketing tool
(2009)
The efficiency of marketing campaigns is a precondition for business success. This paper discusses a technique to transfer advertisement content via Bluetooth technology and to collect market research information at the same time. Conventional advertisement media were enhanced by devices that automatically measure the number, distance, frequency, and exposure time of passersby, making information available to evaluate both the wireless media and the location in general. This paper presents a study analyzing these data. A cryptographic one-way function protects privacy during data acquisition.
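The privacy step can be sketched as a keyed one-way hash over device identifiers, so repeat exposure remains measurable without retaining real addresses; the salt handling shown is an assumption, not the deployed scheme:

```python
import hashlib
import hmac

CAMPAIGN_SALT = b"rotate-me-per-campaign"  # hypothetical secret, rotated regularly

def pseudonymize(mac_address: str) -> str:
    """Keyed one-way hash of a device address; irreversible without the salt."""
    return hmac.new(CAMPAIGN_SALT, mac_address.encode(), hashlib.sha256).hexdigest()

seen = {}  # pseudonym -> number of sightings
for mac in ["00:11:22:33:44:55", "66:77:88:99:aa:bb", "00:11:22:33:44:55"]:
    pid = pseudonymize(mac)
    seen[pid] = seen.get(pid, 0) + 1

print(sorted(seen.values()))  # [1, 2]: repeat exposure is measurable, identity is not
```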
The research project KMU 2.0 aims at shaping a modern working world in which the employees of a network of small and medium-sized enterprises (SMEs) contribute their competencies, creativity, and ideas to a cross-company exchange in order to jointly generate innovative solutions for problems from everyday working life. The project explores new ways and scenarios for managing cross-company collaboration in SME networks, with regard to the emergence and implementation of innovations for problems of the modern working world. The focus is on the use of Web 2.0 technologies, which are examined as an instrument with respect to their potential to support the cooperative generation of innovative solutions. The central question is whether the use of Web 2.0 in a network of SMEs fosters the exchange of innovative ideas by connecting the employees and their creative potential. The practice partner is WirtschaftsForum Neuwied e.V., an organization located in northern Rhineland-Palatinate that comprises about 115 members with 10,000 employees and is characterized by a heterogeneous structure of companies of widely differing sizes, industries, and service offerings.
The paper provides a focused introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter; common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
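The firing rule that constitutes the entire operational machinery can be stated in a few lines; this is the standard place/transition rule, without the probability weights that PPNs add:

```python
def enabled(marking, transition):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Firing consumes tokens from input places and produces on output places."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

t = {"in": {"symptom": 1}, "out": {"diagnosis": 1}}
m0 = {"symptom": 1}
print(enabled(m0, t), fire(m0, t))  # True {'symptom': 0, 'diagnosis': 1}
```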
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles which are resolved by the conditioning technique [3].
Probability propagation nets
(2007)
A class of high level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent the probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). Strictly speaking, a marked transition cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit to simulation is the possibility to model (and observe) causes and effects of non-events, e.g. when something has broken down.
The paper is devoted to the development of websites for Russian municipal polyclinics and provides a selection of elements that should be posted on such a website. These elements are necessary to provide citizens with correct and ergonomic e-services. The insufficient development of the infrastructure of public and municipal administration institutions (particularly healthcare institutions) in Russia made it necessary to analyze web resources used in different countries at different levels of providing medical services. The information resources of medical treatment facilities of the United Kingdom, the United States of America, and the Federal Republic of Germany were researched separately for the three existing economic models of healthcare. A set of criteria for the assessment of medical web resources was developed.
Measuring the productivity of services is still a poorly explored field of research despite its growing relevance. The reasons lie above all in the particular characteristics of services, namely immateriality and integrativity. A typical service of the B2B software industry is the adaptation of systems to customers' needs, known as customizing. However, exploiting the customizing potential of standard software and products and involving customers more closely in innovation processes are hampered by the fact that the productivity of this service appears to be insufficiently measurable and hence assessable.
This contribution describes the development of a model for measuring the productivity of services on the basis of several preliminary studies within the CustomB2B project at the University of Koblenz-Landau.
This paper describes results of the simulation of social objects, namely the dependence of schoolchildren's professional abilities on their personal characteristics. The simulation tool is the artificial neural network (ANN) technology. Results of a comparison of the time spent on training the ANN and on calculating the weight coefficients with serial and parallel algorithms, respectively, are presented.
This paper describes a parallel algorithm for selecting the activation functions of an artificial neural network. To check the efficiency of this algorithm, a count of multiplicative and additive operations is used.
An estimation of the number of multiplication and addition operations for training artificial neural networks by means of sequential and parallel algorithms on a computer cluster is carried out. An evaluation of the efficiency of these algorithms is developed. The multilayer perceptron, the Volterra network, and the cascade-correlation network are used as artificial neural network structures. Different methods of non-linear programming, such as gradient and non-gradient methods, are used for the calculation of the weight coefficients.
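For a multilayer perceptron, such operation counts follow directly from the layer sizes; this sketch counts one forward pass and omits activation-function costs, which a fuller estimate would include:

```python
def forward_pass_ops(layer_sizes):
    """Multiplications and additions for one forward pass of an MLP.

    A layer with n inputs and m outputs costs n*m multiplications and
    (n-1)*m additions for the weighted sums, plus m additions for biases.
    """
    mults = adds = 0
    for n, m in zip(layer_sizes, layer_sizes[1:]):
        mults += n * m
        adds += (n - 1) * m + m
    return mults, adds

print(forward_pass_ops([10, 20, 5]))  # (300, 300)
```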
In order to enhance a company's appeal for potential employees and improve the satisfaction of already salaried workers, it is necessary to offer a variety of work-life balance measures. But as their implementation incurs time and financial costs, a prioritization of measures is needed. To arrive at a recommendation for companies, this study is led by the questions whether there are work-life balance measures that have more impact on employee satisfaction than others, how big the relative impact of work-life balance measures on job satisfaction is in comparison to other work and private life variables, whether there is a relation between the effectiveness of measures and their use, and whether there is a difference between the measures that are most important from the employees' perspective and the companies' offers.
These questions are formulated in eight research hypotheses which are examined in a quantitative research design with online survey data from 289 employees of fifteen different German companies. The formation of a hierarchy of the effectiveness of measures towards job satisfaction as well as the investigation of the relative impact in comparison to other variables is performed using a multiple regression analysis, whilst the differences between employees’ expectations and the availability of offers are examined with t-tests.
Support in childcare, support in voluntary activities, and teambuilding events have a significantly higher impact on job satisfaction than other work-life balance measures, and their potential use is higher than their actual use, which leads to the conclusion that there is still potential for companies to improve their employees' satisfaction by implementing these measures. In addition, flexible work hours, flexible work locations, and free time and overtime accounts are the most important measures from the employees' point of view and are already widely offered by the surveyed companies. In general, the overall use of the available measures and the quantity of offered measures are more important with regard to job satisfaction than the specific kind of measure. In addition, work-life balance measures are more important for the job satisfaction of younger people.
The objective of this contribution is to conceptually analyze the potentials of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting the respective action fields of entrepreneurial learning. In addition, we show that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Despite widespread plans of big companies like Amazon and Google to develop unmanned delivery drones, scholarly research in this field is scarce, especially in the information systems field. From technical and legal perspectives, drone delivery in last-mile scenarios is in a quite mature state. However, estimates of user acceptance vary between high skepticism and exaggerated optimism. This research follows a mixed method approach consisting of both qualitative and quantitative research to identify and test determinants of consumer delivery drone service adoption. The qualitative part rests on ten interviews among average consumers who use delivery services on a regular basis. Insights gained from the qualitative part were used to develop an online survey and to assess the influence of associated risks on adoption intentions. The quantitative results show that especially financial and physical risks impede drone delivery service adoption. Delivery companies that are currently thinking about providing a delivery drone service may find these results useful when evaluating usage behaviors in the future market for delivery drones.
We present the user-centered, iterative design of Mobile Facets, a mobile application for the faceted search and exploration of a large, multi-dimensional data set of social media on a touchscreen mobile phone. Mobile Facets provides retrieval of resources such as places, persons, organizations, and events from an integration of different open social media sources and professional content sources, namely Wikipedia, Eventful, Upcoming, geo-located Flickr photos, and GeoNames. The data is queried live from the data sources. Thus, in contrast to other approaches, we do not know in advance the number and type of facets and data items the Mobile Facets application receives in a specific contextual situation. While developing Mobile Facets, we have continuously evaluated it with a small group of five users. We have conducted a task-based, formative evaluation of the final prototype with 12 subjects to show the applicability and usability of our approach for faceted search and exploration on a touchscreen mobile phone.
The target of evaluation (TOE) considered for this protection profile (PP) is a router that offers protection against online surveillance. The TOE type is a commercially available Internet router as used in private households and small businesses. The TOE comprises the entire router, so both the hardware and the software components are considered. It is irrelevant whether the model has an integrated broadband modem or whether the modem is connected separately.
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
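The pipeline can be sketched as canonicalize, hash, sign, and re-attach the signature as graph data so another party can sign on top; the sorted-serialization canonicalization and HMAC below are toy stand-ins for the framework's configurable building blocks (real RDF canonicalization must also handle blank nodes):

```python
import hashlib
import hmac

def canonical(graph):
    """Toy canonicalization: sorted triple serialization (no blank nodes)."""
    return "\n".join(sorted(" ".join(t) for t in graph)).encode()

def sign(graph, key, signer):
    digest = hmac.new(key, canonical(graph), hashlib.sha256).hexdigest()
    # The signature is itself graph data, which enables iterative signing.
    return graph | {(f"_:sig_{signer}", "ex:signatureValue", digest)}

g = {("ex:s", "ex:p", "ex:o")}
g1 = sign(g, b"key-alice", "alice")   # Alice signs the graph
g2 = sign(g1, b"key-bob", "bob")      # Bob signs the graph plus Alice's signature
print(len(g), len(g1), len(g2))       # 1 2 3
```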
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment is introduced into the model. This is especially relevant for the risk to the protection of personal data and privacy. This paper applies the method of IT risk analysis in order to select those dimensions of the Six/Gimmler media competence model that are appropriate for describing privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will lead to a measurement of the existing privacy awareness in secondary schools, which, in turn, can serve as a didactically well-reasoned basis for the design of Informatics modules in secondary schools. This paper provides the privacy-extended competence model, while empirical measurement and module design are planned for further research activities.
Informatik hautnah erleben
(2010)
Many people, adults and pupils alike, hold the mistaken belief that computer science is the science of "computer studies". The famous sentence "Computer science is no more about computers than astronomy is about telescopes", attributed to the computer scientist Edsger W. Dijkstra (1930 - 2002), expressed early on the idea that computer science uses the computer merely as a tool and medium, just as mathematics uses the pocket calculator. This misconception, which unfortunately is frequently conveyed in schools as well, shows that clarification is needed here.
The aim of this paper is to identify and understand the risks and issues companies are experiencing from the business use of social media and to develop a framework for describing and categorising those social media risks. The goal is to contribute to the evolving theorisation of social media risk and to provide a foundation for the further development of social media risk management strategies and processes. The study findings identify thirty risk types organised into five categories (technical, human, content, compliance and reputational). A risk-chain is used to illustrate the complex interrelated, multi-stakeholder nature of these risks and directions for future work are identified.
In this article we analyze the privacy aspects of a mobile sensor application used for recording urban travel patterns as part of a travel-survey service. This service has been developed and field-tested within the Live+Gov EU Project. The privacy analysis follows an established structured approach. Eight privacy recommendations are derived and have already led to corresponding enhancements of the travel-survey service.
Ontologies play an important role in knowledge representation for sharing information and collaboratively developing knowledge bases. They are changed, adapted, and reused in different applications and domains, resulting in multiple versions of an ontology. The comparison of different versions and the analysis of changes at a higher level of abstraction may be insightful for understanding the changes that were applied to an ontology. While there is existing work on detecting (syntactical) differences and changes in ontologies, there is still a need for analyzing ontology changes at a higher level of abstraction, such as ontology evolution or refactoring patterns. In our approach, we start from a classification of model refactoring patterns found in software engineering and identify such refactoring patterns in OWL ontologies using DL reasoning.
This document defines the standard for the transformation of grUML schemas (GraphUML, [BHR+09]) to XSD (XML Schema Definition); it originated in work package 5.2, "Implement prototypical SOAMIG parsers and unparsers", of the SOAMIG project. The goal is the exchange of TGraphs (typed, attributed, ordered, directed graphs [ERW08]) via XML documents. XSD is used to specify the exchange format, which allows XML instances to be validated at the syntactic level. The starting point is a given schema in grUML notation that is to be transformed to XSD. The generated XSD provides a means of describing graph instances in XML. The XML documents described by it are flat, i.e. all elements are directly subordinate to the root element.
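A minimal sketch of what such a flat instance document could look like, generated with Python's standard library (the tag names and attribute layout are invented for illustration; the actual SOAMIG XSD defines its own):

    import xml.etree.ElementTree as ET

    def tgraph_to_flat_xml(vertices, edges):
        # All vertex and edge elements become direct children of the root.
        root = ET.Element("tgraph")
        for vid, vtype in vertices:
            ET.SubElement(root, vtype, attrib={"id": str(vid)})
        for etype, frm, to in edges:
            ET.SubElement(root, etype, attrib={"from": str(frm), "to": str(to)})
        return ET.tostring(root, encoding="unicode")

    print(tgraph_to_flat_xml(
        vertices=[(1, "ClassVertex"), (2, "ClassVertex")],
        edges=[("Association", 1, 2)],
    ))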
E-Mail-Forensik - IP-Adressen und ihre Zuordnung zu Internet-Teilnehmern und ihren Standorten
(2010)
An essential element of the worldwide Internet is the address space of IP addresses, which are assigned to the devices participating in the Internet ("IP hosts"). IP addresses (of version 4) consist of four numbers between 0 and 255, representing four times eight bits, with which a total of over four billion addresses can be distinguished. The central organization IANA allocates address blocks to five regional address registries, which distribute them further to local registries, telecom providers, and Internet service providers. This address allocation is relatively stable, and the assignment is publicly accessible via so-called whois queries against the databases of the regional registries. The Internet service providers (ISPs) assign IP addresses to their users. This assignment is made partly statically, with long-term binding, and partly dynamically, only for the duration of a data connection. Dynamic address management allows an ISP to serve more users than its address space has distinct IP addresses, because the addresses of devices leaving the Internet need not be held free, as with static assignment, but can be given to devices newly connecting to the ISP. In internal tables, ISPs manage the assignment of IP addresses to the concrete subscriber lines of their users; in addition, they log which line had which IP address at what time. These data are not publicly accessible but must be requested when needed, subject to legal restrictions (data protection).
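The address-space arithmetic above can be checked directly with Python's standard ipaddress module:

    import ipaddress

    print(2 ** 32)                      # 4294967296 addresses in total
    addr = ipaddress.IPv4Address("192.0.2.17")
    print(list(addr.packed))            # the four octets: [192, 0, 2, 17]
    print(int(addr))                    # the same address as one 32-bit number

    # Registries hand out whole blocks; a /16 block, for example,
    # contains 2**16 addresses:
    block = ipaddress.ip_network("192.0.0.0/16")
    print(block.num_addresses)          # 65536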
Healthcare is concerned not only with the confidentiality of patient data but also with its integrity, on which the correct treatment and care of the patient depend. One measure for protecting a hospital information system (HIS) against unauthorized attackers is a flexibly organized access control scheme that guards patient-related data. This article describes the concept of role-based access control (RBAC) and its possible uses in healthcare.
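A minimal sketch of the RBAC idea, with invented roles and permissions from a hospital setting (real HIS access control is far more fine-grained): users acquire permissions only through roles, never directly.

    ROLE_PERMISSIONS = {
        "physician": {"read_record", "write_diagnosis", "prescribe"},
        "nurse":     {"read_record", "write_vitals"},
        "admission": {"read_master_data", "write_master_data"},
    }

    USER_ROLES = {
        "dr_meier": {"physician"},
        "nurse_kim": {"nurse"},
    }

    def is_permitted(user: str, permission: str) -> bool:
        """Check whether any of the user's roles grants the permission."""
        return any(
            permission in ROLE_PERMISSIONS.get(role, set())
            for role in USER_ROLES.get(user, set())
        )

    assert is_permitted("dr_meier", "prescribe")
    assert not is_permitted("nurse_kim", "prescribe")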
SOA-Security
(2007)
This paper is part of the ASG project (Adaptive Services Grid) and addresses some IT security issues of service-oriented architectures. It defines a service-oriented security concept, explores the SOA security challenge, describes the existing WS-Security standard, and takes a first step toward a survey of best-practice examples. In particular, the ASG middleware platform technology (JBossWS) is analyzed with respect to its ability to handle security functions.
This paper describes the development of security requirements for non-political Internet voting. The practical background is our experience with Internet voting within the Gesellschaft für Informatik (GI, German Informatics Society) in 2004 and 2005. The theoretical background is the international state of the art of requirements for electronic voting, especially in the US and in Europe. A focus of this paper is the user-community-driven standardization of security requirements by means of a Protection Profile of the international Common Criteria standard.
The following eight examples show how electronic applications are secured with cryptographic techniques. For each example we explain the goal of the encryption, describe the technical background, discuss in detail how the application and its encryption operate, and weigh the strengths and weaknesses of the approach. The first example, on encryption protocols, is not yet very concrete; rather, it lays the foundation for the examples that follow, since every application employs one of the two types of encryption protocol explained in the first example. Then come the examples of the secure World Wide Web with SSL, encryption of the air interface in mobile telephony, secure identification of an ec-card holder at a cash machine, protection of WLANs against eavesdroppers and intruders, secure identification of electronic keys (including radio keys for car doors), the contactless flight ticket for printing a boarding pass, and finally encryption in pay TV. The description of the cryptographic applications in this chapter is maintained in a working paper of the Institut für Wirtschafts- und Verwaltungsinformatik of the University of Koblenz-Landau, where it is kept up to date for download [Grimm, Hundacker, Meletiadou 2006]. http://www.uni-koblenz.de/FB4/Institutes/IWVI/AGGrimm/Downloads
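The abstract does not spell out the two protocol types; assuming the classic split into symmetric and asymmetric encryption, the following Python sketch (using the third-party cryptography package) illustrates both:

    # 1) Symmetric encryption: one shared secret key for both directions.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    token = Fernet(key).encrypt(b"PIN transfer")
    assert Fernet(key).decrypt(token) == b"PIN transfer"

    # 2) Asymmetric encryption: the public key encrypts, only the
    #    matching private key decrypts.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(b"session key", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"session key"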
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results; instead, we invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers, we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
IT-Sicherheitsmodelle
(2008)
This article explains what a descriptive model is, how IT security models are structured in principle, and what purpose they serve. As illustration, four different IT security models are presented that historically arose at different times. They fit four different typical application environments and characterize the corresponding security requirements. Presented are the Bell-LaPadula model for protecting confidentiality in hierarchical institutions, the Clark-Wilson model for protecting the integrity of business applications, the Chinese-Wall model for protecting companies from competitors, and the equilibrium model (Gleichgewichtsmodell) for protecting the bindingness of transactions in the open Internet.
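As a hedged illustration of the first of these models, the two Bell-LaPadula rules can be stated in a few lines of Python (levels and labels are invented; the full model also includes categories and a discretionary access matrix):

    LEVELS = {"public": 0, "confidential": 1, "secret": 2}

    def may_read(subject_level: str, object_level: str) -> bool:
        """Simple security property: no read up."""
        return LEVELS[subject_level] >= LEVELS[object_level]

    def may_write(subject_level: str, object_level: str) -> bool:
        """*-property: no write down."""
        return LEVELS[subject_level] <= LEVELS[object_level]

    assert may_read("secret", "public") and not may_read("public", "secret")
    assert may_write("public", "secret") and not may_write("secret", "public")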
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect in the guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features for an ontology engineer modeling a Linked Data dataset, discuss existing tools and services with respect to these support features, and propose LOVER, a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporates all types of support features, and we illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. The recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g., a best string match or schema information from other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it with a real-life example from the social sciences domain.
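A toy sketch of one such orthogonal information source, a best string match over a hand-picked term list (the terms and the ranking are purely illustrative, not LOVER's actual mechanism):

    from difflib import SequenceMatcher

    VOCAB_TERMS = {
        "foaf:Person": "person",
        "foaf:Organization": "organization",
        "dcterms:creator": "creator",
        "schema:author": "author",
    }

    def recommend(label: str, k: int = 3):
        """Rank known vocabulary terms by string similarity to the label."""
        scored = [
            (SequenceMatcher(None, label.lower(), name).ratio(), term)
            for term, name in VOCAB_TERMS.items()
        ]
        return sorted(scored, reverse=True)[:k]

    print(recommend("Author"))  # schema:author should rank first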
Process Commodities : Entwicklung eines Reifegradmodells als Basis für Outsourcingentscheidungen
(2009)
This working report addresses the requirements that process management must meet to serve as a basis for outsourcing decisions. On this question, Thomas Davenport published an article in 2005 entitled "The Coming Commoditization of Processes", which points to the lack of a shared understanding of the content and description of processes and process steps, performance indicators, and process management practices. The present report attempts to close this gap in part by presenting a maturity model for process management. With the help of the model, companies can assess to what extent they have implemented existing process management methods and have thereby laid the groundwork for process outsourcing.
This paper presents a method for the evolution of SHI ABoxes that is based on a compilation technique for the knowledge base. To this end, the ABox is regarded as an interpretation of the TBox that is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, insertion of assertions as well as repair of inconsistent ABoxes is addressed. For computing the actions necessary for deletion, insertion, and repair, the E-KRHyper theorem prover is used.
Knowledge compilation is a common technique for propositional logic knowledge bases. The idea is to transform a given knowledge base into a special normal form ([MR03], [DH05]) for which queries can be answered efficiently. This precompilation step is very expensive, but it only has to be performed once. We propose to apply this technique to knowledge bases defined in Description Logics. For this, we introduce a normal form, called linkless concept descriptions, for ALC concepts. Furthermore, we present an algorithm, based on path dissolution, which can be used to transform a given concept description into an equivalent linkless concept description. Finally, we discuss a linear satisfiability test as well as a subsumption test for linkless concept descriptions.
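As a propositional-level illustration of the term "linkless" (a hedged reading; the ALC case additionally treats quantifiers, which this summary does not detail): a link is a pair of complementary literals that are conjunctively connected, and dissolution removes all such links while preserving equivalence.

    % Illustrative only: the left-hand concept contains the link {A, not A};
    % the equivalent right-hand concept is linkless, so no conjunctive path
    % through it contains complementary literals, and satisfiability can be
    % checked path by path.
    \[
      (A \sqcup B) \sqcap (\lnot A \sqcup C)
      \;\equiv\;
      (A \sqcap C) \sqcup (\lnot A \sqcap B)
    \]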
This paper offers an informal overview and discussion of first-order predicate logic reasoning systems, together with a description of applications carried out in the Artificial Intelligence Research Group of the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
This paper shows how multiagent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification on different levels of abstraction on the one hand, and expressing real-time system behavior with continuous variables on the other. We show not only how multi-robot systems can be modeled by a combination of hybrid automata and hierarchical state machines, but also how model checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronization to take time and avoids state explosion to a certain extent.
In this paper we describe a series of projects on location-based and personalised information systems. We start with a basic research project and show how, via two further, more application-oriented projects, we arrived at a product. It is being developed by a consortium of enterprises and is already in use in the city of Koblenz.
In this paper we describe a network for distributing personalized information within a pervasive university. We discuss the system architecture of our Bluetooth-based CampusNews system from both the administrator and the user viewpoint. We furthermore present first statistical data about the usage of the partial installation at the Koblenz campus, together with an outlook on future work.
In this paper we describe a network for distributing personalized information in a metropolitan area. We discuss the system architecture of our Bluetooth-based information system as well as the reasoning process that matches users' needs with potential messages. We furthermore present our findings on parallelizing Bluetooth connection setup and on performance.
This paper proposes model-driven techniques to extend IBM's SOMA method towards migrating legacy systems into Service-Oriented Architectures (SOA). The proposal explores how graph-based querying and transformation techniques enable the integration of legacy assets into a new SOA. The presented approach is applied to the identification and migration of services in an open-source Java software system.
ERP systems constitute an important area of expertise of the research group Business Application Systems (FG BAS) at the Institut für Wirtschafts- und Verwaltungsinformatik (IWVI) of the University of Koblenz-Landau. In close cooperation with partners from industry and academia, the group conveys both fundamental research and practice-oriented knowledge to students and companies. The present working report, "Künftige Anforderungen an ERP-Systeme: Deutsche Anbieter im Fokus" (Future requirements for ERP systems: German vendors in focus), examines current ERP systems on the German-speaking market on four levels: architecture, technology employed, practical fields of application, and the general market situation. Special thanks go to Peter Herzog (at that time marketing director of Bison AG), who had the idea for such a study and entrusted the FG BAS with carrying it out. The study was conducted from summer 2007 to spring 2008 by Norbert Frick as part of his Diplom thesis in computer science. The analysis of the results was supervised academically by Prof. Dr. Petra Schubert. The work is based on literature research, written surveys, and telephone interviews with ERP vendors.
Semantic desktop environments aim at improving the effectiveness and efficiency of users carrying out daily tasks within their personal information management (PIM) infrastructure. They support the user by transferring and exploiting the explicit semantics of data items across different PIM applications. Whether such an approach does indeed reach its aim of facilitating users' lives and, if so, to what extent, remains an open question that we address in this paper with the first summative evaluation of a semantic desktop approach. We approach the research question using our own semantic desktop infrastructure, X-COSIM. As data corpus, we have used over 100 emails and 50 documents extracted from the organizers of a conference-like event at our university. The evaluation has been carried out with 18 subjects. We have developed a test environment to evaluate COSIMail and COSIFile, two semantic PIM applications based on X-COSIM. As a result, we have found a significant improvement for typical PIM tasks compared to a standard desktop environment.
The STOR project aims at the development of a scientific component system employing models and knowledge for object recognition in images. This interim report elaborates on the requirements for such a component system, structures the application area by identifying a large set of basic operations, and shows how a set of appropriate data structures and components can be derived. A small case study exemplifies the approach.
Expert-driven business process management is an established means for improving the efficiency of organizational knowledge work. Implicit procedural knowledge in the organization is made explicit by defining processes. This approach is not applicable to individual knowledge work due to its high complexity and variability. However, without explicitly described processes there is no analysis and efficient communication of best practices of individual knowledge work within the organization. In addition, the activities of individual knowledge work cannot be synchronized with the activities of organizational knowledge work. The solution to this problem is the semantic integration of individual and organizational knowledge work by means of the pattern-based core ontology strukt. The ontology allows the dynamic tasks of individual knowledge work to be defined and managed in a formal way and to be synchronized with organizational business processes. Using the strukt ontology, we have implemented a prototype application for knowledge workers and have evaluated it in the use case of an architectural firm conducting construction projects.
With the Multimedia Metadata Ontology (M3O), we have developed a sophisticated model for representing, among other things, the annotation, decomposition, and provenance of multimedia metadata. The goal of the M3O is to integrate the existing metadata standards and metadata formats rather than to replace them. To this end, the M3O provides the scaffold needed to represent multimedia metadata. Being an abstract model for multimedia metadata, it is not straightforward how to use and specialize the M3O for concrete application requirements and existing metadata formats and standards. In this paper, we present a step-by-step alignment method describing how to integrate and leverage existing multimedia metadata standards and formats in the M3O in order to use them in a concrete application. We demonstrate our approach by integrating three existing metadata models: the Core Ontology on Multimedia (COMM), which is a formalization of the multimedia metadata standard MPEG-7; the Ontology for Media Resource of the W3C; and the widely known industry standard EXIF for image metadata.
Digital happiness
(2016)
Emotions in the digital world are gaining more and more importance through the accelerating digitalization of recent years. In everyday life as well as in professional and educational environments, the use of media is almost inescapable. This work takes a closer look at the still relatively unexplored topic of “Digital Happiness”. In the literature, emotions in the digital world are studied increasingly, but concerning the topic of “Digital Happiness” there are few insights so far.
This research investigates the differences between communicating emotions through digital media and face-to-face communication, as well as their impacts. The perception and expression of emotions in the real and the digital world are considered. Furthermore, a first definition of “Digital Happiness” and a model with the associated determinants that influence “Digital Happiness” are developed.
To achieve these goals, a literature analysis following the method of Webster & Watson (2002) is performed. Fifteen articles are selected and analyzed in more detail to illustrate the current state of research, and they are organized into a concept matrix. In addition, special phenomena, such as Quantified Self, are considered closely in order to determine further factors. Based on the literature analysis, a qualitative study following the method of Mayring (2014) is performed. An interview guideline is developed that serves as the basis for investigating the determinants established in the literature analysis. These determinants are confirmed or refuted through 30 interviews with users of information and communication technologies. With the help of the interviews, a large part of the determinants from the literature can be confirmed. Moreover, some new determinants are identified. These results are incorporated into the final model, which provides a basis for further research in the field of “Digital Happiness”. Finally, different implications for research and practice are identified, which reveal the need for further research.
Since its founding in 1990, the Fachbereich 4: Informatik (Department of Computer Science) of the University of Koblenz-Landau has offered academic degree programs (Diplom) in computer science with various specializations, among them information systems for business and public administration (Wirtschafts- und Verwaltungsinformatik); further programs were added over time: Computational Visualistics (Diplom) from 1997 and Information Management (Bachelor and Master) from 2000. As of winter semester 2006/2007, the department converted its course offerings in all non-teacher-training programs entirely to consecutive Bachelor's and Master's programs, accredited through the accreditation agency ASIIN. Admission of first-year students to the previous Diplom programs in "Informatik" and "Computervisualistik" was discontinued at the same time; these programs were replaced by an integrated offering with two consecutive Bachelor-Master tracks in "Informatik" and "Computervisualistik". The existing Bachelor-Master program in "Informationsmanagement" was reformed, and an additional Master's program in "Wirtschaftsinformatik" was established. In designing these seven programs, care was taken to achieve a high degree of integration and cross-program reuse of modules. The teacher-training programs presented in this paper were already taken into account in that earlier planning. Since Prof. Dr. Stefan Müller, head of the state working group that developed the curricular standards for the subject of computer science, was closely involved in the planning, the modules of the Bachelor's and Master's programs could be designed from the outset with a view to their later use in teacher training as well.
The term "Augmented Reality (AR)" denotes the superposition of additional virtual objects and supplementary information over real images. The joint project Enhanced Reality (ER)1 aims at a generic AR-system. The ER-project is a cooperation of six different research groups of the Department of Computer Science at the University of Koblenz-Landau. According to Ronald Azuma an AR-system combines real and virtual environments, where the real and virtual objects are registered in 3-D, and it provides interactivity in real time [Azu97]. Enhanced Reality extends Augmented Reality by requiring the virtual objects to be seamlessly embedded into the real world as photo-realistic objects according to the exact lighting conditions. Furthermore, additional information supplying value-added services may be displayed and interaction of the user may even be immersive. The short-term goal of the ER-project is the exploration of ER-fundamentals using some specific research scenarios; the long-term goal is the development of a component-based ER-framework for the creation of ER-applications for arbitrary application areas. ER-applications are developed as single-user applications for users who are moving in a real environment and are wearing some kind of visual output device like see-through glasses and some mobile end device. By these devices the user is able to see reality as it is, but he can also see the virtual objects and the additional information about some value-added service. Furthermore he might have additional devices whereby he can interact with the available virtual objects. The development of a generic framework for ER-applications requires the definition of generic components which are customizable and composable to build concrete applications and it requires a homogeneous data model which supports all components equally well. The workgroup "Software Technology"2 is responsible for this subproject. This report gives some preliminary results concerning the derivation of a component-based view of ER. There are several augmented reality frameworks like ARVIKA, AMIRE, DWARF, MORGAN, Studierstube and others which offer some support for the development of AR-applications. All of them ease the use of existing subsystems like AR-Toolkit, OpenGL and others and leverage the generation process for realistic systems by making efficient use of those subsystems. Consequently, they highly rely on them.
This work analyzes the influence of the voting scenario on the secrecy and the public nature of an election. A voting scenario is determined by its voting form and the voting technology used. Regarding the voting form, one can distinguish between polling-station voting and remote voting; regarding the voting technology, between paper-based and electronic voting. Three prominent voting scenarios, paper-based polling-station voting, postal voting (paper-based remote voting), and Internet voting (electronic remote voting), are examined with respect to their influence on secrecy, privacy, and the public nature of the election.
This working report identifies previously unrecognized threats to ballot secrecy in the concept proposed in [BKG11] for authenticating voters in electronic elections by means of the new German identity card (neuer Personalausweis). Moreover, by introducing an intermediate anonymization layer, it proposes a solution for warding off precisely these threats.
The novel mobile application csxPOI (short for: collaborative, semantic, and context-aware points of interest) enables its users to collaboratively create, share, and modify semantic points of interest (POIs). Semantic POIs describe geographic places with explicit semantic properties of a collaboratively created ontology. As the ontology includes multiple subclassifications and instantiations, and as it links to DBpedia, the richness of annotation goes far beyond mere textual annotations such as tags. With the intuitive interface of csxPOI, users can easily create, delete, and modify their POIs and those shared by others. Thereby, the users adapt the structure of the ontology underlying the semantic annotations of the POIs. Data mining techniques are employed to cluster and thus improve the quality of the collaboratively created POIs. The semantic POIs and the collaborative POI ontology are published as Linked Open Data.
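A hedged sketch of what a semantic POI could look like as Linked Open Data, written with the Python rdflib library (the namespaces, class name, and coordinates below are invented for illustration, not csxPOI's actual vocabulary):

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS

    POI = Namespace("http://example.org/csxpoi/")
    GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

    g = Graph()
    poi = URIRef(POI["deutsches-eck"])
    g.add((poi, RDF.type, POI.Monument))   # class from the shared ontology
    g.add((poi, RDFS.label, Literal("Deutsches Eck", lang="de")))
    g.add((poi, GEO.lat, Literal(50.3644)))
    g.add((poi, GEO.long, Literal(7.6058)))
    # Link into the Linked Open Data cloud:
    g.add((poi, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Deutsches_Eck")))

    print(g.serialize(format="turtle"))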
The University of Koblenz-Landau would like to apply for participation in the RoboCup Mixed Reality League in Suzhou, China 2008. Our team is composed of ten members and two supervisors. All members are graduate students of Computational Visualistics. Our supervisors are Ph.D. candidates currently researching in the working groups for artificial intelligence and computer graphics.
Avoidance of routing loops
(2009)
We introduce a new routing algorithm which can detect routing loops by evaluating routing updates more thoroughly. Our new algorithm, called Routing with Metric based Topology Investigation (RMTI), is based on the simple Routing Information Protocol (RIP) and is compatible with all RIP versions. In case of a link failure, a network can reorganize itself if redundant links are available. Redundant links are only available in a network system like the Internet if the topology contains loops; therefore, it is necessary to recognize and prevent routing loops. A routing loop can be seen as a circular trace of routing update information which returns to the same router, either directly from a neighbor router or via a loop topology. Routing loops can consume a large amount of network bandwidth and can impact the end-to-end performance of the network. Our RMTI approach is capable of improving the efficiency of distance vector routing.
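RMTI itself is not specified in this summary; as background, the following Python sketch (illustrative only, not the RMTI algorithm) reproduces the count-to-infinity symptom of plain distance-vector routing that loop detection must prevent:

    INFINITY = 16  # RIP's conventional "unreachable" metric

    def update_route(table, dest, neighbor, advertised_metric):
        """Apply a RIP-style Bellman-Ford update for one destination."""
        new_metric = min(advertised_metric + 1, INFINITY)
        current = table.get(dest)
        # Accept if the route is new, cheaper, or refreshed by the
        # neighbor we already route through.
        if current is None or new_metric < current[1] or current[0] == neighbor:
            table[dest] = (neighbor, new_metric)

    # Routers A and B (wrongly) route a lost destination D via each other:
    # every exchanged update raises the metric until "infinity" is reached.
    a = {"D": ("B", 2)}
    b = {"D": ("A", 3)}
    for _ in range(14):
        update_route(a, "D", "B", b["D"][1])
        update_route(b, "D", "A", a["D"][1])
    print(a["D"], b["D"])  # both end at metric 16, i.e. unreachable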
The present work concludes a research practicum of students in the Master's programs Information Management and Business Informatics, supervised by research assistant Daniela Simić-Draws and Prof. Dr. Rüdiger Grimm. An essential basis for this work was a procedure model for the security analysis of business processes that D. Simić-Draws is developing as part of her dissertation, to whose ongoing improvement this student research practicum was able to contribute valuable input. The security-critical processes "municipal election" and "cash withdrawal at an ATM" were chosen as application examples because Prof. Grimm's working group had gathered experience with both applications in previous scientific work. In particular, for the "municipal election" example, the Ordnungsamt Koblenz, which is responsible for municipal elections, kindly offered its cooperation, with the active assistance of its head Dirk Urmersbach, so that this business process could be analyzed realistically.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically with the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and it impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing native handling for inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to demonstrate the benefits of the new calculus.
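The quadratic blow-up is easy to make concrete: n pairwise-distinct constants require n(n-1)/2 explicit inequality facts, as the following Python sketch shows:

    from itertools import combinations

    def inequality_facts(constants):
        """Explicit distinctness axioms for a set of constant symbols."""
        return [f"{a} != {b}" for a, b in combinations(constants, 2)]

    for n in (10, 100, 1000):
        print(n, len(inequality_facts([f"c{i}" for i in range(n)])))
    # 10 -> 45, 100 -> 4950, 1000 -> 499500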
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including: compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. We then present the developed ontology and discuss it with respect to our requirements.
Colonoscopy is one of the best methods for screening for colon cancer. As the automatic detection of polyps in endoscopic images is a challenging task for image processing, a variety of research groups have proposed methods that try to fulfill this task in order to develop a system which supports doctors during examination. However, the problem is still, at least partially, unsolved. This paper gives a summary of 16 different polyp detection methods published in the last ten years. We found that the major drawback of many approaches is the lack of representative video data, which hinders comparison and evaluation of the published methods.
This volume contains those research papers presented at the Second International Conference on Tests and Proofs (TAP 2008) that were not included in the main conference proceedings. TAP was the second conference devoted to the convergence of proofs and tests. It combines ideas from both areas for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. On the surface, the two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of their incompatibility and taking instead the best of what each of these software engineering domains has to offer. The first TAP conference (held at ETH Zurich in February 2007) was an attempt to provide a forum for the cross-fertilization of ideas and approaches from the testing and proving communities. For the 2008 edition we found the Monash University Prato Centre near Florence to be an ideal place providing a stimulating environment. We wish to sincerely thank all the authors who submitted their work for consideration. And we would like to thank the Program Committee members as well as additional referees for their great effort and professional work in the review and selection process. Their names are listed on the following pages. In addition to the contributed papers, the program included three excellent keynote talks. We are grateful to Michael Hennell (LDRA Ltd., Cheshire, UK), Orna Kupferman (Hebrew University, Israel), and Elaine Weyuker (AT&T Labs Inc., USA) for accepting the invitation to address the conference. Two very interesting tutorials were part of TAP 2008: "Parameterized Unit Testing with Pex" (J. de Halleux, N. Tillmann) and "Integrating Verification and Testing of Object-Oriented Software" (C. Engel, C. Gladisch, V. Klebanov, and P. Rümmer). We would like to express our thanks to the tutorial presenters for their contribution. It was a team effort that made the conference so successful. We are grateful to the Conference Chair and the Steering Committee members for their support. And we particularly thank Christoph Gladisch, Beate Körner, and Philipp Rümmer for their hard work and help in making the conference a success. In addition, we gratefully acknowledge the generous support of Microsoft Research Redmond, who financed an invited speaker.