Refine
Document Type
- Part of Periodical (132)
Keywords
- Bluetooth (4)
- computer clusters (4)
- ontology (4)
- Knowledge Compilation (3)
- parallel algorithms (3)
- Augmented Reality (2)
- Campus Information System (2)
- Customer Relationship Management (2)
- DOCHOUSE (2)
- Datensicherheit (2)
- Enhanced Reality (2)
- Fallstudie (2)
- IBM Lotus Quickr (2)
- Informatik (2)
- Linked Open Data (2)
- OWL (2)
- Ontology (2)
- Petri-Netze (2)
- Semantic Web (2)
- University (2)
- artifcial neural networks (2)
- artificial neural networks (2)
- classification (2)
- constraint logic programming (2)
- mobile phone (2)
- privacy protection (2)
- probability propagation nets (2)
- risk (2)
- social media (2)
- 2D/3D-Segmentierung (1)
- 3-D-Modell (1)
- 3D-Color-Structure-Code (1)
- 3D-RETISEG (1)
- ABox (1)
- Absorptive capacity (1)
- Adaptive Services Grid (ASG) (1)
- Amazon Mechanical Turks (1)
- Annotationsmodell (1)
- Ausstellung (1)
- Authentifizierung (1)
- Authentisierung (1)
- Automated Theorem Proving (1)
- Automated Theorem Proving Systems (1)
- B2B (1)
- Bachelorstudiengang (1)
- Bayes Procedures (1)
- Bilderkennung (1)
- Calculus (1)
- Campus Koblenz (1)
- Cloud Computing (1)
- Colonoscopy (1)
- Common Annotation Framework (1)
- Conference (1)
- Context-aware processes (1)
- Core Ontology on Multimedia (1)
- Core Ontology on Multimedia (COMM) (1)
- Creativity (1)
- Crowdsourcing (1)
- Darmspiegelung (1)
- Datenbrille (1)
- Datenschutz (1)
- Delphi-Studie (1)
- Description Logic (1)
- Description Logics (1)
- Dienstleistungsproduktivität (1)
- Discussion Forums (1)
- Distributed process execution (1)
- E-KRHyper (1)
- E-KRHyper theorem prover (1)
- E-Mail (1)
- E-government (1)
- E-services (1)
- ERP-Systeme (1)
- Enterprise Systems (1)
- Enterprise-Resource-Planning (1)
- Evaluation (1)
- Forensik (1)
- Forschungsprojekt KMU 2.0 (1)
- Generative Model (1)
- Gesundheitswesen (1)
- Graph Technology (1)
- GraphUML (1)
- Healthcare institution (1)
- Horn Clauses (1)
- IASON (1)
- IEC 61131 (1)
- IP-Adressen (1)
- IT Outsourcing (1)
- IT Security (1)
- IT Services (1)
- IT security analysis (1)
- IT-Risk-Management (1)
- IT-Sicherheitsmodelle (1)
- Image (1)
- Innovation (1)
- Innovationsmanagement (1)
- Intelligent Information Network (1)
- Interaktionselemente (1)
- Internet (1)
- Internet Voting (1)
- Java (1)
- Kalibrierung (1)
- Knowledge Sharing (1)
- Koloskopie (1)
- Kontextbezogener Dienst (1)
- Krankenhausinformationssystem (1)
- Kryptographie (1)
- Leistungsmessung (1)
- Linked Data Modeling (1)
- Logistik (1)
- MIA (1)
- MPEG-7 (1)
- Masterstudiengang (1)
- Mehrbenutzer-Annotationssystem (1)
- Mixed method (1)
- Mobile Information Systems (1)
- Mobiles Umfeld (1)
- Model-Driven Engineering (1)
- Multi-robot System (1)
- Multiagent System (1)
- Multimedia Metadata Ontology (1)
- Netzwerk kleiner und mittlerer Unternehmen (1)
- Neuronales Netz (1)
- ODRL (1)
- Object Recognition (1)
- Online Community (1)
- Online-Überwachung (1)
- Onlinewahl (1)
- Ontology alignment (1)
- Open Source (1)
- Outsourcing (1)
- POIs (1)
- Personal Key Infrastruktur (1)
- Personalausweis (1)
- Personalised Information Systems (1)
- Petri Nets (1)
- Petri net (1)
- Petrinetz (1)
- Polypendetektion (1)
- Pose (1)
- Probability (1)
- Probability propagation nets (1)
- Process tracing (1)
- Programmiersprachen (1)
- Projekt iCity (1)
- Propagation (1)
- Prozessmanagement (1)
- Quality assessment system (1)
- RDF (1)
- RDF Graphs (1)
- RDF modeling (1)
- RMIS (1)
- ReDSeeDS-Project (1)
- Resource Description Framework (RDF) (1)
- Risikomanagement-Informationssysteme (1)
- Robocup 2008 (1)
- Router (1)
- Routing Information Protocol (RIP) (1)
- Routing Loops (1)
- Routing with Metric based Topology Investigation (RMTI) (1)
- SOAMIG (1)
- SOMA (1)
- SPARQL (1)
- Schema Information (1)
- Schutzprofil (1)
- Search engine (1)
- Security (1)
- Semantics (1)
- Semantik (1)
- Service-Oriented Architectures (SOA) (1)
- Service-oriented Architectures (SOA) (1)
- Shared Annotation Model (1)
- Shared Annotations (1)
- Social Networking Platforms (1)
- Software (1)
- Software Development (1)
- Software techniques for object recognition (STOR) (1)
- Software-Customizing (1)
- Stochastic Logic (1)
- Support System (1)
- Survey Research (1)
- Syntax (1)
- TAP (1)
- TBox (1)
- Technologiemanagement (1)
- Theorem prover (1)
- Tokens (1)
- Traceability (1)
- Transformation (1)
- UML (1)
- Unified Modeling Language (UML ) (1)
- United Internet AG (1)
- Universität Koblenz-Landau (1)
- Vocabulary Mapping (1)
- Vocabulary Reuse (1)
- Web 2.0 (1)
- Web Ontology Language (OWL) (1)
- Website (1)
- Wechselkursänderung (1)
- Werbung (1)
- Word-of-Mouth (1)
- XSD (1)
- Zertifikat (1)
- activation functions of neurons (1)
- adaptive resonance theory (1)
- application programming interfaces (1)
- artiffficial neural networks (1)
- artififfcial neural networks (1)
- blood analysis (1)
- business process management (1)
- categorisation (1)
- core ontologies (1)
- currency exchange rates (1)
- delivery drone (1)
- design thinking (1)
- deutschsprachiger Markt (1)
- directed acyclic graphs (1)
- drone (1)
- e-Commerce (1)
- elektronische Anwendung (1)
- entrepreneurial design thinking (1)
- entrepreneurial thinking (1)
- entrepreneurship education (1)
- estimation of algorithm efficiency (1)
- event model (1)
- event-based systems (1)
- events (1)
- faceted search (1)
- finite state automata (1)
- governance (1)
- gradient method of training weight coefficients (1)
- hybrid automata (1)
- hybrid systems (1)
- iCity project (1)
- image processing (1)
- information system (1)
- knowledge work (1)
- kollaboratives Lernen (1)
- legacy software (1)
- mathematical model (1)
- media competence model (1)
- metadata formats (1)
- metadata standards (1)
- migration (1)
- minimum self-contained graphs (1)
- mobile application (1)
- mobile devices (1)
- mobile facets (1)
- mobile interaction (1)
- mobile phones (1)
- multi-agent systems (1)
- multimedia metadata (1)
- myAnnotations (1)
- parallel calculations (1)
- personal information management (1)
- persönliches Informationsmanagement (1)
- points of interest (1)
- polyp detection (1)
- privacy and personal data (1)
- privacy competence model (1)
- public key infrastructure (1)
- regular dag languages (1)
- risks (1)
- scene analysis (1)
- security awareness (1)
- semantic desktop (1)
- semantics (1)
- semantischer Desktop (1)
- sensor data (1)
- social media data (1)
- social object (1)
- social simulation (1)
- summative evaluation (1)
- teams (1)
- technology acceptance model (1)
- time series (1)
- tracking (1)
- traffic survey (1)
- virtual goods (1)
- visualization (1)
Institute
- Fachbereich 4 (132)
The estimation of various social objects is necessary in different fields of social life, science, and education. Such estimates are typically used for forecasting, for evaluating different properties, and for other purposes in complex man-machine systems. At present this estimation is carried out by means of computer and mathematical simulation methods, which involves significant difficulties, such as:
- the time-distributed process of receiving information about the object;
- the determination of a suitable mathematical device and the structure identification of the mathematical model;
- the approximation of the mathematical model to real data, and the generalization and parametric identification of the mathematical model;
- the identification of the structure of the links of the real social object.
Solving these problems is impossible without a special intellectual information system that combines the different processes and allows the behaviour of such an object to be predicted. However, most existing information systems address only one specific problem. From this point of view, the development of a more general technology for designing such systems is very important. The technology of developing an intellectual information system for estimating and forecasting the professional ability of respondents in the sphere of education is a concrete example of such a technology. Job orientation is necessary and topical under present economic conditions: it helps to solve the problem of whether investments in a certain field of education are expedient. Scientifically validated, combined diagnostic methods of job orientation are necessary to carry out professional selection in higher education establishments. The requirements of modern society are growing, and the techniques developed earlier can no longer meet them sufficiently, as they lack the ability to account for all necessary professional and personal characteristics. Therefore, a system of various tests has to be used, and new methods of job orientation for entrants need to be developed. An information model of the job orientation process is required for this purpose. It would therefore be desirable to have an information system capable of giving recommendations concerning the choice of a profession on the basis of the complex personal characteristics of entrants.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency-directed backjumping and branch-local operation. As an instance of that framework, an adaptation of DPLL is described. We show that knowledge compilation methods can be substantially improved by weaving projection elimination partially into the compilation phase.
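For readers unfamiliar with the base procedure the abstract builds on, the following is a minimal textbook DPLL satisfiability check for clause sets given as lists of integer literals (DIMACS-style). It only illustrates the kind of procedure referred to; the paper's tableau-based variant with dependency-directed backjumping and its use for knowledge compilation are not reproduced here.

```python
# Plain DPLL on clause sets: literals are non-zero integers, a negative
# integer denotes the negated variable. Returns a satisfying assignment
# (a frozenset of literals) or None if the clause set is unsatisfiable.
def dpll(clauses, assignment=frozenset()):
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                   # empty clause -> conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Unit propagation.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment | {clause[0]})
    # Branch on the first literal of the first remaining clause.
    lit = simplified[0][0]
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (p or q) and (not p or q) and (not q or r)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # -> frozenset({1, 2, 3})
```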
The goal of this research project is the selection and evaluation of open-source ERP systems with regard to their efficient applicability in companies, for the purpose of building an "ERP Future Lab" in which medium-sized trading companies can test the installed system(s). The project concludes with the installation of a working system on which the business processes specified by the client can be handled. In addition, the clients are to be trained on the system, and documentation of the software (installation/operation) and of the project is to be produced.
This case study was created as the object of investigation for a bachelor's thesis and was prepared according to the eXperience case study methodology. The goal of the thesis was to identify the benefits realized in this concrete case. At the user company DOCHOUSE, an interface between the internal CRM system and a collaborative system for external access was introduced.
Special thanks go to Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the creation of the case study and entrusted the FG BAS research group with carrying it out. The case study was conducted in the winter of 2011 by the student Martin Surrey and by Roland Diehl, research associate of the group.
Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse und Lotus Quickr
(2012)
The present work report "Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse/ BRM und IBM Lotus Quickr" is the result of a student project. The goal of the project was to develop integration scenarios between a CRM system and a collaboration system and to implement a prototypical interface with corresponding functionality between the systems DocHouse/ BRM and IBM Lotus Quickr.
Special thanks go to Wolfgang Brugger (managing director of DocHouse GmbH), who had the idea for such a development and entrusted the FG BAS research group with carrying it out. The concept and the prototype were developed from the winter of 2010 to the summer of 2011 by the students Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey and Mike Reuthers as part of their project internship. The project team was supervised with regard to content and organisation by Dipl.-Wirt.-Inform. Roland Diehl.
This work report deals with the development of an architecture for complex context-aware services in a mobile environment. It describes the underlying problem and a theoretical solution approach, which will be refined, prototypically implemented and evaluated in the further course of the research. Due to the increased mobility of many people, there is a steadily growing demand for mobile communication and information services. In the mobile environment, most value-added services are currently offered by the mobile network operators. These are primarily services for the mass market that allow hardly any personalisation. Because of their simple functionality, the correspondingly low level of convenience and the costs incurred through their use, such services are accepted by the mass market only to a limited extent. Moreover, there is no way to offer context-aware services for specific user groups at low cost, since the range of services is determined by the respective mobile network operator. This work considers not only today's obstacles but also the requirements that still stand in the way of acceptance, particularly of complex context-aware services. The goal is to provide an architecture that will enable personalised services in the future. Since sensitive context information is used in service provision, the protection of privacy has to be regarded as an important aspect in the design of this architecture. Based on the identified requirements, this work proposes an architecture that makes it possible to offer context-aware services independently of device and provider, in an economically viable setting and with due regard for the protection of the user's privacy.
Querying for meta knowledge
(2008)
The Semantic Web is based on accessing and reusing RDF data from many different sources, to which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach re-uses existing RDF modeling possibilities in order to represent meta knowledge. It then extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request meta knowledge without modifying the query proper. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern.
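As a rough illustration of the underlying idea of representing meta knowledge with existing RDF means and querying data and provenance together, here is a small rdflib sketch using named graphs. The URIs and the source/certainty properties are invented for the example; the paper's actual SPARQL extension is not reproduced.

```python
# Attach meta knowledge (source, certainty) to a named graph and retrieve
# fact plus meta knowledge in one SPARQL query over the dataset.
from rdflib import Dataset, URIRef, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
ds = Dataset()

# Put the fact into a named graph ...
g = ds.graph(EX.graph1)
g.add((EX.alice, EX.worksFor, EX.acme))

# ... and describe that graph in a separate metadata graph.
meta = ds.graph(EX.metadata)
meta.add((EX.graph1, EX.source, URIRef("http://example.org/crawl/2008-01")))
meta.add((EX.graph1, EX.certainty, Literal(0.9, datatype=XSD.decimal)))

q = """
PREFIX ex: <http://example.org/>
SELECT ?person ?company ?source ?certainty WHERE {
  GRAPH ?g { ?person ex:worksFor ?company }
  GRAPH ex:metadata { ?g ex:source ?source ; ex:certainty ?certainty }
}
"""
for row in ds.query(q):
    print(row.person, row.company, row.source, row.certainty)
```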
In recent years, traceability has been more and more universally accepted as being a key factor for the success of software development projects. However, the multitude of different, not well-integrated taxonomies, approaches and technologies impedes the application of traceability techniques in practice. This paper presents a comprehensive view on traceability, pertaining to the whole software development process. Based on graph technology, it derives a seamless approach which combines all activities related to traceability information, namely definition, recording, identification, maintenance, retrieval, and utilization in one single conceptual framework. The presented approach is validated in the context of the ReDSeeDS-project aiming at requirements-based software reuse.
The way information is presented to users in online community platforms has an influence on the way the users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms or other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this user behaviour we demonstrate the ability of the model to evoke macro phenomena comparable to the ones observed on real world data.
Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we will show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks covering typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
We propose a new approach for mobile visualization and interaction of temporal information by integrating support for time with today's most prevalent visualization of spatial information, the map. Our approach allows for an easy and precise selection of the time that is of interest and provides immediate feedback to the users when interacting with it. It has been developed in an evolutionary process gaining formative feedback from end users.
Designing Core Ontologies
(2011)
One of the key factors that hinders integration of distributed, heterogeneous information systems is the lack of a formal basis for modeling the complex, structured knowledge that is to be exchanged. To alleviate this situation, we present an approach based on core ontologies. Core ontologies are characterized by a high degree of axiomatization and formal precision. This is achieved by basing them on a foundational ontology. In addition, core ontologies should follow a pattern-oriented design approach. By this, they are modular and extensible. Core ontologies allow for reusing the structured knowledge they define as well as integrating existing domain knowledge. The structured knowledge of the core ontologies is clearly separated from the domain-specific knowledge. Such core ontologies allow both for formally conceptualizing their particular fields and for being flexibly combined to cover the needs of concrete, complex application domains. Over the last years, we have developed three independent core ontologies for events and objects, multimedia annotations, and personal information management. In this paper, we present the simultaneous use and integration of our core ontologies using the example of a complex, distributed socio-technical system of emergency response. We describe our design approach for core ontologies and discuss the lessons learned in designing them. Finally, we elaborate on the beauty aspects of our core ontologies.
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be defined explicitly by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
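To make the entropy and mutual-information measures mentioned above concrete, the following small Python sketch computes H(T), H(P) and I(T;P) from hypothetical per-resource (type set, property set) observations. The toy data is invented and has nothing to do with the crawled datasets; it only shows the arithmetic.

```python
# Entropy of types, entropy of properties, and their mutual information,
# computed from empirical counts: I(T;P) = H(T) + H(P) - H(T,P).
import math
from collections import Counter

observations = [  # (types, properties) per resource -- toy data only
    (("foaf:Person",), ("foaf:name", "foaf:knows")),
    (("foaf:Person",), ("foaf:name",)),
    (("foaf:Document",), ("dc:title",)),
    (("foaf:Document",), ("dc:title", "dc:creator")),
]

def entropy(counter, total):
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

n = len(observations)
joint = Counter(observations)
types = Counter(t for t, _ in observations)
props = Counter(p for _, p in observations)

h_t, h_p, h_tp = entropy(types, n), entropy(props, n), entropy(joint, n)
mutual_information = h_t + h_p - h_tp
print(h_t, h_p, h_tp, mutual_information)
```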
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, modeling of event causality and correlation, and allows for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can be easily extended by domain-specific ontologies.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experiences made during the integration process.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or multiple other graphs. By this extension it becomes possible to define a graph including a view onto other graphs and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying by the set of constraints imposed on the underlying SPARQL queries, is investigated. We demonstrate the capabilities of networked graphs by a simple use case.
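The basic idea of a graph defined partly by explicit triples and partly by SPARQL queries over other graphs can be sketched with plain rdflib, as below. This is only an illustration of the concept; the paper's own semantics via a mapping to logic programs is not reproduced, and all URIs are made up.

```python
# A "derived" graph that contains explicitly listed triples plus the
# result of a SPARQL CONSTRUCT view computed over another graph.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

base = Graph()
base.add((EX.alice, EX.knows, EX.bob))
base.add((EX.bob, EX.knows, EX.carol))

derived = Graph()
derived.add((EX.carol, EX.memberOf, EX.team1))          # explicitly listed
view = base.query("""
    PREFIX ex: <http://example.org/>
    CONSTRUCT { ?y ex:knownBy ?x } WHERE { ?x ex:knows ?y }
""")
for triple in view:                                     # materialize the view
    derived.add(triple)

for s, p, o in derived:
    print(s, p, o)
```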
Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logics-based concepts of the ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., structure, functionalities, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping only produces concept representations but not an API at the desired level of granularity expected by an application developer. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology but also leverages additional information the ontology provides. This can be the inheritance structure of the ontology concepts, the scope of relevance of an ontology concept, or design patterns defined in the ontology.
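As a point of contrast for the MDE approach described above, the following sketch shows the naive one-to-one mapping that the abstract criticizes: ontology classes are read with rdflib and emitted as bare Python class stubs, with rdfs:subClassOf mapped to inheritance. The tiny ontology and all names are invented; the paper's generation process is far more sophisticated than this.

```python
# Naive ontology-to-API mapping: one Python class stub per owl:Class.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

TTL = """
@prefix :     <http://example.org/onto#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:Event    a owl:Class .
:Meeting  a owl:Class ; rdfs:subClassOf :Event .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

def local_name(uri):
    return str(uri).split("#")[-1]

for cls in g.subjects(RDF.type, OWL.Class):
    parents = [local_name(p) for p in g.objects(cls, RDFS.subClassOf)]
    bases = ", ".join(parents) if parents else "object"
    print(f"class {local_name(cls)}({bases}):\n    pass\n")
```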
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, making decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there is no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies, over minimizing the number of vocabularies, to staying within one domain vocabulary. A very interesting insight is that popularity, in the sense of how frequently a vocabulary is used in a data source, is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding how data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
The following work provides an overview of existing solutions for interaction in augmented reality. Based on three fundamental perspectives, different interaction concepts and implementations are presented, both from the technical and from the conceptual side. In addition to questions of visualization, different types of user interfaces are introduced. The largest part is devoted to the three typical interaction tasks of selection and manipulation, navigation, and system control, and to the interaction techniques associated with them. The content of this work report is restricted to the use of interaction elements in augmented reality environments, in contrast to research on interaction techniques in virtual reality environments (fully immersive or desktop-oriented). Although many interaction techniques from the VR domain have served and still serve as models for AR, new techniques and concepts have evolved specifically in the AR domain. Consequently, VR techniques are considered only where they have been applied in AR applications or where their application appears sensible.
Social networking platforms as creativity fostering systems: research model and exploratory study
(2008)
Social networking platforms are enabling users to create their own content, share this content with anyone they invite and organize connections with existing or new online contacts. Within these electronic environments users voluntarily add comments on virtual boards, distribute their search results or add information about their expertise areas to their social networking profiles and thereby share it with acquaintances, friends and increasingly even with colleagues in the corporate world. As a result, it is most likely that the underlying knowledge sharing processes result in many new and creative ideas. The objective of our research therefore is to understand if and how social networking platforms can foster creativity. In addition, we look at how these processes could be embedded within the organizational structures that influence innovative knowledge sharing behavior. The basis for our research is a framework which focuses on the relations between intrinsic motivation, creativity and social networking platforms. First results of our empirical investigation of a social software platform called "StudiVZ.net" confirmed our two propositions.
The impact of innovation on securing a company's existence is increasing. Especially in the field of information and communication systems, changed conditions, reduced product life cycles (in some markets, e.g. the software market, down to a few months), and increased competition force many companies to adapt their innovation speed. Long-term profit will only be obtained by those companies that manage to consolidate, or further enlarge, their competitive position through technological innovation. The increasing relevance of new developments requires a separate treatment of technological innovations and calls for dedicated technology and innovation management. Not every innovation is the result of a single, spontaneous idea; many innovations evolve from the integration or combination of already existing technologies. Especially in markets with insignificant growth rates, this kind of innovation is quite important. The purpose of this article is a case-oriented identification of existing deficiencies within established recommendations for strategic technology and innovation management, and a discussion of the role of modularity and technology integration as an alternative way to innovate.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of its appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers in relation to relevant e-commerce variables, and test the data in a nomological network for assessing differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study's results, point to limitations, and provide avenues for further research.
This work presents a data model for shared annotations. Based on an existing data model for annotations, it is extended by the ability to model shared annotations. Subsequently, functions of shared annotations that become possible through the new annotation model are explained.
This work presents a multi-user annotation system called myAnnotations, which uses so-called "Shared In-situ Problem Solving" annotations on arbitrary web pages to support collaborative learning and working scenarios. Particular attention is paid to the possible uses of "Shared In-situ Problem Solving" annotations in the collaborative editing of a text and in collaborative assessments of learning outcomes.
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language allowing for specifying conditions based on the history of data processing. We base the model and language on XACML.
We present a non-linear camera pose estimator which is able to handle a combined input of point and line feature correspondences. For three or more correspondences, the estimator works on any arbitrary number and choice of the feature types, which allows the pose to be estimated from a preferably small and flexible set of 2D-3D correspondences. We also give an analysis of different minimization techniques, parametrizations of the pose data, and error measurements between 2D and 3D data. These are tested for the use of point features, of lines, and of their combination. The results show the most stable and fastest working non-linear parameter set for pose estimation in model-based tracking.
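For orientation, here is a hedged sketch of non-linear pose estimation from point correspondences only (the paper's combined point/line formulation and its specific parametrizations are not shown): the 2D reprojection error is minimized with SciPy's least_squares, using an axis-angle rotation. The intrinsics and correspondences are invented toy values.

```python
# Non-linear refinement of a 6-DoF camera pose from 2D-3D point matches.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics
pts_3d = np.array([[0, 0, 5.0], [1, 0, 5], [0, 1, 5], [1, 1, 6]])
true_rvec, true_t = np.array([0.1, -0.05, 0.02]), np.array([0.2, -0.1, 0.3])

def project(pose, pts):
    rvec, t = pose[:3], pose[3:]
    cam = Rotation.from_rotvec(rvec).apply(pts) + t     # world -> camera
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                     # perspective divide

pts_2d = project(np.hstack([true_rvec, true_t]), pts_3d)  # synthetic "image"

def residuals(pose):
    return (project(pose, pts_3d) - pts_2d).ravel()

result = least_squares(residuals, x0=np.zeros(6))       # start at identity
print("estimated pose:", result.x)                      # ~ true_rvec, true_t
```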
The goal of the joint project 3D-RETISEG is to develop an optimized, real-time capable software and hardware package for 2D/3D segmentation and classification based on hierarchical island structures, and for fast volume visualization of very large volume data from compressed data sets. The developments are to be integrated as a market-ready module into the software package VGStudioMax for concrete use cases in research and industry. In medicine, especially in the segmentation of volume data sets produced by different tomography techniques (e.g. CT and MRI), two-dimensional methods often deliver insufficient quality due to their anisotropic treatment of the spatial information. Three-dimensional methods developed so far, besides being poorly available as implementations, are often dedicated to specific problems and therefore difficult to adapt to other use cases. Furthermore, methods that deliver good results usually require long runtimes. This is a major disadvantage specifically in medical applications, where the quality of the results is ultimately judged by an expert. So that the result is quickly available for user-defined optimizations through changes to the method parameters, a method should require only very short computation times. If a method has a low runtime, it can also be used intra-operatively. The 3D Color Structure Code used in this project promises to enable a better and more generic method.
In enhanced reality environments, the observer perceives the surroundings together with additional information displayed in semi-transparent data glasses (a head-mounted display). The calibration problem of the data glasses is the task of aligning the displayed information with the correct real background. Today's data glasses are comparatively bulky and heavy, so the glasses frequently slip slightly. If this slipping is not taken into account in the position of the overlay in the glasses, the overlay no longer matches the real background; this is illustrated in Figure 1.1. After the initial calibration of the semi-transparent data glasses to the observer's eye, a re-calibration should therefore be performed automatically in real time whenever the position of the glasses relative to the eye changes (through slight slipping). Automatic re-calibration upon slipping is researched by us for the first time.
We introduce linear expressions for unrestricted dags (directed acyclic graphs) and finite deterministic and nondeterministic automata operating on them. Those dag automata are a conservative extension of the Tu,u-automata of Courcelle on unranked, unordered trees and forests. Several examples of dag languages acceptable and not acceptable by dag automata and some closure properties are given.
Pose-Tracking
(2009)
The image-based automatic determination of the pose, i.e. the position and viewing direction, of a camera in the world is a relevant but still insufficiently solved task in computer vision. In the project underlying this report, higher-level salient features are localized more reliably in the images, and their semantic significance is determined by the computer. A pose estimate is obtained by registering the elementary components of these salient features in the image with features in the 3D model. To this end, new algorithms for the detection, localization and registration of the salient features are developed and existing algorithms are improved further. Models such as those resulting from multi-view reconstruction are enriched with additional semantics. As an application scenario, pose estimation on the university campus is chosen, using images and a semantic CG model of the campus. The general methods are tested on this example and their viability is demonstrated in experiments. The modular system yields problem-independent building blocks for the detection of salient features and for 3D reconstruction and pose estimation from features, points and lines. This demonstrates that a robust detection of salient features is possible and can be used for effective reconstruction and pose estimation even in partially poorly structured outdoor areas.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specifying classes using an expressive logical language with highly flexible, dynamic and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models, combining the benefits of UML models and OWL ontologies.
Hybrid systems are the result of merging the two most commonly used models for dynamical systems, namely continuous dynamical systems defined by differential equations and discrete-event systems defined by automata. One can view hybrid systems as constrained systems, where the constraints describe the possible process flows, invariants within states, and transitions on the one hand, and characterize certain parts of the state space (e.g. the set of initial states, or the set of unsafe states) on the other hand. Therefore, it is advantageous to use constraint logic programming (CLP) as an approach to model hybrid systems. In this paper, we provide CLP implementations that model hybrid systems comprising several concurrent hybrid automata, whose size is only directly proportional to the size of the given system description. Furthermore, we allow different levels of abstraction by making use of hierarchies as in UML statecharts. In consequence, the CLP model can be used for analyzing and testing the absence or existence of (un)wanted behaviors in hybrid systems. Thus, in summary, we get a procedure for the formal verification of hybrid systems by model checking, employing logic programming with constraints.
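Purely as background for the notion of a hybrid automaton (and not as the paper's CLP encoding), the following Python sketch simulates a classic two-mode thermostat: continuous flow per mode, integrated with Euler steps, plus discrete transitions guarded by thresholds. All constants are made up.

```python
# A thermostat as a minimal hybrid automaton: modes {heating, cooling},
# mode-dependent continuous dynamics, and guarded discrete switches.
def simulate(t_end=10.0, dt=0.01):
    mode, temp, t = "heating", 18.0, 0.0
    trace = []
    while t < t_end:
        # continuous flow: dT/dt depends on the discrete mode
        dT = 2.0 if mode == "heating" else -1.5
        temp += dT * dt
        # discrete transitions guarded by temperature thresholds
        if mode == "heating" and temp >= 22.0:
            mode = "cooling"
        elif mode == "cooling" and temp <= 18.0:
            mode = "heating"
        trace.append((round(t, 2), mode, round(temp, 3)))
        t += dt
    return trace

print(simulate()[-1])   # state at the end of the simulated horizon
```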
Hybrid automata are used as a standard means for the specification and analysis of dynamical systems. Several researchers have used them to formally specify reactive multi-agent systems situated in a physical environment, where the agents react continuously to their environment. The specified systems, in turn, are formally checked with the help of existing hybrid automata verification tools. However, two problems arise when dealing with multi-agent systems. The first is a state space problem caused by the composition process, where the agents have to be composed in parallel into one agent capturing all possible behaviors of the multi-agent system prior to the verification phase. The second problem concerns the expressiveness of verification tools when modeling and verifying certain behaviors. This paper therefore tackles these problems by showing how multi-agent systems, specified as hybrid automata, can be modeled and verified using constraint logic programming (CLP). In particular, a CLP framework is presented to show how the composition of multi-agent behaviors can be captured dynamically during the verification phase. This can relieve the state space complexity that may occur as a result of the composition process. Additionally, the expressiveness of the CLP model flexibly allows not only modeling multi-agent systems but also checking various properties by means of reachability analysis. Promising experiments show the feasibility of our approach.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz Landau, Germany, for the participation at the RoboCup@Home 2017 in Nagoya, Japan. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface via android and a GUI. The packages are available (and new packages will be released) on
http://wiki.ros.org/agas-ros-pkg.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz Landau, Germany, for the participation at the RoboCup@Home 2016 in Leipzig, Germany. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, speech recognition interface via android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
So-called risk management information systems (RMIS) support companies in carrying out their risk management across the entire enterprise. In this contribution we analyse the content requirements for an RMIS with the help of the scoring method and show how this leads to a well-founded purchasing decision for an RMIS. To this end, we first discuss which requirements are placed on an RMIS, second, we classify the RMIS available on the market with respect to their application goals, and third, we develop a requirements catalogue and a three-stage procedure for handling the selection process. This is explained and evaluated within a successfully conducted case study, carried out as part of a master's thesis [11], at a concrete corporation offering telecommunications and Internet access (United Internet AG, Montabaur).
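The scoring method referred to above boils down to a weighted-sum comparison of candidates against a requirements catalogue. The following minimal Python sketch shows that arithmetic; the requirements, weights, candidate names and ratings are invented for illustration and are not the catalogue from the study.

```python
# Weighted scoring of candidate RMIS products against requirements.
requirements = {            # weight per requirement (sums to 1.0)
    "risk identification": 0.30,
    "reporting":           0.25,
    "interfaces to ERP":   0.25,
    "usability":           0.20,
}
ratings = {                 # rating 1..5 per candidate and requirement
    "RMIS A": {"risk identification": 4, "reporting": 3,
               "interfaces to ERP": 5, "usability": 3},
    "RMIS B": {"risk identification": 5, "reporting": 4,
               "interfaces to ERP": 2, "usability": 4},
}

for name, r in ratings.items():
    score = sum(weight * r[req] for req, weight in requirements.items())
    print(f"{name}: weighted score {score:.2f}")
```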
IT risk management is an area that has been intensively discussed and has changed considerably over the last two years. The triggers can be found in the terrorist attacks and their consequences for companies, as well as in natural disasters, bad investments and the opaque business dealings of some companies. Guidelines and laws have therefore been drafted that oblige executive boards to protect their own companies against concrete risks. The level of detail of the recommendations varies from area to area. But are companies willing to take on the high investments associated with such extensive projects in order to follow these guidelines? Where do they see the advantages of an IT risk management strategy, and where the challenges? Which developments or changes would they like to see? These questions were put to experts in the field of IT risk management in the present study in order to be able to influence the future of IT risk management; to this end we surveyed the experts using the Delphi method. This report summarizes the most important results.
Over the past years, the typical set of critical success factors for companies has changed, and as a result the factor knowledge has gained increasing importance. Today one can therefore speak of knowledge as a fourth production factor, which supersedes labour, capital and land as the most important factors of a company (cf. Keller & Yeaple 2013, p. 2; Kogut & Zander 1993, p. 631). The reason is that active measures to support knowledge transfer in companies lead to higher profits and market shares as well as better survivability compared with competitors without such measures (cf. Argote 1999, p. 28; Szulanski 1996, p. 27; Osterloh & Frey 2000, p. 538). The main advantage of knowledge-based developments lies in their sustainability, since their immaterial structure (cf. Inkpen & Dinur 1998, p. 456; Spender 1996a, p. 65 f.; Spender 1996b, p. 49; Nelson & Winter 1982, p. 76 ff.) makes imitation by competitors difficult (cf. Wernerfelt 1984, p. 173; Barney 1991, p. 102).
CAMPUS NEWS - artificial intelligence methods combined for an intelligent information network
(2008)
In this paper we describe a network for distributing personalised information using artificial intelligence methods. Reception of this information should be possible with everyday mobile equipment. Intelligent filtering and spam protection aim at integrating this technology into our environment. Information on the system architecture and usage of the installation are also presented.
This document proposes a concept for a personal key infrastructure in iCity. Certificates issued via a trust center (TC) ensure a secure key exchange with verifiable authentication of the communication partner, protection against eavesdropping, as well as message integrity and verifiability. The jointly trusted TC does not have to be reachable during communication; it only receives public information. The concept introduces several security levels that weight secure identification and anonymity differently.
Conventional security infrastructures in the Internet cannot be directly adopted for ambient systems, especially if these are based on short-range communication channels: personal, mobile devices are used and the participants are present during communication, so privacy protection is a crucial issue. As ambient systems cannot rely on an uninterrupted connection to a trust center, certified data has to be verified locally. Security techniques have to be adjusted to this special environment. This paper introduces a public key infrastructure (PKI) to provide secure communication channels with respect to privacy, confidentiality, data integrity, non-repudiation, and user or device authentication. It supports three certificate levels with a different balance between authenticity and anonymity. This PKI is currently under implementation as part of the iCity project.
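As a reminder of the basic mechanism such a PKI builds on (not the iCity design itself, whose certificate handling and three certificate levels are not shown), here is a small sketch using the Python `cryptography` package: an elliptic-curve key pair, a signature over a message, and local verification without contacting a trust center.

```python
# Generate an EC key pair, sign a message, and verify the signature locally.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"sensor reading: 42"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

try:
    # Verification needs only the public key, so it can be done locally
    # without an online connection to a trust center.
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```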
An empirical study to evaluate the location of advertisement panels by using a mobile marketing tool
(2009)
The efficiency of marketing campaigns is a precondition for business success. This paper discusses a technique to transfer advertisement content via Bluetooth technology and to collect market research information at the same time. Conventional advertisement media were enhanced by devices that automatically measure the number, distance, frequency and exposure time of passers-by, making information available to evaluate both the wireless media and the location in general. This paper presents a study analyzing these data. A cryptographic one-way function protects privacy during data acquisition.
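The privacy step mentioned in the abstract can be illustrated with a short sketch: device identifiers are passed through a cryptographic one-way function before being stored, so passers-by can be counted and re-recognized without keeping raw Bluetooth addresses. SHA-256 via hashlib stands in for whatever function the study actually used; the salt and the addresses are made up.

```python
# Pseudonymize Bluetooth addresses with a salted one-way hash.
import hashlib

SALT = b"campaign-2009-site-07"          # fixed per survey site/campaign

def pseudonymize(bt_address: str) -> str:
    return hashlib.sha256(SALT + bt_address.encode()).hexdigest()

seen = {pseudonymize(a) for a in ["00:1A:7D:DA:71:13", "00:1A:7D:DA:71:13",
                                  "3C:5A:B4:11:22:33"]}
print(len(seen), "distinct devices observed")   # -> 2
```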
The research project KMU 2.0 aims at shaping a modern working environment in which the employees of a network of small and medium-sized enterprises (SMEs) contribute their competences, creativity and ideas to a cross-company exchange in order to jointly generate innovative solutions to problems from everyday working life. On this basis, new ways and scenarios for managing cross-company collaboration in SME networks are to be explored, with a view to the emergence and implementation of innovations for problems of the modern working world. The focus is on the use of Web 2.0 technologies, which are examined as an instrument with regard to their potential to support the cooperative generation of innovative solutions. The central question is whether the use of Web 2.0 in a network of SMEs promotes the exchange of innovative ideas by connecting the employees and their creative potential. The practice partner is WirtschaftsForum Neuwied e.V., an organisation located in the northern part of Rhineland-Palatinate that consists of about 115 member companies with 10,000 employees and is characterized by a heterogeneous structure of companies of very different sizes and industries with a wide variety of offerings.
The paper gives a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic matter: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
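As a tiny numerical reminder of the probability/likelihood duality the abstract refers to (illustrative only, not a Petri net), the sketch below combines a prior over a binary cause with an evidence-based likelihood to obtain the posterior, which is what propagation in a Bayesian network or PPN computes on a larger scale. The numbers are invented.

```python
# Bayes' rule on a single binary variable: prior x likelihood -> posterior.
prior = {"disease": 0.01, "no_disease": 0.99}
likelihood = {          # P(positive test | state)
    "disease": 0.95,
    "no_disease": 0.05,
}

unnormalized = {s: prior[s] * likelihood[s] for s in prior}
evidence = sum(unnormalized.values())
posterior = {s: v / evidence for s, v in unnormalized.items()}
print(posterior)        # {'disease': ~0.161, 'no_disease': ~0.839}
```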
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints on how to reduce high losses. In terms of Bayesian networks, both examples contain cycles which are resolved by the conditioning technique [3].
Probability propagation nets
(2007)
A class of high level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent the probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.