The estimation of various social objects is necessary in many fields of social life, science, and education. Such estimates are typically used for forecasting, for evaluating different properties, and for other purposes in complex man-machine systems. At present this estimation is possible by means of computer and mathematical simulation methods, which involves significant difficulties, such as:
- the time-distributed process of receiving information about the object;
- the determination of a suitable mathematical apparatus and the structural identification of the mathematical model;
- the approximation of the mathematical model to real data, its generalization, and its parametric identification;
- the identification of the link structure of the real social object.
Solving these problems is impossible without a special intellectual information system that combines the different processes and allows the behaviour of such an object to be predicted. However, most existing information systems solve only one special problem. From this point of view, the development of a more general technology for designing such systems is very important. A concrete example of such a technology is the development of an intellectual information system for estimating and forecasting the professional ability of respondents in the sphere of education. Job orientation is necessary and topical under present economic conditions: it helps to solve the problem of whether investment in a certain sphere of education is expedient. Scientifically validated, combined diagnostic methods of job orientation are needed to carry out professional selection in higher education establishments. The requirements of modern society are growing, and the techniques developed earlier cannot meet them sufficiently, since none of them is able to take into account all the necessary professional and personal characteristics. It is therefore necessary to use a system of various tests and to develop new methods of job orientation for university entrants, which in turn requires an information model of the job orientation process. The aim is an information system capable of giving recommendations concerning the choice of a profession on the basis of the complex personal characteristics of entrants.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency-directed backjumping and branch-local operation. As an instance of this framework, an adaptation of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
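For orientation, the following is a minimal sketch of the classical DPLL procedure on propositional clause sets (Python; the clause encoding and function names are our own, and it shows only the unit-propagation and splitting core, not the clausal-tableau framework or the weaving of projection elimination described above).

    # Minimal DPLL satisfiability test. Literals are non-zero integers:
    # +v stands for variable v, -v for its negation.
    def unit_propagate(clauses):
        """Assign unit clauses until none are left; detect conflicts."""
        assignment = []
        while True:
            units = [c[0] for c in clauses if len(c) == 1]
            if not units:
                return clauses, assignment
            lit = units[0]
            assignment.append(lit)
            simplified = []
            for c in clauses:
                if lit in c:
                    continue                    # clause already satisfied
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    return None, assignment     # empty clause: conflict
                simplified.append(reduced)
            clauses = simplified

    def dpll(clauses):
        """Return a satisfying list of literals, or None if unsatisfiable."""
        clauses, assignment = unit_propagate(clauses)
        if clauses is None:
            return None
        if not clauses:
            return assignment
        lit = clauses[0][0]                     # split on some open literal
        for branch in (lit, -lit):
            result = dpll(clauses + [[branch]])
            if result is not None:
                return assignment + result
        return None

    # (p or q) and (not p or q) and (not q or r)  ->  [1, 2, 3]
    print(dpll([[1, 2], [-1, 2], [-2, 3]]))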
The goal of this research project is the selection and evaluation of open-source ERP systems with regard to their efficient applicability in companies, for the purpose of setting up an "ERP-Future-Lab" in which medium-sized trading companies can test the installed system(s). The project concludes with the installation of a fully working system on which the business processes specified by the client can be handled. Furthermore, the clients are to be trained on the system, and documentation of the software (installation/operation) and of the project is to be produced.
The present case study was created as the object of investigation for a bachelor's thesis and was prepared following the eXperience case study methodology. The goal of the thesis was to identify the benefits realized in this concrete case. At the user company DOCHOUSE, an interface between the internal CRM system and a collaborative system for external access had been introduced.
Special thanks go in this context to Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the creation of the case study and entrusted the FG BAS research group with carrying it out. The case study data were collected in the winter of 2011 by the student Martin Surrey and by Roland Diehl, a research associate of the group.
Integration of CRM Systems with Collaboration Systems Using the Example of DocHouse and Lotus Quickr
(2012)
The present work report "Integration of CRM Systems with Collaboration Systems Using the Example of DocHouse/BRM and IBM Lotus Quickr" is the result of a student project. The goal of the project was to work out integration scenarios between a CRM system and a collaboration system and to implement a prototypical interface with corresponding functionality between the systems DocHouse/BRM and IBM Lotus Quickr.
Special thanks go in this context to Wolfgang Brugger (managing director of DocHouse GmbH), who had the idea for such a development and entrusted the FG BAS research group with carrying it out. The concept and the prototype were developed from winter 2010 to summer 2011 by the students Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey and Mike Reuthers as part of their project internship. During conception and implementation, the project team was supervised, both on content and organizationally, by Dipl.-Wirt.-Inform. Roland Diehl.
This work report deals with the development of an architecture for complex context-aware services in the mobile environment. It describes the underlying problem and a theoretical solution approach, which will be made concrete, prototypically implemented, and evaluated in the further course of the research. Due to the increased mobility of many people, there is a steadily growing demand for mobile communication and information services. In the mobile environment, most value-added services are currently offered by the mobile network operators. These are primarily services for the mass market that allow no personalization worth mentioning. Because of their limited functionality and the correspondingly low convenience, as well as the costs incurred by using them, such services are accepted by the mass market only to a limited extent. Moreover, there is no way to offer context-aware services for special groups of people at low cost, since the range of services is determined by the respective mobile network operator. This work considers not only today's obstacles but also the requirements that still stand in the way of the acceptance of complex context-aware services in particular. The goal is to provide an architecture that will enable personalized services in the future. Since sensitive context information is used in service provision, the protection of privacy must be treated as an important concern in the design of this architecture. Based on the identified requirements, this work proposes an architecture that makes it possible to offer context-aware services independently of device and provider, in an economically viable setting, while respecting the privacy of the user.
Querying for meta knowledge
(2008)
The Semantic Web is based on accessing and reusing RDF data from many different sources, to which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach reuses existing RDF modeling possibilities in order to represent meta knowledge. It then extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request meta knowledge without modifying the query proper. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern.
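The underlying idea of representing meta knowledge with existing RDF means can be illustrated with named graphs. The sketch below (Python with rdflib; the graph names, namespace and data are invented) stores facts and their provenance in separate named graphs and retrieves both with one plain SPARQL query; the paper's approach goes further by extending SPARQL processing so that the data query itself stays unmodified.

    from rdflib import Dataset, Literal, Namespace

    EX = Namespace("http://example.org/")
    ds = Dataset()

    # Facts live in one named graph ...
    facts = ds.graph(EX.facts)
    facts.add((EX.alice, EX.worksFor, EX.acme))

    # ... meta knowledge about that graph (source, certainty) in another.
    meta = ds.graph(EX.meta)
    meta.add((EX.facts, EX.source, Literal("crawl-2008-04")))
    meta.add((EX.facts, EX.certainty, Literal(0.9)))

    query = """
    PREFIX ex: <http://example.org/>
    SELECT ?s ?o ?src ?c WHERE {
      GRAPH ?g      { ?s ex:worksFor ?o }
      GRAPH ex:meta { ?g ex:source ?src ; ex:certainty ?c }
    }
    """
    for row in ds.query(query):
        print(row.s, row.o, row.src, row.c)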
In recent years, traceability has become increasingly accepted as a key factor for the success of software development projects. However, the multitude of different, poorly integrated taxonomies, approaches and technologies impedes the application of traceability techniques in practice. This paper presents a comprehensive view on traceability, pertaining to the whole software development process. Based on graph technology, it derives a seamless approach which combines all activities related to traceability information, namely definition, recording, identification, maintenance, retrieval, and utilization, in one single conceptual framework. The presented approach is validated in the context of the ReDSeeDS project, which aims at requirements-based software reuse.
The way information is presented to users in online community platforms has an influence on the way the users create new information. This is the case, for instance, in question-answering forums, crowdsourcing platforms, and other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this behaviour model, we demonstrate its ability to evoke macro phenomena comparable to those observed in real-world data.
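How such a simulation can evoke macro phenomena is easy to see in a toy version (Python; the policy, the attention window and all parameters are our own invention, not the model from the paper): if the presentation policy ranks items by popularity and users mainly act on what they see, activity concentrates on a few items.

    import random

    def simulate(policy, n_items=50, n_steps=5000, attention=5):
        """Each step, a user contributes to one of the top-ranked items."""
        counts = [0] * n_items
        for _ in range(n_steps):
            visible = policy(counts)[:attention]   # users only see a few items
            counts[random.choice(visible)] += 1
        return counts

    def by_popularity(counts):
        return sorted(range(len(counts)), key=lambda i: -counts[i])

    def random_order(counts):
        order = list(range(len(counts)))
        random.shuffle(order)
        return order

    # Popularity-based presentation concentrates activity on few items,
    # random presentation spreads it out over all of them.
    print(sorted(simulate(by_popularity), reverse=True)[:5])
    print(sorted(simulate(random_order), reverse=True)[:5])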
Cloud computing is a topic that has gained momentum in recent years. Current studies show that an increasing number of companies are evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework comprises three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on the operation of ERP systems are discussed. From the literature we identify areas for future research and propose a research agenda.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we show how to efficiently compute uniform interpolants of precompiled concepts with respect to a given signature.
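As a reference point for the kind of transformation involved, the standard negation normal form step for ALC pushes negation in front of concept names, using De Morgan's laws and the duality of the quantifiers (the example is ours; the linkless normal form adds further structural conditions on top of this):

    $\neg (A \sqcap \exists r.B) \;\equiv\; \neg A \sqcup \forall r.\neg B$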
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks representing typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
We propose a new approach for mobile visualization of and interaction with temporal information by integrating support for time into today's most prevalent visualization of spatial information, the map. Our approach allows for an easy and precise selection of the time that is of interest and provides immediate feedback to the users when interacting with it. It has been developed in an evolutionary process, gaining formative feedback from end users.
Designing Core Ontologies
(2011)
One of the key factors that hinders the integration of distributed, heterogeneous information systems is the lack of a formal basis for modeling the complex, structured knowledge that is to be exchanged. To alleviate this situation, we present an approach based on core ontologies. Core ontologies are characterized by a high degree of axiomatization and formal precision, which is achieved by building on a foundational ontology. In addition, core ontologies should follow a pattern-oriented design approach; by this, they are modular and extensible. Core ontologies allow for reusing the structured knowledge they define as well as integrating existing domain knowledge. The structured knowledge of the core ontologies is clearly separated from the domain-specific knowledge. Such core ontologies make it possible both to formally conceptualize their particular fields and to combine them flexibly to cover the needs of concrete, complex application domains. Over the last years, we have developed three independent core ontologies for events and objects, multimedia annotations, and personal information management. In this paper, we present the simultaneous use and integration of our core ontologies using the example of a complex, distributed socio-technical system for emergency response. We describe our design approach for core ontologies and discuss the lessons learned in designing them. Finally, we elaborate on the beauty aspects of our core ontologies.
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl starting from TimBL's FOAF profile (11 million triples), and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
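The entropy and mutual information figures referred to above follow from simple counts of how types and properties co-occur on resources. A minimal sketch of the measures on invented toy data (Python; this illustrates the definitions, not the paper's extraction pipeline):

    from collections import Counter
    from math import log2

    # Toy joint observations: (type of a resource, a property it uses).
    observations = [
        ("foaf:Person", "foaf:name"), ("foaf:Person", "foaf:knows"),
        ("foaf:Person", "foaf:name"), ("foaf:Document", "dc:title"),
        ("foaf:Document", "dc:creator"), ("foaf:Person", "foaf:knows"),
    ]
    n = len(observations)

    joint = Counter(observations)
    types = Counter(t for t, _ in observations)
    props = Counter(p for _, p in observations)

    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())

    # Mutual information between the two manifestations of schema information.
    mi = sum(c / n * log2((c / n) / (types[t] / n * props[p] / n))
             for (t, p), c in joint.items())

    print(f"H(types)={entropy(types):.2f} bits, H(props)={entropy(props):.2f} bits, "
          f"I(types;props)={mi:.2f} bits")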
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events, such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, for modeling event causality and correlation, and for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can easily be extended by domain-specific ontologies.
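Read as a data structure, the listed aspects could be sketched as follows (Python; all field names are invented for illustration, whereas F itself is defined as a set of ontology design patterns, not as classes):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Event:
        name: str
        time: str                                    # temporal extent
        place: Optional[str] = None                  # spatial extent
        participants: List[str] = field(default_factory=list)    # objects, persons
        parts: List["Event"] = field(default_factory=list)       # mereology
        caused_by: List["Event"] = field(default_factory=list)   # causality
        correlated_with: List["Event"] = field(default_factory=list)
        interpretation_of: Optional["Event"] = None  # another view of the same event

    # A composite emergency event: the alarm is a part, the fire its cause.
    fire = Event("fire", "2008-05-01T10:00", "building A")
    alarm = Event("fire alarm", "2008-05-01T10:02", "building A",
                  participants=["fire brigade"], caused_by=[fire])
    fire.parts.append(alarm)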
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from these standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and formats with the M3O and describe the experience gained during the integration process.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph both by explicitly listing triples and by SPARQL queries on one or multiple other graphs. This extension makes it possible to define a graph that includes views onto other graphs, and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying with the set of constraints imposed on the underlying SPARQL queries, are investigated. We demonstrate the capabilities of networked graphs by a simple use case.
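The flavour of defining one graph as a view onto another can be approximated with stock tools (Python with rdflib; graph names and data are invented): the CONSTRUCT query below materializes graph g2 from g1. Note that real networked graphs declare such views in RDF itself and come with a logic-program semantics that also covers recursive references, which this one-shot materialization does not capture.

    from rdflib import Dataset, Namespace

    EX = Namespace("http://example.org/")
    ds = Dataset()

    g1 = ds.graph(EX.g1)
    g1.add((EX.alice, EX.knows, EX.bob))
    g1.add((EX.bob, EX.knows, EX.carol))

    # Define g2 as a "view": the inverse of ex:knows in g1.
    view_query = """
    PREFIX ex: <http://example.org/>
    CONSTRUCT { ?y ex:knows ?x } WHERE { GRAPH ex:g1 { ?x ex:knows ?y } }
    """
    g2 = ds.graph(EX.g2)
    for triple in ds.query(view_query):
        g2.add(triple)

    print(list(g2))   # (bob knows alice), (carol knows bob)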
Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logics-based concepts of an ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., the structure, functionality, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping produces mere concept representations, but not an API at the level of granularity an application developer expects. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology, but it also leverages additional information the ontology provides, such as the inheritance structure of the ontology concepts, the scope of relevance of a concept, or design patterns defined in the ontology.
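The naive one-to-one mapping criticized above is easy to picture (Python; the mini-ontology and helper are invented): every concept becomes a bare class mirroring the subsumption hierarchy, and nothing more. A customized generator, as proposed in the paper, would additionally use the inheritance structure, relevance scope, and design patterns to decide, for instance, which concepts become classes and which collapse into attributes or methods.

    # Naive one-to-one mapping: every ontology concept becomes a bare class.
    ontology = {          # concept -> superconcept (None = top)
        "Agent": None,
        "Person": "Agent",
        "Organization": "Agent",
    }

    classes = {}

    def build(concept):
        """Create a Python class for a concept, creating its superclass first."""
        if concept not in classes:
            parent = ontology[concept]
            base = build(parent) if parent else object
            classes[concept] = type(concept, (base,), {})
        return classes[concept]

    for c in ontology:
        build(c)

    assert issubclass(classes["Person"], classes["Agent"])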