Replikation einer Multi-Agenten-Simulationsumgebung zur Überprüfung auf Integrität und Konsistenz
(2012)
In this Master's thesis I first present a simulation that is used to study the behaviour of agents which try to survive in a generated world and have a number of possible actions at their disposal. I then briefly discuss the theoretical aspects underlying this work. The main part of the thesis is my replication of a simulation implemented in Java by Andreas König in 2000 [Kö2000]. I summarize his work in strongly condensed form and then describe my own implementation.
In the final part of the thesis I compare the results of my simulation with those of Andreas König and discuss the tools used (Java and NetLogo). I conclude with a short summary of the project, reporting what could be realized, what did not work, and why.
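A minimal sketch of the kind of agent-based survival simulation described here (grid world, energy budget, a few possible actions); the rules and parameters are invented placeholders, not those of König's model:

```python
import random

class Agent:
    def __init__(self, world_size, rng):
        self.rng = rng
        self.x = rng.randrange(world_size)
        self.y = rng.randrange(world_size)
        self.energy = 20

    def step(self, food, world_size):
        # choose one of a few possible actions: eat if food is present, otherwise move
        if (self.x, self.y) in food:
            food.remove((self.x, self.y))
            self.energy += 10
        else:
            dx, dy = self.rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            self.x = (self.x + dx) % world_size
            self.y = (self.y + dy) % world_size
        self.energy -= 1
        return self.energy > 0          # False means the agent has died

rng = random.Random(0)
size = 20
food = {(rng.randrange(size), rng.randrange(size)) for _ in range(120)}
agents = [Agent(size, rng) for _ in range(30)]
for tick in range(200):
    agents = [a for a in agents if a.step(food, size)]
print(f"survivors after 200 ticks: {len(agents)}")
```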
Standards are widely used in the computer science and IT industry. Different organizations, such as the International Organization for Standardization (ISO), are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats that enable the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement; they allow the packaging of evidences needed to prove the fulfilment of criteria related to a public tender. The work first examines the requirements that are common to both specifications, which introduces a new view on the requirements at a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of a package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which serves as a basis when a concrete test system is created, is modified.
Finally the author comes to a positive conclusion: common requirements have been identified and captured, and the involved systems have been adapted so that further types of requirements, which were not supported before, can now be captured. Moreover, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
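A minimal sketch of the kind of validation task such a test system performs: checking that every XML file in a zip package is well-formed and that files referenced from a manifest are present. The manifest name and the reference attribute are placeholders, not the actual Common Cartridge or VCD layout:

```python
import zipfile
import xml.etree.ElementTree as ET

def validate_package(path, manifest_name="manifest.xml", href_attr="href"):
    """Return a list of human-readable validation errors for a zip package."""
    errors = []
    with zipfile.ZipFile(path) as pkg:
        names = set(pkg.namelist())
        # task 1: every XML file in the package must be well-formed
        for name in names:
            if name.endswith(".xml"):
                try:
                    ET.fromstring(pkg.read(name))
                except ET.ParseError as e:
                    errors.append(f"{name}: not well-formed ({e})")
        # task 2: all files referenced from the manifest must exist in the package
        if manifest_name in names:
            try:
                root = ET.fromstring(pkg.read(manifest_name))
                for elem in root.iter():
                    ref = elem.get(href_attr)
                    if ref and ref not in names:
                        errors.append(f"{manifest_name}: missing referenced file {ref}")
            except ET.ParseError:
                pass   # already reported as not well-formed above
        else:
            errors.append(f"missing {manifest_name}")
    return errors

# usage with a hypothetical package file; an empty list means the checked rules passed
print(validate_package("example-package.zip"))
```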
Planning routes for trucks with a trailer is a complex procedure. In order to simplify this process, a route is segmented into elementary components which represent basic motions of the considered vehicle. These elementary components are called maneuvers and are composed of two parts: first, paths are constructed for certain reference points; second, the vehicle is enclosed by a corridor during the execution of a maneuver. The paths of the vehicle have to take driveability into consideration and must respect the kinematic constraints of the vehicle. The maneuver corridor can be used as a basis to guarantee collision-free motion planning: no part of the vehicle leaves the corridor during the maneuver. There are different types of maneuvers; currently, the bending maneuver, the cusp maneuver and the straight-line maneuver can be distinguished. In addition, a maneuver can be created with two different construction methods, the conventional and the iterative method.
In this thesis, a data structure to construct a maneuver is designed and implemented. The data structure is integrated into an already existing tool. The user can interact with the software to adjust various parameters of a maneuver. Afterwards the maneuver is generated based on these parameters. This also includes a visualization within the software, which can plot the parts of a maneuver. The visualization can be exported to an image file.
Web programming spans a huge field of different technologies and concepts. Each technology implements a web-application requirement such as content generation or client-server communication; different technologies within one application are organized by concepts, for example architectural patterns. The thesis describes an approach for creating a taxonomy of these web-programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web-application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web-application frameworks with the help of Wikipedia. Finally, the created implementations concerning web programming are documented.
To meet the growing demands in the automotive industry, car manufacturers constantly reduce the depth of production and shift value-adding processes to their suppliers. This requires companies to work together more closely and promotes the creation of complex logistics networks. To meet the requirements for information exchange, a consortium of automobile manufacturers launched the project RFID-based Automotive Network (RAN) in 2009. The initiative aims at creating a standardized architecture for efficient material flow management along the entire supply chain. The core component of this architecture is the Informationbroker, an information unit which automatically communicates data captured via Auto-ID technology to supply chain participants. In cooperation with IBS AG, a software company and consortium partner in the project, the thesis focuses on the exchange of goods data.
At first, theoretical foundations are presented by describing the characteristics of a supply chain and explaining standardization efforts and related processes. The chapter on the supply chain focuses on trends in the automotive industry to create a link to the project. The topic of standardization provides in-depth information on electronic data exchange standards and thereby leads over to the Informationbroker concept. In the analytical part, reference projects with a similar problem are presented and set in relation to RAN. Based on project documents, system requirements are defined and models are created in order to illustrate the problem; Rich Pictures are used to describe the current and the target state.
Based on these models, the flow of goods-related data between two companies is depicted and the role of the Informationbroker in the information exchange is clarified. The thesis aims at establishing an understanding of the challenges of the project and of how the proposed concepts of the initiative can lead to an optimization of an automotive supply chain.
The estimation of various social objects is necessary in different fields of social life, science, education, etc. This estimation is usually used for forecasting, for evaluating different properties and for other goals in complex man-machine systems. At present such estimation is possible by means of computer and mathematical simulation methods, which involves significant difficulties, such as: the time-distributed process of receiving information about the object; the determination of a suitable mathematical apparatus and the structural identification of the mathematical model; the approximation of the mathematical model to real data, its generalization and parametric identification; and the identification of the structure of the links of the real social object. The solution of these problems is impossible without a special intellectual information system which combines different processes and allows predicting the behaviour of such an object. However, most existing information systems address only one special problem. From this point of view the development of a more general technology for designing such systems is very important. The development of an intellectual information system for estimating and forecasting the professional ability of respondents in the sphere of education is a concrete example of such a technology. Job orientation is necessary and topical in present economic conditions: it helps to solve the problem of the expediency of investments in a certain sphere of education. Scientifically validated, combined diagnostic methods of job orientation are necessary to carry out professional selection in higher education establishments. The requirements of modern society are growing, and the earlier developed techniques are unable to meet them sufficiently; all of these techniques lack the ability to account for all necessary professional and personal characteristics. Therefore, a system of various tests has to be used, and new methods of job orientation for entrants need to be developed. An information model of the process of job orientation is necessary for this purpose. It would therefore be desirable to have an information system capable of giving recommendations concerning the choice of a trade on the basis of the complex personal characteristics of entrants.
Remote rendering services offer the possibility to stream high-quality images to less powerful devices. Due to the transmission of data, the interactivity of such applications suffers from a delay. A method to reduce the delay of camera manipulation on the client is called 3D warping; this method, however, causes artifacts. In this thesis different remote rendering setups are presented, the artifacts and improvements of the warping method are described, and methods to reduce the artifacts are implemented and analyzed.
As Enterprise 2.0 (E2.0) initiatives are gradually moving out of the early experimentation phase it is time to focus greater attention on examining the structures, processes and operations surrounding E2.0 projects. In this paper we present the findings of an empirical study to investigate and understand the reasons for initiating E2.0 projects and the benefits being derived from them. Our study comprises seven in-depth case studies of E2.0 implementations. We develop a classification and means of visualising the scope of E2.0 initiatives and use these methods to analyse and compare projects.
Our findings indicate a wide range of motivations and combinations of technology in use, and show a strong emphasis on the content management functionality of E2.0 technologies.
Compared to conventional computer graphics (perspective projection), ray tracing offers decisive advantages, which are mainly due to the comparatively high physical correctness of the method. Its weakness, however, lies in the immense computational cost.
A ray tracer is so computationally expensive because at least one ray has to be shot for every pixel, and this ray must be intersected with all objects in the scene. In addition, there are the rays that arise when rays are reflected at objects (recursion). To reduce this computational cost and at the same time produce a better image, an adaptive sampler is to support the ray tracer. During rendering, the adaptive sampler observes the progressive improvement of the generated image and excludes those pixels from further computation for which shooting additional rays is no longer worthwhile.
Unlike a purely progressive ray tracer, the adaptive sampler stops computing once the image has converged. The adaptive sampler is thus intended to produce a better image faster and thereby increase performance. Overall, the adaptive sampler is expected to provide advantages for certain scenes, among them scenes with purely diffusely lit image regions as well as scenes whose regions differ strongly in computational cost. A normal ray tracer cannot judge how useful its shots are; it can only shoot more rays in the hope of effectively improving the image.
There are, however, many scenarios in which a linearly increasing number of shots per pixel does not yield a uniform improvement of the image: some image regions already look good while others are still very noisy. One would therefore like to exclude regions that have already converged, i.e. in which further shots no longer make a noticeable difference, and use the computing power where it is still needed.
It is important that pixels which have not yet converged sufficiently are not excluded from the computation too early. The adaptive sampler is supposed to keep working until every pixel permanently shows no further changes, meaning that the probability of a significant colour change of a pixel caused by shooting one more ray (with several light sources in RenderGin, several rays per pixel) is small enough. No probability is computed internally; instead the ray tracer is given a kind of memory: it stores the changes in the rendered image and their history in dedicated memory images. The "memory" for the old image (the state of the image in the last iteration over the pixels) represents the short-term memory; it is exact. The long-term memory is represented by three further images: the first stores the number of rays shot per pixel, the second is a boolean image indicating for each pixel whether it should still be included in the computation, and the third stores how often each pixel has undergone a colour change smaller than the required maximum distance of a pixel to itself (before and after shooting another ray).
With these three images it is possible to take additional quantitative information into account alongside the qualitative information obtained by comparing the new and the old image.
In this thesis I investigate whether the desired effects occur and whether a performance gain is possible when the approach is integrated into the existing structure of RenderGin. The adaptive sampler was implemented as a plug-in for the software RenderGin by Numenus GmbH. RenderGin is a real-time capable, progressive ray tracer distinguished by its performance; image generation takes place entirely on the CPU, and the graphics card is only needed to display the generated image.
The plug-in was implemented in Microsoft Visual Studio 2010 using the RenderGin SDK of Numenus GmbH.
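A minimal sketch, in Python rather than RenderGin's C++ plug-in API, of the per-pixel bookkeeping described above; the array names, the colour-difference test and the thresholds are illustrative assumptions:

```python
import numpy as np

def update_convergence(old_img, new_img, shots, active, stable_count,
                       max_delta=0.01, required_stable=8):
    """Update the three 'long-term memory' images after one sampling pass.

    old_img, new_img : float arrays (H, W, 3), previous and current image estimate
    shots            : int array (H, W), rays shot per pixel
    active           : bool array (H, W), pixels still being sampled
    stable_count     : int array (H, W), consecutive passes with a small colour change
    """
    # per-pixel colour change between the last two estimates
    delta = np.linalg.norm(new_img - old_img, axis=2)

    # small changes count towards stability, one large change resets the counter
    small = delta < max_delta
    stable_count[small & active] += 1
    stable_count[~small & active] = 0

    shots[active] += 1
    # a pixel is excluded once it has been stable for enough consecutive passes
    active &= stable_count < required_stable
    return shots, active, stable_count

# usage: render one more sample for all pixels where `active` is True,
# then call update_convergence, and stop once active.any() becomes False
```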
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency directed backjumping and branch local operation. As an instance of that framework an adaption of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
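As an intuition for projection (variable elimination) on clause sets, here is a minimal sketch of Davis-Putnam resolution-based forgetting of one variable; this is a textbook construction for illustration, not the tableau-based procedure of the paper:

```python
from itertools import product

def eliminate(clauses, var):
    """Project a CNF clause set (frozensets of signed ints) onto the remaining
    variables by resolving away `var` (one Davis-Putnam elimination step)."""
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = [c for c in clauses if var not in c and -var not in c]
    resolvents = set()
    for p, n in product(pos, neg):
        r = (p - {var}) | (n - {-var})
        # drop tautological resolvents that contain a literal and its negation
        if not any(-lit in r for lit in r):
            resolvents.add(frozenset(r))
    return set(rest) | resolvents

# example: exists b . (a or b) and (not b or c)  is equivalent to  (a or c)
cnf = {frozenset({1, 2}), frozenset({-2, 3})}
print(eliminate(cnf, 2))   # {frozenset({1, 3})}
```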
Forwarding loops
(2013)
The annotation of digital media is not a new area of research; it has been widely investigated, and there are many innovative ideas for designing the annotation process. The largest segment of related work concerns semi-automatic annotation. One characteristic is common to the related work: none of it puts the user in focus. If you want to build an interface that supports and satisfies the user, you first have to carry out a user evaluation. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction and intuitiveness. After collecting many ideas and discussing them with a team of experts, we selected only a few of them. Different combinations of these selected variables form the interfaces we investigate in our usability study. The results of the usability study suggest that autocompletion and suggestion features support the user. Furthermore, colouring tags to group them into categories does not disturb the user and even tends to be supportive; the same tendency emerges for an interface consisting of two user interface elements. An example is also given of how definitions of "intuitive" differ. This thesis leads to the conclusion that, for reasons of user satisfaction and support, it is legitimate to depart from classical annotation interface features, and that further usability studies on annotation interfaces should be carried out.
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time, and the implementation makes use of the OpenCL standard. The focus of research in this thesis is to analyse the performance of this combination of components.
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts: people can hear and interpret human speech and read the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-based feature extraction. This thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The approach presented is based on 2D features. These features are detected in the face. No neutral face is needed as reference. The system extracts two types of facial parameters. The first type consists of distances between the feature points. The second type comprises angles between lines connecting the feature points. Both types of parameters are implemented and tested. The parameters which provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass Support Vector Machine classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The system presented is implemented in C++ and is provided with an interface to the Robot Operating System (ROS).
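A minimal sketch of the two parameter types described, distances and angles computed from 2D landmark positions; the landmark coordinates, index pairs and triples are placeholders, and the resulting vector would be passed to the multiclass SVM:

```python
import numpy as np

def distance_params(pts, pairs):
    """Euclidean distances between selected landmark pairs."""
    return np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])

def angle_params(pts, triples):
    """Angles at the middle point of each landmark triple, in radians."""
    angles = []
    for i, j, k in triples:
        v1, v2 = pts[i] - pts[j], pts[k] - pts[j]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

# illustrative 2D landmarks (e.g. mouth corners, a chin point, two brow points)
pts = np.array([[120, 200], [180, 198], [150, 220], [110, 150], [190, 151]], float)
features = np.concatenate([
    distance_params(pts, pairs=[(0, 1), (0, 2), (1, 2)]),
    angle_params(pts, triples=[(0, 2, 1), (3, 0, 2)]),
])
print(features)   # feature vector for one face image
```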
Opinion Mining : Using Twitter as a source of opinion for the prediction of stock market prices
(2012)
Besides the basic theoretical concepts of automated text analysis, which form the foundation of this work, an overview of the current state of research in the analysis of Twitter messages is given. To this end, various research results from the currently available scientific literature are explained, compared with each other and critically questioned. Their results and approaches are incorporated into our own research where they appear useful; the goal is to make the best possible use of the current state of research.
A further goal is to give the reader an overview of different machine data analysis methods for detecting opinions. This is necessary in order to better understand, in their scientific context, the analysis methods used later in this work. Since these methods can be applied in different ways, several analysis methods are presented and compared, which is intended to demonstrate the feasibility of the subsequent opinion analysis. To ensure sufficient accuracy in the investigation, an already existing and evaluated framework is used; it is available as an API and is therefore covered as well. The core of this work is devoted to the analysis of Twitter messages with the methods of opinion mining.
It is investigated whether correlations can be found between the sentiment of Twitter messages and the stock price of a company. For this purpose, the sentiment concerning Google Inc. is examined over a period of one month and the findings are compared with the company's stock price. The goal is to verify the findings of (Sprenger & Welpe, 2010) and (Taytal & Komaragiri, 2009) in this field and to answer further questions.
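A minimal sketch of the kind of comparison described, assuming one already has a daily sentiment score and daily closing prices; both series below are invented placeholders:

```python
import numpy as np

# hypothetical daily data: mean Twitter sentiment per day and closing prices
sentiment = np.array([0.12, 0.30, -0.05, 0.22, 0.18, 0.40, 0.10,
                      -0.15, 0.05, 0.25, 0.33, 0.08, -0.02, 0.20])
close     = np.array([610.2, 612.5, 609.8, 611.0, 613.4, 617.9, 616.3,
                      612.1, 612.8, 615.5, 618.2, 617.0, 616.1, 618.9])

# compare sentiment on day t with the relative price change on day t+1
returns = np.diff(close) / close[:-1]
corr = np.corrcoef(sentiment[:-1], returns)[0, 1]
print(f"Pearson correlation sentiment(t) vs. return(t+1): {corr:.3f}")
```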
The Microsoft Kinect is currently popular in many application areas because of its low price and good precision. Controlling the mouse cursor with it, however, is impractical due to jitter in the skeleton data. My approach tries to stabilize the cursor position with common techniques from image processing; the input for this is the Kinect colour camera. A final position is calculated from the different positions delivered by the tracking techniques. For controlling the cursor, the right hand is tracked, and a simple click gesture is also developed. The evaluation will show whether this approach was successful.
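A minimal sketch of one way to fuse several tracker outputs and smooth the result over time, assuming each tracker returns a 2D hand position per frame; the weights and the smoothing factor are illustrative, not the thesis' actual parameters:

```python
import numpy as np

class CursorStabilizer:
    """Fuse positions from several trackers and smooth them over time."""
    def __init__(self, weights, alpha=0.3):
        self.weights = np.asarray(weights, dtype=float)
        self.alpha = alpha          # exponential smoothing factor
        self.state = None           # last smoothed cursor position

    def update(self, positions):
        positions = np.asarray(positions, dtype=float)   # shape (n_trackers, 2)
        fused = np.average(positions, axis=0, weights=self.weights)
        if self.state is None:
            self.state = fused
        else:
            # exponential moving average damps per-frame jitter
            self.state = self.alpha * fused + (1 - self.alpha) * self.state
        return self.state

stab = CursorStabilizer(weights=[0.5, 0.3, 0.2])
print(stab.update([[320, 240], [324, 238], [318, 243]]))   # fused, smoothed position
```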
The natural and the artificial environment of mankind is of enormous complexity, and our means of understanding this complex environment are restricted unless we make use of simplified (but not oversimplified) dynamical models, with the help of which we can explicate and communicate what we have understood in order to discuss among ourselves how to re-shape reality according to what our simulation models make us believe to be possible. Being both a science and an art, modelling and simulation is still one of the core tools of extended thought experiments, and its use is still spreading into new application areas, particularly as the increasing availability of massive computational resources allows for simulating more and more complex target systems.
In the early summer of 2012, the 26th European Conference on Modelling and Simulation (ECMS) once again brings together the best experts and scientists in the field to present their ideas and research, and to discuss new challenges and directions for the field.
The 2012 edition of ECMS includes three new tracks, namely Simulation-Based Business Research, Policy Modelling, and Social Dynamics and Collective Behaviour, and extends the classical Finance and Economics track with Social Science. It attracted more than 110 papers and 125 participants from 21 countries, with backgrounds ranging from electrical engineering to sociology.
This book was inspired by the event and compiles the most recent concepts, advances, challenges and ideas associated with modelling and computer simulation. It contains all papers carefully selected by the programme committee from the large number of submissions for presentation during the conference and is organised according to the still growing number of tracks which shaped the event. The book is complemented by two invited pieces from other experts that discuss an emerging approach to modelling and a specialised application. We hope these proceedings will serve as a reference to researchers and practitioners in the ever growing field as well as an inspiration to newcomers to the area of modelling and computer simulation. The editors are honoured and proud to present you with this carefully compiled selection of topics and publications in the field.
From September 4 to 11, 1992, a first meeting between Ukrainian and German scientists interested in mathematical and computer modeling of social processes was held at Vorzel' near Kiev. The meeting had been planned for nearly three years by Igor V. Chernenko and Mikhail V. Kuz'min, then members of the research group on mathematical modeling in sociology at the Institute of Sociology of the Academy of Science of the Ukrainian Republic, and had to be postponed twice due to the political developments in the former Soviet Union; but thanks to the organizers' perseverance (and in spite of a strike of the airport personnel at Kiev Borispol Airport on the eve of the conference) the conference could at last be realized. The main purpose of the conference was to discuss a synergetic interpretation of large-scale destructive social processes as catastrophic phenomena in self-organized systems.
This paper originates from the FP6 project "Emergence in the Loop (EMIL)", which explores the emergence of norms in artificial societies. Part of work package 3 of this project is a simulator that allows for simulation experiments in different scenarios, one of which is collaborative writing. The agents in this still prototypical implementation are able to perform certain actions, such as writing short texts, submitting them to a central collection of texts (the "encyclopaedia") or adding their texts to texts formerly prepared by other agents. At the same time they are able to comment upon others' texts, for instance checking for correct spelling, for double entries in the encyclopaedia or for plagiarism. Findings of this kind lead to reproaching the original authors of blamable texts. Under certain conditions blamable activities are no longer performed after some time.
The goal of this research project is the selection and evaluation of open-source ERP systems with regard to their efficient applicability in companies, for the purpose of setting up an "ERP Future Lab" in which medium-sized trading companies can test the installed system(s). The project concludes with the installation of a running system on which the business processes specified by the client can be handled. Furthermore, the clients are to be trained on the system, and documentation of the software (installation/operation) and of the project is to be produced.
Today smartphones can be found everywhere, a situation that has created a hype around Augmented Reality (AR) and AR apps. The big question is: do these applications provide real added value? To make AR practical it is important to combine the computational power of a computer with the advantages of AR; an easy and fast way of interaction is essential.
A poker assistance software is an ideal test bed for an AR application with real added value: estimating the winning probability and fast, automated tracking of the playing cards form the field of investigation.
In this context it is also interesting to evaluate the added value of AR applications in general.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved by using continuous depth and colour data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. Accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
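A minimal sketch of a standard particle swarm optimizer over a 6DOF pose vector (x, y, z, roll, pitch, yaw); this shows plain PSO, not the modified variant of Khan et al., and the cost function is a placeholder for whatever alignment error the registration uses:

```python
import numpy as np

def pso_pose(cost, n_particles=50, iters=100, bounds=1.0, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(pose) over 6D pose vectors with a basic PSO."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-bounds, bounds, size=(n_particles, 6))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 6))
        # inertia + cognitive (personal best) + social (global best) terms
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# toy cost: squared distance to a known "true" pose, just to show the interface
true_pose = np.array([0.2, -0.1, 0.5, 0.05, 0.0, 0.1])
print(pso_pose(lambda p: np.sum((p - true_pose) ** 2)))
```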
In silico simulation of biological systems is an important sub-area of computational biology (systems biology) and is increasingly becoming an integral part of research, for which different kinds of software tools are required. At present a multitude of tools for several areas exists, but most of them are application-specific and cannot be combined: for instance, a software tool for the simulation of biochemical processes is not able to interact with tools for morphology simulation and vice versa. In order to obtain realistic results with computer-aided simulations it is important to regard the biological system in its entirety. The objective is therefore to develop a software framework which provides an interface structure to combine existing simulation tools and offers interaction between all affiliated systems. Consequently, it becomes possible to re-use existing models and simulation programs, and dependencies between them can be defined. The system is designed to operate as an extendable architecture for various tools. The thesis shows the usability and applicability of the software and discusses potential improvements.
This bachelor thesis deals with the user-friendly design of applications (apps) on mobile devices, a subdomain of software ergonomics. Two applications are analysed with the aim of developing a recommendation for how help functionality on a mobile device should be provided. The study focuses primarily on appropriate gestures for invoking the help function on a mobile device. The results show that the test persons want a customized help function but reject extensive help descriptions, which they find overwhelming.
The general reachability problem in place/transition nets addresses the question of whether a particular target marking is reachable from another marking. Linear algebra, as a branch of mathematics, can be used to describe and analyse Petri nets. However, cycles in nets pose a problem for such a reachability analysis: whether the tokens required by certain transitions are available immediately, only after traversing certain cycles, or not at all cannot be seen from the equations, yet it directly affects reachability. There are several ways to approach this problem with linear algebra. In this thesis, existing methods as well as new approaches based on the reproducibility of the empty marking are discussed.
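A minimal sketch of the linear-algebraic view mentioned here, the Petri net state equation M' = M0 + C·σ, checked for a small illustrative cycle of three places and three transitions (the net and markings are invented):

```python
import numpy as np

# incidence matrix C (places x transitions): t1: p1->p2, t2: p2->p3, t3: p3->p1
C = np.array([[-1,  0,  1],
              [ 1, -1,  0],
              [ 0,  1, -1]])

M0 = np.array([1, 0, 0])       # initial marking
M  = np.array([0, 0, 1])       # target marking

sigma = np.array([1, 1, 0])    # candidate firing count vector: fire t1, then t2
print(np.array_equal(M0 + C @ sigma, M))   # True: the state equation is satisfied

# Note: a nonnegative integer solution of C*sigma = M - M0 is only a NECESSARY
# condition for reachability; cycles may make the counted firings unrealizable,
# which is exactly the problem the thesis investigates.
```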
This case study was produced as the object of investigation for a bachelor thesis and was created according to the eXperience case study methodology. The goal of the bachelor thesis was to identify benefit aspects in this concrete case. At the user company DOCHOUSE, an interface between the internal CRM system and a collaborative system for external access was introduced.
Special thanks go to Mr. Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the creation of the case study and entrusted the FG BAS research group with carrying it out. The case study was conducted in winter 2011 by the student Martin Surrey and by Roland Diehl, research associate of the group.
Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse und Lotus Quickr
(2012)
The working report "Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse/BRM und IBM Lotus Quickr" is the result of a student project. The goal of the project was to develop integration scenarios between a CRM system and a collaboration system and to implement a prototypical interface with corresponding functionality between the systems DocHouse/BRM and IBM Lotus Quickr.
Special thanks go to Mr. Wolfgang Brugger (managing director of DocHouse GmbH), who had the idea for such a development and entrusted the FG BAS research group with carrying it out. The concept and the prototype were developed from winter 2010 to summer 2011 by the students Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey and Mike Reuthers as part of their project internship. The project team was supervised, both on content and organizationally, by Dipl.-Wirt.-Inform. Roland Diehl.
The measurement of the benefits of collaborative technologies is a challenge caused by sociological influences on these systems. In this work, the introduction of a collaborative system into an existing IT environment is examined, so that a value can be identified for the company. The analysis of frameworks has shown that the benefits can be reaped through better management and organization of in-company resources and optimization of the underlying process.
The settlement history of the Rhine-Moselle triangle reaches back to Roman times, and numerous examples of historical architecture can be found along the two great rivers. The former castle in the Kondertal, situated on the north-western spur of the Hinterberg, also belongs to this category. To obtain a more precise idea of the castle complex, a computer model was to be created; the practical realization of this model is the topic of this study thesis. The idea of building a "simple 3D model" with a suitably powerful software package was quickly abandoned. Instead, the goal of the work became a program that allows the user to construct the castle complex interactively and to modify it in any desired way.
This working report deals with the development of an architecture for complex context-aware services in a mobile environment. It describes the underlying problem and a theoretical solution approach, which is concretized, prototypically implemented and evaluated in the further course of the research process. Due to the increased mobility of many people, there is a steadily growing demand for mobile communication and information services. In the mobile environment, most value-added services are currently offered by the mobile network providers. These are primarily services for the mass market that allow no personalization worth mentioning. Because of the functionally simple services and the correspondingly low comfort, as well as the costs incurred by their use, such services are accepted by the mass market only to a limited extent. Moreover, there is no way to offer context-aware services for specific groups of people at low cost, since the range of services is determined by the respective mobile network provider. This work considers not only today's obstacles but also the requirements that still stand in the way of acceptance, especially of complex context-aware services. The goal is to provide an architecture that will enable personalized services in the future. Since sensitive context information is used in service provision, the protection of privacy must be treated as an important issue in the design of this architecture. Based on the identified requirements, this work proposes an architecture that makes it possible to offer context-aware services independently of device and provider in an economically viable setting, while taking the protection of the user's privacy into account.
The RoboCup Rescue League was founded with the intention to serve as an international communication platform for development of rescue robots. In regions hit by catastrophes, those robots are meant to find buried people, detect their physical condition and send the proper information to rescue teams.
At the University of Koblenz the rescue robot "Robbie" has been under development for years. Robbie gathers information about its environment through targeted control of its sensors and can act autonomously in unknown regions with the help of the previously collected data. It creates an internal 2D map of its environment, which provides enough information to navigate through space and to localize itself. Unfortunately, 2D maps have a major drawback: when confronted with uneven terrain or even multi-layered disaster areas, this technique reaches its limits. Considering that most afflicted areas will probably have rough ground, it is important to improve this technique.
This is why 3D mapping is required. Using the RoboCup Rescue scenario, this bachelor thesis implements a 3D mapping algorithm and thoroughly evaluates the respective shortcomings of 2D and 3D mapping.
Real-time systems play an important role in many application areas today. Examples can be found in robotics, factory automation and medical technology, but also in areas of everyday life such as the automotive sector or mobile communication. Despite this constant presence in daily life, real-time systems are usually embedded within larger systems and are therefore rarely consciously perceived as independent systems by their users.
To make specific properties of the field of real-time systems visible to learners in school lessons or in university education, the laboratory of the Real-Time Systems group at the University of Koblenz-Landau contains an experimental setup called "Wippe" (seesaw), in which a camera records and measures the movement of a ball on a flat surface. By tilting the surface around two axes, the setup is in principle able to move the ball and bring it to a halt; in particular, the ball must be prevented from falling off the surface. The present work deals with the analysis and revision of the Wippe system. In addition, a control algorithm for automatic control of the system is implemented.
The Wippe experiment is intended for didactic purposes and is meant to be an open system that exemplifies the characteristics and properties of a real-time system.
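A minimal sketch of one common choice for such a control algorithm: a PID controller per tilt axis that maps the ball's position error to a plate angle. PID is an assumption here for illustration; the thesis does not specify the controller type, and all gains are invented:

```python
class PID:
    """Simple discrete PID controller for one tilt axis of the plate."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# one controller per axis; the camera delivers the ball position each frame
pid_x = PID(kp=0.8, ki=0.05, kd=0.4, dt=1 / 30)
tilt_x = pid_x.step(setpoint=0.0, measured=0.12)   # tilt needed to push the ball back
print(tilt_x)
```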
In this study, the author evaluates four visualization methods (FlexViz, the Jambalaya applet, Experimental jOWL TouchGraph, and Plone ontology) from the pool of currently available web based visualization methods.
Further, this study attempts to answer the following research questions: 1. What are the requirements for ontology visualization? 2. How are comparative analysis and evaluation carried out? 3. How can a chosen method be tested? Based on the results of the evaluation, the visualization method FlexViz was found to be the most suitable for the given scenario. FlexViz was deployed and integrated within the Plone CMS, and logical parts of the VCD ontology were tested. Finally, FlexViz was analysed from different stakeholder perspectives. The results showed that FlexViz is a reliable tool for visualizing, understanding and analyzing a developed ontology because of its user-friendly and interactive interface, but performs poorly with regard to editing, extension and the development process.
Augmented Reality (AR) is becoming more and more popular. Its main use is to augment information into the user's field of vision via HMDs, e.g. the windshield of a car, glasses, or the displays of smartphones and tablets. To place augmentations correctly, the position and orientation (pose) of the camera in space must be determined.
Today this is commonly solved with artificial markers: known markers are placed in the room and the system is trained on this setup. The next step is to get rid of such artificial markers. Calculating the pose without them is called marker-less tracking. Instead of artificial markers, natural objects in the real world serve as reference points for calculating the pose, so the approach can be used flexibly and dynamically. The system no longer depends on artificial markers, but it needs much more knowledge about the scene to find the pose. This is usually compensated by technical assistance and/or by the user; however, neither solution is comfortable or efficient, which is why marker-less 3D tracking is still a large field of research.
This is the starting point of this bachelor thesis. It proposes an approach that needs only a set of 2D features from a given camera image and a set of 3D features of an object to find the initial pose. With this approach, neither technical nor user assistance is required, and the 2D and 3D features may be detected in any way desired.
The main idea of the approach is to build six correspondences between these sets, from which the pose can be estimated. To evaluate an estimated pose, each 3D feature is projected with that pose onto image coordinates and the distance between the projected 3D feature and the associated 2D feature is measured. The per-correspondence distances are summed up to score the whole pose: the lower this sum, the better the pose. Experiments showed that a correct pose typically yields a value of around ten pixels.
Because there are many possible ways to build six correspondences between the two sets, the selection process has to be optimized; a genetic algorithm is used for this optimization.
In the test cases the system worked quite reliably: the hit rate was around 90% with a runtime of approximately twelve minutes, whereas an exhaustive search without this optimization could easily take years.
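The abstract above describes scoring a candidate pose by the summed reprojection error over six 2D-3D correspondences. The following is a minimal sketch of that evaluation step under assumed conventions (a pinhole camera model and the names `camera_matrix`, `pose_score`); it is an illustration, not the thesis implementation.

```python
# Hedged sketch: score a candidate pose by the summed reprojection error over
# six 2D-3D correspondences; lower is better, roughly ten pixels indicated a
# correct pose according to the abstract. Camera model and names are assumptions.
import numpy as np

def project(points_3d, rotation, translation, camera_matrix):
    """Project 3D points into the image using a pinhole camera model."""
    cam_points = points_3d @ rotation.T + translation      # world -> camera frame
    pixels_hom = cam_points @ camera_matrix.T              # camera -> homogeneous pixels
    return pixels_hom[:, :2] / pixels_hom[:, 2:3]          # dehomogenize

def pose_score(points_3d, points_2d, rotation, translation, camera_matrix):
    """Sum of distances between projected 3D features and their 2D partners."""
    projected = project(points_3d, rotation, translation, camera_matrix)
    return float(np.linalg.norm(projected - points_2d, axis=1).sum())

# Hypothetical example with six correspondences observed from the identity pose:
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.random.rand(6, 3) + np.array([0.0, 0.0, 2.0])   # points in front of the camera
pts2d = project(pts3d, np.eye(3), np.zeros(3), K)           # perfect observations
print(pose_score(pts3d, pts2d, np.eye(3), np.zeros(3), K))  # -> ~0 for the true pose
```

In the thesis's setting, a genetic algorithm would propose different sets of six correspondences, and a score like this one would serve as the fitness function.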
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, and the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
In business administration, controlling the price and capacity of a particular good or service is known as yield or revenue management. Essentially, the aim is to market scarce goods in a way that maximizes revenue or profit. This is achieved on the one hand by forecasting future customer behavior and on the other hand by applying various revenue maximization techniques, such as price differentiation or overbooking. In this way, the willingness to pay of different demand groups can be skimmed optimally, capacity utilization increased, and revenue optimized. Yield management emerged after the deregulation of the American air transport market in 1978, when numerous new airlines began to compete with the established carriers. With their low prices they targeted leisure travelers in particular and were thus able to gain considerable market share. In the course of this price war, airlines developed various techniques for controlling prices and capacities in order to maximize their revenues. American Airlines, for example, was able to force many competitors out of the market despite a weaker cost structure when it fully deployed the new concept for the first time in 1985 (cf. uRK05, ch. A). Subsequently, other industries, such as car rental companies and hotels, adopted the concepts of yield management and applied them successfully. The mechanisms of demand forecasting and of optimizing product segments, contingents, and prices always remain the same; only the industry-specific parameters and optimization goals change. This diploma thesis examines the possible fields of application of yield management. An attempt is made to develop a general model for controlling the price and capacity of services or products. The model is then implemented for the specific example of the tourism industry, in particular the products of a tour operator, in a yield management module of the tour operator system TourPaX.
Querying for meta knowledge
(2008)
The Semantic Web is based on accessing and reusing RDF data from many different sources, to which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach re-uses existing RDF modeling possibilities in order to represent meta knowledge. Then, it extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request meta knowledge without modifying the query proper. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern.
This thesis deals with the distribution of large virtual networks to multiple physical hosts using the virtualization tools VNUML and EDIV. While VNUML is responsible for simulating the network, EDIV focuses primarily on the distribution. After a short explanation of basic terms and concepts from the field of network simulation, the two previously mentioned virtualization tools are described in detail. In both cases this starts with a detailed description of the proper installation and configuration, followed by a demonstration of the most important functions, such as starting or stopping a simulation. The creation of a suitable network scenario and the information scripts provided by EDIV are two further points described in this context. To use the presented possibilities of those tools in practice, a custom network scenario is designed and distributed to several hosts, so that the different functions of EDIV can be shown and described accurately.
In this diploma thesis a skeleton-based matching technique for 2D shapes is introduced. First, current approaches for the matching of shapes will be presented. The basics of skeleton-based matchings will be introduced. In the context of this thesis, a skeleton-based matching approach was implemented as presented in the original paper. This implementation is evaluated by performing a similarity search in three shape databases. Strengths and limitations of the approach are pointed out. In addition, the introduced algorithm will be examined with respect to extending it towards matching of 3D objects. In particular, the approach is applied to medical data sets: Pre- and postoperative CT images of the abdominal aorta of one patient will be compared. Problems and approaches for matching of 3D objects in general and blood vessels in particular will be presented.
In recent years, traceability has been more and more universally accepted as being a key factor for the success of software development projects. However, the multitude of different, not well-integrated taxonomies, approaches and technologies impedes the application of traceability techniques in practice. This paper presents a comprehensive view on traceability, pertaining to the whole software development process. Based on graph technology, it derives a seamless approach which combines all activities related to traceability information, namely definition, recording, identification, maintenance, retrieval, and utilization in one single conceptual framework. The presented approach is validated in the context of the ReDSeeDS-project aiming at requirements-based software reuse.
The way information is presented to users in online community platforms has an influence on the way the users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms or other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this behaviour model, we demonstrate the ability of the model to evoke macro phenomena comparable to those observed in real-world data.
This study investigates crowdfunding, a new form of financing projects. In the past years more and more crowdfunding platforms have emerged. The main question is whether crowdfunding is able to compete with the traditional ways of financing social projects. The history and development of crowdfunding is presented in this paper, and the different crowdfunding models are explained. An overview of German crowdfunding platforms is given. Based on successful social crowdfunding projects, key success factors are identified and described. In a case study, a concept for financing a social project through crowdfunding is developed based on the previous findings.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. The basis is a set of requirements to be defined for an AOP realization in the PHP environment. It is analyzed whether and how related languages and paradigms allow CCCs to be supported. In addition, the possibility of realizing AOP in PHP without a PHP extension is discussed. Furthermore, the existing approaches to implementing AOP in PHP are examined qualitatively. This thesis aims to present its own AOP solution for PHP that does not share the weaknesses of existing solutions.
Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
In this study thesis, the fundamentals of web services as well as components and APIs for realizing sticky logging are presented. A scenario for testing sticky logging is described and implemented as web services. The sticky-logging formalism is explained, and an API for creating the StickyLogs is implemented. The StickyLogs are exchanged within the SOAP attachments of the SOAP messages between the web services. For this purpose, a realization with a message handler under JAX-WS is programmed and explained.
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
Large and unknown data sets can be discovered easily and systematically by using faceted search. When implementing applications for smartphones, it has to be considered that, unlike desktop applications, only smaller screen sizes are available and the possibilities for interaction between user and smartphone are limited. These limitations can negatively influence the usability of an application. With FaThumb and MobileFacets, two mobile applications exist that implement and use faceted search, although only MobileFacets is designed for current smartphones with touchscreen. FaThumb, however, provides a novel facet navigation, which is re-implemented for current smartphones in MFacets as part of this work.
Moreover, this work covers a summative evaluation comparing the two applications, MFacets and MobileFacets, with regard to usability, and presents the results.
In this thesis we exercise a wide variety of libraries, frameworks and other technologies that are available for the Haskell programming language. We show various applications of Haskell in real-world scenarios and contribute implementations and taxonomy entities to the 101companies system. That is, we cover a broad range of the 101companies feature model and define related terms and technologies. The implementations illustrate how different language concepts of Haskell, such as a very strong typing system, polymorphism, higher-order functions and monads, can be effectively used in the development of information systems. In this context we demonstrate both advantages and limitations of different Haskell technologies.
Usability experts conduct user studies to identify existing usability problems. An established method is to record gaze behavior with an eye tracker. Evaluating the results of such studies requires a lot of effort. Automated recognition of good and bad usability in recorded user data can support usability experts in eye-tracking evaluation and reduce this effort. The objective of this bachelor thesis is to identify suitable eye-tracking metrics that correlate with the quality of usability. For this purpose, the central research question is answered: Which eye-tracking metrics correlate with the quality of a web form's operation? To answer the research question, a quantitative A/B user study with eye tracking was conducted that recorded the gaze behavior of 30 subjects while they filled out a web form. The web form was designed so that each page was available in a good and a bad variant according to known usability guidelines. The results confirm a significant correlation between the eye-tracking metric "number of visits to an AOI" and the quality of the operation of a web form. The metrics "number of fixations within an AOI" and "duration of fixations within an AOI" also correlate with the quality of usability. No correlation could be confirmed for the "time of the first fixation within an AOI".
Infinite worlds
(2013)
This work is concerned with creating a 2D action-adventure game with role-playing elements. It provides an overview of the various implementation tasks. First, the game idea and the game mechanics used are examined and a definition of requirements is created. After introducing the framework used, the software engineering concept for the realization is presented. The implementation of control components, game editor, sound and graphics is shown. The graphical implementation pays special attention to the abstraction of light and shadow in the 2D game world.
The goal of this thesis is to develop an application for an augmented reality telescope that implements various interaction and exploration possibilities. Beyond that, insights are to be gained into its influence on the communication of knowledge. Within six months, in addition to familiarization with the various subject areas, the design of a concept, and the implementation of the application, a thorough evaluation is to be carried out. Since the telescope is located, as part of the Bundesgartenschau 2011 [BUG], on the banks of the Rhine in Koblenz near the valley station of the cable car, it makes sense to put the Ehrenbreitstein Fortress into focus, with an emphasis on the fortress architecture. The application is intended to give the user an overview of the fortifications and, along the way, convey some historical facts.
In order to fulfil the various requirements and the associated tasks in a structured manner, a project plan was drawn up before the work began, dividing the six months into five phases, each ending with a milestone. The first phase covers familiarization with the fundamentals, the development of the concept, and the software engineering planning. The next phase deals with the pre-evaluation, which is intended to give more precise information about the quality of the developed concept. After the evaluation of the pre-evaluation, the first implementation phase follows, which is to conclude with a prototype. In the second implementation step, the graphical content is created and the prototypical application is filled with content.
At the end of the project, an evaluation is carried out that is intended to provide insights into the influence of augmented reality on the communication of knowledge.
Taktstraße
(2008)
A transfer line ("Taktstraße") enables automated processing of a workpiece by means of conveyor belts, light barriers, pushers, and processing stations. A control system is developed for a given transfer line, using Atmel's ATMega16 microcontroller. An external controller sends control commands via the TWI bus to the controller connected to the transfer line. To make the transfer line operable, a suitable circuit board is designed, as well as an LCD library as an output and information medium. The work covers all development stages required for a project within a computer science degree, from project planning and the acquisition of specific background knowledge, through hardware and software development, to extensive development and test phases.
We propose a new approach for mobile visualization and interaction of temporal information by integrating support for time with today's most prevalent visualization of spatial information, the map. Our approach allows for an easy and precise selection of the time that is of interest and provides immediate feedback to the users when interacting with it. It has been developed in an evolutionary process gaining formative feedback from end users.
The goal of this bachelor thesis was to program an existing six-legged robot so that it can autonomously explore an arbitrary environment and create a map of it. A laser scanner is integrated for perceiving this environment. To build the map and localize the robot, a suitable SLAM (Simultaneous Localization and Mapping) technique is applied to the sensor data. The map serves as the basis for the robot's path planning and obstacle avoidance, which are also developed within the scope of this bachelor thesis. For this purpose, both GMapping and Hector SLAM are implemented and tested.
In addition, an exploration algorithm for exploring the robot's environment is described in this bachelor thesis. The implementation on the robot is realized within the ROS (Robot Operating System) framework on a Raspberry Pi miniature PC.
Cloud computing is currently a particular trend in the IT industry, and ERP systems have likewise become indispensable in today's companies. An analysis of selected literature shows that cloud computing as an operating model for ERP systems merits particular investigation, since various challenges in the interplay of these technologies still have to be resolved. Building on this, case studies on the cloud ERP solutions of three different industry partners are compiled in order to compare the theoretical literature with the practical findings in a next step.
The goal of this thesis is to use the research questions to uncover differentiated benefit aspects of cloud ERP solutions and to explain how the theory matches the practical experience of experts. The case studies make clear that the three different cloud ERP providers differentiate themselves in the market primarily through the scope of their software and the company sizes of their target groups. In addition, the analysis and conclusion of the thesis show that, beyond the benefit aspects identified in theory, additional benefits can be generated by combining cloud ERP solutions, and that a specific target group for cloud ERP solutions already exists today. For the future, it remains to be seen how the cloud ERP market will develop and which further functionalities can be moved to the cloud, so that on-demand ERP systems may become a serious competitor to on-premise solutions.
Designing Core Ontologies
(2011)
One of the key factors that hinders integration of distributed, heterogeneous information systems is the lack of a formal basis for modeling the complex, structured knowledge that is to be exchanged. To alleviate this situation, we present an approach based on core ontologies. Core ontologies are characterized by a high degree of axiomatization and formal precision, which is achieved by building on a foundational ontology. In addition, core ontologies should follow a pattern-oriented design approach, which makes them modular and extensible. Core ontologies allow for reusing the structured knowledge they define as well as for integrating existing domain knowledge. The structured knowledge of the core ontologies is clearly separated from the domain-specific knowledge. Such core ontologies allow both for formally conceptualizing their particular fields and for being flexibly combined to cover the needs of concrete, complex application domains. Over the last years, we have developed three independent core ontologies for events and objects, multimedia annotations, and personal information management. In this paper, we present the simultaneous use and integration of our core ontologies using the example of a complex, distributed socio-technical system for emergency response. We describe our design approach for core ontologies and discuss the lessons learned in designing them. Finally, we elaborate on the beauty aspects of our core ontologies.
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be defined explicitly by attaching RDF types to the resources, or it can be provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between these two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
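The analysis above is built on entropy measures and mutual information between type-based and property-based schema information. The following is a minimal sketch of how such measures can be estimated from co-occurrence counts; the toy observations, the reduction to one property per resource, and all names are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: entropy of type-based and property-based schema information and
# their mutual information, estimated from simple co-occurrence counts.
import math
from collections import Counter

# Hypothetical observations: for each resource, its RDF type and one property it uses.
observations = [("foaf:Person", "foaf:name"), ("foaf:Person", "foaf:knows"),
                ("foaf:Document", "dc:title"), ("foaf:Person", "foaf:name")]

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

type_counts = Counter(t for t, _ in observations)
prop_counts = Counter(p for _, p in observations)
joint_counts = Counter(observations)

h_types = entropy(type_counts)        # H(T): information in type assignments
h_props = entropy(prop_counts)        # H(P): information in property usage
h_joint = entropy(joint_counts)       # H(T, P): joint information
mutual_information = h_types + h_props - h_joint   # I(T; P)

print(f"H(types)={h_types:.3f}  H(properties)={h_props:.3f}  I={mutual_information:.3f}")
```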
The lack of a formal event model hinders interoperability in distributed event-based systems. Consequently, we present in this paper a formal model of events, called F. The model is based on an upper-level ontology and provides comprehensive support for all aspects of events such as time and space, objects and persons involved, as well as the structural aspects, namely mereological, causal, and correlational relationships. The event model provides a flexible means for event composition, modeling of event causality and correlation, and allows for representing different interpretations of the same event. The foundational event model F is developed in a pattern-oriented approach, modularized in different ontologies, and can be easily extended by domain-specific ontologies.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is in particular useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experiences made during the integration process.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or multiple other graphs. By this extension it becomes possible to define a graph including a view onto other graphs and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying by the set of constraints imposed on the underlying SPARQL queries, is investigated. We demonstrate the capabilities of networked graphs by a simple use case.
Existing tools for generating application programming interfaces (APIs) for ontologies lack sophisticated support for mapping the logics-based concepts of the ontology to an appropriate object-oriented implementation of the API. Such a mapping has to overcome the fundamental differences between the semantics described in the ontology and the pragmatics, i.e., structure, functionalities, and behavior implemented in the API. Typically, concepts from the ontology are mapped one-to-one to classes in the targeted programming language. Such a mapping only produces concept representations but not an API at the desired level of granularity expected by an application developer. We present a Model-Driven Engineering (MDE) process to generate customized APIs for ontologies. This API generation is based on the semantics defined in the ontology but also leverages additional information the ontology provides. This can be the inheritance structure of the ontology concepts, the scope of relevance of an ontology concept, or design patterns defined in the ontology.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there has been no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies, to minimizing the number of vocabularies, to staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used in data sources is more important than how often its individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies data engineers follow when reusing vocabularies, and they may also be used to develop future vocabulary engineering tools.
The following report provides an overview of existing solutions for interaction in augmented reality. Based on three fundamental perspectives, different interaction concepts and implementations are presented from both the technical and the conceptual side. Besides questions of visualization, different types of user interfaces are introduced. The largest part is devoted to the three typical interaction tasks of selection and manipulation, navigation, and system control, and to the interaction techniques associated with them. The contents of this report are limited to the use of interaction elements in augmented reality environments, in contrast to research on interaction techniques in virtual reality environments (fully immersive or desktop-oriented). Although many interaction techniques from VR have served, and still serve, as models for AR, new techniques and concepts have developed specifically in the AR field. Consequently, VR techniques are only considered if they have been applied in AR applications or if their application appears sensible.
Social networking platforms as creativity fostering systems: research model and exploratory study
(2008)
Social networking platforms are enabling users to create their own content, share this content with anyone they invite and organize connections with existing or new online contacts. Within these electronic environments users voluntarily add comments on virtual boards, distribute their search results or add information about their expertise areas to their social networking profiles and thereby share it with acquaintances, friends and increasingly even with colleagues in the corporate world. As a result, it is most likely that the underlying knowledge sharing processes result in many new and creative ideas. The objective of our research therefore is to understand if and how social networking platforms can foster creativity. In addition, we look at how these processes could be embedded within the organizational structures that influence innovative knowledge sharing behavior. The basis for our research is a framework which focuses on the relations between intrinsic motivation, creativity and social networking platforms. First results of our empirical investigation of a social software platform called "StudiVZ.net" showed that our two propositions are valid.
The importance of innovation for securing a company's existence is increasing. Especially in the field of information and communication systems, changing conditions, reduced product life cycles (in some markets, e.g. the software market, down to a few months), and increased competition force many companies to adapt their innovation speed. Long-term profit will only be obtained by those companies that manage to consolidate or further enlarge their competitive position with technological innovation. The increasing relevance of new developments requires a separate treatment of technological innovations and calls for technology and innovation management. Not every innovation is the result of a single, spontaneous idea; many innovations evolve from the integration or combination of already existing technologies. Especially in markets with insignificant growth rates, this kind of innovation is quite important. The purpose of this article is a case-oriented identification of existing deficiencies within established recommendations for strategic technology and innovation management and a discussion of the role of modularity and technology integration as an alternative way to innovate.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of its appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers in relation to relevant e-commerce variables, and test the data in a nomological network for assessing differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study's results, point to limitations, and provide avenues for further research.
Semantic Web technologies have been recognized to be key for the integration of distributed and heterogeneous data sources on the Web, as they provide means to define typed links between resources in a dynamic manner, following the principles of dataspaces. The widespread adoption of these technologies in the last years led to a large volume and variety of data sets published as machine-readable RDF data that, once linked, constitute the so-called Web of Data. Given the large scale of the data, these links are typically generated by computational methods that, given a set of RDF data sets, analyze their content and identify the entities and schema elements that should be connected via links. Analogously to any other kind of data, in order to be truly useful and ready to be consumed, links need to comply with the criteria of high quality data (e.g., syntactically and semantically accurate, consistent, up-to-date). Despite the progress in the field of machine learning, human intelligence is still essential in the quest for high quality links: humans can train algorithms by labeling reference examples, validate the output of algorithms to verify their performance on a data set basis, as well as augment the resulting set of links. Humans, however, and expert humans in particular, have limited availability. Hence, extending data quality management processes from data owners/publishers to a broader audience can significantly improve the data quality management life cycle.
Recent advances in human computation and peer-production technologies opened new avenues for human-machine data management techniques, allowing non-experts to be involved in certain tasks and providing methods for cooperative approaches. The research work presented in this thesis takes advantage of such technologies and investigates human-machine methods that aim at facilitating link quality management in the Semantic Web. Firstly, and focusing on the dimension of link accuracy, a method for crowdsourcing ontology alignment is presented. This method, also applicable to entities, is implemented as a complement to automatic ontology alignment algorithms. Secondly, novel measures for the dimension of information gain facilitated by the links are introduced. These entropy-centric measures provide data managers with information about the extent to which the entities in the linked data set gain information in terms of entity description, connectivity and schema heterogeneity. Thirdly, taking Wikidata, the most successful case of a linked data set curated, linked and maintained by a community of humans and bots, as a case study, we apply descriptive and predictive data mining techniques to study participation inequality and user attrition. Our findings and method can help community managers make decisions on when and how to intervene with user retention plans. Lastly, an ontology to model the history of crowd contributions across marketplaces is presented. While the field of human-machine data management poses complex social and technical challenges, the work in this thesis aims to contribute to the development of this still emerging field.
This thesis presents a data model for shared annotations. An existing data model for annotations is extended by the possibility of modeling shared annotations. Subsequently, functions of shared annotations that become possible through the new annotation model are explained.
This thesis presents a multi-user annotation system called myAnnotations, which uses so-called "shared in-situ problem solving" annotations on arbitrary web pages to support collaborative learning and working scenarios. Particular attention is paid to the possible uses of "shared in-situ problem solving" annotations in the collaborative editing of a text and in collaborative assessments of learning success.
MapReduce with Deltas
(2011)
The MapReduce programming model is extended slightly in order to use deltas. Because many MapReduce jobs are being re-executed over slightly changing input, processing only those changes promises significant improvements. Reduced execution time allows for more frequent execution of tasks, yielding more up-to-date results in practical applications. In the context of compound MapReduce jobs, benefits even add up over the individual jobs, as each job gains from processing less input data. The individual steps necessary in working with deltas are being analyzed and examined for efficiency. Several use cases have been implemented and tested on top of Hadoop. The correctness of the extended programming model relies on a simple correctness criterion.
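As an illustration of the idea described above (re-using a previous result and processing only the changed input), the following is a minimal word-count sketch in plain Python. The job structure, the split into added and removed records, and all names are assumptions; the thesis's actual extension operates on Hadoop.

```python
# Hedged sketch: a word-count "job" that processes only a delta of added and
# removed records and merges the partial counts into the previous result.
from collections import Counter

def map_words(records):
    """Map step: emit one count per word occurrence."""
    counts = Counter()
    for record in records:
        counts.update(record.split())
    return counts

def apply_delta(previous_result, added_records, removed_records):
    """Combine previous counts with counts computed over the delta only."""
    result = Counter(previous_result)
    result.update(map_words(added_records))        # add contributions of new records
    result.subtract(map_words(removed_records))    # retract contributions of removed records
    return +result                                 # drop zero/negative counts

# Hypothetical example: one record replaced between two job executions.
full_run = map_words(["to be or not to be", "the question"])
delta_run = apply_delta(full_run, added_records=["to be is to do"],
                        removed_records=["the question"])
print(delta_run["to"], delta_run["question"])  # -> 4 0
```

In a compound pipeline, the output delta of one such job would become the input delta of the next, which is where the per-job savings mentioned in the abstract add up.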
This thesis presented the realization of a mobile security solution for surveillance purposes, developed with the help of the state-funded research project CamInSens. CamInSens aims to analyze the video and sensor data obtained in monitored danger areas so that threats can be detected and handled as early as possible. The goal of this work was to develop, on the basis of processed data from CamInSens, an interaction and visualization design that helps mobile security personnel make well-founded decisions in threat situations during later practical use. To this end, not only was software implemented, but a market survey of suitable devices and usable software libraries was also carried out.
Polsearchine: Implementation of a policy-based search engine for regulating information flows
(2013)
Many search engines regulate Internet communication in some way. It is often difficult for end users to notice such regulation or to obtain background information about it. Additionally, the regulation can usually be circumvented easily. This bachelor thesis presents the prototypical metasearch engine "Polsearchine" for addressing these weaknesses. Its regulation is established through InFO, a model for regulating information flows developed by Kasten and Scherp; more precisely, its extension for regulating search engines, SEFCO, is used. For retrieving search results, Polsearchine uses an external search engine API. The API can be interchanged easily to make this metasearch engine independent of one specific API.
This paper critically examines Google Calendar. For this purpose, the functions offered by the core product are studied with respect to privacy aspects. On the one hand, it is identified to what extent the product could infringe the users' privacy; on the other hand, the resulting risks are discussed. Furthermore, the functions are considered in terms of their usefulness both for the service provider Google and for the user. A detailed analysis demonstrates the critical aspects where a decision between privacy and functionality has to be made. The identified IT security mechanisms for minimizing the discussed risks are presented, discussed, and analyzed in terms of their feasibility. Afterwards, the individual solutions are summarized in a security concept and further requirements are explained. Finally, a Firefox add-on that implements the described solution concept is created to resolve the existing weaknesses as far as possible. Ultimately, the functionality of the add-on and its technical implementation are illustrated in detail.
The processing of data is often restricted by contractual and legal requirements for protecting privacy and IPRs. Policies provide means to control how and by whom data is processed. Conditions of policies may depend on the previous processing of the data. However, existing policy languages do not provide means to express such conditions. In this work we present a formal model and language allowing for specifying conditions based on the history of data processing. We base the model and language on XACML.
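To make the notion of a history-dependent condition concrete, the following is a minimal sketch of a check that permits an action only if a required processing step appears earlier in the data's history. All names and the simple rule ("publish only after anonymize") are invented for illustration and do not reflect the paper's XACML-based language.

```python
# Hedged sketch: a policy condition that depends on the history of previous
# processing steps, e.g. "data may only be published if it was anonymized before".
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingEvent:
    action: str      # e.g. "collect", "anonymize", "publish"
    actor: str       # party that performed the action

@dataclass
class HistoryCondition:
    required_prior_action: str   # an action that must appear earlier in the history

    def is_satisfied(self, history: List[ProcessingEvent]) -> bool:
        return any(event.action == self.required_prior_action for event in history)

def may_perform(action: str, history: List[ProcessingEvent], condition: HistoryCondition) -> bool:
    """Permit the requested action only if the history-based condition holds."""
    return condition.is_satisfied(history)

# Hypothetical example: publishing requires prior anonymization.
history = [ProcessingEvent("collect", "hospital"), ProcessingEvent("anonymize", "hospital")]
print(may_perform("publish", history, HistoryCondition("anonymize")))  # -> True
```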
ERP market analysis
(2013)
The current ERP market is dominated by the five largest vendors: SAP, Oracle, Microsoft, Infor, and Sage. Since the market and the solutions on offer are diverse, a well-founded analysis of the systems is required. Based on selected literature and key figures of the various companies, this thesis examines the theoretical side of the solutions offered by the five large ERP vendors. In addition, the use of the systems in practice is analyzed by surveying six users, and the systems are compared with each other.
The goal of the thesis is to answer the research questions and to make clear to the reader which ERP system is best suited for which industry and company size.
Furthermore, the thesis provides insight into which trends can be expected for ERP systems in the future and which challenges these pose for companies.
We present a non-linear camera pose estimator which is able to handle a combined input of point and line feature correspondences. Given three or more correspondences, the estimator works with an arbitrary number and mixture of the two feature types, which allows the pose to be estimated from a small and flexible set of 2D-3D correspondences. We also give an analysis of different minimization techniques, parametrizations of the pose data, and error measurements between 2D and 3D data. These are tested for point features, lines, and the combined case. The results identify the most stable and fastest non-linear parameter set for pose estimation in model-based tracking.
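As a rough illustration of non-linear pose estimation from mixed point and line correspondences, the following sketch parametrizes the pose as axis-angle plus translation and minimizes pixel reprojection errors for points together with point-to-line distances of projected 3D line endpoints. The residual choices, the camera matrix, and all names are assumptions for illustration, not the paper's estimator or parametrization.

```python
# Hedged sketch: least-squares pose refinement from point and line correspondences.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

def project(pts3d, rvec, tvec):
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + tvec    # world -> camera frame
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3]                         # dehomogenize to pixels

def residuals(params, pts3d, pts2d, line_endpoints3d, lines2d):
    rvec, tvec = params[:3], params[3:]
    res = (project(pts3d, rvec, tvec) - pts2d).ravel()      # point reprojection errors
    for (a, b), line in zip(line_endpoints3d, lines2d):
        p = project(np.array([a, b]), rvec, tvec)           # project both 3D line endpoints
        ph = np.hstack([p, np.ones((2, 1))])
        d = ph @ line / np.linalg.norm(line[:2])            # signed distances to the 2D line
        res = np.hstack([res, d])
    return res

# Hypothetical data: three points and one line observed from a known true pose.
true_r, true_t = np.array([0.1, -0.2, 0.05]), np.array([0.2, -0.1, 0.0])
pts3d = np.array([[0.0, 0.0, 4.0], [1.0, 0.5, 5.0], [-1.0, -0.5, 6.0]])
pts2d = project(pts3d, true_r, true_t)
ends3d = [(np.array([0.5, 1.0, 5.0]), np.array([-0.5, 1.0, 5.0]))]
e = project(np.array(ends3d[0]), true_r, true_t)
lines2d = [np.cross(np.append(e[0], 1.0), np.append(e[1], 1.0))]  # 2D line through endpoints

fit = least_squares(residuals, x0=np.zeros(6), args=(pts3d, pts2d, ends3d, lines2d))
print(np.round(fit.x, 3))  # should end up close to the true pose parameters
```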
This thesis describes the conception, implementation and evaluation of a collaborative multiplayer game for preschoolers for mobile devices.
The main objective of this thesis is to find out whether mobile devices such as smartphones and tablet computers are suitable for interaction by preschool children. In order to develop such a game, the relevant aspects were researched. On this basis a game was designed, which was finally tested by preschoolers.
The goal of this paper is to rebuild the Wippe (seesaw) experiment, as set up in the Real-Time Systems working group headed by Professor Dr. Dieter Zöbel, in a functional form using a LEGO Mindstorms NXT Education kit and to document the procedure. The resulting program code is to be prepared didactically and a building manual is to be provided. This is intended to ensure that pupils can experience the Wippe experimental setup as easily as possible in the classroom, even without direct access to a university or similar institution.
The goal of the joint project 3D-RETISEG is to develop an optimized, real-time capable software and hardware package for 2D/3D segmentation and classification based on hierarchical island structures, and for fast volume visualization of very large volume data from compressed data sets. The developments are to be integrated as a market-ready module into the software package VGStudioMax for concrete use cases in research and industry. In medicine, especially in the segmentation of volume data sets generated by different tomography techniques (e.g. CT and MRI), two-dimensional methods often deliver insufficient quality due to their anisotropic treatment of spatial information. Three-dimensional methods developed so far are, besides the poor availability of their implementations, often dedicated to specific problems and therefore hard to adapt to other use cases. Furthermore, methods that deliver good results usually have long runtimes. This is a major disadvantage especially in medical applications, where the quality of the results is ultimately judged by an expert. So that the result is quickly available for user-defined optimization through changes of the method's parameters, a method should require only very short computation times. A method with a short runtime can, moreover, be used intra-operatively. The 3D color structure code used in this project promises to enable a better and more generic method.
In enhanced reality environments, the viewer perceives the surroundings together with additional information displayed in semi-transparent data glasses. The calibration problem of the data glasses is the task of bringing the displayed information into alignment with the correct real background. Today's data glasses are comparatively bulky and heavy, so the glasses frequently slip slightly. If this slipping is not taken into account in the position of the overlay in the glasses, the overlay no longer matches the real background. This is illustrated as an example in Figure 1.1. After initial calibration of the semi-transparent data glasses to the viewer's eye, a re-calibration should therefore take place automatically and in real time when the position of the glasses changes relative to the eye (through slight slipping). Automatic re-calibration in case of slipping is investigated by us for the first time.
We introduce linear expressions for unrestricted dags (directed acyclic graphs) and finite deterministic and nondeterministic automata operating on them. Those dag automata are a conservative extension of the Tu,u-automata of Courcelle on unranked, unordered trees and forests. Several examples of dag languages acceptable and not acceptable by dag automata and some closure properties are given.
Software integration is an engineering task where image-based approaches are still considered slow and error-prone. In this thesis, we apply image-based software integration to the domain of online poker systems. That is, we implement the poker expert system PokerBot, which can play online. We use the method of screen scraping to capture the screen information needed to interact with the poker server. The consistent use of template matching leads to an efficient implementation. Substantial effort was also devoted to the artificial intelligence aspects of PokerBot. The purpose of the AI here is to mimic a human playing style by translating a well-documented poker guide into a formalized, executable language.
This work deals with the migration of software systems towards the use of the character set defined in the Unicode standard. The work is performed as a case study on the document management system PROXESS. A conversion process is designed that defines the working steps of the migration for the entire system as well as for an arbitrary decomposition of the system into individual modules. The working steps for each module can, to a great extent, be performed chronologically independently of each other. For the conversion of the implementation, an approach of automatic recognition of usage patterns is applied. The approach searches the abstract syntax tree for sequences of program instructions that can be assigned to a certain usage pattern. The usage pattern defines another sequence of instructions that acts as a sample solution for that usage pattern; the sample solution demonstrates the Unicode-based handling of strings for that usage pattern. By applying a transformation rule, the original sequence of instructions is transformed into the sequence of instructions given by the sample solution of the related usage pattern. This mechanism is a starting point for the development of tools that perform this transformation automatically.
This bachelor thesis deals with the concept of a smartphone application for emergencies. It describes the basic problem and provides a conceptual approach.
The core content of this thesis is a requirements analysis for the newly designed emergency application. Furthermore, the functional and non-functional requirements, such as usability, are specified to inform the concept of the application. In addition, individual sub-functions of the mHealth applications of the University of Koblenz that already exist or are still under development can be integrated into the future emergency application. Based on the catalog of requirements, a market analysis of the strengths and weaknesses of existing emergency application systems is carried out. In the to-be concept the findings are summarized and possible architectural sketches for future emergency applications are given. Furthermore, one conclusion of dealing with this topic is that a design alone is not sufficient to guarantee a well-working app. That is why the requirements of the thesis were extended to include the connection to and integration of rescue control centers in the architecture of the emergency app.
At the end of the thesis, the reader receives a comprehensive overview of the provision of emergency data to the rescue control centers via different transmission channels. Furthermore, conditions for the system requirements are presented, as well as possible scenarios for the architecture of the overall emergency application system. The generic and modular approach ensures that the system remains open to future development and the integration of functions from other applications.
Software projects typically rely on several external libraries. The interface provided by such a library is called an API (application programming interface). APIs often evolve over time, thereby implying the need to adapt applications that use them. There are also reasons which may call for the replacement of one library by another, which likewise results in the need to adapt the applications in which the library is replaced. The process of adapting applications to use a different API is called API migration. Doing API migration manually is a cumbersome task. Automated API migration is an active research field. A related field of research is API analysis, which can also provide data for developing API migration tools.
The following thesis investigates techniques and technologies for API analysis and API migration frameworks. To this end, design patterns are leveraged. These patterns are based on experience with API analysis and migration within the Software Languages Team.
This thesis aims to clarify the question of how stakeholders can participate in a project. We discuss methods for selecting the relevant stakeholders from the whole set of stakeholders and for involving these stakeholders in the participatory process. The participation process includes not only classical methods of participation such as workshops, which were common especially before modern Web 2.0 techniques.
Pose-Tracking
(2009)
The image-based automatic determination of the pose, i.e. the position and viewing direction, of a camera in the world is a relevant but still insufficiently solved task in computer vision. In the project underlying this report, higher-level salient features are localized more reliably in the images, and their semantic significance is determined by the computer. A pose estimate is obtained by registering the elementary components of these salient features in the image with features in the 3D model. For this purpose, new algorithms for the detection, localization, and registration of the salient features are developed and existing algorithms are further improved. Models, such as those resulting from multi-view reconstruction, are enriched with additional semantics. As an application scenario, pose estimation on the campus grounds is chosen, using images and a semantic CG model of the campus. The general methods are tested on this example and their viability is demonstrated in experiments. The modular system yields problem-independent building blocks for detecting salient features and for 3D reconstruction and pose estimation from features, points, and lines. This demonstrates that robust detection of salient features is possible and can be used for effective reconstruction and pose estimation even in partly poorly structured outdoor areas.
This paper reviews existing first aid applications for smartphones and compares them to a first aid application developed by the University of Koblenz called "Defi Now!". The main focus lies on examining "Defi Now!" with respect to its usability based on the dialogue principles, i.e. the seven software-ergonomic principles of the ISO 9241-110 standard. These are suitability for learning, controllability, error tolerance, self-descriptiveness, conformity with user expectations, suitability for the task, and suitability for individualization.
Therefore a usability study was conducted with 74 participants. A questionnaire was developed, which was filled out anonymously by the test participants. The test results were used to optimize the app with regard to its usability.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specifying classes using an expressive logical language with highly flexible, dynamic and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models, combining the benefits of UML models and OWL ontologies.
The E-Government research area has gained in importance in Europe, and especially in Germany, in the last few years, causing the number of researchers, institutes and publications to increase rapidly. This makes it difficult for outsiders to get an overview of the relevant actors in the E-Government field. This issue can be addressed by implementing a research map for the E-Government field, where all relevant actors and objects and their information are shown on the map according to their location. In order to give a complete overview, information that was valid at a certain time in the past needs to be available on the research map. This can only be achieved if the contents of the research map are historicized, which means that a new version of an object needs to be created and saved in the database whenever the object changes. Older versions need to be retained in the database, so that the user is able to navigate the website based on temporal information. Past experience has shown that the temporal aspects of historicization should be managed and planned during the conceptual phase of the website rather than during implementation. This bachelor thesis proposes a concept for the E-Government research map which includes the modeling of the relevant temporal dimensions needed to historicize the contents of the research map.
This Bachelor thesis deals with the development of a program that is intended to support the dentist during treatment of a patient by means of augmented reality (AR). To provide an appropriate theoretical foundation, the current state of the art relevant to this project is explained first. Subsequently, possible future technologies are presented, which form the hypothetical basis of this work. The following subsection explains the selection of the systems used for this project. The main part first deals with the approach taken in the preparation and planning phase and then presents the program flow of the application step by step. The problems that arose during programming are also discussed. In the reflective evaluation part, suggestions for improvement and additional functions for the implemented program are presented.
Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor thesis describes the conception and implementation of translation software for the Android platform. The distinctive feature of the software is its independent text recognition based on the camera view. This approach aims to enhance and accelerate the translation process in certain situations. After an introduction to text recognition, the underlying technologies, the operating system Android, and useful applications are described. Then the concept of the software is developed and its implementation examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
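A rough sketch of such a camera-based recognition-and-translation pipeline is shown below (purely illustrative; the stub functions are hypothetical placeholders and do not reflect the actual implementation described in the thesis):

    def capture_frame() -> bytes:
        """Hypothetical stub: grab the current camera frame as image data."""
        raise NotImplementedError

    def recognize_text(image: bytes) -> list[str]:
        """Hypothetical stub: run OCR on the frame and return the detected words."""
        raise NotImplementedError

    def translate(word: str, source: str = "de", target: str = "en") -> str:
        """Hypothetical stub: look up or request a translation for a single word."""
        raise NotImplementedError

    def translate_camera_view() -> dict[str, str]:
        """Combine the three steps: capture the view, recognize words, translate them."""
        frame = capture_frame()
        return {word: translate(word) for word in recognize_text(frame)}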
Robotics research today is primarily about enabling autonomous, mobile robots to seamlessly interact with arbitrary, previously unknown environments. One of the most basic problems to be solved in this context is the question of where the robot is and what the world around it, and in previously visited places, looks like: the so-called simultaneous localization and mapping (SLAM) problem. We present a GraphSLAM system, a graph-based approach to this problem. The system consists of a frontend and a backend: the frontend's task is to incrementally construct a graph from the sensor data that models the spatial relationships between measurements. These measurements may be contradictory, and therefore the graph is in general inconsistent. The backend is responsible for optimizing this graph, i.e. finding a configuration of the nodes that is least contradictory. The nodes represent poses, which do not form a regular vector space because of the rotations they contain. We respect this fact by treating them as what they really are mathematically: manifolds. This leads to a very efficient and elegant optimization algorithm.
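To illustrate the manifold treatment of poses mentioned above (a minimal sketch for 2-D poses, assuming an SE(2) parameterization; this is not the thesis' actual implementation), the key ingredients are pose composition, its inverse, and angle normalization, from which the residual of a pose-graph edge can be computed:

    import math

    def normalize(angle: float) -> float:
        """Wrap an angle to (-pi, pi] so rotation errors stay on the manifold."""
        return math.atan2(math.sin(angle), math.cos(angle))

    def compose(a, b):
        """Compose two 2-D poses (x, y, theta): apply b in the frame of a."""
        ax, ay, at = a
        bx, by, bt = b
        return (ax + bx * math.cos(at) - by * math.sin(at),
                ay + bx * math.sin(at) + by * math.cos(at),
                normalize(at + bt))

    def inverse(a):
        """Inverse pose, i.e. compose(a, inverse(a)) is the identity."""
        ax, ay, at = a
        return (-ax * math.cos(at) - ay * math.sin(at),
                 ax * math.sin(at) - ay * math.cos(at),
                 normalize(-at))

    def edge_residual(xi, xj, zij):
        """Deviation between the measured relative pose zij and the
        relative pose implied by the current estimates xi and xj."""
        predicted = compose(inverse(xi), xj)
        return compose(inverse(zij), predicted)

The backend of a graph-based SLAM system minimizes the sum of such residuals over all edges; keeping the rotational component on the manifold (rather than treating angles as plain real numbers) is what avoids the problems caused by angle wrap-around.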
The purpose of this master thesis is to enable the robot Lisa to process complex commands and extract the necessary information in order to perform a complex task as a sequence of smaller tasks. This is intended to be achieved by improving Lisa's understanding of her environment, adding semantics to the maps she builds. The complex command itself is expected to be already parsed; therefore, the way the input is processed to become a parsed command is out of the scope of this work. The maps that Lisa builds will be improved by the addition of semantic annotations, which can include any kind of information that might be useful for performing generic tasks, such as (but not necessarily limited to) hierarchical classifications of locations, objects, and surfaces. The processing of the command, together with information about the environment, shall trigger the execution of a sequence of actions. These actions are expected to be among Lisa's currently implemented tasks and will rely on the existing modules that perform them.
Nevertheless, the aim of this work is not only to use the currently implemented tasks in a more complex sequence of actions, but also to make it easier to add new tasks to the complex commands that Lisa can perform.
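The following minimal sketch illustrates the general idea of combining a parsed command with semantic map annotations to produce a sequence of existing primitive tasks (all class, label, and task names are hypothetical and not taken from Lisa's actual software):

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        label: str       # e.g. "kitchen", "table", "cup"
        category: str    # e.g. "location", "surface", "object"
        position: tuple  # coordinates in the metric map

    # a semantic map: metric positions enriched with labeled annotations
    semantic_map = {
        "kitchen": Annotation("kitchen", "location", (4.0, 1.5)),
        "cup":     Annotation("cup", "object", (1.2, 2.7)),
    }

    def plan(parsed_command: dict) -> list[tuple]:
        """Turn a parsed command such as {'action': 'bring', 'object': 'cup',
        'destination': 'kitchen'} into a sequence of existing primitive tasks."""
        obj = semantic_map[parsed_command["object"]]
        dest = semantic_map[parsed_command["destination"]]
        return [("navigate_to", obj.position),
                ("grasp", obj.label),
                ("navigate_to", dest.position),
                ("release", obj.label)]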
In this thesis, a first prototype of a mobile instruction device with mixed reality (MR) functionality is developed. This system shall be capable of supporting training on the job through interaction with the work item. The concept corresponds to a didactic approach presented by Martens-Parree that combines constructivism with situated learning. As an application example, the training of glider pilots being checked out on a new aircraft type was chosen. Whether the MR device could increase competence or facilitate the completion of certain tasks was examined in a survey with fifteen testers. The results of the study show that, in general, the didactic approach of Martens-Parree is valid. While an increase in factual knowledge was observed, it was not (yet) possible to demonstrate an increase in skills with respect to the work tasks.
This diploma thesis shows how ambient occlusion can be computed and rendered at interactive frame rates for a scene of freely transformable rigid bodies using Coherent Shadow Maps (CSMs). The advantages of ambient occlusion compared to classical local illumination models are presented, the starting point being the approximation of ambient light and indirect illumination by a constant color value. The simplifying assumptions made in comparison to a full simulation of global illumination are also traced: strictly speaking, ambient occlusion is only valid for direct illumination of diffuse materials under constant luminance. To provide a comprehensive overview, numerous existing approaches to computing ambient occlusion are presented in detail. Subsequently, the computation of visibility by means of CSMs is discussed. It is shown how the compression rate of CSMs can be increased considerably by an even distribution of the depth maps to be compressed, and an XML-conformant file format is introduced that allows depth maps, once compressed, to be reused arbitrarily.
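For reference, the quantity being approximated can be stated in a generic form (this is the standard ambient-occlusion integral, not the specific CSM-based formulation of the thesis): the occlusion at a surface point $\mathbf{p}$ with normal $\mathbf{n}$ is the cosine-weighted average visibility over the hemisphere $\Omega$,

    AO(\mathbf{p}, \mathbf{n}) \;=\; \frac{1}{\pi} \int_{\Omega} V(\mathbf{p}, \omega)\, \max(0, \mathbf{n} \cdot \omega)\, \mathrm{d}\omega,

where $V(\mathbf{p}, \omega) \in \{0, 1\}$ indicates whether the direction $\omega$ is unoccluded; in a CSM-based renderer these visibility queries would presumably be answered via lookups into the compressed depth maps.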