Replikation einer Multi-Agenten-Simulationsumgebung zur Überprüfung auf Integrität und Konsistenz
(2012)
In this master's thesis I first present a simulation used to study the behaviour of agents that try to survive in a generated world and can choose between several possible actions. I then briefly outline the theoretical concepts underlying this work. The main part of the thesis is my replication of a simulation that Andreas König implemented in Java in 2000 [Kö2000]. I summarize his work in strongly abbreviated form and then describe my own implementation.
In the final part of the thesis I compare the results of my simulation with those of Andreas König and discuss the tools used (Java and NetLogo). I conclude with a short summary of the project, reporting what could be implemented, what did not work, and why.
Standards are widely used in computer science and the IT industry. Different organizations, such as the International Organization for Standardization (ISO), are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats that are introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement. They allow the packaging of evidences that are needed to prove the fulfilment of criteria related to a public tender. The work first examines the requirements that are common to both specifications. This introduces a new view on the requirements by moving to a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of a package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system is modified; this system is used as a basis when a concrete test system is created.
Finally, the author comes to a positive conclusion. Common requirements have been identified and captured. The involved systems have been adapted, allowing the capturing of further types of requirements that had not been supported before. Furthermore, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
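To illustrate the kind of validation task such a test system automates, the following is a minimal sketch in Python; the package name, manifest file name and schema file are hypothetical placeholders, and the real Common Cartridge and VCD test systems involve far more than a single schema check.

```python
# Minimal sketch: validate an XML manifest inside a zip package against an XML Schema.
# (Assumptions: file names and schema are placeholders, not the actual specification artefacts.)
import zipfile
from io import BytesIO
from lxml import etree

def validate_package(package_path: str, manifest_name: str, xsd_path: str) -> bool:
    """Return True if the named manifest in the package validates against the schema."""
    schema = etree.XMLSchema(etree.parse(xsd_path))
    with zipfile.ZipFile(package_path) as pkg:
        manifest = etree.parse(BytesIO(pkg.read(manifest_name)))
    if not schema.validate(manifest):
        for error in schema.error_log:
            print(error)
        return False
    return True

# Example call (hypothetical file names):
# validate_package("course.imscc", "imsmanifest.xml", "ccv1p1_imscp_v1p2.xsd")
```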
Planning routes for trucks with a trailer is a complex procedure. In order to simplify this process, a route is segmented into elementary components which represent basic motions of the considered vehicle. These elementary components are called maneuvers and are composed of two parts. First, paths are constructed for certain reference points. Second, the vehicle is enclosed by a corridor during the execution of a maneuver. The paths of the vehicle have to take driveability into consideration: they must respect the kinematic constraints of the vehicle. The maneuver corridor can be used as a basis to guarantee collision-free motion planning, since no part of the vehicle leaves the corridor during the maneuver. There are different types of maneuvers; currently, the bending maneuver, the cusp maneuver and the straight-line maneuver can be distinguished. In addition, a maneuver can be created with two different construction methods, the conventional and the iterative method.
In this thesis, a data structure to construct a maneuver is designed and implemented. The data structure is integrated into an already existing tool. The user can interact with the software to adjust various parameters of a maneuver. Afterwards the maneuver is generated based on these parameters. This also includes a visualization within the software, which can plot the parts of a maneuver. The visualization can be exported to an image file.
Web programming is a huge field of different technologies and concepts. Each technology implements a web-application requirement such as content generation or client-server communication. Different technologies within one application are organized by concepts, for example architectural patterns. The thesis describes an approach for creating a taxonomy of these web-programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web-application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web-application frameworks with the help of Wikipedia. Finally, the created implementations concerning web programming are documented.
To meet the growing demands in the automotive industry, car manufacturers constantly reduce their depth of production and shift value-adding processes to their suppliers. This requires companies to work together more closely and promotes the creation of complex logistics networks. To meet the requirements for information exchange, a consortium of automobile manufacturers launched the project RFID-based Automotive Network (RAN) in 2009. The initiative aims at creating a standardized architecture for efficient material flow management along the entire supply chain. The core component of this architecture is the Informationbroker, an information unit which automatically communicates data captured via Auto-ID technology to supply chain participants. In cooperation with IBS AG, a software company and consortium partner in the project, the thesis focuses on the exchange of goods data.
First, theoretical foundations are presented by describing the characteristics of a supply chain and explaining standardization efforts and related processes. The chapter on the supply chain focuses on trends in the automotive industry to create a link to the project. The topic of standardization provides in-depth information on electronic data exchange standards in order to create a transition to the Informationbroker concept. In the analytical part, reference projects with a similar problem are presented and related to RAN. Based on project documents, system requirements are defined and models are created in order to illustrate the problem; Rich Pictures are used to describe the current and the target state.
Based on these models, the flow of goods-related data between two companies is depicted and the role of the Informationbroker in the information exchange is clarified. The thesis aims at establishing an understanding of the challenges of the project and of how the proposed concepts of the initiative can lead to an optimization of an automotive supply chain.
The estimation of various social objects is necessary in different fields of social life, science, education, etc. This estimation is usually used for forecasting, for evaluating different properties and for other goals in complex man-machine systems. At present such estimation is possible by means of computer and mathematical simulation methods, which involves significant difficulties, such as: the time-distributed process of receiving information about the object; the determination of a corresponding mathematical apparatus and the structural identification of the mathematical model; the approximation of the mathematical model to real data, generalization and parametric identification of the mathematical model; and the identification of the structure of the links of the real social object. The solution of these problems is impossible without a special intellectual information system which combines different processes and allows predicting the behaviour of such an object. However, most existing information systems address only one special problem. From this point of view the development of a more general technology for designing such systems is very important. The technology for developing an intellectual information system for estimating and forecasting the professional ability of respondents in the sphere of education is a concrete example of such a technology. Job orientation is necessary and topical in present economic conditions: it helps to solve the problem of whether investments in a certain sphere of education are expedient. Scientifically validated, combined diagnostic methods of job orientation are necessary to carry out professional selection in higher education establishments. The requirements of a modern society are growing, and the techniques developed earlier can no longer meet them sufficiently, since they lack the possibility to take all necessary professional and personal characteristics into account. Therefore, a system of various tests is needed, and new methods of job orientation for entrants have to be developed. An information model of the job orientation process is necessary for this purpose. It would therefore be desirable to have an information system capable of giving recommendations concerning the choice of a profession on the basis of complex personal characteristics of entrants.
Remote rendering services offer the possibility to stream high-quality images to less powerful devices. Due to the transmission of data, the interactivity of applications suffers from delay. A method to reduce the delay of camera manipulation on the client is 3D warping; this method, however, causes artifacts. In this thesis, different approaches to remote rendering setups are presented, the artifacts and improvements of the warping method are described, and methods to reduce the artifacts are implemented and analyzed.
As Enterprise 2.0 (E2.0) initiatives are gradually moving out of the early experimentation phase it is time to focus greater attention on examining the structures, processes and operations surrounding E2.0 projects. In this paper we present the findings of an empirical study to investigate and understand the reasons for initiating E2.0 projects and the benefits being derived from them. Our study comprises seven in-depth case studies of E2.0 implementations. We develop a classification and means of visualising the scope of E2.0 initiatives and use these methods to analyse and compare projects.
Our findings indicate a wide range of motivations and combinations of technology in use and show a strong emphasis towards the content management functionality of E2.0 technologies.
Compared to conventional computer graphics (perspective projection), ray tracing offers decisive advantages, which are mainly due to the comparatively high physical correctness of the method. Its weakness lies in the immense computational cost.
A ray tracer is comparatively computation-intensive because at least one ray has to be shot for every pixel, and this ray has to be intersected with every object in the scene. In addition, there are the rays that arise when rays are reflected at objects (recursion). To reduce this computational cost and, at the same time, produce a better image, an adaptive sampler is to support the ray tracer. During rendering, the adaptive sampler observes the progressive convergence of the image and excludes from further computation those pixels for which shooting additional rays is no longer worthwhile.
Unlike the purely progressive ray tracer, the adaptive sampler stops computing once the image has converged. The adaptive sampler is thus intended to produce a better image faster and thereby increase performance. Overall, the adaptive sampler is expected to be advantageous for certain kinds of scenes, among them scenes with purely diffusely lit image regions as well as scenes whose image regions differ in computational cost. A normal ray tracer cannot judge how useful its samples are; it can only shoot more rays in the hope of effectively improving the image.
There are, however, many scenarios in which a linearly increasing number of samples per pixel does not improve the image uniformly: some image regions already look good while others are still very noisy. One would therefore like to exclude image regions that have already converged, i.e. where further sampling no longer has a visible effect, and spend the computing power where it is still needed.
It is important that pixels which have not converged far enough are not excluded from the computation too early. The adaptive sampler is supposed to keep working until every pixel permanently shows no further change, i.e. until the probability of a significant colour change of a pixel caused by shooting one more ray (in RenderGin, several rays per pixel if there are several light sources) is small enough. No probability is actually computed internally; instead, the ray tracer is given a kind of memory: it stores the changes in the rendered image and their history in dedicated memory images. The "memory" of the old image (the state of the image in the last iteration over the pixels) represents the short-term memory; it is exact. The long-term memory is represented by three further images. The first stores the number of rays shot per pixel. The second is a boolean image indicating for each pixel whether it should still be included in the computation. The third stores how often each pixel has undergone a colour change smaller than the required maximum distance of a pixel to itself (before and after shooting another ray).
With these three images it is possible to take additional quantitative information into account besides the qualitative information obtained by comparing the new and the old image.
In this thesis I investigate whether the desired effects occur and whether a performance gain is possible when the sampler is integrated into the existing structure of RenderGin. The adaptive sampler was implemented as a plug-in for the software RenderGin by Numenus GmbH. RenderGin is a real-time capable, progressive ray tracer distinguished by its performance; image generation takes place entirely on the CPU, and the graphics card is only needed to display the generated image.
The plug-in was implemented in Microsoft Visual Studio 2010 using the RenderGin SDK of Numenus GmbH.
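The per-pixel bookkeeping described above can be sketched as follows. This is a minimal Python illustration with invented thresholds and a placeholder shade() callback, not the C++ RenderGin plug-in itself.

```python
import numpy as np

# Minimal sketch of adaptive per-pixel sampling (assumptions: thresholds, iteration
# limit and the shade(x, y) callback returning an RGB sample are hypothetical).
def adaptive_sample(shade, width, height, max_delta=2.0, stable_limit=8, max_iters=1000):
    image = np.zeros((height, width, 3))      # current colour estimate per pixel
    samples = np.zeros((height, width), int)  # long-term memory 1: rays shot per pixel
    active = np.ones((height, width), bool)   # long-term memory 2: still being refined?
    stable = np.zeros((height, width), int)   # long-term memory 3: consecutive small changes

    for _ in range(max_iters):
        if not active.any():
            break
        old = image.copy()                    # short-term memory: last iteration's image
        ys, xs = np.nonzero(active)
        for y, x in zip(ys, xs):
            n = samples[y, x]
            image[y, x] = (image[y, x] * n + shade(x, y)) / (n + 1)  # running mean
            samples[y, x] = n + 1
        delta = np.linalg.norm(image - old, axis=2)
        stable[active] = np.where(delta[active] < max_delta, stable[active] + 1, 0)
        active &= stable < stable_limit       # retire pixels that stayed stable long enough
    return image
```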
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency directed backjumping and branch local operation. As an instance of that framework an adaption of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
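For reference, the following is a minimal, generic DPLL satisfiability procedure in Python with unit propagation and naive branching; it only illustrates the base procedure the abstract refers to, not the clausal-tableau framework, dependency-directed backjumping or the knowledge-compilation weaving described above.

```python
# Generic DPLL on clauses given as sets of integer literals (e.g. {1, -2} means x1 or not x2).
def dpll(clauses, assignment=frozenset()):
    clauses = [c for c in clauses if not c & assignment]          # drop satisfied clauses
    clauses = [c - {-l for l in assignment} for c in clauses]     # remove falsified literals
    if not clauses:
        return assignment                                         # all clauses satisfied
    if any(not c for c in clauses):
        return None                                               # empty clause: conflict
    units = {next(iter(c)) for c in clauses if len(c) == 1}
    if units:
        if any(-l in units for l in units):
            return None                                           # contradictory units
        return dpll(clauses, assignment | units)                  # unit propagation
    lit = next(iter(clauses[0]))                                  # branch on some literal
    return dpll(clauses, assignment | {lit}) or dpll(clauses, assignment | {-lit})

# Example: (x1 or x2) and (not x1 or x2) is satisfiable, e.g. with x2 = true.
print(dpll([{1, 2}, {-1, 2}]))
```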
Forwarding loops
(2013)
The annotation of digital media is not a new area of research; it has been widely investigated, and there are many innovative ideas for shaping the annotation process. The most extensive segment of related work concerns semi-automatic annotation. One characteristic is common to the related work: none of these approaches puts the user in focus. If you want to build an interface that supports and satisfies the user, you first have to carry out a user evaluation. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction and intuitiveness. After collecting many ideas and discussing them with a team of experts, we selected a few of them. Different combinations of these variables form the interfaces we investigate in our usability study. The results of the usability study suggest that autocompletion and suggestion features support the user. Furthermore, colouring tags to group them into categories does not disturb the user and tends to be supportive; the same tendency emerges for an interface consisting of two user interface elements. An example is also given of how definitions of "intuitive" differ. This thesis concludes that, for the sake of user satisfaction and support, it is legitimate to deviate from classical annotation interface features, and that further usability studies in the area of annotation interfaces are warranted.
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time. The implementation makes use of the OpenCL standard. The focus of research in this thesis is to analyse the performance of this combination of components.
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts: people can hear and interpret human speech, and they can interpret the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-based feature extraction. This thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The approach presented is based on 2D features. These features are detected in the face. No neutral face is needed as reference. The system extracts two types of facial parameters. The first type consists of distances between the feature points. The second type comprises angles between lines connecting the feature points. Both types of parameters are implemented and tested. The parameters which provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass Support Vector Machine classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The system presented is implemented in C++ and is provided with an interface to the Robot Operating System (ROS).
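A minimal sketch of the two geometric parameter types and the multiclass SVM step might look as follows; the landmark detector, the exact distance and angle definitions and the training data are assumptions, not the thesis' implementation (which is in C++).

```python
import numpy as np
from sklearn.svm import SVC

# Minimal sketch (assumption: landmarks come from some 2D facial feature detector
# as an array of (x, y) points; labels are emotion classes).
def geometric_features(landmarks):
    pts = np.asarray(landmarks, dtype=float)
    feats = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = pts[j] - pts[i]
            feats.append(np.hypot(*d))            # type 1: distance between feature points
            feats.append(np.arctan2(d[1], d[0]))  # type 2: angle of the connecting line
    return np.array(feats)

# Training a multiclass SVM on such feature vectors (hypothetical data):
# X = np.stack([geometric_features(l) for l in training_landmarks])
# clf = SVC(kernel="rbf").fit(X, training_labels)
# prediction = clf.predict([geometric_features(test_landmarks)])
```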
Opinion Mining: Using Twitter as a source of opinion for the prediction of stock market prices
(2012)
In addition to the theoretical foundations of automated text analysis, which form the basis of this work, an overview of the current state of research on the analysis of Twitter messages is given. To this end, various research results from the currently available scientific literature are explained, compared with each other and critically questioned. Their results and approaches are incorporated into our own research where they appear useful; the goal is to make the best possible use of the current state of research.
A further goal is to give the reader an overview of different machine data-analysis methods for detecting opinions. This is necessary in order to better understand, in its scientific context, the significance of the analysis methods used later in this work. Since these methods can be carried out in different ways, several analysis methods are presented and compared, which also demonstrates the feasibility of the subsequent opinion analysis. To ensure sufficient accuracy in the following study, an existing, already evaluated framework is used; it is available as an API and is therefore covered as well. The core of this work is the analysis of Twitter messages with the methods of opinion mining.
It is investigated whether correlations can be found between the sentiment expressed in Twitter messages and a company's stock price. For this purpose, the sentiment concerning Google Inc. is examined over a period of one month, and the findings are compared with the company's stock price. The goal is to verify the findings of (Sprenger & Welpe, 2010) and (Taytal & Komaragiri, 2009) in this field and to answer further questions.
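The core correlation step can be sketched as follows; the tweet sentiment scores, the price data and the daily aggregation are hypothetical stand-ins for the Google Inc. data described above.

```python
import pandas as pd

# Minimal sketch (assumptions: `tweets` has columns date/sentiment with scores already
# produced by some opinion-mining step, `prices` has columns date/close).
def sentiment_price_correlation(tweets: pd.DataFrame, prices: pd.DataFrame) -> float:
    daily_sentiment = tweets.groupby("date")["sentiment"].mean()
    daily_return = prices.set_index("date")["close"].pct_change()
    joined = pd.concat([daily_sentiment, daily_return], axis=1, join="inner").dropna()
    return joined.corr().iloc[0, 1]   # Pearson correlation of daily sentiment vs. daily return
```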
The Microsoft Kinect is currently popular in many application areas because of its low price and good precision. However, controlling the mouse cursor with it is impractical due to jitter in the skeleton data. My approach tries to stabilize the cursor position with common image-processing techniques; the input for this is the Kinect colour camera. A final position is calculated from the different positions delivered by the tracking techniques. To control the cursor, the right hand is tracked, and a simple click gesture is also developed. The evaluation shows whether this approach was successful.
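One common way to fuse several tracker outputs and damp jitter is a weighted average followed by a low-pass filter; the following sketch uses invented weights and smoothing factor and is not necessarily the filtering used in the thesis.

```python
# Minimal jitter-reduction sketch (assumptions: per-frame positions come from hypothetical
# trackers on the colour image; weights and smoothing factor are illustrative).
def fuse_and_smooth(tracker_positions, previous, alpha=0.3, weights=None):
    """Weighted fusion of several (x, y) estimates plus exponential smoothing."""
    weights = weights or [1.0] * len(tracker_positions)
    total = sum(weights)
    fused_x = sum(w * p[0] for w, p in zip(weights, tracker_positions)) / total
    fused_y = sum(w * p[1] for w, p in zip(weights, tracker_positions)) / total
    # Low-pass filter: the cursor moves only a fraction alpha towards the fused estimate.
    return (previous[0] + alpha * (fused_x - previous[0]),
            previous[1] + alpha * (fused_y - previous[1]))

# cursor = fuse_and_smooth([(412, 300), (418, 296)], previous=cursor)
```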
The natural and the artificial environment of mankind is of enormous complexity, and our means of understanding this complex environment are restricted unless we make use of simplified (but not oversimplified) dynamical models with the help of which we can explicate and communicate what we have understood in order to discuss among ourselves how to re-shape reality according to what our simulation models make us believe to be possible. Being both a science and an art, modelling and simulation is still one of the core tools of extended thought experiments, and its use is still spreading into new application areas, particularly as the increasing availability of massive computational resources allows for simulating more and more complex target systems.
In the early summer of 2012, the 26th European Conference on Modelling and Simulation (ECMS) once again brings together the best experts and scientists in the field to present their ideas and research, and to discuss new challenges and directions for the field.
The 2012 edition of ECMS includes three new tracks, namely Simulation-Based Business Research, Policy Modelling and Social Dynamics and Collective Behaviour, and extended the classical Finance and Economics track with Social Science. It attracted more than 110 papers, 125 participants from 21 countries and backgrounds ranging from electrical engineering to sociology.
This book was inspired by the event, and it was prepared to compile the most recent concepts, advances, challenges and ideas associated with modelling and computer simulation. It contains all papers carefully selected by the programme committee from the large number of submissions for presentation during the conference and is organised according to the still growing number of tracks which shaped the event. The book is complemented by two invited pieces from other experts that discuss an emerging approach to modelling and a specialised application. We hope these proceedings will serve as a reference to researchers and practitioners in the ever growing field as well as an inspiration to newcomers to the area of modelling and computer simulation. The editors are honoured and proud to present you with this carefully compiled selection of topics and publications in the field.
This paper originates from the FP6 project "Emergence in the Loop (EMIL)", which explores the emergence of norms in artificial societies. Part of work package 3 of this project is a simulator that allows for simulation experiments in different scenarios, one of which is collaborative writing. The agents in this still prototypical implementation are able to perform certain actions, such as writing short texts, submitting them to a central collection of texts (the "encyclopaedia") or adding their texts to texts previously prepared by other agents. At the same time they are able to comment on others' texts, for instance checking for correct spelling, for duplicate entries in the encyclopaedia or for plagiarism. Findings of this kind lead to reproaching the original authors of blamable texts. Under certain conditions, blamable activities cease to be performed after some time.
From September 4 to 11, 1992, a first meeting between Ukrainian and German scientists interested in mathematical and computer modeling of social processes was held at Vorzel' near Kiev. The meeting had been planned for nearly three years by Igor V. Chernenko and Mikhail V. Kuz'min, then members of the research group on mathematical modeling in sociology at the Institute of Sociology of the Academy of Science of the Ukrainian Republic, and had to be postponed twice due to the political developments in the former Soviet Union; but thanks to the organizers' perseverance (and in spite of a strike of the airport personnel at Kiev Borispol Airport on the eve of the conference) the conference could at last be realized. The main purpose of the conference was to discuss a synergetic interpretation of large-scale destructive social processes as catastrophic phenomena in self-organized systems.
The aim of this research project is the selection and evaluation of open-source ERP systems with regard to their efficient applicability in companies, for the purpose of setting up an "ERP Future Lab" in which medium-sized trading companies can test the installed system(s). The project concludes with the installation of a working system on which the business processes specified by the client can be handled. Furthermore, the clients are to be trained on the system, and documentation of the software (installation/operation) and of the project is to be produced.
Today you can find smartphones everywhere. This situation has created a hype around Augmented Reality and AR apps. The big question is: do these applications provide real added value? To make AR practically useful, it is important to combine the advantages of AR with the computational power of a computer; an easy and fast way of interaction is essential.
Poker assistance software is an ideal test area for an AR application with real added value: the estimation of the winning probability and fast, automated tracking of the playing cards form a perfect field of investigation.
In this context it is also interesting to evaluate the added value of AR applications in general.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6-DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved by using continuous depth and color data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. The accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
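As background, a generic particle swarm optimizer can be sketched in a few lines; the hyperparameters and cost function below are placeholders, and the thesis uses a modified PSO variant (Khan et al., 2010) applied to 6-DOF poses rather than this plain form.

```python
import numpy as np

# Minimal generic particle swarm optimizer (assumptions: cost function, bounds and
# hyperparameters are placeholders).
def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# For 6-DOF pose estimation, dim would be 6 (translation + rotation parameters) and
# cost(p) would measure the alignment error of the sensor data under pose p.
```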
In silico simulation of biological systems is an important subarea of computational biology (systems biology) and is becoming more and more an inherent part of research. Therefore, different kinds of software tools are required. At present, a multitude of tools for several areas exists, but the problem is that most of the tools are essentially application-specific and cannot be combined. For instance, a software tool for the simulation of biochemical processes is not able to interact with tools for morphology simulation and vice versa. In order to obtain realistic results with computer-aided simulations, it is important to regard the biological system in its entirety. The objective is to develop a software framework which provides an interface structure to combine existing simulation tools and to offer interaction between all affiliated systems. Consequently, it is possible to reuse existing models and simulation programs, and dependencies between them can be defined. The system is designed as an extendable architecture in which various tools interoperate. The thesis shows the usability and applicability of the software and discusses potential improvements.
This bachelor thesis deals with the user-friendly design of applications (apps) on mobile devices, a subdomain of software ergonomics. Two applications are analyzed with the aim of developing a solution for how help functionality on a mobile device should be provided. The study focuses primarily on appropriate gestures to invoke the help function on a mobile device. The results show that the test persons want a customized help function but reject an extensive help description, as this seems to overwhelm the user.
The general reachability problem in place/transition nets asks whether a given target marking is reachable from another marking. Linear algebra, as a branch of mathematics, can be used to describe and analyse Petri nets. However, cycles in nets pose a problem for reachability analysis: whether the tokens required by certain transitions are available immediately, only after traversing certain cycles, or not at all is not visible, yet it has a direct effect on reachability. There are several ways to approach this problem with linear algebra. This thesis discusses existing methods as well as new approaches based on the reproducibility of the empty marking.
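The linear-algebraic view referred to above is commonly based on the state (marking) equation of a place/transition net; in standard textbook notation (not specific to this thesis):

```latex
% Marking equation of a place/transition net with incidence matrix C and
% firing count vector x over the transition set T:
M' = M + C \cdot x, \qquad x \in \mathbb{N}^{|T|}
```

The equation is only a necessary condition for reachability: a non-negative integer solution x may exist even though the required tokens only become available after traversing certain cycles, which is exactly the problem discussed above.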
The present case study was produced as the object of investigation for a bachelor's thesis and was prepared according to the eXperience case study methodology. The aim of the bachelor's thesis was to identify benefit aspects in this concrete case. At the user company DOCHOUSE, an interface between the internal CRM system and a collaborative system for external access was introduced.
Special thanks go to Mr Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the preparation of the case study and entrusted the FG BAS research group with carrying it out. The case study was conducted in winter 2011 by the student Martin Surrey and by Roland Diehl, research associate of the group.
The measurement of the benefits of collaborative technologies is a challenge caused by sociological influences on these systems. In this work, the introduction of a collaborative system into an existing IT environment is examined, so that a value can be identified for the company. The analysis of frameworks has shown that the benefits can be reaped through better management and organization of in-company resources and optimization of the underlying process.
Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse und Lotus Quickr
(2012)
The present working report "Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse/BRM und IBM Lotus Quickr" is the result of a student project. The goal of the project was to work out integration scenarios between a CRM system and a collaboration system and to implement a prototypical interface with corresponding functionality between the systems DocHouse/BRM and IBM Lotus Quickr.
Special thanks go to Mr Wolfgang Brugger (managing director of DocHouse GmbH), who had the idea for such a development and entrusted the FG BAS research group with carrying it out. The concept and the prototype were developed from winter 2010 to summer 2011 by the students Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey and Mike Reuthers as part of their project internship. The project team was supervised with regard to content and organisation by Dipl.-Wirt.-Inform. Roland Diehl.
The settlement history of the Rhine-Moselle triangle reaches back to Roman times, and numerous examples of historical architecture can be found along the two large rivers. The former castle in the Kondertal, located on the north-western spur of the Hinterberg, also falls into this category. To obtain a more precise idea of the castle complex, a computer model was to be created; the practical realisation of this model is the subject of this study thesis. The idea of creating a "simple 3D model" with a suitably powerful software package was quickly abandoned. Instead, the goal of the work was a program that allows the user to build the castle complex interactively and to modify it in any way.
This working report deals with the development of an architecture for complex context-aware services in the mobile environment. It describes the underlying problem and a theoretical solution approach, which is refined, prototypically implemented and evaluated in the further research process. Due to the increased mobility of many people, there is a steadily growing demand for mobile communication and information services. In the mobile environment, most value-added services are currently offered by the mobile network operators. These are primarily services for the mass market that allow no personalisation worth mentioning. Because of the functionally simple services, the associated low level of comfort and the costs incurred by their use, such services are accepted by the mass market only to a limited extent. Moreover, there is no possibility to offer context-aware services for specific groups of people at low cost, since the range of services is determined by the respective mobile network operator. This work considers not only today's obstacles but also the requirements that still stand in the way of acceptance, especially of complex context-aware services. The goal is to provide an architecture that will enable personalised services in the future. Since sensitive context information is used in service provision, the protection of privacy has to be considered as an important point in the design of this architecture. Based on the identified requirements, this work proposes an architecture that makes it possible to offer context-aware services independently of device and provider in an economically viable setting, taking the protection of the user's privacy into account.
The RoboCup Rescue League was founded with the intention of serving as an international communication platform for the development of rescue robots. In regions hit by catastrophes, these robots are meant to find buried people, detect their physical condition and send the relevant information to rescue teams.
At the University of Koblenz, the rescue robot "Robbie" has been in development for years. Robbie accumulates information about its environment by targeted control of sensors and can act autonomously in unknown regions with the help of previously collected data. It creates an internal 2D map of its environment, which provides enough information to navigate through space and to localize itself. Unfortunately, 2D maps have a huge drawback: when confronted with uneven terrain or even multilayered disaster areas, this technique meets its limitations. Considering that most afflicted areas will probably have uneven ground, it is important to improve this technique.
That is why 3D mapping is required. Using the RoboCup Rescue scenario, this bachelor thesis implements a 3D-mapping algorithm and thoroughly evaluates the problems of 2D and 3D mapping.
Real-time systems play an important role in many application areas today. Examples can be found in robotics, factory automation and medical technology, but also in areas of everyday life such as the automotive sector or mobile communication. Despite these constant encounters in everyday life, real-time systems are usually embedded within larger systems and can therefore rarely be consciously perceived as independent systems by their users.
To make specific properties of real-time systems visible to learners in school lessons or in university education, the laboratory of the real-time systems group at the University of Koblenz-Landau contains an experimental set-up called "Wippe" (seesaw), in which a camera records and measures the movement of a ball on a flat surface. By tilting the surface about two axes, the seesaw is in principle able to move the ball and bring it to a halt; in particular, the ball should be prevented from falling off the surface. The present work deals with the analysis and revision of the Wippe system. In addition, a control algorithm for automatic control of the system is implemented.
The Wippe experiment is intended for didactic purposes and is meant to be an open system that exemplifies the characteristics and properties of a real-time system.
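The abstract does not state which control law the Wippe system uses; a common choice for such a ball-on-plate set-up is one PID controller per tilt axis. A minimal sketch with placeholder gains:

```python
# Minimal per-axis PID controller sketch (hypothetical gains and sampling period;
# not necessarily the control algorithm implemented for the Wippe system).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per tilt axis: the measurement is the ball position from the camera,
# the output is the commanded plate angle for that axis.
# pid_x = PID(kp=0.8, ki=0.05, kd=0.4, dt=0.02)
# angle_x = pid_x.update(setpoint=0.0, measurement=ball_x)
```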
In this study, the author evaluates four visualization methods (FlexViz, the Jambalaya applet, Experimental jOWL TouchGraph, and Plone ontology) from the pool of currently available web-based visualization methods.
Further, this study attempts to answer the following research questions: 1. What are the requirements for ontology visualization? 2. How are comparative analysis and evaluation carried out? 3. How can a chosen method be tested? Based on the results of the evaluation, the visualization method FlexViz was found to be the most suitable for the given scenario. FlexViz was deployed and integrated within the Plone CMS, and logical parts of the VCD ontology were tested. Finally, FlexViz was analyzed from different stakeholder perspectives. The results showed that FlexViz is a reliable tool for visualizing, understanding and analyzing a developed ontology because of its user-friendly and interactive interface, but that it performs poorly with regard to the editing, extension and development process.
Augmented Reality (AR) is becoming more and more popular. Its main use is to augment information into the user's field of vision, for example via head-mounted displays (HMDs), car windshields, glasses, or the displays of smartphones and tablets. To augment correctly, the position and orientation (pose) of the camera in space have to be determined.
Nowadays, this is usually solved with artificial markers: known markers are placed in the room and the system is calibrated to this setup. The next step is to get rid of these artificial markers. If the pose is calculated without such markers, we speak of marker-less tracking. Instead of artificial markers, natural objects in the real world serve as reference points for calculating the pose. This approach can therefore be used flexibly and dynamically, since we are no longer dependent on artificial markers, but it requires much more knowledge about the scene to find the pose. This knowledge is usually supplied by technical aids and/or by the user; however, neither solution is comfortable or efficient, which is why marker-less 3D tracking is still a large field of research.
This is the starting point for the bachelor thesis. In this thesis an approach is proposed that needs only a set of 2D features from a given camera image and a set of 3D features of an object to find the initial pose. With this approach, no technical aids or user assistance are required, and the 2D and 3D features can be detected by any method.
The main idea of the approach is to build six correspondences between these two sets, from which the pose can be estimated. To evaluate an estimated pose, each 3D feature is mapped with that pose onto image coordinates and the distance between the mapped 3D feature and the associated 2D feature is measured. Each correspondence is evaluated in this way and the results are summed up to score the whole pose; the lower this sum, the better the pose. Experiments showed that a value of around ten pixels indicates a correct pose.
Because there are many possible ways to build six correspondences between the two sets, the selection process has to be optimized. For this optimization a genetic algorithm is used.
In the test cases the system worked quite reliably: the hit rate was around 90% with a runtime of approximately twelve minutes, whereas an unoptimized exhaustive search could easily take years.
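A minimal Python sketch of the pose-scoring idea described above, purely as an illustration: a candidate pose projects the 3D features into the image, and the summed pixel distances to the associated 2D features serve as its score (and as the fitness a genetic algorithm could minimize). The function names and the pinhole camera model are assumptions, not the thesis implementation; estimating the pose from the six correspondences is not shown.

    import numpy as np

    def project(points_3d, rotation, translation, camera_matrix):
        """Project Nx3 world points to Nx2 pixel coordinates with a pinhole model."""
        cam = points_3d @ rotation.T + translation   # world -> camera frame
        hom = cam @ camera_matrix.T                   # apply camera intrinsics
        return hom[:, :2] / hom[:, 2:3]               # perspective divide

    def pose_score(points_3d, points_2d, rotation, translation, camera_matrix):
        """Sum of reprojection distances in pixels; lower is better,
        with roughly ten pixels indicating a correct pose."""
        projected = project(points_3d, rotation, translation, camera_matrix)
        return float(np.linalg.norm(projected - points_2d, axis=1).sum())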
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing commercial and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
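The core trick behind such a game space is that the same pseudo-random seed always reproduces the same content, so the galaxy never has to be stored explicitly. The following Python sketch illustrates this idea only; all names, parameters and distributions are invented and are not taken from the thesis.

    import random

    def generate_star_system(galaxy_seed, sector_x, sector_y):
        """Deterministically generate one star system for a galaxy sector."""
        rng = random.Random(hash((galaxy_seed, sector_x, sector_y)))  # same seed -> same system
        star = {
            "class": rng.choice(["O", "B", "A", "F", "G", "K", "M"]),
            "mass_solar": round(rng.uniform(0.1, 20.0), 2),
        }
        planets = [
            {
                "orbit_au": round(0.4 * (1.5 ** i) * rng.uniform(0.8, 1.2), 2),
                "has_life": rng.random() < 0.01,   # intelligent life is rare
            }
            for i in range(rng.randint(0, 8))
        ]
        return {"star": star, "planets": planets}

    # Revisiting the same sector coordinates always yields the same system.
    print(generate_star_system(42, 3, 7))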
In business administration, controlling the price and capacity of a particular good or service is referred to as yield or revenue management. Essentially, the aim is to market scarce goods in a way that maximizes yield or revenue. This is achieved on the one hand by forecasting future customer behaviour and on the other hand by applying various revenue maximization techniques such as price differentiation or overbooking. In this way the willingness to pay of different demand groups can be skimmed optimally, the utilization rate increased and thus the revenue optimized. Yield management emerged after the deregulation of the American air transport market in 1978, when numerous airlines began to compete with the established carriers. With their low prices they targeted leisure travellers in particular and were thus able to win considerable market shares. In the course of this price war, airlines developed various techniques for controlling prices and capacities in order to maximize their revenues. American Airlines, for example, was able to push many competitors out of the market despite an inferior cost structure when it first fully deployed the new concept in 1985 (cf. uRK05, ch. A). Subsequently, other industries such as car rental companies and hoteliers adopted the concepts of yield management and applied them successfully. The mechanisms of demand forecasting and of optimizing product segments, contingents and prices remain the same; only the industry-specific parameters and optimization targets change. This diploma thesis examines the possible fields of application of yield management. An attempt is made to develop a generally applicable model for controlling the price and capacity of services or products. The model is then implemented, for the specific example of the tourism industry and in particular the products of a tour operator, in a yield management module of the tour operator system TourPaX.
Querying for meta knowledge
(2008)
The Semantic Web is based on accessing and reusing RDF data from many different sources, to which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach re-uses existing RDF modeling possibilities in order to represent meta knowledge. It then extends SPARQL query processing in such a way that, given a SPARQL query for data, one may request meta knowledge without modifying the query proper. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern.
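The paper's approach extends the SPARQL processor itself so that the data query stays unchanged; the following Python sketch (using the rdflib library) only illustrates the underlying modeling idea of attaching meta knowledge such as source and certainty to an RDF statement via standard reification and retrieving both together in one query. The meta-knowledge vocabulary and all resource names are invented for illustration.

    from rdflib import Graph, Namespace, Literal, BNode
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")
    META = Namespace("http://example.org/meta/")   # hypothetical meta-knowledge vocabulary

    g = Graph()
    g.add((EX.Koblenz, EX.locatedIn, EX.Germany))  # the data triple itself

    # Reified copy of the statement, carrying its meta knowledge.
    stmt = BNode()
    g.add((stmt, RDF.type, RDF.Statement))
    g.add((stmt, RDF.subject, EX.Koblenz))
    g.add((stmt, RDF.predicate, EX.locatedIn))
    g.add((stmt, RDF.object, EX.Germany))
    g.add((stmt, META.source, Literal("http://example.org/sourceA")))
    g.add((stmt, META.certainty, Literal(0.9)))

    # One query retrieving the data together with its meta knowledge.
    results = g.query("""
        PREFIX ex:   <http://example.org/>
        PREFIX meta: <http://example.org/meta/>
        PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        SELECT ?place ?source ?certainty WHERE {
            ?place ex:locatedIn ex:Germany .
            ?s rdf:subject ?place ; rdf:predicate ex:locatedIn ; rdf:object ex:Germany ;
               meta:source ?source ; meta:certainty ?certainty .
        }
    """)
    for row in results:
        print(row.place, row.source, row.certainty)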
This thesis deals with the distribution of large virtual networks to multiple physical hosts using the virtualization tools VNUML and EDIV. While VNUML is responsible for simulating the network, EDIV focuses primarily on the distribution. After a short explanation of basic terms and concepts from the field of network simulation, the two virtualization tools are described in detail. This starts in both cases with a detailed description of the proper installation and configuration, followed by a demonstration of the most important functions, such as starting or stopping a simulation. The creation of a suitable network scenario and the information scripts provided by EDIV are two further points described in this context. To apply the presented capabilities of these tools in practice, a custom network scenario is designed and distributed to several hosts, so that the different functions of EDIV can be shown and described accurately.
In this diploma thesis a skeleton-based matching technique for 2D shapes is introduced. First, current approaches to shape matching are presented and the basics of skeleton-based matching are introduced. In the context of this thesis, a skeleton-based matching approach was implemented as presented in the original paper. This implementation is evaluated by performing a similarity search in three shape databases, and strengths and limitations of the approach are pointed out. In addition, the introduced algorithm is examined with respect to extending it towards the matching of 3D objects. In particular, the approach is applied to medical data sets: pre- and postoperative CT images of the abdominal aorta of one patient are compared. Problems and approaches for the matching of 3D objects in general and of blood vessels in particular are presented.
In recent years, traceability has become widely accepted as a key factor for the success of software development projects. However, the multitude of different, not well-integrated taxonomies, approaches and technologies impedes the application of traceability techniques in practice. This paper presents a comprehensive view of traceability, pertaining to the whole software development process. Based on graph technology, it derives a seamless approach which combines all activities related to traceability information, namely definition, recording, identification, maintenance, retrieval, and utilization, in one single conceptual framework. The presented approach is validated in the context of the ReDSeeDS project, which aims at requirements-based software reuse.
The way information is presented to users in online community platforms influences the way those users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms and other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this model, we demonstrate its ability to evoke macro phenomena comparable to those observed in real-world data.
This study investigates crowdfunding, a new form of financing projects. In the past years more and more crowdfunding platforms have emerged. The main question is whether crowdfunding is able to compete with the traditional ways of financing social projects. The paper presents the history and development of crowdfunding, explains the different crowdfunding models and gives an overview of German crowdfunding platforms. Based on successful social crowdfunding projects, key success factors are identified and described. Finally, in a case study, a concept for financing a social project through crowdfunding is developed on the basis of the preceding analysis.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. Its basis is a set of requirements to be defined for an AOP realization in the PHP environment. It is analysed whether and how related languages and paradigms allow CCCs to be supported. In addition, the possibility of realizing AOP in PHP without a PHP extension is discussed. Furthermore, the existing approaches to implementing AOP in PHP are examined qualitatively. The present work aims to present its own AOP solution for PHP that does not share the weaknesses of the existing solutions.
Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.
Knowledge compilation is a common technique for propositional logic knowledge bases. A given knowledge base is transformed into a normal form for which queries can be answered efficiently. This precompilation step is expensive, but it only has to be performed once. We apply this technique to concepts defined in the Description Logic ALC. We introduce a normal form called linkless normal form for ALC concepts and discuss an efficient satisfiability test for concepts given in this normal form. Furthermore, we show how to efficiently calculate uniform interpolants of precompiled concepts w.r.t. a given signature.
In this study thesis, the basics of web services as well as the components and APIs for realizing sticky logging are presented. A scenario for testing sticky logging is described and implemented as web services. The sticky-logging formalism is explained and an API for creating the StickyLogs is implemented. The StickyLogs are exchanged within the SOAP attachments of the SOAP messages between the web services. For this purpose, a realization with a message handler under JAX-WS is programmed and explained.
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
Large and unknown data sets can be discovered easily and systematically by using faceted search. When implementing applications for smartphones, it has to be considered that, unlike desktop applications, only smaller screen sizes are available and the possibilities for interaction between user and smartphone are limited. These limitations can negatively influence the usability of an application. With FaThumb and MobileFacets, two mobile applications exist which implement and use faceted search, although only MobileFacets is designed for current smartphones with touchscreens. FaThumb, however, provides a novel facet navigation, which is re-implemented for present-day smartphones as MFacets within this work.
Moreover, this work reports on a summative evaluation of both applications, MFacets and MobileFacets, with regard to usability, and presents the results.
In this thesis we exercise a wide variety of libraries, frameworks and other technologies that are available for the Haskell programming language. We show various applications of Haskell in real-world scenarios and contribute implementations and taxonomy entities to the 101companies system. That is, we cover a broad range of the 101companies feature model and define related terms and technologies. The implementations illustrate how different language concepts of Haskell, such as a very strong typing system, polymorphism, higher-order functions and monads, can be effectively used in the development of information systems. In this context we demonstrate both advantages and limitations of different Haskell technologies.