Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor thesis describes the conception and implementation of translation software for the Android platform. The distinctive feature of the software is its self-contained text recognition based on the camera view, an approach that aims to enhance and accelerate the translation process in certain situations. After an introduction to text recognition, the underlying technologies, the Android operating system and useful existing applications are described. Then the concept of the software is developed and its implementation examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition strictly cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit to simulation is the possibility to model (and observe) causes and effects of non-events, e.g. that something has broken down.
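As a rough illustration of this blocking rule (a minimal sketch, not the authors' formalism; the dict-based encoding and all names are invented here), a transition that carries a t-token stays disabled even when all of its input places hold p-tokens:

```python
# Hypothetical sketch of the dual marking idea; not the paper's formalism.
class DualMarkedNet:
    def __init__(self, pre, post):
        self.pre = pre          # transition -> list of input places
        self.post = post        # transition -> list of output places
        self.p_tokens = {}      # place -> number of p-tokens
        self.t_tokens = set()   # transitions currently carrying a t-token

    def enabled(self, t):
        # a marked transition is strictly blocked, regardless of p-tokens
        if t in self.t_tokens:
            return False
        return all(self.p_tokens.get(p, 0) > 0 for p in self.pre[t])

    def fire(self, t):
        assert self.enabled(t)
        for p in self.pre[t]:
            self.p_tokens[p] -= 1
        for p in self.post[t]:
            self.p_tokens[p] = self.p_tokens.get(p, 0) + 1
```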
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
The paper gives a dedicated introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that no algorithm is needed apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality, there is a semantic aspect: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
Standards are widely used in the computer science and IT industry. Different organizations, like the International Organization for Standardization (ISO), are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements.
This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement; they allow the packaging of evidence needed to prove the fulfilment of criteria related to a public tender. The work first examines the requirements that are common to both specifications, which introduces a new view on the requirements at a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of a package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which is used as a basis when a concrete test system is created, is modified.
Finally the author comes to a positive conclusion. Common requirements have been identified and captured, and the involved systems have been adapted to allow the capturing of further types of requirements that were not supported before. Moreover, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
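To make the idea of a generated test step concrete, the following minimal sketch checks a zip package along the lines described above: the manifest is validated against a schema, and every referenced file must exist in the package. The file names, the schema, and the use of lxml are illustrative assumptions, not the tooling actually used in the thesis:

```python
# Sketch of one package-validation task (all names are illustrative).
import zipfile
from lxml import etree

def validate_package(path, manifest_name="imsmanifest.xml", xsd_path="profile.xsd"):
    schema = etree.XMLSchema(etree.parse(xsd_path))
    with zipfile.ZipFile(path) as pkg:
        names = set(pkg.namelist())
        if manifest_name not in names:
            return False
        doc = etree.parse(pkg.open(manifest_name))
        if not schema.validate(doc):
            return False
        # every file referenced by an href attribute must be in the package
        refs = doc.xpath("//@href")
        return all(ref in names for ref in refs)
```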
With the reaccreditation of the degree programs in the department of computer science at the University of Koblenz-Landau, new trendsetting degree programs will be offered. For the further planning and design of the individual degree programs, the opinion of the students is an important indicator. Information about the new degree programs is not available during the accreditation process, yet students have an interest in knowing about the new degree programs and the new examination regulations, so involving them in the decision process is desirable. The concept of e-participation is an opportunity to satisfy this need: it offers the possibility to discuss topics of the accreditation and to bring one's own ideas and opinions into the decision process. This bachelor thesis describes an e-participation project at the University of Koblenz-Landau about the accreditation of the degree programs of the faculty of computer science. The project is carried out using the reference framework of Scherer and Wimmer (2011). Furthermore, the accreditation process is modeled to get a better understanding of the whole process and to identify the possibilities for integrating the e-participation project. The results of the project are covered by an online survey about the e-participation platform. Using the results of the survey and the experience gained during the project, recommendations are given for further e-participation projects. Moreover, the reference framework of Scherer and Wimmer (2011) is analyzed critically.
Parallel manipulators based on the Stewart mechanism allow tasks to be carried out precisely within a limited workspace. Their six degrees of freedom provide highly flexible positioning, and the robust construction yields a very good weight-to-payload ratio.
This bachelor thesis deals with the development of a flexible software solution for controlling a Stewart platform, including a model of the platform used for testing. First, the mathematical foundations of inverse kinematics are derived, building on a previously defined motion model. This is followed by the development of a generic architecture for transmitting and evaluating control commands from a PC. The implementation is written in C and split into modules, each covering one area of position control or hardware communication. In addition, a graphical user interface is presented that allows the position of the platform to be changed manually. Automatic control is described in a subsequent application example in which the platform is fed with a stream of acceleration values from a roller coaster simulation.
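The core of the inverse kinematics mentioned above reduces to computing six leg lengths from the platform pose. A minimal sketch (in Python rather than the thesis' C, with illustrative anchor-point inputs):

```python
# Inverse kinematics sketch for a Stewart platform: leg lengths from pose.
import math

def rotation(roll, pitch, yaw):
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
            [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
            [-sp,   cp*sr,            cp*cr]]

def leg_lengths(base, plat, t, rpy):
    # base/plat: six 3D anchor points each; t: translation; rpy: orientation
    R = rotation(*rpy)
    lengths = []
    for b, p in zip(base, plat):
        # anchor point of the platform transformed into world coordinates
        world = [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]
        lengths.append(math.dist(world, b))
    return lengths
```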
Despite its growing relevance, measuring the productivity of services is still a poorly understood field of research. The main reasons lie in the particular characteristics of services: their immateriality and integrativity. A typical service in the B2B software industry is the adaptation of systems to customer needs, so-called customizing. Exploiting the customizing potential of standard software and products, and involving customers more strongly in innovation processes, is hampered by the fact that the productivity of this service appears to be insufficiently measurable and therefore hard to assess.
This contribution describes the development of a model for measuring service productivity, based on several preliminary studies conducted within the CustomB2B project at the University of Koblenz-Landau.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop for technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results but invited PhD students to present and discuss their ongoing research work. Within the traditional international group of virtual goods and ODRL researchers, we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. It is based on a set of requirements to be defined for an AOP implementation in the PHP environment. The thesis analyses whether and how related languages and paradigms support CCCs, and discusses the possibility of realizing AOP in PHP without a PHP extension. Existing approaches to implementing AOP in PHP are examined qualitatively. The aim of this work is to present its own AOP solution for PHP that does not share the weaknesses of the existing solutions.
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system) chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors actually address the industry's concerns: both tools provide functionality for the archiving of data arising from online collaboration as well as business records management capabilities, but the integration of those topics is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
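The seed-based generation idea can be sketched as follows (illustrative values and names only, not the thesis' code): the same coordinates always reproduce the same star, so the galaxy never has to be stored:

```python
# Deterministic procedural generation sketch; all value ranges are invented.
import random

def star_at(x, y, galaxy_seed=42):
    # one RNG per sector: the same seed always yields the same star
    rng = random.Random(hash((galaxy_seed, x, y)))
    if rng.random() > 0.05:          # most sectors contain no star
        return None
    return {
        "class": rng.choice("OBAFGKM"),
        "planets": [
            {"orbit_au": round(rng.uniform(0.3, 30.0), 2),
             "has_life": rng.random() < 0.01}
            for _ in range(rng.randint(0, 8))
        ],
    }
```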
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be defined explicitly by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
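As a hint at how such measures can be computed, the following minimal sketch estimates the mutual information between type and property signatures from co-occurrence counts (hypothetical data layout; not the paper's actual pipeline):

```python
# Mutual information between two schema signatures per resource.
import math
from collections import Counter

def mutual_information(pairs):
    # pairs: list of (type_signature, property_signature) per resource;
    # each signature must be hashable, e.g. a frozenset of URIs
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in p_xy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with counts folded in
        mi += (c / n) * math.log2(c * n / (p_x[x] * p_y[y]))
    return mi
```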
Quadrocopters are helicopters with four rotors arranged in one plane. Small unmanned models, which often generate a thrust of only a few newtons, are popular as toys and in model making, but are also used by the military and police as drones for reconnaissance and surveillance tasks. This diploma thesis deals with the theoretical foundations of controlling a quadrocopter and, building on them, develops a low-cost control board for a model quadrocopter.
The theoretical part contains an analysis of the dynamics of a free-flying quadrocopter, in which the equations of motion are derived and compared with the results presented in "Design and control of quadrotors with application to autonomous flying" ([Bou07]). Furthermore, the operating principles of various sensors suitable for determining the current spatial orientation are described, and methods for estimating the orientation from the measurements of these sensors are discussed. In addition, the skew field of quaternions is introduced, in which three-dimensional rotations can be represented compactly and concatenated efficiently.
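The compact representation and efficient concatenation of rotations mentioned above comes down to the Hamilton product. A minimal sketch (plain Python, not the thesis code):

```python
# Quaternion composition: chaining two rotations is one Hamilton product.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# e.g. composing a yaw and a pitch rotation: q_total = qmul(q_yaw, q_pitch)
```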
Subsequently, the development of a simple control board is described that enables both autonomous hovering and remote control. The board was developed and tested on an X-Ufo quadrocopter made by Silverlit, which is therefore also presented. The components used and their interaction are discussed; the WiiMotionPlus deserves particular mention, as it is used as a low-cost gyroscope module. In addition, various aspects of the control software are explained: the evaluation of the sensor data, state estimation using the explicit complementary filter by Mahony et al. ([MHP08]), the implementation of the attitude controller, and the generation of the control signals for the motors. Both the control software and the circuit diagram and board layout of the control board accompany this thesis on a CD; the circuit diagram and board layout are also printed in the appendix.
In this master thesis, some new helpful features are added to the Spanning Tree Simulator. This simulator was created by Andreas Sebastian Janke in his bachelor thesis [Jan10b] in 2010. It can visualize networks which are defined in a configuration file; each of them is an XML representation of a network consisting of switches and hosts. After loading such a file into the program, it is possible to run the spanning tree algorithm IEEE 802.1D. In contrast to the previous version, only the switches are implemented as threads. When the algorithm has finished, a spanning tree has been built, which means that messages can no longer run in loops. This is important because loops can cause a total breakdown of the communication in a network if the running routing protocols cannot handle them.
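The heart of the simulated algorithm is the election of a root bridge. A strongly simplified sketch of that step (a fixed-point relaxation instead of the full BPDU exchange; the adjacency encoding is invented here):

```python
# Simplified IEEE 802.1D root election: every switch adopts the best
# (root_id, cost) claim it hears until all agree on the lowest bridge ID.
def elect_root(switches, links):
    best = {s: (s, 0) for s in switches}   # each switch claims itself as root
    changed = True
    while changed:
        changed = False
        for a, b in links:
            for src, dst in ((a, b), (b, a)):
                root, cost = best[src]
                if (root, cost + 1) < best[dst]:   # lower root ID, then lower cost
                    best[dst] = (root, cost + 1)
                    changed = True
    return best  # switch -> (elected root, distance to root)

# e.g. elect_root([1, 2, 3], [(1, 2), (2, 3)]) elects switch 1 everywhere
```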
Magnetic resonance (MR) tomography is an imaging method that is used to expose the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique which enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo, and it allows for making predictions about the structure and integrity of those connections. In clinical routine this modality finds application in the planning phase of neurosurgical operations, such as tumor resections; this is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. At the current resolution of diffusion-weighted data, single white matter axons generally cannot be resolved; the captured signal rather describes whole fiber bundles. Besides this, it often appears that several complex fiber configurations occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method which uses local information to reconstruct global connectivities, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods as well as their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme based on diffusion tensor metrics. The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along the potential tracking directions and is able to reproduce splitting fiber tracts.
Activity recognition with smartphones is possible using their internal sensors, without any external sensor. First, previous works and their techniques are reviewed, and from these works our own implementation for activity recognition is derived. Most previous works only use the accelerometer for the activity recognition task. For this reason, this bachelor thesis analyzes the benefit of further sensors, such as the magnetic field sensor, the linear acceleration sensor or the gyroscope. The activity recognition is performed by classification algorithms: Decision Tree, Naive Bayes and Support Vector Machines are used. Sensor data of subjects is collected and saved using a specially developed application; this data is needed as training data for the classification algorithms.
The result is a model which represents the structure of the data. To validate the model, a test dataset is used which is different from the training dataset. The results confirm previous works which indicated that the activity recognition task is possible using only the accelerometer. Orientation, gyroscope and linear acceleration cannot be used for all activity recognition problems. Apart from that, the Decision Tree seems to be the best classification algorithm if the model has no training data of the current user.
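The classification step can be sketched as follows (features reduced to mean and standard deviation per axis; scikit-learn is assumed available, and all names are illustrative):

```python
# Training sketch for accelerometer-based activity recognition.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def features(window):
    # window: (n_samples, 3) array of accelerometer readings (x, y, z)
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def train(windows, labels):
    # windows: list of sensor windows; labels: e.g. "walking", "sitting"
    X = np.array([features(w) for w in windows])
    clf = DecisionTreeClassifier()
    return clf.fit(X, np.array(labels))
```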
The objective of this contribution is to conceptually analyze the potential of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting respective action fields of entrepreneurial learning. In addition we reveal that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Cloud computing is currently a major trend in the IT industry, and ERP systems have likewise become indispensable in today's companies. An analysis of selected literature shows that cloud computing as an operating model for ERP systems deserves particular study, since the interplay of these technologies still raises various challenges. Building on this, case studies of the cloud ERP solutions of three industry partners are compiled in order to compare the theoretical literature with practical findings in a next step.
Guided by the research questions, the aim of this thesis is to uncover differentiated benefits of cloud ERP solutions and to explain how the theory matches the practical experience of experts. The case studies show that the three cloud ERP providers differ mainly in the scope of their software and in the company sizes of the target groups they address in the market. In addition, the analysis and conclusion of the thesis show that, beyond the benefits identified in theory, additional benefits can be generated by combining cloud ERP solutions, and that a specific target group for cloud ERP solutions already exists today. For the future it remains to be seen how the cloud ERP market will develop and which further functionalities can be moved to the cloud, so that on-demand ERP systems may become a serious competitor to on-premise solutions.
Within the scope of this bachelor thesis, a survey targeting the alumni of Department 4: Computer Science of the University of Koblenz-Landau was planned, realized and evaluated. The goal was to support the Task Force Bachelor Master that was in charge of the re-accreditation process of the study courses. At first, the theoretical foundation of the survey design was acquired via desk research; moreover, the analysis of past surveys of a similar character gave an impression of the requirements. Under consideration of recent changes, a survey prototype was created and improved following the insights from a pre-test. The final version was implemented using the open-source tool LimeSurvey, and this platform served as the technical basis of the survey. The recipients included members of alumni clubs as well as recent alumni from the last years.
The survey led to insights about the satisfaction of the alumni with their study course and the study situation in general. Furthermore, opinions regarding two new master courses, E-Government and Web Science, were solicited. The feedback from four of the study courses was sufficient to give significant results; for the other five courses it was only possible to interpret the general statements. All in all, there was a high rate of satisfaction with the studies.
Additionally, it was possible to collect suggestions for improvements and criticism. The main topics were internationality, emphasis of study topics, freedom of choice/specialization and relevance to practice. As a result of the survey, a recommendation was formulated that, in combination with the detailed results, should lead to an improvement in the quality and demand-orientation of the teaching in Department 4: Computer Science.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains, and integrating it with existing metadata formats and metadata standards, is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experience gained during the integration process.
The E-Government research area has gained in importance in Europe, and especially in Germany, in the last few years, causing the number of researchers, institutes and publications to increase rapidly. This makes it difficult for outsiders to get an overview of the relevant actors in the E-Government field. This issue can be addressed by implementing a research map for the E-Government field, where all relevant actors and objects and their information are shown on the map according to their location. In order to give a complete overview, information which was valid at a certain time in the past needs to be available on the research map. This can only be achieved if the contents of the research map are historicized: a new version of an object is created and saved in the database whenever the object changes, and older versions are retained in the database so that the user is able to navigate the website based on temporal information. Past experience has shown that the temporal aspects of historicization should be managed and planned during the conceptual phase of the website rather than during implementation. This bachelor thesis proposes a concept for the E-Government research map which includes the modeling of the relevant temporal dimensions needed to historicize the contents of the research map.
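A common way to realize such historicization is row versioning with validity intervals: an update closes the current version and inserts a new one, so any past state can be reconstructed. A minimal sketch with sqlite3 (the schema and names are illustrative, not the thesis' actual design):

```python
# Versioned rows with validity intervals for a historicized research map.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE actor_version (
  actor_id   INTEGER NOT NULL,
  name       TEXT,
  location   TEXT,
  valid_from TEXT NOT NULL,
  valid_to   TEXT              -- NULL marks the current version
);""")

def update_actor(actor_id, name, location, now):
    # close the current version, then insert the new one
    con.execute("UPDATE actor_version SET valid_to=? "
                "WHERE actor_id=? AND valid_to IS NULL", (now, actor_id))
    con.execute("INSERT INTO actor_version VALUES (?,?,?,?,NULL)",
                (actor_id, name, location, now))

def state_as_of(t):
    # reconstruct the map contents as they were valid at time t
    return con.execute(
        "SELECT actor_id, name, location FROM actor_version "
        "WHERE valid_from<=? AND (valid_to IS NULL OR valid_to>?)",
        (t, t)).fetchall()
```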
Opinion Mining: Using Twitter as a source of opinion for the prediction of stock market prices
(2012)
Besides the basic theoretical concepts of automated text analysis, which form the foundation of this thesis, an overview of the current state of research on the analysis of Twitter messages is given. To this end, various research results from the currently available scientific literature are explained, compared and critically questioned. Their results and approaches are incorporated into our own research where they appear useful. The goal is to make the best possible use of the current state of research.
A further goal is to give the reader an overview of various machine data-analysis methods for opinion detection. This is necessary in order to better understand the significance of the analysis methods used later in the thesis in their scientific context. Since these methods can be carried out in different ways, several analysis methods are presented and compared. This is intended to demonstrate the feasibility of the subsequent opinion analysis. To ensure sufficient accuracy in the following study, an existing, evaluated framework is used; it is available as an API and is therefore covered as well. The core of this thesis is devoted to the analysis of Twitter messages with the methods of opinion mining.
It is investigated whether correlations can be found between the sentiment of Twitter messages and the stock price of a company. The sentiment concerning Google Inc. is examined over a period of one month, and the findings are compared with the company's stock price. The goal is to verify the findings of (Sprenger & Welpe, 2010) and (Taytal & Komaragiri, 2009) in this field and to answer further questions.
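The evaluation idea can be sketched as correlating a daily sentiment series with subsequent price changes (function names, data layout and lag handling are invented for illustration, not the thesis' actual method):

```python
# Correlation of a daily sentiment score with lagged daily price changes.
import numpy as np

def sentiment_price_correlation(daily_sentiment, daily_close, lag=1):
    # compare sentiment on day t with the price change ending on day t+lag
    # (lag >= 1; both series must cover the same trading days)
    s = np.asarray(daily_sentiment)[:-lag]
    returns = np.diff(np.asarray(daily_close))[lag - 1:]
    return np.corrcoef(s, returns)[0, 1]
```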
In this bachelor thesis a tangible augmented reality game was developed which is intended to have an additional benefit compared to conventional computer or augmented reality games. The main part of the thesis explains the game concept, the development and the evaluation of the game. In the evaluation, the flow experience, as a measure of the game's enjoyment, was analysed in a user test, and the developed game was compared with other smartphone games. Augmented reality, tangible user interfaces and tangible augmented reality are also introduced, their advantages and disadvantages explained, and the history of augmented reality outlined.
In this diploma thesis a system was developed that enables navigation for pedestrians. As intended, the system was realized for use on an iPhone.
Although NAVTEQ's new generation of maps is still under development, first impressions could be gathered of what pedestrian navigation will look like in the future. The system also works with map data from OpenStreetMap, which uses the classic representation of map data. The positioning can later be switched over once a Galileo receiver becomes available, providing position data of higher precision for navigation. Route calculation was accelerated with a CH-like technique through precomputation, yet it still allows the profile to be changed without requiring a new precomputation; with plain CH, by contrast, a profile change in the route cost calculation requires a new precomputation, which is expensive compared to calculating a single route. Figure 8.1 shows a demonstration of navigation with the finished system: routes are calculated for three different profiles that prefer different grades of slope, resulting in different routes.
3D curve skeletons are often used because, compared to the representation created by the medial axis transformation introduced in 1967 by Harry Blum, the resulting object surface representation is less complex and also needs less computing power in further processing.
This thesis aims at developing a 3D curve skeleton approximation algorithm that keeps these advantages and is also able to handle different scenarios of object surface input data.
The principles of project management are in transition due to the influence of economic conditions and technological development [Wills 1998 & Jonsson et al. 2001]. Increasing internationalization, shortened time to market, changing labor costs and the increasing involvement of professionals distributed across geographical locations are drivers of the transformation of the project landscape [Evaristo/van Fenema 1999]. As a result, the use of collaborative technologies is a crucial factor for the success of a project [Romano et al. 2002]. Previous research on the use of collaborative technologies for project management purposes focuses especially on the development of model-like, universal system architectures to identify the requirements for a specially designed collaborative project management system. This thesis investigates the challenges and benefits that arise when an organisation implements business software for the purpose of collaborative project management.
This diploma thesis investigates the use of an eye tracker and the possibility of integrating it into image search. Eye trackers are devices for gaze tracking. They are frequently used in design and usability studies to gather information about how users interact with a product. For some time now, eye movements have also been used to detect user-relevant information and regions, as in the Text 2.0 project [4]. There, gaze direction and fixation are used to enable interaction with the reader of a text in a way that is as simple and yet subtle as possible.
This paper critically examines Google Calendar. For this purpose, the functions offered by the core product are studied with respect to privacy aspects. On the one hand, it is identified to what extent the product could infringe the users' privacy; on the other hand, the resulting risks are discussed. Furthermore, the functions are considered in terms of their use for both the service provider Google and the user. A detailed analysis demonstrates the critical aspects where we have to decide between privacy and functionality. IT-security mechanisms identified as solutions to minimize the discussed risks are presented, discussed and analyzed in terms of their feasibility. Afterwards, the individual solutions are summarized in a security concept and further requirements are explained. Finally, a Firefox add-on which implements the described solution concept is created to resolve the existing weaknesses as far as possible. Ultimately, the functionality of the add-on and its technical implementation are illustrated in detail.
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is multiple times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
Computers assist humans in many everyday situations. Their advancing miniaturisation broadens their fields of use and leads to an even higher significance and spread throughout society. These small and powerful machines are already widespread in everyday objects, and their spread increases further as the mobility aspect grows in importance. From laptops, smartphones and tablets to systems worn on the body (wearable computing) or even inside the body as cyber-implants, these systems help humans actively and context-sensitively in the accomplishment of their everyday business.
A part of the wearable computing domain is taken up by the development of head-mounted displays (HMDs). These helmets or goggles feature one or more displays enabling their users to see computer-rendered images or images of their environment enriched with computer-generated information. At the moment, most of these HMDs feature LC displays, but newer systems are appearing that allow the projection of the image onto the user's retina, and the newest breakthroughs in the field have already produced contact lenses with an integrated display. The data shown by an HMD is compiled using a multitude of sensors, like a head tracker or a GPS. Increasing computational performance and miniaturisation have led to a wide spread of HMDs in many fields.
The multiple scenarios in which an HMD can be used to improve human perception and interaction led the "Institut für Integrierte Naturwissenschaften" of the University of Koblenz-Landau to devise an HMD on the basis of Apple's iOS devices featuring Retina displays. The high pixel density of these displays, combined with condenser lenses into an HMD, offers a highly immersive environment for stereoscopic imagery, while other systems only display a relatively small image projected a few feet away from the user. Furthermore, the iPhone, iPod Touch and iPad exhibit a lot of potential given their variety of sensors and computational power. While producing a similarly feature-rich HMD is very costly, using simple iPod Touches 4th Gen as the basis of an HMD results in a very inexpensive solution with high potential. The increasing popularity and spread of Apple devices would reduce the costs even more, as users of the HMD could simply integrate their own device into the system. Software designed with the specific intent to support a large variety of Apple iOS devices, and easily extensible to support newer devices, would allow for a universal use of such an HMD solution, as a new device could simply replace an old one.
The focus of this thesis is the conception and development of an application designed for Apple's iOS 5 operating system that will be used in an HMD revolving around the use of Apple iOS devices featuring Retina displays. The Rollercoaster2000 project, depicting a ride in a virtual rollercoaster, is used as the application's core. A server synchronizes the displays of the clients connected to it, which are combined to form an HMD. Furthermore, the gyroscopes of the iOS devices combined into an HMD are used to track the wearer's head movements. Another feature is the use of the devices' cameras as a means of orientation while wearing the HMD.
The first step towards realizing software that meets the set specifications is an introduction to the Objective-C programming language used to develop iOS applications. Together with the compiler and runtime environment, Objective-C forms the basis of the second step, the introduction of the iOS SDK. Armed with this knowledge of iOS app development, the last part of the thesis consists of determining the requirements and developing software that complies with the goals set for software written specifically for use in an HMD.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved by using continuous depth and color data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. Accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
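For reference, the classic global-best PSO loop that such modified variants build on looks roughly like this (a generic sketch, not Khan et al.'s modification; all constants and the search range are illustrative):

```python
# Generic global-best particle swarm optimization (minimization).
import random

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# e.g. pso(lambda p: sum(x * x for x in p), dim=6)  # six pose parameters
```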
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time. The implementation makes use of the OpenCL standard. The focus of research in this thesis is to analyse the performance of this combination of components.
The natural and the artificial environment of mankind is of enormous complexity, and our means of understanding this complex environment are restricted unless we make use of simplified (but not oversimplified) dynamical models with the help of which we can explicate and communicate what we have understood, in order to discuss among ourselves how to re-shape reality according to what our simulation models make us believe to be possible. Being both a science and an art, modelling and simulation is still one of the core tools of extended thought experiments, and its use is still spreading into new application areas, particularly as the increasing availability of massive computational resources allows for simulating more and more complex target systems.
In the early summer of 2012, the 26th European Conference on Modelling and Simulation (ECMS) once again brings together the best experts and scientists in the field to present their ideas and research, and to discuss new challenges and directions for the field.
The 2012 edition of ECMS includes three new tracks, namely Simulation-Based Business Research, Policy Modelling, and Social Dynamics and Collective Behaviour, and extends the classical Finance and Economics track with Social Science. It attracted more than 110 papers and 125 participants from 21 countries, with backgrounds ranging from electrical engineering to sociology.
This book was inspired by the event, and it was prepared to compile the most recent concepts, advances, challenges and ideas associated with modelling and computer simulation. It contains all papers carefully selected from the large number of submissions by the programme committee for presentation during the conference and is organised according to the still growing number of tracks which shaped the event. The book is complemented by two invited pieces from other experts that discuss an emerging approach to modelling and a specialised application.
We hope these proceedings will serve as a reference for researchers and practitioners in the ever-growing field as well as an inspiration to newcomers to the area of modelling and computer simulation. The editors are honoured and proud to present you with this carefully compiled selection of topics and publications in the field.
The discussion of the minimum wage is a perennially topical one, and at the turn of 2011/2012, when this thesis was written, it received particular attention from politics and business. The topicality and dynamics of the subject are especially evident in the fact that, on examining the German literature, many of its statements and theses no longer hold. The quotation cited at the beginning, by the incumbent Federal Minister of Labour and Social Affairs, Ursula von der Leyen, expresses the consensus that has meanwhile emerged in politics that full-time employees must be able to secure their livelihood from their income. For the Christian Democratic governing party this represents a change of dogma: whereas the CDU relied on collective bargaining coverage for decades and categorically rejected a minimum wage, it now aims to make wage floors for all sectors a goal of its government work. This is largely attributable to the fact that wage dispersion on the labour market, traditionally low in Germany, has developed in a strongly divergent way in recent years.
A further reason is the declining role of collective bargaining coverage in recent years. As a consequence of these developments, 1.2 million people, four per cent of the employed, work for wages below five euros per hour. A further 2.2 million people work for hourly wages below six euros, 3.7 million earn less than seven euros, and 5.1 million work for wages below eight euros. The question of the extent to which a dignified life is possible under these conditions occupies large parts of society, for the volume and wage level of the low-wage sector have sunk to a level that is no longer easy to justify socially and politically. To avert this development, the economic policy instrument of the minimum wage is frequently brought into the discussion as a proven remedy; in the past, many states have employed minimum wages in different ways. The introduction of a nationwide minimum wage in the Federal Republic is advocated above all with the following goals.
On the one hand, the minimum wage is to guarantee that full-time employees earn an income that at least corresponds to their socio-cultural subsistence level. On the other hand, its introduction is to reduce the need to top up wages with unemployment benefit II (Arbeitslosengeld II) and thus relieve the public purse. Opponents categorically reject the introduction of a nationwide, generally binding minimum wage, above all on the basis of labour-market theory. They hold that the mechanisms of the labour market have a self-regulating effect and, complemented by collective bargaining autonomy, are sufficiently regulated. The threatened destruction of existing jobs and the failure to create new ones are further arguments advanced against the introduction of a minimum wage.
Moreover, depending on the school of thought and position, theory can predict opposite effects on the labour market. Against the background of the current debate, this thesis investigates the effects of introducing a minimum wage. To arrive at an objective assessment of the problem at hand, computer-based agent-based simulation with NetLogo is used. Using a fictitious market with fictitious actors ("agents"), an attempt is made to create a model representation of the labour market. In particular, it is investigated to what extent the introduction of a minimum wage, sector-specific or nationwide, influences the employment level and the level of wages.
Gartner predicts that by 2013, 33 per cent of all accesses to Business Intelligence (BI) functionality will be made from mobile devices. While mobile devices are already widespread in private life, they have recently become more and more part of professional life as well, and businesses are seeking to develop smart scenarios for integrating the new technologies into their processes. The development of mobile software applications is also booming in the field of BI. The research topic of Mobile Business Intelligence (mBI) focuses on how to expand traditional BI concepts to the new mobility dimension, thus enabling the application of BI functionality on mobile devices such as smartphones and tablet computers. The increasing significance of mBI for research, in combination with the tremendous economic growth potential forecast for the next years, is the motivation for writing this thesis. The main aspect of the present thesis is a concept for developing a consistent mBI strategy, aligning all structures, processes and stakeholders with one standard set of goals and guidelines. Crucial points when implementing an mBI strategy are the definition of goals, the identification of risks and opportunities, the adaptation of mBI to the user context, the integration of mBI into the existing IT landscape, and a proper set-up of system architectures and security concepts. Besides analyzing the aspect of mBI strategy, this thesis also considers the market situation of mBI. Clearly defined goals and the systematic development of a consistent mBI strategy in line with these goals can be considered a crucial prerequisite for companies to successfully implement mBI.
The measurement of the benefits of collaborative technologies is a challenge caused by sociological influences on these systems. In this work, the introduction of a collaborative system into an existing IT environment is examined, so that a value can be identified for the company. The analysis of frameworks has shown that the benefits can be reaped through better management and organization of in-company resources and optimization of the underlying process.
Integration of CRM systems with collaboration systems, using DocHouse and Lotus Quickr as an example
(2012)
The present working report "Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse/BRM und IBM Lotus Quickr" is the result of a student project. The goal of the project was to work out integration scenarios between a CRM system and a collaboration system and to implement a prototypical interface with corresponding functionality between the systems DocHouse/BRM and IBM Lotus Quickr.
Special thanks go to Mr Wolfgang Brugger (managing director of DocHouse GmbH), who had the idea for such a development and entrusted the FG BAS with carrying it out. The concept and the prototype were created from winter 2010 to summer 2011 by the students Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey and Mike Reuthers as part of their project practicum. The project team was supervised in terms of content and organization by Dipl.-Wirt.-Inform. Roland Diehl.
The present case study was produced as the object of investigation for a bachelor thesis and was compiled according to the eXperience case-study method. The goal of the bachelor thesis was to identify benefits in this concrete case. At the user company DOCHOUSE, an interface between the internal CRM system and a collaborative system for external access was introduced.
Special thanks go to Mr Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the creation of the case study and entrusted the FG BAS with carrying it out. The case study was conducted in winter 2011 by the student Martin Surrey and Roland Diehl, research associate of the research group.
Large and unknown data sets can be discovered easily and systematically using faceted search. When implementing applications for smartphones, it must be considered that, unlike with desktop applications, only smaller screen sizes are available and the possibilities for interaction between user and smartphone are limited; these limitations can negatively influence the usability of an application. With FaThumb and MobileFacets, two mobile applications exist which implement and use faceted search, although only MobileFacets is designed for current smartphones with a touchscreen. FaThumb, however, provides a novel facet navigation, which is newly realized in MFacets for present-day smartphones within this work.
Moreover, this work covers a summative evaluation of the two applications, MFacets and MobileFacets, with regard to usability, and presents the results.
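The core mechanics of faceted search can be sketched in a few lines: count the values of each facet field over the current result set, and refining means filtering by one chosen value (field names are invented for illustration):

```python
# Facet counting and refinement over a list of item-metadata dicts.
from collections import Counter, defaultdict

def facet_counts(items, facet_fields):
    # counts per facet field, e.g. {"genre": Counter({"rock": 12, ...}), ...}
    counts = defaultdict(Counter)
    for item in items:
        for field in facet_fields:
            value = item.get(field)
            if value is not None:
                counts[field][value] += 1
    return counts

def refine(items, field, value):
    # selecting a facet value narrows the result set
    return [i for i in items if i.get(field) == value]
```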
The purpose of this master thesis is to enable the robot Lisa to process complex commands and extract the necessary information in order to perform a complex task as a sequence of smaller tasks. This is to be achieved by improving Lisa's understanding of her environment through the addition of semantics to the maps she builds. The complex command itself is expected to be already parsed; the way the input is processed to become a parsed command is therefore out of the scope of this work. The maps that Lisa builds are improved by the addition of semantic annotations that can include any kind of information that might be useful for the performance of generic tasks, including (but not necessarily limited to) hierarchical classifications of locations, objects and surfaces. The processing of the command, together with some information about the environment, triggers the performance of a sequence of actions. These actions are expected to be among Lisa's currently implemented tasks and rely on the existing modules that perform them.
Nevertheless, the aim of this work is not only to use currently implemented tasks in a more complex sequence of actions, but also to make it easier to add new tasks to the complex commands that Lisa can perform.
This master thesis describes and evaluates the extensions to Schelling's segregation simulation developed in the course of the work. The Nobel laureate Thomas C. Schelling was the first to simulate segregation, starting in 1969; the resulting findings allowed a more precise analysis of the phenomena that occur. Uri Wilensky of Northwestern University has been developing a segregation simulation in NetLogo since 1999.
In my master thesis I extended Uri Wilensky's NetLogo model in six different scenarios.
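The underlying Schelling dynamics can be sketched as follows (a minimal grid model with illustrative parameters; this is neither Wilensky's NetLogo code nor one of the six extensions):

```python
# Minimal Schelling segregation step: unhappy agents move to empty cells.
import random

SIZE, THRESHOLD, EMPTY = 20, 0.3, 0.1   # grid size, tolerance, empty share

def make_grid():
    cells = ["A", "B", None]
    weights = [(1 - EMPTY) / 2, (1 - EMPTY) / 2, EMPTY]
    return [[random.choices(cells, weights)[0] for _ in range(SIZE)]
            for _ in range(SIZE)]

def unhappy(grid, x, y):
    me = grid[y][x]
    if me is None:
        return False
    neigh = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    same = sum(1 for n in neigh if n == me)
    occupied = sum(1 for n in neigh if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

def step(grid):
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] is None]
    random.shuffle(movers)
    for (x, y) in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ey][ex], grid[y][x] = grid[y][x], None
        empties.append((x, y))
```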
Replication of a multi-agent simulation environment to check integrity and consistency
(2012)
In this master thesis I first present a simulation that examines the behaviour of agents who try to survive in a generated world and have a number of possible actions to choose from. I then briefly discuss the theoretical aspects underlying it. The main part of my work is my replication of a simulation that Andreas König created in Java in 2000 [Kö2000]. I present his work here in greatly abbreviated form and then describe my own development.
In the final part of the thesis I compare the results of my simulation with those of Andreas König and discuss the tools used (Java and NetLogo). In conclusion I briefly summarize my project and report what could be implemented, what did not work, and why.
This thesis aims to clarify the question of how stakeholders can participate in a project. We discuss methods for selecting the relevant stakeholders from the whole set of stakeholders and for involving these stakeholders in the participatory process. The participation process comprises not only classical methods of participation, such as workshops, but especially modern Web 2.0 techniques.
The main objective of this bachelor thesis was to evaluate various systems against the requirements for DMS/WMS derived from the DOMEA concept and from the Kommunales Gebietsrechenzentrum Koblenz (KGRZ), and to identify the system that fulfils these requirements best. Along the evaluation, the workflow of the KGRZ was examined by way of example, assessed and, where necessary, adapted. Furthermore, the question was addressed whether, within the framework of federalism, a uniform system can be used not only in Koblenz but throughout Rhineland-Palatinate, or whether office-specific solutions are more advantageous.
Especially e-government projects have a complex stakeholder structure: governments, businesses, non-profits and private stakeholders in different forms and roles are involved or affected by e-government projects. Consequently, the success of ICT projects critically depends on the integration of different stakeholder groups in the design processes of e-government solutions. In this context, stakeholder participation is sought therefore useful initiatives like open government and good governance drive this process forward. Although it is important to analyze the objectives, expectations and power characteristics of project participants and project stakeholders, the determinations of the stakeholder groups through a so called stakeholder analysis are insufficiently methodically developed.
The aim of this bachelor thesis is to take up stakeholder theory, theories of stakeholder participation and stakeholder analysis, to transfer them to the administrative level and to check their applicability. This will be done by explaining and systematizing appropriate procedures for the classification and mapping of stakeholder groups, drawing on the well-known stakeholder theories of Freeman, Mitchell and Rowley. In order to generate recommendations for future e-government projects, the application of stakeholder theories in two e-government projects with stakeholder involvement will be reviewed. It is also important to work out how stakeholder groups can be actively involved in the development processes of concrete e-government projects by using Web 2.0 possibilities. The role of Web 2.0 applications as an additional channel of communication and as a participation tool will be presented and evaluated.
The result is a guide which supports the successful participation of stakeholders in projects by analyzing the process of stakeholder identification, grouping and prioritization and by showing instructions for, and benefits of, using Web 2.0.
The World Wide Web (WWW) has become a very important communication channel, and its usage has grown steadily over the years. Interest by website owners in identifying user behaviour has existed since Tim Berners-Lee developed the first web browser in 1990. But as the influence of the online channel today eclipses all other media, the interest in monitoring website usage and user activities has intensified as well. Gathering and analysing data about the usage of websites can help to understand customer behaviour, improve services and potentially increase profit.
It is furthermore essential for ensuring effective website design and management, efficient mass customization and effective marketing. Web Analytics (WA) is the area addressing these considerations. However, changing technologies and evolving Web Analytics methods and processes present a challenge to organisations starting Web Analytics programmes. Due to a lack of resources in various areas and the different types of websites they run, especially small and medium-sized enterprises (SMEs) as well as non-profit organisations struggle to operate WA in an effective manner.
This research project aims to identify the existing gap between theory, tool capabilities and business needs when undertaking Web Analytics programmes. The topic was therefore approached from three different angles: the academic literature, Web Analytics tools and an interpretative case study. The researcher utilized an action research approach to investigate Web Analytics, to present a holistic overview and to identify the gaps that exist. The outcome of this research project is an overall framework which provides guidance for SMEs operating information websites on how to proceed in a Web Analytics programme.
The advantage of virtual networks is that routing algorithms or distributed software can be tested in an easy-to-use, cheap and well-arranged way. Optimized Link State Routing (OLSR) and Babel are two modern routing algorithms for mobile ad-hoc networks that distinguish faulty links from good links in their routing process. Typically, virtual networks provide nearly perfect link connections without any network traffic disruptions. The Traffic Control (tc) tool is able to manipulate connections in terms of delay and packet loss.
In this bachelor thesis, virtual network scenarios are created with VNUML and Qemu and manipulated with tc in order to analyze the behavior of ad-hoc routing algorithms such as Babel and OLSR. The behavior of well-known TCP congestion control algorithms, e.g. TCP Vegas and TCP Reno, is analyzed as well.
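As an impression of how such link manipulation works, here is a minimal Python sketch wrapping the netem queueing discipline of tc; the interface name and the concrete delay and loss values are illustrative assumptions, and the commands require root privileges.

```python
import subprocess

def degrade_link(iface: str, delay_ms: int, loss_pct: float) -> None:
    # Attach a netem qdisc that adds a fixed delay and random packet loss.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True)

def restore_link(iface: str) -> None:
    # Remove the netem qdisc again, restoring the unimpaired link.
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

# Hypothetical usage inside one of the virtual machines of a scenario:
# degrade_link("eth0", delay_ms=100, loss_pct=5.0)
# ... run routing experiments ...
# restore_link("eth0")
```

With such impairments in place, one can observe how quickly OLSR and Babel reroute around a degraded link compared to the undisturbed case.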
In this paper, we compare two approaches for exploring large, hierarchical data spaces of social media data on mobile devices using facets. While the first approach arranges the facets in a 3x3 grid, the second approach makes use of a scrollable list of facets for exploring the data. We have conducted a between-group experiment of the two approaches with 24 subjects (20 male, 4 female) executing the same set of tasks of typical mobile users' information needs. The results show that the grid-based approach requires significantly more clicks, but subjects need less time for completing the tasks. Furthermore, it shows that the additional clicks do not hamper the subjects' satisfaction. Thus, the results suggest that the grid-based approach is a better choice for faceted search on touchscreen mobile devices. To the best of our knowledge, such a summative evaluation of different approaches for faceted search on mobile devices has not been done so far.
Web programming is a huge field of different technologies and concepts. Each technology implements a web application requirement such as content generation or client-server communication. Different technologies within one application are organized by concepts, for example architectural patterns. The thesis describes an approach to creating a taxonomy of these web programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web application frameworks with the help of Wikipedia. Finally, the implementations created concerning web programming are documented.
The goal of this bachelor thesis is to implement and evaluate the "Simulating of Collective Misbelief" model in the NetLogo programming language. For this purpose, the model requirements have to be specified and implemented in the NetLogo environment. Furthermore, tool-related requirements have to be specified to enable the model to work in NetLogo. After the implementation, several simulations will be conducted to answer the research question stated above.
This thesis presents the realization of a mobile security solution for surveillance purposes, developed with the aid of the state-funded research project CamInSens. CamInSens aims to analyze the video and sensor data obtained in monitored danger areas in such a way that threats can be detected and handled as early as possible. The goal of this thesis was to develop, on the basis of processed data from CamInSens, an interaction and visualization concept that helps mobile security personnel to make well-founded decisions in threat situations in a later practical deployment. For this purpose, not only was software implemented, but a market survey of suitable devices and usable software libraries was also conducted.
Software projects typically rely on several external libraries. The interface provided by such a library is called an API (application programming interface). APIs often evolve over time, which implies the need to adapt applications that use them. There are also reasons which may call for the replacement of one library by another, which likewise results in a need to adapt the applications in which the library is replaced. The process of adapting applications to use a different API is called API migration. Performing API migration manually is a cumbersome task, and automated API migration is an active research field. A related field of research is API analysis, which can also provide data for developing API migration tools.
The following thesis investigates techniques and technologies for API analysis and API migration frameworks. To this end, design patterns are leveraged. These patterns are based on experience with API analysis and migration within the Software Languages Team.
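To illustrate the flavour of such patterns, the following minimal Python sketch uses the classic adapter pattern to migrate client code from one API to another; the pattern catalogue of the thesis is not reproduced here, and all class and method names are invented for illustration.

```python
class OldHttpClient:
    # Hypothetical legacy API the client code was written against.
    def fetch(self, url: str) -> str:
        raise NotImplementedError("stands in for the legacy library")

class NewHttpSession:
    # Hypothetical replacement API with a different interface.
    def get(self, url: str, timeout: float = 10.0) -> bytes:
        raise NotImplementedError("stands in for the replacement library")

class OldToNewAdapter(OldHttpClient):
    # Exposes the old interface but delegates to the new library, so
    # client code keeps calling fetch() and only the adapter changes.
    def __init__(self, session: NewHttpSession) -> None:
        self.session = session

    def fetch(self, url: str) -> str:
        return self.session.get(url).decode("utf-8")
```

An API analysis that maps old calls to new ones (here: fetch to get) provides exactly the kind of data an automated migration tool would need in order to generate such adapters.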
In automated theorem proving, there are some problems that need information on the inequality of certain constants. In most cases this information is provided by adding facts which explicitly state that two constants are unequal. Depending on the number of constants, a huge number of such facts can clutter the knowledge base and distract the author and readers of the problem from its actual proposition. For most cases it is safe to assume that a larger knowledge base reduces the performance of a theorem prover, which is another drawback of explicit inequality facts. Using the unique name assumption in those reasoning tasks renders the introduction of inequality facts obsolete, as the unique name assumption states that two constants are identical iff their interpretations are identical. Implicit handling of non-identical constants makes the problems easier to comprehend and reduces the execution time of reasoning. In this thesis we show how to integrate the unique name assumption into the E-hyper tableau calculus and prove that the modified calculus is sound and complete. The calculus is implemented in the E-KRHyper theorem prover, and we show by empirical evaluation that the modified implementation, which is able to use the unique name assumption, is superior to the traditional version of E-KRHyper.
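As a standard formalization (a textbook rendering, not quoted from the thesis), the unique name assumption requires of every interpretation $I$ and all constants $a$ and $b$ that

\[ I(a) = I(b) \iff a = b, \]

so syntactically distinct constants always denote distinct domain elements, and explicit disequality facts of the form $a \neq b$ become redundant.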
In this report I describe the results of my investigations into extending the LogAnswer system with user-specific profile information. LogAnswer is a natural-language open-domain question answering system; that is, it answers questions on arbitrary topics and returns concrete (preferably concise and correct) answers. The system is developed in a joint project of the Artificial Intelligence research group of Professor Ulrich Furbach at the University of Koblenz-Landau and the Intelligent Information and Communication Systems (IICS) group of Professor Hermann Helbig at the Fernuniversität Hagen. The motivation for my work was the idea that the process of finding an answer can be optimized if the topic area a question aims at can be determined in advance. To this end, I tried to determine the interests of users based on profile information. The Semantic Desktop system NEPOMUK was used to obtain this profile information. NEPOMUK is used to structure all data, documents and information a user keeps on his computer. For this purpose, the system uses a so-called Personal Information Model (PIMO) in the form of an ontology. Among other things, this ontology contains a class "Topic", which formed the most important basis for creating the user profiles employed in my work. Concretely, the RDF query language SPARQL was used to extract a list of all topics relevant to the user from the ontology. The central idea of my work was to use this profile information to optimize the ranking of answer candidates. For every question posed, LogAnswer extracts up to 200 potentially relevant passages from the German Wikipedia. These passages are ranked on the basis of features (such as lexical overlap between question and passage), since not all candidates can be processed within the available time limit.
My approach aimed to extend this ranking algorithm with user profiles such that answer candidates containing information relevant to the user are placed higher in the ranking. To implement this idea, a method had to be found for deciding whether an answer candidate matches the profile. Since the information contained in a passage in most cases relates to the overarching topic of its article without mentioning the article's name explicitly, my implementation considered the article name in order to determine which topic area a passage provides information about. As an additional aid, the DBpedia ontology was used, which contains the information of Wikipedia in structured RDF format. With the help of this ontology, every article could be assigned to categories, which were then compared with the keywords contained in the profile. To examine the effects of the approach on the ranking procedure, several test runs with 200 test questions each were carried out. The first test set consisted of randomly selected questions tested against my own user profile. This run yielded hardly any usable results: for only 29 of the tested questions could an answer candidate be related to the profile at all, and a potential improvement of the results was observed for only one of these 29 questions. This led to the conclusion that the use of profile data is not suitable for use cases in which the questions show no correlation with the profile in use.
Since the basic assumption of my work was that users primarily ask questions about the areas of interest that can be derived from their profile, the further test runs were to examine exactly this case. For this purpose, 200 test questions from the domain of sports were selected and tested against a profile containing keywords on various kinds of sports. The tests with the sports questions were considerably more meaningful. Here, too, the results indicated that the approach has little potential for improving the ranking. A closer look at some selected examples showed, however, that the integration of profile data can indeed improve the results for certain use cases, such as open questions for which there is more than one correct answer. In addition, it was found that many of the poor results are due to inconsistencies in the DBpedia ontology and to fundamental problems in dealing with knowledge bases in natural language.
The conclusion of my work is that the approach to integrating profile information presented here is not suitable for the current use case of LogAnswer, since mostly factual knowledge from very different domains is queried and open questions account for only a small share of the questions.
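As an impression of the profile extraction step, here is a minimal Python sketch using the rdflib library; the PIMO namespace IRI, the file name and the use of rdfs:label are plausible assumptions rather than details taken from the work.

```python
from rdflib import Graph

g = Graph()
g.parse("pimo_export.rdf")  # hypothetical export of the user's PIMO

# Select the labels of all instances of pimo:Topic as profile keywords.
query = """
    PREFIX pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?topic a pimo:Topic ;
               rdfs:label ?label .
    }
"""
profile_keywords = {str(row.label) for row in g.query(query)}
print(profile_keywords)
```

The resulting keyword set can then be matched against the DBpedia categories of the Wikipedia article an answer candidate comes from.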
A trending topic in Semantic Web research deals with the processing of queries over Linked Open Data (LOD). As has been shown in the literature, the loose nature of the "web of data" and of the data sources within it can be accounted for by employing federated query processing strategies. This approach, however, is all the more dependent on up-to-date statistical summaries (data statistics) of the sources in use and on accurate and precise estimation of cardinalities and selectivities. In general, federated data sources are to be seen as black boxes with respect to data statistics, as no exchange of such information can be expected. Because of this, individual data statistics can become obsolete if the corresponding source is subjected to data changes accumulating over time. In this thesis, an adaptive system is proposed that complements a given RDF-based query federator. By observing and analyzing the cardinality estimation error of incoming queries, it tries to infer the obsolescence of individual data statistics and triggers updates of statistics found to be obsolete. An evaluation of the system shows that the approach is plausible. In practice, however, no satisfying results could be obtained that would prove true practicality. Still, parts of the proposed system may be reused for related tasks that could be more promising.
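One conceivable way to operationalize the observation of cardinality estimation errors is sketched below in Python; the q-error measure is a common choice in the cardinality estimation literature, but the window size, threshold and median rule are assumptions of this sketch, not the system's actual design.

```python
from collections import defaultdict, deque

def q_error(estimated: float, actual: float) -> float:
    # Symmetric ratio error; 1.0 means a perfect estimate.
    eps = 1e-9  # guard against zero cardinalities
    return max((estimated + eps) / (actual + eps),
               (actual + eps) / (estimated + eps))

class StalenessMonitor:
    # Flags sources whose recent cardinality estimates keep failing,
    # which hints at obsolete data statistics.
    def __init__(self, window: int = 50, threshold: float = 10.0):
        self.threshold = threshold
        self.errors = defaultdict(lambda: deque(maxlen=window))

    def record(self, source: str, estimated: float, actual: float) -> None:
        self.errors[source].append(q_error(estimated, actual))

    def stale_sources(self) -> list[str]:
        # A source is suspect if the median of its recent q-errors
        # exceeds the threshold; its statistics should be refreshed.
        suspects = []
        for source, errs in self.errors.items():
            ordered = sorted(errs)
            if ordered and ordered[len(ordered) // 2] > self.threshold:
                suspects.append(source)
        return suspects
```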
The goal of this diploma thesis is to develop, within six months, a game-based application for Android smartphones that conveys historical knowledge through closely spaced stations. The individual game stations are to be defined and recognized by means of GPS positions. The castle ruin Reichenstein in the Verbandsgemeinde Puderbach in the Westerwald was chosen as the terrain. The thesis investigates how many GPS stations can be accommodated in a confined space and how far apart they have to be in order to achieve good results. For this purpose, both the floor plan of the castle and the nature of the terrain have to be taken into account.
At the individual stations, historical information about the castle ruin Reichenstein is to be conveyed in a playful way. Besides research, concept development and implementation, this thesis also comprises an evaluation that allows an assessment of the application.
Robotics research today is primarily about enabling autonomous, mobile robots to seamlessly interact with arbitrary, previously unknown environments. One of the most basic problems to be solved in this context is the question of where the robot is and what the world around it and in previously visited places looks like: the so-called simultaneous localization and mapping (SLAM) problem. We present a GraphSLAM system, a graph-based approach to this problem. This system consists of a frontend and a backend: the frontend's task is to incrementally construct a graph from the sensor data that models the spatial relationships between measurements. These measurements may contradict each other, and therefore the graph is in general inconsistent. The backend is responsible for optimizing this graph, i.e. finding a configuration of the nodes that is least contradictory. The nodes represent poses, which do not form a regular vector space due to the rotations they contain. We respect this fact by treating them as what they really are mathematically: manifolds. This leads to a very efficient and elegant optimization algorithm.
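The backend's optimization problem can be summarized by the standard GraphSLAM least-squares formulation (a textbook rendering, not quoted from the thesis): given a pose configuration $\mathbf{x}$, edge errors $\mathbf{e}_{ij}$ between predicted and measured relative poses, and information matrices $\Omega_{ij}$, the backend seeks

\[ \mathbf{x}^{*} = \operatorname*{arg\,min}_{\mathbf{x}} \sum_{(i,j)} \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)^{\top} \, \Omega_{ij} \, \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j). \]

On manifolds, the iterative solver applies its increments not by plain vector addition but via a retraction (often written $\boxplus$), which is what makes the treatment of the rotations both correct and efficient.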
Augmented reality means extending a real environment with, mostly graphical, virtual content. Often, however, the virtual content of the scene is merely an overlay and does not interact with the real parts of the scene, which creates an authenticity problem for augmented reality applications. This thesis considers augmented reality in a special environment that enables a more authentic presentation. The goal of this thesis was to create a system that extends drawings with virtual content using augmented reality techniques. By building an internal representation, the application should be able to let the virtual scene elements interact with the drawing. To this end, various methods from the fields of pose tracking and sketch recognition were discussed and selected for implementation in a prototype system. The target hardware is an Android smartphone. The context of the drawings is a dungeon map as found in role-playing games. The virtual content takes the form of inhabitants of the dungeon, which are managed by an agent simulation. The agent simulation is the subject of a separate diploma thesis [18]. ARToolkitPlus, an optical marker-based tracking system, was used for pose tracking. The sketch recognition is responsible for detecting and interpreting the contents of the drawing. For this, a custom approach was implemented that combines techniques from various sketch recognition systems. The evaluation concentrates on the technical aspects of the system that are important for an authentic extension of the drawing with virtual content.
In the period between 1985 and 1994, ethnic conflicts accounted for more than 50% of the total number of conflicts. Conflicts often arise between different ethnic groups or between one ethnic group and the state. The reasons for this are varied, ranging from social deprivation to threats to collective identity. An essential key role is played by the political elite, which takes advantage of political and economic crises in order to mobilize certain ethnic groups.
This thesis investigates such ethnic conflicts in the case of former Yugoslavia, focusing in particular on the role of the political elite. For this purpose, an agent-based simulation approach is followed. The agent-based model, which builds on ArcGIS, EmIL and Repast, allows a thorough investigation of such conflicts. Its main feature lies in the agents' ability to develop normative behavior, so that complex social behaviors can be simulated.
This thesis deals with the state of Customer Relationship Management (CRM) in companies and the opportunities it offers. The motivation for this thesis results from the desire to assess the value of CRM. Furthermore, we want to research how far a CRM system is able to support and improve company processes. In addition, we want to get an overview of possible conditions linked to such a system. For the use case, the motivation follows from the opportunity to optimize inefficient processes. As a result, this thesis tries to present perspectives which can improve company processes and, moreover, contribute to greater value creation.
The thesis deals in detail with the customer-related processes of the company Alere and their quality with regard to the prospective implementation of a CRM system. As there are only a few standard requirements for the processes, it is not possible to make a general statement about process quality. These circumstances call for a tool that is able to fulfil specified requirements; this gap shall be closed by a CRM system. To close the gap efficiently, we first have to examine the actual customer-related processes with the aim of specifying the requirements for the new system.
The thesis shall deliver an insight into the topic with the goal of showing the advantages of a CRM system. Additionally, weak points are identified on the basis of the use case and the motivation. With these data we optimize the customer-related processes based on the functions of the CRM system. Several departments of the company are in contact with the customers; because of this, the focus lies on documentation, central data storage and loss-free communication between customers and staff as well as among the staff.
The thesis shall deliver several results: first, a catalogue of requirements for the customer-related processes based on the fundamentals; furthermore, models of the customer-related processes; in addition, a catalogue of the weak points of the customer-related processes, followed by a specification of requirements for the CRM system needed to optimize these processes.
Finally, the thesis shall produce models of the optimized customer-related processes and close with a discussion and conclusion.
Only little information is available about the diffusion of cloud computing in German higher education institutions. A better understanding of the state of the art in this field would support the modernization of higher education institutions in Germany and allow the development of more adequate cloud products and more appropriate business models for this niche. For this purpose, a literature review on cloud computing and IT diffusion is conducted, and an empirical investigation with an online questionnaire addressed to higher education institutions in Germany is performed. It illustrates the state of the art of cloud computing in German higher education institutions as well as the threats and opportunities that employees of the institutions' data centers associate with the usage of the cloud.
In addition, different experts from universities and businesses will be interviewed to complement the knowledge and information collected through the online questionnaire and during the research phase. The expected results will serve to create a recommendation for higher education institutions in Germany on whether they should migrate to the cloud or not, and to introduce a list of guiding questions on critical issues to consider before using cloud computing technologies.