This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which captures important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in various fields such as autonomous driving, robotics, and environmental monitoring.
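A minimal sketch of such a pipeline, assuming the open-source Open3D library for FPFH feature extraction and Fast Global Registration; the file names, voxel size and search radii are illustrative placeholders, not the parameters used in the paper.

```python
import open3d as o3d  # assumed: Open3D >= 0.15

def preprocess(pcd, voxel_size):
    # Downsample and estimate normals, as required for FPFH features.
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))
    return down, fpfh

def calibrate(source_file, target_file, voxel_size=0.05):
    # Load scans from two solid-state LIDARs (placeholder file names).
    source = o3d.io.read_point_cloud(source_file)
    target = o3d.io.read_point_cloud(target_file)
    src_down, src_fpfh = preprocess(source, voxel_size)
    tgt_down, tgt_fpfh = preprocess(target, voxel_size)
    # Fast Global Registration estimates the extrinsic transform
    # from feature correspondences.
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh,
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=1.5 * voxel_size))
    return result.transformation  # 4x4 extrinsic calibration matrix

if __name__ == "__main__":
    T = calibrate("lidar_a.pcd", "lidar_b.pcd")
    print(T)
```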
Social networks are ubiquitous structures that we generate and enrich every day while connecting with people through social media platforms, emails, and any other type of interaction. While these structures are intangible to us, they carry important information. For instance, the political leaning of our friends can be a proxy to identify our own political preferences. Similarly, the credit score of our friends can be decisive in the approval or rejection of our own loans. This explanatory power is being leveraged in public policy, business decision-making and scientific research because it helps machine learning techniques to make accurate predictions. However, these generalizations often benefit the majority of people who shape the general structure of the network, and put under-represented groups at a disadvantage by limiting their resources and opportunities. Therefore, it is crucial to first understand how social networks form in order to then verify to what extent their mechanisms of edge formation contribute to reinforcing social inequalities in machine learning algorithms.
To this end, in the first part of this thesis, I propose HopRank and Janus, two methods to characterize the mechanisms of edge formation in real-world undirected social networks. HopRank is a model of information foraging on networks. Its key component is a biased random walker based on transition probabilities between k-hop neighborhoods. Janus is a Bayesian framework that makes it possible to identify and rank plausible hypotheses of edge formation in cases where nodes possess additional information. In the second part of this thesis, I investigate the implications of these mechanisms, which explain edge formation in social networks, on machine learning. Specifically, I study the influence of homophily, preferential attachment, edge density, fraction of minorities, and the directionality of links on both the performance and the bias of collective classification, and on the visibility of minorities in top-k ranks. My findings demonstrate a strong correlation between network structure and machine learning outcomes. This suggests that systematic discrimination against certain people can be: (i) anticipated by the type of network, and (ii) mitigated by connecting strategically in the network.
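The following toy sketch illustrates the core idea of a random walker biased by k-hop neighborhoods; it is a simplified illustration rather than the exact HopRank formulation, and the example graph and bias values are invented.

```python
import random
import networkx as nx

def khop_biased_walk(G, start, betas, steps=1000, seed=0):
    """Random walk where the next node is drawn from the k-hop neighborhood
    of the current node with probability proportional to betas[k-1]."""
    rng = random.Random(seed)
    visits = {n: 0 for n in G}
    current = start
    for _ in range(steps):
        # Shortest-path distances up to the largest considered hop.
        dist = nx.single_source_shortest_path_length(G, current, cutoff=len(betas))
        # Weight every reachable node by the bias of its hop distance.
        candidates = [(n, betas[d - 1]) for n, d in dist.items() if d >= 1]
        if not candidates:
            break
        nodes, weights = zip(*candidates)
        current = rng.choices(nodes, weights=weights, k=1)[0]
        visits[current] += 1
    return visits

if __name__ == "__main__":
    G = nx.karate_club_graph()
    # Hypothetical bias: strong preference for 1-hop, weaker for 2- and 3-hop.
    print(khop_biased_walk(G, start=0, betas=[0.7, 0.2, 0.1]))
```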
In this thesis the possibilities for real-time visualization of OpenVDB files are investigated. The basics of OpenVDB and its capabilities were studied, as well as NanoVDB, its GPU-oriented counterpart. A system was developed using PNanoVDB, the graphics API port of NanoVDB. Techniques were explored to improve and accelerate a single-ray approach to ray tracing. To prove real-time capability, two single-scattering approaches were also implemented; one of these was selected, further investigated and optimized to achieve interactive real-time rendering.
It is important to give artists immediate feedback on their adjustments, as well as the possibility to change all parameters, to ensure a user-friendly creation process.
In addition to the rendered images, corresponding benchmarks were collected to compare the different improvement approaches and to prove their relevance. Attention was paid to rendering times and memory consumption on the GPU to ensure optimal use. When rendering OpenVDB files, a special focus was put on the integrability and extensibility of the program to allow easy integration into an existing real-time renderer like U-Render.
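As a simplified illustration of the single-scattering idea evaluated here, the following CPU-side sketch marches a ray through a dense voxel grid and accumulates in-scattered light attenuated by Beer-Lambert transmittance; the actual system works on sparse (P)NanoVDB grids on the GPU, and all constants below are made up.

```python
import numpy as np

def sample_density(grid, p):
    # Nearest-neighbour lookup in a dense grid spanning [0,1]^3
    # (placeholder for a sparse NanoVDB accessor).
    idx = np.clip((p * np.array(grid.shape)).astype(int), 0, np.array(grid.shape) - 1)
    return grid[tuple(idx)]

def single_scatter(grid, origin, direction, light_dir,
                   step=0.01, sigma_t=4.0, light_radiance=1.0):
    """March along a ray, accumulating single-scattered light."""
    transmittance, radiance = 1.0, 0.0
    t = 0.0
    while t < np.sqrt(3) and transmittance > 1e-3:
        p = origin + t * direction
        if np.all((p >= 0) & (p <= 1)):
            density = sample_density(grid, p)
            if density > 0:
                # Shadow ray: coarse march towards the light source.
                shadow_t, shadow_tr = step, 1.0
                while shadow_t < 1.0 and shadow_tr > 1e-3:
                    q = p + shadow_t * light_dir
                    shadow_tr *= np.exp(-sigma_t * sample_density(grid, q) * step)
                    shadow_t += step
                # Beer-Lambert extinction along the primary ray plus in-scattering.
                alpha = sigma_t * density * step
                radiance += transmittance * alpha * shadow_tr * light_radiance
                transmittance *= np.exp(-alpha)
        t += step
    return radiance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fog = rng.random((32, 32, 32)) * 0.5          # toy volume
    print(single_scatter(fog, origin=np.array([0.5, 0.5, 0.0]),
                         direction=np.array([0.0, 0.0, 1.0]),
                         light_dir=np.array([0.0, 1.0, 0.0])))
```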
Semantic Web technologies have been recognized as key for the integration of distributed and heterogeneous data sources on the Web, as they provide means to define typed links between resources in a dynamic manner, following the principles of dataspaces. The widespread adoption of these technologies in recent years has led to a large volume and variety of data sets published as machine-readable RDF data which, once linked, constitute the so-called Web of Data. Given the large scale of the data, these links are typically generated by computational methods that, given a set of RDF data sets, analyze their content and identify the entities and schema elements that should be connected via the links. As with any other kind of data, in order to be truly useful and ready to be consumed, links need to comply with the criteria of high-quality data (e.g., syntactically and semantically accurate, consistent, up-to-date). Despite the progress in the field of machine learning, human intelligence is still essential in the quest for high-quality links: humans can train algorithms by labeling reference examples, validate the output of algorithms to verify their performance on a per-data-set basis, as well as augment the resulting set of links. Humans, however, especially expert humans, have limited availability. Hence, extending data quality management processes from data owners/publishers to a broader audience can significantly improve the data quality management life cycle.
Recent advances in human computation and peer-production technologies have opened new avenues for human-machine data management techniques, making it possible to involve non-experts in certain tasks and providing methods for cooperative approaches. The research work presented in this thesis takes advantage of such technologies and investigates human-machine methods that aim at facilitating link quality management in the Semantic Web. Firstly, and focusing on the dimension of link accuracy, a method for crowdsourcing ontology alignment is presented. This method, also applicable to entities, is implemented as a complement to automatic ontology alignment algorithms. Secondly, novel measures for the dimension of information gain facilitated by the links are introduced. These entropy-centric measures provide data managers with information about the extent to which the entities in the linked data set gain information in terms of entity description, connectivity and schema heterogeneity. Thirdly, taking Wikidata, the most successful case of a linked data set curated, linked and maintained by a community of humans and bots, as a case study, we apply descriptive and predictive data mining techniques to study participation inequality and user attrition. Our findings and method can help community managers decide when and how to intervene with user retention plans. Lastly, an ontology to model the history of crowd contributions across marketplaces is presented. While the field of human-machine data management poses complex social and technical challenges, the work in this thesis aims to contribute to the development of this still emerging field.
Usability experts conduct user studies to identify existing usability problems. An established method is to record gaze behavior with an eye tracker. Evaluating the results of such studies requires considerable effort. Automated recognition of good and bad usability in recorded user data can support usability experts in eye-tracking evaluation and reduce this effort. The objective of this bachelor thesis is to identify suitable eye-tracking metrics that correlate with the quality of usability. For this purpose, the central research question is answered: Which eye-tracking metrics correlate with the quality of a web form's operation? To answer the research question, a quantitative A/B user study with eye tracking was conducted, recording the gaze behavior of 30 subjects while filling out a web form. The web form was designed so that each page was available in a good and a bad variant according to known usability guidelines. The results confirm a significant correlation between the eye-tracking metric "number of visits to an AOI" and the quality of the operation of a web form. The eye-tracking metrics "number of fixations within an AOI" and "duration of fixations within an AOI" also correlate with the quality of usability. No correlation could be confirmed for the "time of the first fixation within an AOI".
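A minimal sketch of how such a correlation could be computed, assuming per-participant metric values and a binary coding of the good/bad form variant; the data below are invented and scipy is assumed as tooling for the point-biserial correlation.

```python
import numpy as np
from scipy import stats

# Invented example data: one row per participant and form page.
# variant: 0 = bad usability variant, 1 = good usability variant
variant = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
# Eye-tracking metric: number of visits to the relevant AOI.
aoi_visits = np.array([9, 7, 8, 10, 6, 3, 4, 2, 5, 3])

# Point-biserial correlation between the binary variant and the metric.
r, p = stats.pointbiserialr(variant, aoi_visits)
print(f"r = {r:.2f}, p = {p:.3f}")
```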
Despite widespread plans of big companies like Amazon and Google to develop unmanned delivery drones, scholarly research in this field is scarce, especially in the information systems field. From technical and legal perspectives, drone delivery in last-mile scenarios is in a quite mature state. However, estimates of user acceptance vary between high skepticism and exaggerated optimism. This research follows a mixed-method approach consisting of both qualitative and quantitative research to identify and test determinants of consumer delivery drone service adoption. The qualitative part rests on ten interviews with average consumers who use delivery services on a regular basis. Insights gained from the qualitative part were used to develop an online survey and to assess the influence of associated risks on adoption intentions. The quantitative results show that especially financial and physical risks impede drone delivery service adoption. Delivery companies that are currently thinking about providing a delivery drone service may find these results useful when evaluating usage behaviors in the future market for delivery drones.
On-screen interactive presentations have gained immense popularity in the domain of attentive interfaces recently. These attentive screens adapt their behavior according to the user's visual attention. This thesis aims to introduce an application that enables these attentive interfaces to change their behavior not just according to gaze data but also according to facial features and expressions. The modern era requires new ways of communication and publication for advertisement. These ads need to be more specific to people's interests, age, and gender. When advertising, it is important to get a reaction from the user, but not every user is interested in providing feedback. In such a context, more advanced techniques are required that collect the user's feedback effortlessly. The main problem this thesis intends to resolve is to apply advanced techniques of gaze and face recognition to collect data about users' reactions towards different ads being played on interactive screens. We aim to create an application that enables attentive screens to detect a person's facial features, expressions, and eye gaze. With eye-gaze data we can determine interests, and with facial features, age and gender can be estimated. All this information will help in optimizing the advertisements.
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2019 in Sydney, Australia. We finished first at RoboCup@Home 2019 in the Open Platform League and have now won the competition in our league three times in a row (four times in total), which makes our team the most successful in RoboCup@Home. We demonstrated approaches for learning from demonstration, touch-enforcing manipulation and autonomous semantic exploration in the finals. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface, gesture recognition and imitation learning. The packages are available (and new packages will be released) on http://homer.uni-koblenz.de.
The Fachbereich 4: Informatik conducts research and teaching across a diversified spectrum of the digitalization of business, society, and public administration. The topics encompass basic research as well as application-oriented and interdisciplinary research and teaching. They range from core computer science to graphical and economic aspects of information processing, as the following figure illustrates (the individual institutes and their associated working groups introduce themselves in the following chapters):
With the rapid development of computer systems and algorithms, data can be collected and processed on an ever larger scale. Motivated by this, various initiatives have taken on the task of raising awareness of the resulting dangers for personal rights and freedom of expression, with the goal of fostering a more conscious handling of personal data. Protecting these fundamental rights requires enlightened and informed users, a task the initiatives cannot accomplish on their own. Public educational institutions, and schools in particular, have an obligation to contribute to solving this problem. To fully live up to their educational mandate, structural changes such as revised curricula are needed. As long as these have not been made, however, work must be done in and with the existing structures. School computer science classes offer a platform for this.
This thesis presents a teaching unit on data protection and data security. A context-oriented approach modeled on Informatik im Kontext was chosen. Beyond the primary topics mentioned above, the unit on smartphone applications covers further dimensions that arise when using smartphones. The direct connection to the students' everyday lives is intended to create the highest possible degree of personal relevance. In this way, students are encouraged to reconsider their previous usage behavior and, ideally, to serve as role models for their peers. An evaluation of the unit in actual classroom teaching is still pending; it could not be carried out within the scope of this thesis due to the limited time available.
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at the RoboCup@Home 2018 in Montreal, Canada. Further, this paper serves as qualification material for the RoboCup@Home participation in 2018. A special focus is put on novel system components and the open source contributions of our team. This year the team from Koblenz won the biggest annual scientific robot competition in Montreal in the RoboCup@Home Open Platform track for the third time and also won the RoboCup@Home German Open for the second time. As a research highlight, a novel symbolic imitation learning approach was demonstrated during the finals. The TIAGo robotic research platform was used for the first time by the team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface via Android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg. Further information can be found on our project page http://homer.uni-koblenz.de.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz Landau, Germany, for the participation at the RoboCup@Home 2017 in Nagoya, Japan. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface via Android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
This paper describes the robot Lisa used by team homer@UniKoblenz of the University of Koblenz Landau, Germany, for the participation at the RoboCup@Home 2016 in Leipzig, Germany. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface via Android and a GUI. The packages are available (and new packages will be released) on http://wiki.ros.org/agas-ros-pkg.
The Fachbereich 4 (Computer Science) consists of twenty-five working groups led by professors, which cooperate in research and teaching within six institutes.
In each annual report, the working groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific achievements attained. The following chapters list individual parameters that describe the faculty in quantitative terms with respect to third-party funding, teaching coverage, graduates, and publications.
The Fachbereich 4 (Computer Science) consists of twenty-five working groups led by professors, which cooperate in research and teaching within six institutes.
In each annual report, the working groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific achievements attained. The following chapters list individual parameters that describe the faculty in quantitative terms with respect to third-party funding, teaching coverage, graduates, and publications.
In order to enhance a company's appeal for potential employees and improve the satisfaction of already salaried workers, it is necessary to offer a variety of work-life balance measures. But as their implementation causes time and financial costs, a prioritization of measures is needed. To derive a recommendation for companies, this study is guided by the questions of whether there are work-life balance measures that have more impact on employee satisfaction than others, how big the relative impact of work-life balance measures on job satisfaction is in comparison to other work and private life variables, whether there is a relation between the effectiveness of measures and their use, and whether there is a difference between the measures that are most important from the employees' perspective and the companies' offers.
These questions are formulated in eight research hypotheses, which are examined in a quantitative research design with online survey data from 289 employees of fifteen different German companies. The formation of a hierarchy of the effectiveness of measures with respect to job satisfaction, as well as the investigation of their relative impact in comparison to other variables, is performed using a multiple regression analysis, whilst the differences between employees' expectations and the availability of offers are examined with t-tests.
Support in childcare, support in voluntary activities and teambuilding events have a significantly higher impact on job satisfaction than other work-life balance measures, and their potential use is higher than their actual use, which leads to the conclusion that there is still potential for companies to improve their employees' satisfaction by implementing these measures. In addition, flexible work hours, flexible work locations and free time and overtime accounts are the most important measures from the employees' point of view and are already widely offered by the surveyed companies. In general, the overall use of the available measures and the quantity of offered measures are more important with regard to job satisfaction than the specific kind of measure. In addition, work-life balance measures are more important for the job satisfaction of younger people.
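A sketch of the two statistical steps described above, using invented survey data; the column names and the set of measures are placeholders, and statsmodels/scipy are assumed as tooling.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
n = 289  # number of respondents in the study

# Invented predictors: use (0/1) of individual work-life balance measures.
df = pd.DataFrame({
    "childcare_support": rng.integers(0, 2, n),
    "flexible_hours": rng.integers(0, 2, n),
    "teambuilding": rng.integers(0, 2, n),
})
# Invented outcome: job satisfaction on a 1-7 scale.
df["satisfaction"] = 4 + 0.8 * df["childcare_support"] + rng.normal(0, 1, n)

# Multiple regression: relative impact of measures on job satisfaction.
X = sm.add_constant(df[["childcare_support", "flexible_hours", "teambuilding"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.summary())

# t-test: importance rated by employees vs. availability offered by companies
# (two invented rating samples on the same scale).
importance = rng.normal(6.0, 0.8, n)
availability = rng.normal(4.5, 1.2, n)
print(stats.ttest_ind(importance, availability, equal_var=False))
```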
Digital happiness
(2016)
Emotions in the digital world are gaining more and more importance due to the accelerating digitalization of recent years. In everyday life as well as in professional and educational environments, the use of media is almost inescapable. This work takes a closer look at the still relatively unknown topic of "Digital Happiness". Emotions in the digital world are increasingly being studied in the literature, but concerning the topic of "Digital Happiness" there are not many insights yet.
In this research, the differences between communicating emotions through digital media and face-to-face communication, as well as their impacts, are investigated. The perception and expression of emotions in the real and the digital world are considered. Furthermore, a first definition of "Digital Happiness" and a model with the associated determinants that influence "Digital Happiness" are developed.
To achieve these goals, a literature analysis following the method of Webster & Watson (2002) is performed. Fifteen articles are selected, analyzed in more detail to illustrate the current state of research, and organized into a concept matrix. In addition, special phenomena, such as Quantified Self, are examined closely in order to determine further factors. Based on the literature analysis, a qualitative study following the method of Mayring (2014) is performed. An interview guideline is developed that serves as the basis for the investigation of the determinants established in the literature analysis. These are confirmed or refuted by 30 interviews with users of information and communication technologies. With the help of the interviews, a large part of the determinants from the literature can be confirmed. Moreover, some new determinants are identified. These results are included in the final model. This model provides a basis for further research in the field of "Digital Happiness". Finally, different implications for research and practice are identified, which reveal the need for further research.
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment is introduced into the model. This is especially relevant for the protection of personal data and privacy. The paper applies the method of IT risk analysis in order to select those dimensions of the Six/Gimmler media competence model that are appropriate to describe privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will lead to a measurement of the existing privacy awareness in secondary schools, which, in turn, can serve as the basis for a didactically well-reasoned design of Informatics modules in secondary schools. This paper provides the privacy-extended competence model, while empirical measurement and module design are planned for further research activities.
This report concludes a research practical conducted by students of the master's programs Information Management and Business Informatics under the supervision of research associate Daniela Simić-Draws and Prof. Dr. Rüdiger Grimm. An essential template for this work was a procedure model for the security analysis of business processes that D. Simić-Draws is developing as part of her dissertation, to whose ongoing improvement this student research practical was able to contribute valuable insights. The security-critical processes "municipal election" and "cash withdrawal at an ATM" were chosen as application examples because Prof. Grimm's working group had gathered experience with both applications in previous scientific work. For the "municipal election" example in particular, the Koblenz public order office, which is responsible for municipal elections, kindly offered its cooperation with the active support of its head Dirk Urmersbach, so that this business process could be examined realistically.
In this article we analyze the privacy aspects of a mobile sensor application used for recording urban travel patterns as part of a travel-survey service. This service has been developed and field-tested within the Live+Gov EU Project. The privacy analysis follows a structured approach established in. Eight privacy recommendations are derived, and have already led to corresponding enhancements of the travel-survey service.
This thesis works out the influence of the election scenario on the secrecy and the public nature of an election. An election scenario is determined by its election form and the election technology used. The election form distinguishes between voting at a polling station and remote voting; the election technology distinguishes between paper-based and electronic voting. Three prominent election scenarios, paper-based voting at a polling station, postal voting (paper-based remote voting), and Internet voting (electronic remote voting), are examined with respect to their influence on secrecy, privacy, and the public nature of the election.
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of their appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers in relation to relevant e-commerce variables, and test the data in a nomological network to assess differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study results, point to limitations, and provide avenues for further research.
The Fachbereich 4 (Computer Science) consists of twenty-five working groups led by professors, which cooperate in research and teaching within six institutes.
In each annual report, the working groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific achievements attained. The following chapters list individual parameters that describe the faculty in quantitative terms with respect to third-party funding, teaching coverage, graduates, and publications.
The aim of this paper is to identify and understand the risks and issues companies are experiencing from the business use of social media and to develop a framework for describing and categorising those social media risks. The goal is to contribute to the evolving theorisation of social media risk and to provide a foundation for the further development of social media risk management strategies and processes. The study findings identify thirty risk types organised into five categories (technical, human, content, compliance and reputational). A risk-chain is used to illustrate the complex interrelated, multi-stakeholder nature of these risks and directions for future work are identified.
Over the past years, the typical set of critical success factors for companies has changed, with the factor knowledge gaining growing importance. Today, knowledge can be regarded as a fourth production factor, superseding labor, capital, and land as the most important factors of a company (cf. Keller & Yeaple 2013, p. 2; Kogut & Zander 1993, p. 631). The reason is that active measures to support knowledge transfer in companies lead to higher profits and market shares as well as better survivability compared with competitors without such measures (cf. Argote 1999, p. 28; Szulanski 1996, p. 27; Osterloh & Frey 2000, p. 538). The main advantage of knowledge-based developments lies in their sustainability, since their intangible structure (cf. Inkpen & Dinur 1998, p. 456; Spender 1996a, p. 65 f.; Spender 1996b, p. 49; Nelson & Winter 1982, p. 76 ff.) makes imitation by competitors more difficult (cf. Wernerfelt 1984, p. 173; Barney 1991, p. 102).
The way information is presented to users in online community platforms influences the way users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms and other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this model, we demonstrate its ability to evoke macro-level phenomena comparable to the ones observed in real-world data.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, making decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there is no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies and minimizing the number of vocabularies to staying within one domain vocabulary. A very interesting insight is that popularity, in the sense of how frequently a vocabulary is used across data sources, is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies by which data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
Remote rendering services offer the possibility to stream high-quality images to lower-powered devices. Due to the transmission of data, the interactivity of applications is affected by a delay. A method to reduce the delay of camera manipulation on the client is 3D warping. This method causes artifacts. In this thesis, different remote rendering setups are presented. The artifacts and improvements of the warping method are described. Methods to reduce the artifacts are implemented and analyzed.
Driven by the industry-wide need to escape head-on competition, Kim and Mauborgne developed the Blue Ocean Strategy in order to open up new markets, which they describe as unique. Since other strategies for opening up new markets exist, however, the aim of this thesis is to find out by which characterizing features the Blue Ocean Strategy can be regarded as unique.
The strategy of Kim and Mauborgne is therefore compared with Schumpeter's creative destruction, Ansoff's diversification strategy, Porter's niche strategy, and Drucker's innovation strategies. The comparison is based on the characterizing features by which Kim and Mauborgne judge the Blue Ocean Strategy to be unique. On the basis of these criteria, a meta-model is developed with whose help the examination is carried out.
The comparison shows that the concepts of Schumpeter, Ansoff, Porter, and Drucker resemble the Blue Ocean Strategy in some criteria. None of the strategies, however, behaves like the concept of Kim and Mauborgne in all respects. While the Blue Ocean Strategy strives for differentiation and cost reduction at the same time, most concepts pursue either differentiation or cost reduction. Entering the new market is also interpreted differently. While the Blue Ocean Strategy targets a market that is unexplored and thus free of competition, the other strategies often interpret existing markets, on which the company has not yet operated, as new; this does not rule out the prior existence of those markets.
Based on the insights gained from the comparison, the Blue Ocean Strategy can therefore be described as unique.
Data Mining im Fußball
(2014)
The term Data Mining describes applications that can be used to extract useful information from large datasets. Since the 2011/2012 season of the German soccer league, extensive data from the first and second Bundesliga have been recorded and stored. Up to 2000 events are recorded for each game.
The question arises whether it is possible to use Data Mining to extract patterns from this extensive data which could be useful to soccer clubs.
In this thesis, Data Mining is applied to the data of the first Bundesliga to measure the value of individual soccer players for their club. For this purpose, the state of the art and the available data are described. Furthermore, classification, regression analysis and clustering are applied to the available data. This thesis focuses on qualitative characteristics of soccer players, like the nomination for the national squad or the marks players get for their playing performance. Additionally, this thesis considers the playing style of the available players and examines whether it is possible to make predictions for upcoming seasons. The value of individual players is determined by using regression analysis and a combination of cluster analysis and regression analysis.
Even though not all applications can achieve sufficient results, this thesis shows that Data Mining has the potential to be applied to soccer data. The value of a player can be measured with the help of the two approaches, allowing simple visualization of the importance of a player for his club.
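A compact sketch of the combination of clustering and regression mentioned above, with entirely invented per-player event statistics; scikit-learn is assumed as tooling.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_players = 200

# Invented per-season event counts per player: passes, duels won, shots, goals.
X = np.column_stack([
    rng.poisson(900, n_players),   # passes
    rng.poisson(300, n_players),   # duels won
    rng.poisson(40, n_players),    # shots
    rng.poisson(5, n_players),     # goals
])
# Invented target: average match mark (lower = better, German school scale).
y = 4.0 - 0.0002 * X[:, 0] - 0.05 * X[:, 3] + rng.normal(0, 0.3, n_players)

# Step 1: cluster players by playing style.
styles = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(X)

# Step 2: regression per style cluster to estimate a player's value/mark.
for c in range(4):
    mask = styles == c
    reg = LinearRegression().fit(X[mask], y[mask])
    print(f"cluster {c}: R^2 = {reg.score(X[mask], y[mask]):.2f}")
```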
Systems for simulating crowd behavior are used to simulate the evacuation of a crowd in case of an emergency. These systems are limited to the movement patterns of a crowd and generally do not consider psychological and/or physical conditions. Changing behaviors within the crowd (e.g. caused by a person falling down) are not considered.
For that reason, this thesis examines the psychological behavior and the physical impact of a crowd member on the crowd. To do so, this study develops a real-time simulation of a crowd of people, adapted from a system for video games. This system contains a behavior AI for agents. In order to show physical interaction between the agents and their environment as well as their movements, the physical representation of each agent is realized using rigid bodies from a physics engine. Agent movement additionally relies on a navigation mesh and an algorithm for collision avoidance.
Through the behavior AI, each agent receives a physical and psychological state, containing a psychological stress level as well as a physical condition. The developed simulation is able to show physical impacts such as crowding and crushing of agents, interaction of agents with their environment, as well as stress factors.
By evaluating several test runs of the simulation, this thesis examines whether the combination of physical and psychological impacts can be implemented successfully. If so, this thesis is able to give indications of agent behavior in dangerous and/or stressful situations, as well as an assessment of the complex physical representation.
The aim of this thesis is to rebuild the seesaw experiment, as set up in the AG Echtzeitsysteme headed by Professor Dr. Dieter Zöbel, in working form using a LEGO Mindstorms NXT Education kit, and to document the procedure. The resulting program code is to be prepared didactically and a building manual provided. This is to ensure that pupils can experience the seesaw experimental setup in the classroom as easily as possible, even without direct access to a university or similar institution.
This thesis presents path tracing for rendering images with global illumination. By evaluating the rendering equation with random experiments, the method is physically plausible. Sampling is decisive for the quality of the results. The focus of the thesis is the examination of different sampling strategies. To this end, the results of different probability density functions are compared and the methods are assessed. In addition, effects such as depth of field are visualized by means of sampling.
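To make the role of the probability density function concrete, the following self-contained sketch compares uniform and cosine-weighted hemisphere sampling for a simple diffuse-lighting integral; it illustrates importance sampling in general and is not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_hemisphere(n):
    # pdf = 1 / (2*pi)
    u1, u2 = rng.random(n), rng.random(n)
    z = u1
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2 * np.pi * u2
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    pdf = np.full(n, 1.0 / (2 * np.pi))
    return dirs, pdf

def cosine_hemisphere(n):
    # pdf = cos(theta) / pi  -> importance sampling for diffuse surfaces
    u1, u2 = rng.random(n), rng.random(n)
    r = np.sqrt(u1)
    phi = 2 * np.pi * u2
    z = np.sqrt(np.maximum(0.0, 1.0 - u1))
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    pdf = z / np.pi
    return dirs, pdf

def estimate(sampler, n=10000):
    # Monte Carlo estimate of the integral of cos(theta) over the hemisphere (= pi).
    dirs, pdf = sampler(n)
    cos_theta = dirs[:, 2]
    return np.mean(cos_theta / pdf)

print("uniform :", estimate(uniform_hemisphere))
print("cosine  :", estimate(cosine_hemisphere))
```

Both estimators converge to pi, but the cosine-weighted density matches the integrand and therefore has (near-)zero variance, which is exactly the kind of difference such a comparison of sampling strategies exposes.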
Within the design thinking process, various kinds of creativity-supporting techniques are employed. Due to increasing globalization, collaborations in which the project participants are located at distributed sites arise ever more frequently, so digitalizing the design process appears worthwhile. The goal of this study is therefore to create an assessment scheme that measures the suitability of digital creativity techniques with respect to entrepreneurial design thinking. Furthermore, it is examined to what extent the use of e-learning systems in combination with digital creativity techniques is suitable; this examination is made concrete using the e-learning software WebCT as an example. This leads to the following research question: Which digital creativity techniques are suitable for use in entrepreneurial design thinking when employing the e-learning platform WebCT? First, a literature analysis is carried out on entrepreneurial design thinking, classical and digital creativity techniques, and working in groups, which also covers content management, e-learning systems, and the WebCT platform. A qualitative study follows. On the basis of the existing literature, an assessment scheme is created that measures which of the digital creativity techniques considered is best suited for use in entrepreneurial design thinking. Building on this, the digitalized design thinking process is linked with the e-learning platform WebCT. Finally, it is discussed to what extent this combination can be considered useful.
This paper presents a method for the evolution of SHI ABoxes which is based on a compilation technique of the knowledge base. For this, the ABox is regarded as an interpretation of the TBox which is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, insertion of assertions as well as repair of inconsistent ABoxes is addressed. For the computation of the necessary actions for deletion, insertion and repair, the E-KRHyper theorem prover is used.
This thesis is an economic investigation of labor in virtual worlds, with the analysis of the labor market in massively multiplayer online role-playing games (MMORPGs) at its core. The starting point was the factor labor in reality on the one hand, while additional particularities of MMORPGs were included in the analysis on the other, resulting in an overall picture of the virtual labor market from which relevant indicators could be derived. Beyond the basic finding that a virtual labor market exists, similarities to the real labor market became apparent: virtual hourly wages could be calculated, company-like structures could be demonstrated in player groups, and, starting from human capital theory, a modified theory ("avatar capital") for virtual worlds could be derived. There were also differences, however; the complexity of production processes in the examined MMORPGs is usually far lower than in reality. A comparison of motivational factors in both working worlds likewise revealed commonalities as well as differences. In addition, it was shown that the currently debated topic of minimum wages also appears in the virtual labor markets of MMORPGs, implemented as a game mechanic to ensure motivation through continuing employment. Beyond these parallels, an analysis of goods and money transactions between virtuality and reality (real-money trading) demonstrated a connection between both worlds that affects both labor markets alike. Besides the theoretical investigation, a further goal was to let the author's own observations and approaches flow into the results. Especially in the concluding empirical study, it was thus possible to discover further factors that could not be derived sufficiently from theory alone; above all, additional insights on measuring productivity in virtual worlds could flow from practice into theory. Finally, however, it also became clear that research on labor markets in virtual worlds is still at an early stage and that numerous research objects exist in this area which will certainly lead to new insights in economics.
The Microsoft Kinect is currently popular in many application areas because of its cheap price and good precision. Controlling the cursor with it, however, is impractical due to jitter in the skeleton data. This approach tries to stabilize the cursor position with common techniques from image processing, using the Kinect color camera as input. A final position is calculated from the different positions delivered by the tracking techniques. For controlling the cursor, the right hand is tracked, and a simple click gesture is also developed. The evaluation shows whether this approach was successful.
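A small sketch of one common stabilization technique, exponential smoothing of the tracked hand position combined with a weighted blend of several tracker outputs; it illustrates the kind of filtering meant here under invented weights and is not the exact method of the thesis.

```python
import numpy as np

class CursorStabilizer:
    """Blend several tracker estimates and smooth them over time."""

    def __init__(self, weights, alpha=0.3):
        self.weights = np.asarray(weights, dtype=float)
        self.weights /= self.weights.sum()
        self.alpha = alpha          # smoothing factor (0 = frozen, 1 = raw)
        self.smoothed = None

    def update(self, positions):
        # positions: one (x, y) estimate per tracking technique.
        blended = np.average(np.asarray(positions, dtype=float),
                             axis=0, weights=self.weights)
        if self.smoothed is None:
            self.smoothed = blended
        else:
            # Exponential moving average suppresses frame-to-frame jitter.
            self.smoothed = self.alpha * blended + (1 - self.alpha) * self.smoothed
        return self.smoothed

if __name__ == "__main__":
    # Invented weights, e.g. for skeleton, color-based and optical-flow tracking.
    stab = CursorStabilizer(weights=[0.5, 0.3, 0.2])
    noisy_frames = [[(100, 100), (104, 98), (97, 103)],
                    [(101, 102), (105, 99), (98, 104)]]
    for frame in noisy_frames:
        print(stab.update(frame))
```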
ERP market analysis
(2013)
The current ERP market is dominated by the five largest vendors: SAP, Oracle, Microsoft, Infor, and Sage. Since the market and the offered solutions are diverse, a well-founded analysis of the systems is required. Based on selected literature and key figures of the various companies, this thesis examines the theoretical side of the solutions offered by the five large ERP vendors. In addition, the use of the systems in practice is analyzed by surveying six users, and the systems are compared with each other.
The goal of the thesis is to answer the research questions and to make clear to the reader which ERP system is best suited for which industry and company size.
Furthermore, the thesis sheds light on which trends can be expected for ERP systems in the future and which challenges these pose for companies.
Polsearchine: Implementation of a policy-based search engine for regulating information flows
(2013)
Many search engines regulate Internet communication in some way. It is often difficult for end users to notice such regulation, as well as to obtain background information about it. Additionally, the regulation can usually be circumvented easily. This bachelor thesis presents the prototypical metasearch engine "Polsearchine" to address these weaknesses. Its regulation is established through InFO, a model for regulating information flows developed by Kasten and Scherp; more precisely, its extension for regulating search engines, SEFCO, is used. For retrieving search results, Polsearchine relies on an external search engine API. The API can be interchanged easily to make the metasearch engine independent from one specific API.
Worldwide accessibility and extensive use make the Internet an efficient and popular instrument for information, communication, and sales. More and more people and organizations try to exploit these advantages for their own purposes with a website of their own. In recent years, the use of web analytics software has proven to be a helpful means of optimizing web presences. With this software, website operators are able to collect and measure information about the visitors of their website and their usage behavior. The intended result is optimization decisions based on data instead of assumptions, together with effective testing possibilities.
For the field of e-commerce, numerous scientific and practice-proven guidelines for web analytics projects exist to date. Informational websites, by contrast, are addressed only sporadically despite their importance. To counteract this deficit, Hausmann developed the Framework for Web Analytics in 2012, which offers the user a helpful reference model for web analytics projects. The goal of this thesis is to advance this approach: by means of a literature analysis and a case study, the framework is validated and supplemented, and further recommendations for action are identified. As a result, the most important findings of this research are summarized and recorded for future use.
Large amounts of qualitative data make the utilization of computer-assisted methods for their analysis inevitable. In this thesis, Text Mining as an interdisciplinary approach, as well as the methods established in the empirical social sciences for analyzing written utterances, are introduced. On this basis, a process of extracting concept networks from texts is outlined and the possibilities of utilizing natural language processing methods within it are highlighted. The core of this process is text processing, for whose execution software solutions supporting manual as well as automated work are necessary. The requirements to be met by these solutions, against the background of the initiating project GLODERS, which is devoted to investigating extortion racket systems as part of the global financial system, are presented, and their fulfilment by the two most preeminent candidates is reviewed. The gap between theory and practical application is closed by a prototypical application of the method to a data set of the research project, utilizing the two given software solutions.
In this thesis we discuss the increasingly important topic of route aggregation and its consequences for avoiding routing loops. As the basis for implementation and evaluation, the RMTI protocol developed at the University of Koblenz is used, an evolution of the Routing Information Protocol version 2 specified in RFC 2453. The virtual network environment Virtual Network User Mode Linux (VNUML) is used within this thesis, making it possible to operate and evaluate real network scenarios in a virtual environment. The RMTI has already proven its ability to detect topological loops and thereby prevent the formation of routing loops. In this thesis we describe how the RMTI works and then discuss under which circumstances route aggregation can be used without resulting in routing anomalies. In order to implement these changes, it is essential to have a deeper understanding of the structure of routing tables, so their construction is explained with reference to examples. There follows a description of which points in the RMTI have to be changed in order to avoid loops despite aggregation. Finally, we evaluate the effect that route aggregation has on the reorganization ability of the virtual network.
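To illustrate what route aggregation means in practice, the following sketch collapses contiguous prefixes of a routing table with Python's standard ipaddress module; the prefixes are invented, and the sketch says nothing about RMTI's loop-detection logic.

```python
import ipaddress

# Invented routing table entries learned via separate interfaces.
routes = [
    ipaddress.ip_network("10.0.0.0/26"),
    ipaddress.ip_network("10.0.0.64/26"),
    ipaddress.ip_network("10.0.0.128/26"),
    ipaddress.ip_network("10.0.0.192/26"),
    ipaddress.ip_network("192.168.1.0/24"),
]

# collapse_addresses merges adjacent/overlapping prefixes into supernets.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)   # [10.0.0.0/24, 192.168.1.0/24]
```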
This thesis describes the implementation of a path-planning algorithm for multi-axle vehicles using machine learning algorithms. For that purpose, a general overview of genetic algorithms is given and alternative machine learning algorithms are briefly explained. The software developed for this purpose is based on the EZSystem simulation software developed by the AG Echtzeitsysteme at the University of Koblenz-Landau and on a path correction algorithm developed by Christian Schwarz, which is also detailed in this paper. This includes a description of the vehicle used in the simulations. Genetic algorithms as a solution for path planning in complex scenarios are then evaluated based on the results of the developed simulation software and compared to alternative, non-machine-learning solutions, which are also briefly presented.
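A minimal genetic-algorithm sketch for a waypoint-based path, showing the selection/crossover/mutation loop evaluated against a simple length-plus-obstacle fitness; vehicle kinematics and the EZSystem simulation are out of scope, and all constants below are invented.

```python
import random
import math

START, GOAL = (0.0, 0.0), (10.0, 10.0)
OBSTACLE, RADIUS = (5.0, 5.0), 2.0        # invented circular obstacle
N_WAYPOINTS, POP, GENERATIONS = 5, 60, 200

def random_path():
    return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_WAYPOINTS)]

def fitness(path):
    pts = [START] + path + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    # Penalize waypoints inside the obstacle.
    penalty = sum(50.0 for p in pts if math.dist(p, OBSTACLE) < RADIUS)
    return length + penalty            # lower is better

def crossover(a, b):
    cut = random.randint(1, N_WAYPOINTS - 1)
    return a[:cut] + b[cut:]

def mutate(path, rate=0.2):
    return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
            if random.random() < rate else (x, y) for x, y in path]

population = [random_path() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)
    parents = population[:POP // 4]            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best fitness:", round(fitness(best), 2))
```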
Forwarding loops
(2013)
Today, smartphones can be found everywhere. This situation has created a hype around Augmented Reality and AR apps. The big question is: do these applications provide real added value? To make AR practical, it is important to combine the computational power of a computer with the advantages of AR. An easy and fast way of interaction is essential.
Poker assistance software is an ideal test area for an AR application with real added value. Estimating the winning probability and fast, automated tracking of the playing cards form the perfect field of investigation.
In this context, it is also interesting to evaluate the added value of AR applications in general.
Recipients' YouTube comments on the five most successful songs of 2011 and 2012 are tested for nostalgic content. These nostalgia-relevant comments are analyzed by content and finally interpreted. The aim is to find out whether nostalgic music content is a factor for success. Using the uses-and-gratifications theory, the recipients' purpose in consuming nostalgia-evoking music is identified. Music is a clearly stronger trigger for evoking nostalgia than the music video, whereby nostalgia triggers positive and/or negative affect. Furthermore, personal nostalgia is much more evident than historical nostalgia. Moreover, the lyrics have a considerably higher potential to elicit nostalgia than any other song unit. Persons and momentous events are the most frequent objects of personal nostalgic reverie. The purpose of consuming nostalgic music is the intended evocation of positive and/or negative affect. Hence nostalgia in music seems to satisfy certain needs, and it can be assumed that nostalgia is a factor of success in the music industry.
Infinite worlds
(2013)
This work is concerned with creating a 2D action-adventure with role-playing elements. It provides an overview of the various tasks of the implementation. First, the game idea and the game mechanics used are examined and a definition of requirements is created. After introducing the framework used, the software engineering concept for the realization is presented. The implementation of the control components, game editor, sound and graphics is shown. The graphical implementation pays special attention to the abstraction of light and shadow in the 2D game world.
Due to the increasing pervasiveness of the mobile web, it is possible to send and receive mails with mobile devices. The content of digital communication should be encrypted to prevent eavesdropping and manipulation. Corresponding procedures use cryptographic keys, which have to be exchanged beforehand. It has to be ensured that a cryptographic key really belongs to the person it is supposedly assigned to. Within the scope of this thesis, a concept for a smartphone application to exchange cryptographic keys was designed. The concept consists of a specification of a component-based framework which can be used to securely exchange data in general. This framework was extended and used as the basis for a smartphone application. The application allows creating, managing and exchanging cryptographic keys; Near Field Communication is used for the exchange. Implemented security measures prevent eavesdropping and targeted manipulation. In the future, the concept and the application can be extended and adjusted to be used in other contexts.
We present the conceptual and technological foundations of a distributed natural language interface employing a graph-based parsing approach. The parsing model developed in this thesis generates a semantic representation of a natural language query in a three-stage, transition-based process using probabilistic patterns. The semantic representation of a natural language query is modeled as a graph, which represents entities as nodes connected by edges representing relations between entities. The presented system architecture provides the concept of a natural language interface that is independent of both the vocabularies used for parsing the syntax and semantics of the input query and the knowledge sources consulted for retrieving search results. This functionality is achieved by modularizing the system's components, addressing external data sources through flexible modules which can be modified at runtime. We evaluate the system's performance by testing the accuracy of the syntactic parser, the precision of the retrieved search results, as well as the speed of the prototype.
Business process management (BPM) is regarded as one of the most important success factors in today's corporate development and is also perceived as such by modern companies [cf. IDS Scheer 2008]. As early as 1993, business processes were, for Hammer and Champy, the central key to reorganizing companies [cf. Hammer, Champy 1993, p. 35]. The paradigm shift from the structural to the process-oriented organization, and ultimately to the established "process organization", was first described by Gaitanides in 1983 [cf. Gaitanides 2007].
Despite the broad and deep treatment of business process management in the scientific literature, it is difficult to gain a quick overview of approaches for introducing it. This is essentially due to the fact that the literature treats business process management in different scientific fields, such as organizational theory [cf. e.g. Vahs 2009; Schulte-Zurhausen 2005], business administration [cf. e.g. Helbig 2003; Schmidt 2012], or computer science and information systems [cf. e.g. Schmelzer, Sesselmann 2008; Schwickert, Fischer 1996], and describes the establishment of BPM from different thematic angles. Literature on business process management specifically for small and medium-sized enterprises (SMEs) and on methods for introducing BPM in SMEs is particularly hard to find; the combination "approaches for introducing business process management in SMEs" cannot be found in the scientific literature at all. This thesis is intended as a first step towards closing this gap. It aims to analyze and compare the characteristic properties of a selection of approaches for introducing business process management, and to assess the applicability of the individual approaches to small and medium-sized enterprises on the basis of previously gathered requirements that are important for SMEs regarding BPM and its introduction.
Based on the evaluation criteria underlying this thesis, the approach of Schulte-Zurhausen achieves the best overall result. Nevertheless, each of the examined approaches shows strengths and weaknesses with respect to its suitability for an SME. Consequently, when introducing business process management, each of the examined approaches requires adjustment and adaptation to the situation of an SME. For this reason, the author recommends that an SME choose one approach as the basic procedure for the introduction (in this case the approach of Schulte-Zurhausen) and enrich or complete it with suitable aspects of the other approaches.
Augmented Reality (AR) is becoming more and more popular. The main use of AR technology is to augment information into the user's field of vision via head-mounted displays, e.g. the windshield of a car, glasses, or the displays of smartphones and tablets. To augment correctly, it is necessary to determine the position and orientation (pose) of the camera in space.
Nowadays, this is usually solved with artificial markers. These known markers are placed in the room and the system is trained on this setup. The next step is to get rid of these artificial markers. If the pose is calculated without such markers, we speak of marker-less tracking. Instead of artificial markers, natural objects in the real world serve as reference points to calculate the pose. Thus, this approach can be used flexibly and dynamically. We are no longer dependent on artificial markers, but much more knowledge about the scene is needed to find the pose. This is usually compensated by technical measures and/or by the user. However, neither solution is comfortable or efficient for the use of such a system. This is why marker-less 3D tracking is still a large field of research.
This sets the starting point for this bachelor thesis. The thesis proposes an approach that needs only a set of 2D features from a given camera image and a set of 3D features of an object to find the initial pose. With this approach, neither technical nor user assistance is required. The 2D and 3D features can be detected in any manner desired.
The main idea of this approach is to build six correspondences between these two sets. With those, the pose can be estimated. Each 3D feature is mapped with the estimated pose onto image coordinates, whereby the estimated pose can be evaluated: the distance between each mapped 3D feature and its associated 2D feature is measured, and the per-correspondence distances are summed up to score the whole pose. The lower this summed value, the better the pose. A value of around ten pixels has been shown to indicate a correct pose.
Because there are many possible ways to build six correspondences between the two sets, the construction process has to be optimized. A genetic algorithm is used for this optimization.
In the test cases, the system worked quite reliably. The hit rate was around 90% with a runtime of approximately twelve minutes; without the optimization, the search could easily take several years.
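A minimal sketch of the pose scoring described above, assuming a pinhole camera model with intrinsics K and hypothetical 2D-3D correspondence containers: each 3D feature is projected with a candidate pose, the pixel distance to its assigned 2D feature is measured, and the distances are summed, with lower sums indicating better poses.

```python
import numpy as np

def project(point_3d, R, t, K):
    """Project a 3D feature into the image using pose (R, t) and camera intrinsics K."""
    p_cam = R @ point_3d + t          # world -> camera coordinates
    p_img = K @ p_cam                 # camera -> image plane
    return p_img[:2] / p_img[2]       # perspective division -> pixel coordinates

def pose_score(correspondences, R, t, K):
    """Sum of reprojection distances over the six hypothesized 2D-3D correspondences."""
    return sum(np.linalg.norm(project(p3d, R, t, K) - p2d)
               for p2d, p3d in correspondences)

# A pose hypothesis with a score of around ten pixels is considered correct;
# a genetic algorithm searches over which six correspondences to use.
```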
This bachelor's thesis deals with the development of a program that is intended to support dentists through AR during the treatment of patients. To establish an appropriate theoretical basis, the current state of the art relevant to this project is explained first. Subsequently, possible future technologies that form the hypothetical basis of this work are presented. The following subchapter explains the selection of the systems used for this project. The main part first covers the procedure in the preparation and planning phase and then presents the program flow of the application step by step; the problems that arose during programming are discussed as well. In the reflective evaluation part, suggestions for improvement and additional functions for the implemented program are presented.
This master thesis deals with the design and implementation of a path planning system for general n-trailers based on rapidly exploring search trees. This is a probabilistic method characterized by fast and uniform exploration. The method is well established; however, to date it has only been applied to vehicles with simple kinematics. General n-trailers represent a particular challenge, as their controllability is limited. For this reason, the focus of this thesis rests on the application of the mentioned procedure to general n-trailers. In this context, systematic correlations between the characteristics of general n-trailers and the possibilities for realizing and applying the method are analyzed.
This thesis deals with the development and evaluation of a concept for novel interaction with ubiquitous user interfaces. To evaluate this interaction concept, a prototype was implemented using an existing head-mounted display solution and an Android smartphone.
Furthermore, in the course of this thesis, a concrete use case for this prototype, the navigation through a city block with the aid of an electronic map, was developed and built as an executable application to help evaluate the quality of the interaction concept. In the process, fundamental research results were obtained.
This bachelor thesis deals with the concept of a smartphone application for emergencies. It describes the basic problem and provides a conceptual approach.
The core content of this thesis is a requirements analysis for the emergency application to be designed. Furthermore, the functional and non-functional requirements, such as usability, are specified to inform the concept of the application. In addition, individual sub-functions of the mHealth applications of the University of Koblenz that already exist or are still under construction can be integrated into the future emergency application. Based on the catalog of requirements, a market analysis of the strengths and weaknesses of existing emergency application systems is carried out. In the to-be concept, the findings are summarized and possible architectural sketches for future emergency applications are given. Furthermore, one conclusion of dealing with this topic is that a design alone is not sufficient to guarantee a well-working app. That is why the requirements for the thesis were expanded to include the connection to and integration of rescue control centers in the architecture of the emergency app.
At the end of the thesis, the reader receives a comprehensive overview of the provision of emergency data to the rescue control centers via different transmission channels. Furthermore, conditions for the system requirements are presented, as are possible scenarios for the architecture of the whole emergency application system. The generic and modular approach ensures that the system remains open for future development and for the integration of functions of other applications.
Within the scope of this master's thesis, a comprehensive overview of the variety of security models is given by describing, classifying and comparing selected security models.
Security models describe the security-relevant components and relationships of a system in an abstract way. With security models, complex matters can be illustrated and analyzed.
Since security models address different security aspects, this work develops a classification scheme that describes the structural and conceptual characteristics of the models with respect to the underlying security aspects. Within the classification scheme, three fundamental model classes are formed: access control models, information flow models and transaction models.
The security models are compared both directly and indirectly. In the latter case, they are assigned to one or more model classes of the classification scheme. This classification allows statements about the security aspects considered and about the structural and conceptual characteristics of a security model in relation to the other security models.
In the direct comparison, the properties and aspects of the security models are considered orthogonally to the model classes on the basis of selected criteria.
Human detection is a key element for human-robot interaction. More and more robots are used in human environments and are expected to react to the behavior of people. Before a robot can interact with a person, it must be able to detect that person. This thesis presents a system for the detection of humans and their hands using an RGB-D camera. First, model-based hypotheses for possible positions of humans are created. From the visible upper parts of the body, new features based on the relief and width of a person's head and shoulders are extracted. The hypotheses are verified by classifying the features with a support vector machine (SVM). The system is able to detect people in different poses; both sitting and standing humans are found by using the visible upper parts of the person. Moreover, the system is able to recognize whether a human is facing or averting the sensor. If the human is facing the sensor, the color information and the distance between hand and body are used to detect the positions of the person's hands. This information is useful for gesture recognition and can thus further enhance human-robot interaction.
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
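The core signing idea can be illustrated with a minimal Python sketch, which is not the formal framework itself: the graph is brought into a stable serialization (here simply sorted N-Triples-like strings, a stand-in for proper canonicalization), hashed, and the digest is signed; because a signature is itself a small graph, it can be merged into the data before another party signs again, which is what iterative signing amounts to. HMAC is used below only as a placeholder for a real public-key signature, and the URIs are invented.

```python
import hashlib
import hmac

def graph_digest(triples):
    """Hash a graph given as N-Triples-like strings (sorting stands in for canonicalization)."""
    canonical = "\n".join(sorted(triples)).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def sign_graph(triples, key):
    """Return a small signature graph covering the given triples."""
    digest = graph_digest(triples)
    signature = hmac.new(key, digest.encode("utf-8"), hashlib.sha256).hexdigest()
    return [f'<urn:sig> <urn:coversDigest> "{digest}" .',
            f'<urn:sig> <urn:signatureValue> "{signature}" .']

# Iterative signing: the second signature also covers the first signature graph.
graph = ['<urn:a> <urn:knows> <urn:b> .']
graph += sign_graph(graph, b"party-1-key")
graph += sign_graph(graph, b"party-2-key")
```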
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts: people can hear and interpret human speech and read the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-point-based extraction. The thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The presented approach is based on 2D features that are detected in the face; no neutral face is needed as reference. The system extracts two types of facial parameters: the first type consists of distances between the feature points, the second type comprises angles between lines connecting the feature points. Both types of parameters are implemented and tested. The parameters which provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass support vector machine classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The system presented is implemented in C++ and is provided with an interface to the Robot Operating System (ROS).
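An illustrative Python sketch of the two parameter types (the thesis itself is implemented in C++): assuming the facial feature points are already detected as (x, y) coordinates, pairwise distances and the angles of the lines connecting the points are computed and fed to a multiclass SVM; scikit-learn's SVC merely stands in for the SVM used.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def expression_features(points):
    """points: (N, 2) array of detected facial feature point coordinates."""
    pairs = list(combinations(range(len(points)), 2))
    dists = [np.linalg.norm(points[j] - points[i]) for i, j in pairs]
    angles = [np.arctan2(points[j, 1] - points[i, 1],
                         points[j, 0] - points[i, 0]) for i, j in pairs]
    return np.array(dists + angles)

# Multiclass SVM over the seven classes (six basic emotions plus neutral).
clf = SVC(kernel="rbf", decision_function_shape="ovr")
# clf.fit(np.stack([expression_features(p) for p in training_points]), labels)
```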
The goal of this bachelor thesis was to program an existing six-legged robot so that it can explore an arbitrary environment and create a map of it autonomously. A laser scanner is integrated for perception of this environment. To build the map and localize the robot, a suitable SLAM (Simultaneous Localization and Mapping) technique is connected to the sensor data. The map serves as the robot's basis for path planning and obstacle avoidance, which are also developed within the scope of the thesis. For this purpose, both GMapping and Hector SLAM are implemented and tested.
An exploration algorithm for exploring the robot's environment is also described in this bachelor thesis. The implementation on the robot takes place within the ROS (Robot Operating System) framework on a Raspberry Pi miniature PC.
A Kinect device can record color and depth images simultaneously. This thesis is an attempt to use the depth image to manipulate lighting information and material properties in the color image. The presented method of lighting and material manipulation needs a light simulation of the lighting conditions at the time the image was recorded; it is used to transfer information from a new light simulation directly back into the color image. Since the simulations are performed on a three-dimensional model, a way to generate such a model from a single depth image is sought. The thesis also addresses the problems of the depth data acquisition of the Kinect sensor. An editor is designed to make lighting and material manipulations possible. To generate a light simulation, some simple, real-time-capable rendering methods and lighting models are proposed. They are used to insert new illumination, shadows and reflections into the scene. Simple environments with well-defined lighting conditions are manipulated in experiments to show the boundaries and possibilities of the device and the techniques used.
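A minimal sketch of the first step, turning a single Kinect depth image into a 3D point cloud by back-projecting every pixel with the depth camera intrinsics; the intrinsic values below are typical approximations and not a calibration result.

```python
import numpy as np

# Approximate Kinect depth intrinsics (assumed placeholder values).
FX, FY, CX, CY = 594.2, 591.0, 320.0, 240.0

def depth_to_points(depth):
    """depth: (H, W) array of depth values in meters; returns (H*W, 3) points, invalid pixels as NaN."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = np.where(depth > 0, depth, np.nan)   # the Kinect reports 0 where no depth was measured
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack((x, y, z)).reshape(-1, 3)
```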
This thesis describes the conception, implementation and evaluation of a collaborative multiplayer game for preschoolers on mobile devices.
The main objective of this thesis is to find out whether mobile devices like smartphones and tablet computers are suitable for interaction by children. In order to develop this kind of game, relevant aspects were researched. On this basis, a game was designed and finally tested with preschoolers.
From September 4 to 11, 1992, a first meeting between Ukrainian and German scientists interested in mathematical and computer modeling of social processes was held at Vorzel' near Kiev. The meeting had been planned for nearly three years by Igor V. Chernenko and Mikhail V. Kuz'min, then members of the research group on mathematical modeling in sociology at the Institute of Sociology of the Academy of Science of the Ukrainian Republic, and had to be postponed twice due to the political development in the former Soviet Union; but thanks to the organizers' perseverance (and in spite of a strike of the airport personnel at Kiev Borispol Airport on the eve of the conference) the conference could at last be realized. The main purpose of the conference was to discuss a synergetic interpretation of large-scale destructive social processes as catastrophic phenomena in self-organized systems.
This paper originates from the FP6 project "Emergence in the Loop (EMIL)" which explores the emergence of norms in artificial societies. Part of work package 3 of this project is a simulator that allows for simulation experiments in different scenarios, one of which is collaborative writing. The agents in this still prototypical implementation are able to perform certain actions, such as writing short texts, submitting them to a central collection of texts (the "encyclopaedia") or adding their texts to texts formerly prepared by other agents. At the same time they are able to comment upon others' texts, for instance checking for correct spelling, for double entries in the encyclopaedia or for plagiarisms. Findings of this kind lead to reproaching the original authors of blamable texts. Under certain conditions blamable activities are no longer performed after some time.
Customization is a phenomenon which was introduced quite early in information systems literature. As the need for customized information technology is rising, different types of customization have emerged. In this study, customization processes in information systems are analyzed from a perspective based on the concept of open innovation. The objective is to identify how customization of information systems can be performed in an open innovation context. The concept of open innovation distinguishes three processes: Outside-in process, inside-out process and coupled process. After categorizing the selected journals into three core processes, the findings of this analysis indicated that there is a major concentration on outside-in processes. Further research on customization in coupled and inside-out processes is recommended. In addition, the establishment of an extensive up-to-date definition of customization in information systems is suggested.
This paper examines existing first aid applications for smartphones and compares them to a first aid application developed by the University of Koblenz called "Defi Now!". The main focus lies on examining "Defi Now!" with respect to its usability based on the dialogue principles, i.e. the seven software-ergonomic principles of the ISO 9241-110 standard: suitability for learning, controllability, error tolerance, self-descriptiveness, conformity with user expectations, suitability for the task, and suitability for individualization.
To this end, a usability study with 74 participants was conducted. A questionnaire was developed, which the test participants filled out anonymously. The test results were used to optimize the app with regard to its usability.
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect in the guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features for an ontology engineer to model a Linked Data dataset, discuss existing tools and services with respect to these support features, and propose LOVER: a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporate all types of supporting features and illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. Hereby, the recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g. based on a best string match or schema information on other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it along a real-life example from the social sciences domain.
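As a toy sketch of the string-match part of such a recommendation, consider a hypothetical local index of vocabulary terms; the popularity weights only mimic a ranking by actual vocabulary usage, and this is not LOVER's implementation.

```python
from difflib import SequenceMatcher

# Hypothetical local index: term URI -> (label, assumed popularity in the LOD cloud).
VOCAB_INDEX = {
    "http://xmlns.com/foaf/0.1/Person": ("person", 0.95),
    "http://schema.org/Person":         ("person", 0.80),
    "http://xmlns.com/foaf/0.1/knows":  ("knows",  0.90),
}

def recommend(query, top_k=3):
    """Rank vocabulary terms by string similarity to the query, weighted by popularity."""
    scored = [(SequenceMatcher(None, query.lower(), label).ratio() * pop, uri)
              for uri, (label, pop) in VOCAB_INDEX.items()]
    return [uri for _, uri in sorted(scored, reverse=True)[:top_k]]

print(recommend("person"))
```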
Concept for a Knowledge Base on ICT for Governance and Policy Modelling regarding eGovPoliNet
(2013)
The EU project eGovPoliNet is engaged in research and development in the field of information and communication technologies (ICT) for governance and policy modelling. Numerous communities pursue similar goals in this field of IT-based, strategic decision making and simulation of social problem areas. However, the existing research approaches and results so far are quite fragmented. The aim of eGovPoliNet is to overcome the fragmentation across disciplines and to establish an international, open dialogue by fostering the cooperation between research and practice. This dialogue will advance the discussion and development of various problem areas with the help of researchers from different disciplines, who share knowledge, expertise and best practice supporting policy analysis, modelling and governance. To support this dialogue, eGovPoliNet will provide a knowledge base, whose conceptual development is the subject of this thesis. The knowledge base is to be filled with content from the area of ICT for strategic decision making and social simulation, such as publications, ICT solutions and project descriptions. This content needs to be structured, organised and managed in such a way that it generates added value and the knowledge base is used as a source of accumulated knowledge, consolidating the previously fragmented research and development results in a central location.
The aim of this thesis is the development of a concept for a knowledge base which provides the structure and the necessary functionalities to gather and process knowledge concerning ICT solutions for governance and policy modelling. This knowledge needs to be made available to users, thereby motivating them to contribute to the development and maintenance of the knowledge base.
This bachelor thesis deals with the topic of user-friendly design of applications (apps) on mobile devices, a subdomain of software ergonomics. In the process, two applications are analyzed with the aim of developing a solution for how help on a mobile device should be provided. This study focuses primarily on appropriate gestures to invoke the help function on a mobile device. The study results show that the test persons request a customized help function, but reject an extensive help description, as this seems to be overwhelming for the user.
The purpose of this bachelor's thesis is to teach Lisa, a robot developed by the AGAS department of the University of Koblenz for participation in the @home league of the RoboCup, to draw. This requires extending the robbie software framework and operating the robot's hardware components. With a possible entry in the Open Challenge of the @home RoboCup in mind, the goals are to detect a sheet of paper using Lisa's visual sensor, a Microsoft Kinect, and to draw on it using her Neuronics Katana robot arm. In addition, a pen mounting for the arm's gripper has to be constructed.
Outlined within this thesis are the procedures utilized to convert an image template into movements of the robot arm, which in turn lead to the pen attached to the arm drawing a painting on a piece of paper detected by the visual sensor through image processing. Achieved were the parsing and drawing of an object made up of an arbitrary number of straight lines from an SVG file onto a white sheet of paper, detected on a slightly darker surface and surrounded by various background objects and textures.
Pedestrian detection in digital images is a task of great importance for the development of automatic systems and for improving the interaction of computer systems with their environment. The challenges such a system has to overcome are the high variance of the pedestrians to be recognized and the unstructured environment. For this thesis, a complete system for pedestrian detection was implemented according to a state-of-the-art technique. A novel insight about precomputing the Color Self-Similarity accelerates the computations by a factor of four. The complete detection system is described and evaluated, and was published under an open source license.
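For illustration, a compact Python sketch of a Color Self-Similarity feature in its generic form (per-cell color histograms compared pairwise by histogram intersection); the specific precomputation trick from the thesis is not reproduced here, and the cell size and bin counts are arbitrary.

```python
import numpy as np
from itertools import combinations

def cell_histograms(window, cell=8, bins=3):
    """Split an (H, W, 3) color window into cells and compute a normalized color histogram per cell."""
    h, w, _ = window.shape
    hists = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            patch = window[y:y + cell, x:x + cell].reshape(-1, 3)
            hist, _ = np.histogramdd(patch, bins=(bins, bins, bins), range=((0, 256),) * 3)
            hists.append(hist.ravel() / patch.shape[0])
    return np.array(hists)

def color_self_similarity(window):
    """Pairwise histogram intersection between all cell pairs of a detection window."""
    hists = cell_histograms(window)
    return np.array([np.minimum(hists[i], hists[j]).sum()
                     for i, j in combinations(range(len(hists)), 2)])
```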
Young Adults' Trust in Political Content from Broadcast, Print and Digital Media
(2013)
The core question of this bachelor's thesis is whether trust in media affects political attitudes and whether media use influences this direction of effect. Both media categories and individual media formats are considered separately. Political attitude is operationalized via the attitude dimensions effectiveness of the government, legitimacy of the government, political efficacy, responsiveness of political actors and integrity of political actors. The focus is on young adults, who are widely said to be disenchanted with politics.
To examine the relationship between trust in media and political attitudes, a quantitative online survey of students of the University of Koblenz (N = 496) is conducted. Regression analyses and ANOVA are used for data analysis. The results do not indicate a generally negative basic political attitude among young adults. Moreover, the results indicate that trust in media has a significant effect on political attitudes (p ≤ .05), whereas media use has insufficient explanatory power. Future studies should also examine media trust as the central independent variable, ideally including a comparison of generations across different educational backgrounds.
In this thesis, a first prototype of a mobile instruction device with mixed reality (MR) functionality is developed. This system shall be capable of supporting training on the job through interaction with the work item. The concept corresponds to a didactic approach presented by Martens-Parree that combines constructivism with situated learning. As an application example, the training of glider pilots being checked out on a new aircraft type was chosen. Whether the MR device could increase competence or facilitate the completion of certain tasks was examined in a survey with fifteen testers. The results of the study show that, in general, the didactic approach of Martens-Parree is valid. While an increase in factual knowledge was observed, it was not (yet) possible to demonstrate an increase in skills with respect to the work tasks.
This study investigates crowdfunding, a new form of financing projects. In the past years, more and more crowdfunding platforms have emerged. The main question is whether crowdfunding is able to compete with the traditional types of financing for social projects. The history and development of crowdfunding is presented in this paper, the different crowdfunding models are explained, and an overview of German crowdfunding platforms is given. Based on successful social crowdfunding projects, a list of key success factors is compiled and described. In a case study, a concept for financing a social project through crowdfunding is developed based on the previous findings.
In a software reengineering task, legacy systems are adapted computer-aided to new requirements. For this, an efficient representation of all data and information is needed. TGraphs are a suitable representation because all vertices and edges are typed and may have attributes. Furthermore, there exists a global sequence of all graph elements, and for each vertex there exists a sequence of all incidences. In this thesis, the Extractor Description Language (EDL) was developed. It can be used to generate an extractor from a syntax description extended by semantic actions. The generated extractor can be used to create a TGraph representation of the input data. In contrast to classical parser generators, EDL supports ambiguous grammars, modularization, symbol table stacks and island grammars. These features simplify the creation of the syntax description. The collected requirements for EDL are used to determine an existing parser generator that is suitable for realizing the requirements.
After that, the syntax and semantics of EDL are described and implemented using the selected parser generator. Subsequently, two extractors, one for XML and one for Java, are created with the help of EDL. Finally, the time they need to process some input data is measured.
This thesis deals with problems that occur when rendering stereoscopic content. These problems are elaborated, simulated with the help of a program developed in this thesis, and evaluated by a group of volunteers. Thereby it shall be determined whether the errors are noticeable and how much they influence the 3D effect of the stereoscopic images. Each error is simulated using different camera assemblies and evaluated depending on the chosen assembly.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically in the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and it impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing native handling of inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
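A small sketch of the idea behind the unique name assumption for distinct object identifiers: equality between two such identifiers is decided syntactically, so the quadratic set of explicit inequality facts becomes unnecessary. The naming convention below is purely illustrative and not the prover's actual syntax.

```python
def is_distinct_object_identifier(term):
    """Illustrative convention: double-quoted constants act as distinct object identifiers."""
    return term.startswith('"') and term.endswith('"')

def decide_equality(a, b):
    """Return True/False if decidable under the unique name assumption, else None (unknown)."""
    if is_distinct_object_identifier(a) and is_distinct_object_identifier(b):
        return a == b      # syntactic identity decides equality, no explicit facts needed
    return None            # ordinary constants: equality must still be derived

# Without the assumption, n pairwise-distinct constants require n*(n-1)/2 explicit inequality facts.
```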
This work deals with the migration of software systems towards the use of the character set defined in the Unicode standard. The work is performed as a case study on the document management system PROXESS. A conversion process is designed that defines the working steps of the migration for the entire system as well as an arbitrary decomposition of the system into individual modules. The working steps for each module can, to a great extent, be performed chronologically independently of each other. For the conversion of the implementation, an approach of automatic recognition of usage patterns is applied. The approach searches the abstract syntax tree for sequences of program instructions that can be assigned to a certain usage pattern. The usage pattern defines another sequence of instructions that acts as a sample solution for that usage pattern; the sample solution demonstrates the Unicode-based handling of strings for that usage pattern. By applying a transformation rule, the original sequence of instructions is transferred into the sequence of instructions given by the sample solution of the related usage pattern. This mechanism is a starting point for the development of tools that perform this transformation automatically.
The annotation of digital media is not a new area of research; rather, it is widely investigated. There are many innovative ideas for designing the annotation process, and the most extensive segment of related work deals with semi-automatic annotation. One characteristic is common to the related work: none of it puts the user in focus. To build an interface that supports and satisfies the user, a user evaluation has to be done first. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction and intuitiveness. After collecting many ideas and discussing them with a team of experts, only a few of them were selected. Different combinations of these selected variables form the interfaces investigated in our usability study. The results of the usability study lead to the assumption that autocompletion and suggestion features support the user. Furthermore, coloring tags to group them into categories does not disturb the user and even tends to be supportive. The same tendency emerges for an interface consisting of two user interface elements. An example is also given for differences in the definition of being intuitive. This thesis leads to the conclusion that, for reasons of user satisfaction and support, it is permissible to deviate from classical annotation interface features, and that further usability studies in the area of annotation interfaces should be carried out.
The importance of social software (SSW) is increasing, and not only in people's private lives. Companies, too, have recognized the potential of these systems and increasingly deploy systems based on Web 2.0 technologies in a corporate context. A 2009 study by the Association for Information and Image Management (AIIM) found that over 50% of respondents regarded Enterprise 2.0 (E2.0), i.e. the use of SSW within the company, as a critical factor for corporate success. Partly driven by this trend, the amount of digitally available information grew by a factor of ten within a five-year period (2006-2011), according to a study by the consulting firm IDC. Where the rule used to be "the more information, the better", managing this sheer flood of information now causes problems for many companies (e.g. with regard to the findability of information). With new functions such as social bookmarking, wikis or tags, SSW offers the potential to structure and organize information better through user participation. Using the example of the research group for business information systems (FG BAS), this thesis shows how existing information structures can be captured and analyzed and how recommendations for the use of SSW can be derived from them. The framework for this procedure is a model for conducting an information audit developed by Henczel (2000). Notable results of this work are, on the one hand, the model for capturing information and processes (information matrix) and, on the other hand, the model for visualizing the captured data.
Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor's thesis describes the conception and implementation of a translation software for the Android platform. The specific feature of the software is independent text recognition based on the camera view. This approach aims to enhance and accelerate the process of translation in certain situations. After an introduction to text recognition, the underlying technologies, the operating system Android and useful applications are described. Then the concept of the software is developed and the implementation is examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition can strictly not be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit to simulation is the possibility to model (and observe) causes and effects of non-events, e.g. if something has broken down.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]; it is an example of operational risk and shows how the evidence flow in PPNs gives hints to reduce high losses. In terms of Bayesian networks, both examples contain cycles which are resolved by the conditioning technique [3].
The paper deals with a specific introduction into probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic aspect: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
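The probability flow such nets carry can be illustrated by the ordinary Bayesian computation on a minimal two-node diagnosis network; the Petri net machinery itself is not reproduced here, only the posterior that evidence propagation has to deliver. All probabilities below are made up for the example.

```python
# Two-node network: Disease -> Symptom, with made-up example probabilities.
p_disease = 0.01              # prior P(D)
p_symptom_given_d = 0.90      # likelihood P(S | D)
p_symptom_given_not_d = 0.05  # likelihood P(S | not D)

# Evidence: the symptom is observed. Posterior by Bayes' rule.
p_symptom = p_symptom_given_d * p_disease + p_symptom_given_not_d * (1 - p_disease)
p_disease_given_symptom = p_symptom_given_d * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")  # ~0.154
```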
Standards are widely used in computer science and the IT industry. Different organizations like the International Organization for Standardization (ISO) are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement. They allow the packaging of evidences that are needed to prove the fulfillment of criteria related to a public tender. The work first examines the requirements that are common to both specifications. This introduces a new view on the requirements by moving to a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of a package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which is used as a basis when a concrete test system is created, is modified.
Finally, the author comes to a positive conclusion: common requirements have been identified and captured, and the involved systems have been adapted, allowing the capturing of further types of requirements that were not supported before. Furthermore, the backgrounds of the specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
With the reaccreditation of the degree programs in the department of computer science at the University of Koblenz-Landau, new trendsetting degree programs will be offered. For the further planning and design of the individual degree programs, the opinion of the students is a considerable indicator. Information about the new degree programs is not available during the accreditation process. Students have an interest in knowing about the new degree programs and the new examination regulation, and their participation in the decision process would therefore be desirable. The concept of e-participation is an opportunity to satisfy this need: it offers the possibility to discuss topics of the accreditation and to bring one's own ideas and opinions into the decision process. This bachelor thesis describes an e-participation project at the University of Koblenz-Landau about the accreditation of the degree programs of the faculty of computer science. The project is carried out using the reference framework of Scherer and Wimmer (2011). Furthermore, the accreditation process is modeled to get a better understanding of the whole process and to identify the possibilities for integrating the e-participation project. The results of this project are evaluated by an online survey about the e-participation platform. Using the results of the survey and the experiences gained during the project, recommendations are given for further e-participation projects. Moreover, the reference framework of Scherer and Wimmer (2011) is analyzed critically.
Parallel manipulators using the Stewart mechanism allow tasks to be performed precisely within a limited workspace. By using six degrees of freedom, a high flexibility of positioning is achieved. The robust construction also provides a very good weight-to-payload ratio.
This bachelor's thesis deals with the development of a flexible software solution for controlling a Stewart platform, including a model of the platform that serves testing purposes. First, the mathematical foundations of the inverse kinematics are derived, building on a previously defined motion model. This is followed by the development of a generic architecture for transmitting and evaluating control commands from the PC. The implementation is done in C and is split into modules, each covering one area of responsibility of position control or hardware communication. A graphical user interface is also presented, through which the position of the platform can be changed manually. Automatic control is described in a subsequent application example, in which the platform is supplied at regular intervals with acceleration values from a roller coaster simulation.
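A minimal sketch of the inverse kinematics underlying such a controller, written in Python for brevity although the thesis implementation is in C: for a desired platform pose, each required leg length is the distance between a base anchor and the rotated and translated platform anchor. The anchor coordinates are placeholders the caller has to supply, and the rotation convention is an assumption.

```python
import numpy as np

def rotation(roll, pitch, yaw):
    """Rotation matrix from roll, pitch and yaw angles (Z-Y-X convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def leg_lengths(base_anchors, platform_anchors, translation, rpy):
    """Inverse kinematics: required length of each of the six legs for the desired pose."""
    R = rotation(*rpy)
    return [np.linalg.norm(translation + R @ p - b)
            for b, p in zip(base_anchors, platform_anchors)]
```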
Despite its growing relevance, measuring the productivity of services is still an under-researched field. The reasons for this lie above all in the particular characteristics of services: their immateriality and the integration of the customer. A typical service in the B2B software industry is the adaptation of systems to the needs of the customer, so-called customizing. However, exploiting the customizing potential of standard software and products and involving the customer more strongly in innovation processes is made difficult by the fact that the productivity of this service seems to be insufficiently measurable and thus hard to assess.
This contribution describes the development of a model for measuring the productivity of services, based on several preliminary studies within the CustomB2B project at the University of Koblenz-Landau.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results; instead, we invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers, we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights, but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. The basis is a set of requirements, to be defined, for an AOP realization in the PHP environment. It is analyzed how and whether related languages and paradigms allow CCCs to be supported. In addition, the possibility of realizing AOP in PHP without a PHP extension is discussed. Furthermore, the existing approaches to implementing AOP in PHP are examined qualitatively. This work aims to present its own AOP PHP solution that does not share the weaknesses of the existing solutions.
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care about the management of their data more and more. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system) chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors actually face the industry's concerns: both tools provide functionality for the archiving of data arising from online collaboration as well as business records management capabilities, but the integration of those topics is not, or is only inconsistently, solved. At this point new questions, such as "Which data generated in an Enterprise 2.0 is worth being a record?", arise and should be examined in future research.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, and the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
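A minimal sketch of the seeded, deterministic generation such a game space relies on: the same galaxy seed always yields the same stars and planets, so nothing has to be stored. All distributions, value ranges and attribute names below are invented for illustration and are not taken from the thesis.

```python
import random

def generate_star_system(galaxy_seed, index):
    """Deterministically generate one star system; the same (seed, index) always yields the same result."""
    rng = random.Random(galaxy_seed * 100003 + index)
    system = {
        "position": (rng.uniform(-1e4, 1e4), rng.uniform(-1e4, 1e4)),
        "spectral_class": rng.choice("OBAFGKM"),
        "planets": [],
    }
    for _ in range(rng.randint(0, 8)):
        system["planets"].append({
            "orbit_au": round(rng.uniform(0.3, 40.0), 2),
            "inhabited": rng.random() < 0.05,   # the rare intelligent life the player searches for
        })
    return system

galaxy = [generate_star_system(galaxy_seed=42, index=i) for i in range(1000)]
```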
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
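A compact sketch of the kind of entropy and mutual information computation involved, assuming the extracted schema information is simplified to one (type, property) observation per statement; the toy observations below are invented and only illustrate the formula I(T;P) = H(T) + H(P) - H(T,P).

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a Counter of observations."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mutual_information(pairs):
    """I(T;P) = H(T) + H(P) - H(T,P) over (rdf_type, property) observations."""
    joint = Counter(pairs)
    types = Counter(t for t, _ in joint.elements())
    props = Counter(p for _, p in joint.elements())
    return entropy(types) + entropy(props) - entropy(joint)

observations = [("foaf:Person", "foaf:name"), ("foaf:Person", "foaf:knows"),
                ("foaf:Document", "dc:title")]
print(mutual_information(observations))
```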