This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method consisting of several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which encodes salient characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of pipeline parameters and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in fields such as autonomous driving, robotics, and environmental monitoring.
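The abstract does not ship an implementation, but the pipeline it describes (preprocessing, FPFH feature extraction, Fast Global Registration) maps almost one-to-one onto Open3D's registration API. The following is a minimal sketch under the assumption that Open3D is an acceptable stand-in for the thesis's actual toolchain; the voxel size and search radii are illustrative:

    import open3d as o3d

    def estimate_extrinsics(source_pcd, target_pcd, voxel=0.05):
        """Estimate the rigid transform between two solid-state LIDAR clouds."""
        def preprocess(pcd):
            # Preprocessing: downsample and estimate normals (required by FPFH)
            down = pcd.voxel_down_sample(voxel)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
            # Fast Point Feature Histogram descriptors
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down,
                o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
            return down, fpfh

        src, src_fpfh = preprocess(source_pcd)
        tgt, tgt_fpfh = preprocess(target_pcd)
        # Fast Global Registration matches descriptors to recover the extrinsics
        result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            src, tgt, src_fpfh, tgt_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=1.5 * voxel))
        return result.transformation  # 4x4 sensor-to-sensor transform

The calibration quality would then be scored, as in the thesis, by the (normalized) root mean square error of the distances between corresponding points after applying the estimated transform.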
Social networks are ubiquitous structures that we generate and enrich every day while connecting with people through social media platforms, emails, and any other type of interaction. While these structures are intangible to us, they carry important information. For instance, the political leaning of our friends can be a proxy to identify our own political preferences. Similarly, the credit score of our friends can be decisive in the approval or rejection of our own loans. This explanatory power is being leveraged in public policy, business decision-making, and scientific research because it helps machine learning techniques make accurate predictions. However, these generalizations often benefit the majority of people who shape the general structure of the network, and put under-represented groups at a disadvantage by limiting their resources and opportunities. Therefore, it is crucial to first understand how social networks form and then verify to what extent their mechanisms of edge formation contribute to reinforcing social inequalities in machine learning algorithms.
To this end, in the first part of this thesis, I propose HopRank and Janus, two methods to characterize the mechanisms of edge formation in real-world undirected social networks. HopRank is a model of information foraging on networks. Its key component is a biased random walker based on transition probabilities between k-hop neighborhoods. Janus is a Bayesian framework that makes it possible to identify and rank plausible hypotheses of edge formation in cases where nodes possess additional information. In the second part of this thesis, I investigate the implications of these mechanisms of edge formation for machine learning. Specifically, I study the influence of homophily, preferential attachment, edge density, fraction of minorities, and the directionality of links on both the performance and the bias of collective classification, and on the visibility of minorities in top-k ranks. My findings demonstrate a strong correlation between network structure and machine learning outcomes. This suggests that systematic discrimination against certain people can be: (i) anticipated by the type of network, and (ii) mitigated by connecting strategically in the network.
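As a rough illustration of the HopRank idea (the thesis's exact formulation may differ), a walker biased over k-hop neighborhoods can first sample a hop distance according to bias weights and then jump to a uniformly chosen node at that distance. The bias scheme and names below are assumptions made for this sketch:

    import random
    import networkx as nx

    def khop_biased_step(G, node, betas):
        """One step of a walker biased over k-hop neighborhoods.
        betas[k-1] is the (unnormalized) preference for landing exactly
        k hops away; this bias scheme is an illustrative assumption."""
        max_k = len(betas)
        # Group reachable nodes by hop distance from the current node
        dist = nx.single_source_shortest_path_length(G, node, cutoff=max_k)
        rings = {k: [v for v, d in dist.items() if d == k]
                 for k in range(1, max_k + 1)}
        rings = {k: vs for k, vs in rings.items() if vs}  # drop empty rings
        # Sample a hop distance proportional to its bias, then a node uniformly
        ks = list(rings)
        k = random.choices(ks, weights=[betas[k - 1] for k in ks])[0]
        return random.choice(rings[k])

    G = nx.karate_club_graph()
    walk = [0]
    for _ in range(10):  # a short biased walk favoring nearby neighborhoods
        walk.append(khop_biased_step(G, walk[-1], betas=[0.6, 0.3, 0.1]))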
In this thesis, the possibilities for real-time visualization of OpenVDB files are investigated. The basics of OpenVDB and its capabilities, as well as NanoVDB, its GPU-oriented counterpart, were studied. A system was developed using PNanoVDB, the graphics API port of OpenVDB. Techniques were explored to improve and accelerate a single-ray approach to ray tracing. To prove real-time capability, two single-scattering approaches were also implemented. One of these was selected, further investigated, and optimized to achieve interactive real-time rendering. It is important to give artists immediate feedback on their adjustments, as well as the possibility to change all parameters, to ensure a user-friendly creation process. In addition to the rendered output, benchmarks were collected to compare different improvement approaches and to demonstrate their relevance. Attention was paid to rendering times and memory consumption on the GPU to ensure efficient resource use. When rendering OpenVDB files, special focus was placed on the integrability and extensibility of the program, to allow easy integration into an existing real-time renderer such as U-Render.
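For context, the core of a single-scattering ray marcher is compact even though the thesis's version runs on the GPU against the sparse VDB tree via PNanoVDB. The sketch below substitutes a dense NumPy grid for the VDB structure; all constants and names are illustrative:

    import numpy as np

    def single_scatter(density, ray_o, ray_d, light_dir, sigma_t=1.0, step=0.5):
        """March a primary ray through a density grid and accumulate
        single-scattered light (dense grid stands in for the VDB tree)."""
        def sample(p):
            i = np.round(p).astype(int)
            if np.any(i < 0) or np.any(i >= density.shape):
                return 0.0
            return density[tuple(i)]

        def transmittance(p, d, n=32):
            # Secondary march toward the light:
            # exp(-sigma_t * integral of density along the shadow ray)
            tau = sum(sample(p + d * step * k) for k in range(n)) * step
            return np.exp(-sigma_t * tau)

        radiance, T = 0.0, 1.0
        p = ray_o.astype(float)
        for _ in range(256):
            rho = sample(p)
            if rho > 0.0:
                # In-scattered light, attenuated along both ray segments
                radiance += T * rho * step * transmittance(p, light_dir)
                T *= np.exp(-sigma_t * rho * step)
                if T < 1e-3:  # early ray termination once nearly opaque
                    break
            p += ray_d * step
        return radiance

Early ray termination of this kind is one of the acceleration techniques such a renderer typically relies on.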
Semantic Web technologies have been recognized as key for the integration of distributed and heterogeneous data sources on the Web, as they provide means to define typed links between resources in a dynamic manner, following the principles of dataspaces. The widespread adoption of these technologies in recent years has led to a large volume and variety of data sets published as machine-readable RDF data that, once linked, constitute the so-called Web of Data. Given the large scale of the data, these links are typically generated by computational methods that, given a set of RDF data sets, analyze their content and identify the entities and schema elements that should be connected via links. Analogously to any other kind of data, in order to be truly useful and ready to be consumed, links need to comply with the criteria of high-quality data (e.g., syntactically and semantically accurate, consistent, up-to-date). Despite the progress in the field of machine learning, human intelligence is still essential in the quest for high-quality links: humans can train algorithms by labeling reference examples, validate the output of algorithms to verify their performance on a per-data-set basis, and augment the resulting set of links. Humans, especially expert humans, however, have limited availability. Hence, extending data quality management processes from data owners/publishers to a broader audience can significantly improve the data quality management life cycle.
Recent advances in human computation and peer-production technologies have opened new avenues for human-machine data management techniques, allowing non-experts to be involved in certain tasks and providing methods for cooperative approaches. The research work presented in this thesis takes advantage of such technologies and investigates human-machine methods that aim at facilitating link quality management in the Semantic Web. Firstly, focusing on the dimension of link accuracy, a method for crowdsourcing ontology alignment is presented. This method, also applicable to entities, is implemented as a complement to automatic ontology alignment algorithms. Secondly, novel measures for the dimension of information gain facilitated by the links are introduced. These entropy-centric measures provide data managers with information about the extent to which the entities in the linked data set gain information in terms of entity description, connectivity, and schema heterogeneity. Thirdly, taking Wikidata, the most successful case of a linked data set curated, linked, and maintained by a community of humans and bots, as a case study, we apply descriptive and predictive data mining techniques to study participation inequality and user attrition. Our findings and methods can help community managers decide when and how to intervene with user retention plans. Lastly, an ontology to model the history of crowd contributions across marketplaces is presented. While the field of human-machine data management poses complex social and technical challenges, the work in this thesis aims to contribute to the development of this still emerging field.
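The abstract does not spell out the entropy-centric measures; one plausible reading, shown purely as an illustration, scores how much a set of links diversifies an entity's description by comparing the Shannon entropy of its predicate distribution before and after linking:

    from collections import Counter
    from math import log2

    def predicate_entropy(triples):
        """Shannon entropy of the predicate distribution of a description."""
        counts = Counter(p for _, p, _ in triples)
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return -sum(c / total * log2(c / total) for c in counts.values())

    def description_gain(own_triples, linked_triples):
        """Hypothetical information-gain score for the 'entity description'
        dimension: entropy after linking minus entropy before."""
        return (predicate_entropy(own_triples + linked_triples)
                - predicate_entropy(own_triples))

Analogous scores over the link graph (connectivity) and over class/property usage (schema heterogeneity) would cover the other two dimensions named above.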
Usability experts conduct user studies to identify existing usability problems. An established method is to record gaze behavior with an eye tracker, but such studies require considerable effort to evaluate. Automated recognition of good and bad usability in recorded user data can support usability experts in eye-tracking evaluation and reduce this effort. The objective of this bachelor thesis is to identify suitable eye-tracking metrics that correlate with the quality of usability. For this purpose, the central research question is answered: which eye-tracking metrics correlate with the quality of a web form's operation? To answer this question, a quantitative A/B user study with eye tracking was conducted, recording the gaze behavior of 30 subjects while they filled out a web form. The web form was designed such that each page was available in a good and a bad variant according to known usability guidelines. The results confirm a significant correlation between the eye-tracking metric "number of visits to an AOI" and the quality of the operation of a web form. The metrics "number of fixations within an AOI" and "duration of fixations within an AOI" also correlate with the quality of usability. No correlation could be confirmed for the "time of the first fixation within an AOI".
Despite widespread plans of big companies like Amazon and Google to develop unmanned delivery drones, scholarly research in this field is scarce, especially in the information systems field. From technical and legal perspectives, drone delivery in last-mile scenarios is in a fairly mature state. However, estimates of user acceptance vary between high skepticism and exaggerated optimism. This research follows a mixed-method approach consisting of both qualitative and quantitative research to identify and test determinants of consumer delivery drone service adoption. The qualitative part rests on ten interviews with average consumers who use delivery services on a regular basis. Insights gained from the qualitative part were used to develop an online survey and to assess the influence of associated risks on adoption intentions. The quantitative results show that financial and physical risks in particular impede drone delivery service adoption. Delivery companies that are currently considering a delivery drone service may find these results useful when evaluating usage behaviors in the future market for delivery drones.
On-screen interactive presentations have recently gained immense popularity in the domain of attentive interfaces. These attentive screens adapt their behavior according to the user's visual attention. This thesis aims to introduce an application that enables such attentive interfaces to change their behavior not just according to gaze data but also according to facial features and expressions. The modern era requires new ways of communication and publication for advertisement, and ads need to be tailored to people's interests, age, and gender. When advertising, it is important to get a reaction from the user, but not every user is interested in providing feedback. In such a context, more advanced techniques are required to collect users' feedback effortlessly. The main problem this thesis intends to resolve is to apply advanced techniques of gaze and face recognition to collect data about users' reactions to different ads played on interactive screens. We aim to create an application that enables attentive screens to detect a person's facial features, expressions, and eye gaze. Gaze data reveal interest, while facial features allow age and gender to be estimated. All this information will help in optimizing the advertisements.
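As a minimal stand-in for the (unspecified) face-analysis stack such an application needs, the following sketch detects faces in the screen's camera feed with an OpenCV Haar cascade; expression, age, and gender estimation would be layered on each detected face region:

    import cv2

    # Haar-cascade face detector shipped with OpenCV (illustrative choice)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # the attentive screen's camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            face = frame[y:y + h, x:x + w]  # crop to feed downstream models
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("attentive-screen", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()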
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for their participation in RoboCup@Home 2019 in Sydney, Australia. We finished first in the Open Platform League at RoboCup@Home 2019 and have now won the competition in our league three times in a row (four times in total), which makes our team the most successful in RoboCup@Home. In the finals we demonstrated approaches for learning from demonstration, touch-enforcing manipulation, and autonomous semantic exploration. A special focus is put on novel system components and the open source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface, gesture recognition, and imitation learning. The packages are available (and new packages will be released) at http://homer.uni-koblenz.de.
The Fachbereich 4: Informatik (Department of Computer Science) conducts research and teaching across a diversified spectrum of the digitalization of business, society, and public administration. The topics include basic research as well as application-oriented and interdisciplinary research and teaching. They range from core computer science to graphical and economic aspects of information processing, as the following figure illustrates (the individual institutes and their associated working groups introduce themselves in the subsequent chapters):