The Internet of Things (IoT) is a fast-growing technological concept that aims to integrate various physical and virtual objects into a global network to enable interaction and communication between those objects (Atzori, Iera and Morabito, 2010). The application possibilities are manifold and may transform society and the economy in a way similar to the internet itself (Chase, 2013). Furthermore, the Internet of Things occupies a central role in the realisation of visionary future concepts such as the Smart City or Smart Healthcare. In addition, the utilisation of this technology promises opportunities for the enhancement of various sustainability aspects, and thus for the transformation towards a smarter, more efficient and more conscious use of natural resources (Maksimovic, 2017). The action principle of sustainability is gaining increasing attention in societal and academic discourse, motivated by the partly harmful consumption and production patterns of the last century (Mcwilliams et al., 2016). With respect to sustainability, the advancing application of IoT technology also poses risks. Following the precautionary principle, these risks should be considered early (Harremoës et al., 2001). Risks of IoT for sustainability include the massive amounts of energy and raw materials required for the manufacturing and operation of IoT objects, as well as the disposal of those objects (Birkel et al., 2019). The exact relations in the context of IoT and sustainability are insufficiently explored to this point and do not constitute a central element within the discussion of this technology (Behrendt, 2019). Therefore, this thesis aims to develop a comprehensive overview of the relations between IoT and sustainability.
To achieve this aim, this thesis utilises the methodology of Grounded Theory in combination with a comprehensive literature review. The analysed literature primarily consists of research contributions in the field of Information Technology (IT). Based on this literature, aspects, solution approaches, effects and challenges in the context of IoT and sustainability were elaborated. The analysis revealed two central perspectives in this context. IoT for Sustainability (IoT4Sus) describes the utilisation of IoT-generated information to enhance sustainability aspects. In contrast, Sustainability for IoT (Sus4IoT) focuses on sustainability aspects of the applied technology and highlights methods to reduce the negative impacts associated with the manufacturing and operation of IoT. The elaborated aspects and relations were illustrated in the comprehensive CCIS Framework. This framework represents a tool for capturing relevant aspects and relations in this context and thus supports awareness of the link between IoT and sustainability. Furthermore, the framework suggests an action principle to optimise the performance of IoT systems regarding sustainability.
The central contribution of this thesis is the provision of the CCIS Framework and the information it contains regarding the aspects and relations of IoT and sustainability.
The Internet of Things (IoT) has recently developed from the far-away vision of ubiquitous computing into very tangible endeavors in politics and the economy, implemented in expensive preparedness programs. Experts predict considerable changes in business models that organizations need to address in order to respond to competition. Although there is a need to develop strategies for the upcoming transformations, the organizational change literature has not yet turned to the specific changes related to this new technology. This work aims at investigating IoT-related organizational change by identifying and classifying different change types. It therefore combines the methodological approach of grounded theory with a discussion and classification of the identified changes, informed by a structured review of the organizational change literature. This includes a meta-analysis of case studies using a qualitative, exploratory coding approach to identify categories of organizational change related to the introduction of IoT. Furthermore, a comparison of the identified categories to former technology-related change is provided, using the examples of Electronic Business (e-business), Enterprise Resource Planning (ERP) systems, and Customer Relationship Management (CRM) systems. As its main result, this work develops a comprehensive model of IoT-related business change. The model presents two main themes of change, indicating that personal smart things will transform businesses by means of more personal devices that suggest and schedule actions for their users and try to avoid hazards. At the same time, the availability of information in organizations will further increase to a state where information is available ubiquitously. This will ultimately enable accessing real-time information about objects and persons anytime and from any place. As a secondary result, this work gives an overview of concepts of technology-related organizational change in the academic literature.
The extensive literature in the data visualization field indicates that the process of creating efficient data visualizations requires the designer to have a large set of skills from different fields (such as computer science, user experience, and business expertise). However, there is a lack of guidance about the visualization process itself. This thesis aims to investigate the different processes for creating data visualizations and to develop an integrated framework that guides the process and enables the user to create more useful and usable data visualizations. Firstly, existing frameworks in the literature are identified, analyzed and compared. During this analysis, eight views of the visualization process are developed; these views represent the set of activities which should be carried out in the visualization process. Then, a preliminary integrated framework is developed based on an analysis of these findings. This new integrated framework is tested in the field of Social Collaboration Analytics on an example from the UniConnect platform. Lastly, the integrated framework is refined and improved based on the results of testing, with the help of diagrams, visualizations and textual descriptions. The results show that the visualization process is not a waterfall-type process but an iterative methodology with certain phases of work, demonstrating how to address the eight views with different levels of stakeholder involvement. The findings form the basis for a visualization process which can be used in future work to develop a fully functional methodology.
The publication of open source software aims to support the reuse, the distribution and the general utilization of software. This can only be enabled by the correct usage of open source software licenses. Therefore, associations provide a multitude of open source software licenses with different features, from which a developer can choose to regulate the interaction with his software. Those licenses are the core theme of this thesis.
After an extensive literature review, two general research questions are elaborated in detail. First, a usage analysis of licenses in the open source sector is conducted to identify current trends and statistics. This includes questions concerning the distribution of licenses, the consistency of their usage, their association over a period of time and their publication.
Afterwards, the recommendation of licenses for specific projects is investigated. To this end, a recommendation logic is presented which incorporates several influences on a suitable license choice in order to generate the most applicable recommendation possible. Besides the exact license features from which a user can choose, different methods of ranking the recommendation results are proposed. This is based on an examination of the current situation of open source licensing and license suggestion. Finally, the logic is evaluated on the exemplary use case of the 101companies project.
Business Process Querying (BPQ) is a discipline in the field of Business Process Management which helps experts to understand existing process models and accelerates the development of new ones. Its queries can fetch and merge these models, answer questions regarding the underlying process, and conduct compliance checking in return. Many languages have been deployed in this discipline, but two language types are dominant: logic-based languages use temporal logic to verify models as finite state machines, whereas graph-based languages use pattern matching to retrieve subgraphs of model graphs directly. This thesis aims to map the features of both language types onto each other to identify strengths and weaknesses. As an example, the features of Computational Tree Logic (CTL) and The Diagramed Modeling Language (DMQL) are mapped to one another. CTL explores the valid state space and thus is better suited for behavioral querying. Lacking certain structural features and counting mechanisms, it is not appropriate for querying structural properties. In contrast, DMQL issues structural queries, and its patterns can reconstruct any CTL formula. However, they do not always achieve exactly the same semantics: patterns treat conditional flow as sequential flow by ignoring its conditions. As a result, retrieved mappings are invalid process execution sequences, i.e. false positives, in certain scenarios. DMQL can be used for behavioral querying if these are absent or acceptable. In conclusion, both language types have strengths and are specialized for different BPQ use cases, but in certain scenarios graph-based languages can be applied to both. Integrating the evaluation of conditions would remove the need for logic-based languages in BPQ completely.
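For orientation, the following is a textbook-style example of the kind of behavioral property that temporal logic can express and that a structural pattern must emulate; the activity names are illustrative assumptions, not taken from the thesis:

```latex
% CTL: on every execution path, whenever an order is approved,
% it is eventually archived (activity names are illustrative).
AG\,(\mathit{approved} \rightarrow AF\,\mathit{archived})
```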
Mapping ORM to TGraph
(2017)
Object Role Modeling (ORM) is a semantic modeling language used to describe objects and their relations amongst each other. Both objects and relations may be subject to rules or ORM constraints.
TGraphs are ordered, attributed, typed and directed graphs. The type of a TGraph and its components, the edges and vertices, is defined using the schema language graph UML (grUML), a profiled version of UML class diagrams. The goal of this thesis is to map ORM schemas to grUML schemas in order to be able to represent ORM schema instances as TGraphs.
Up to this point, the preferred representation for ORM schema instances is in form of relational tables. Though mappings from ORM schemas to relational schemas exist, those publicly available do not support most of the constraints ORM has to offer.
Constraints can be added to grUML schemas using the TGraph query language GReQL, which can efficiently check whether or not a TGraph satisfies the constraint. The graph library JGraLab provides efficient implementations of TGraphs and their query language GReQL and supports the generation of grUML schemas.
The first goal of this work is to perform a complete mapping from ORM schemas to grUML schemas, using GReQL to specify constraints. The second goal is to represent ORM instances in the form of TGraphs.
This work gives an overview of ORM, TGraphs, grUML and GReQL, and of the theoretical mapping from ORM schemas to grUML schemas. It also describes the implementation of this mapping, deals with the representation of ORM schema instances as TGraphs, and addresses the question of how grUML constraints can be validated.
Multi-agent systems are a mature approach to model complex software systems by means of Agent-Oriented Software Engineering (AOSE). However, their application is not widely accepted in mainstream software engineering. Parallel to this, the interdisciplinary field of Agent-based Social Simulation (ABSS) finds increasing recognition beyond the purely academic realm, which starts to draw attention from the mainstream of agent researchers. This work analyzes factors to improve the uptake of AOSE as well as characteristics which separate the two fields AOSE and ABSS, in order to understand the gap between them. Based on the efficiency-oriented micro-agent concept of the Otago Agent Platform (OPAL), we have constructed a new, modern and self-contained micro-agent platform called µ². The design takes technological trends into account and integrates representative technologies, such as the functionally-inspired JVM language Clojure (with its Transactional Memory), asynchronous message passing frameworks and the mobile application platform Android. The mobile version of the platform shows an innovative approach to allow direct interaction between Android application components and micro-agents by mapping their related internal communication mechanisms. This empowers micro-agents to exploit virtually any capability of mobile devices for intelligent agent-based applications and robotics, or simply to act as a distributed middleware. Additionally, relevant platform components for the support of social simulations are identified and partially implemented. To show the usability of the platform for simulation purposes, an interaction-centric scenario representing group-shaping processes in a multi-cultural context is provided. The scenario is based on Hofstede's concept of 'Cultural Dimensions'. It not only confirms the applicability of the platform for simulations but also reveals interesting patterns for culturally augmented in- and out-group agents. This explorative research advocates the potential of micro-agents as a powerful general system modelling mechanism while bridging the convergence between mobile and desktop systems. The results stimulate future work on the micro-agent concept itself, the suggested platform and the deeper exploration of mechanisms for seamless interaction of micro-agents with mobile environments. Last but not least, the further elaboration of the simulation model, as well as its use to augment intelligent agents with cultural aspects, offers promising perspectives for future research.
The goal of this master thesis was to develop a CRM system for the Assist team of CompuGroup Medical that aids in integrating open innovation into the development of the Minerva 2.0 software. To achieve this, CRM methodology has been combined with Social Networking Systems, following the research of Lin and Chen (2010, pp. 11–30). To achieve the predefined goals, literature has been analyzed on how to successfully implement a CRM system as well as an online community. Subsequently, the results have been applied to the development of the Minerva Community according to the guidelines of Design Science suggested by Hevner et al. (2004, pp. 75–104). The finished product is designed based on customer and management requirements and evaluated from a customer and company perspective.
While Virtual Reality has been around for decades, it gained new life in recent years. The release of the first consumer hardware devices allows fully immersive and affordable VR for the user at home. This availability led to a new research focus on technical problems as well as psychological effects. The concepts of presence, describing the feeling of being in the virtual place, and body ownership, together with their impact, have been central topics in research for a long time and are still not fully understood.
To enable further research in the area of Mixed Reality, we want to introduce a framework that integrates the user's body and surroundings inside a visually coherent virtual environment. As one of two main aspects, we want to merge real and virtual objects into a shared environment in such a way that they are no longer visually distinguishable. To achieve this, the main focus is not on high graphical fidelity but on a simplified representation of reality. The essential question is: what level of visual realism is necessary to create a believable mixed reality environment that induces a sense of presence in the user? The second aspect considers the integration of virtual persons. Can characters be recorded and replayed in such a way that they are perceived as believable entities of the world and therefore act as a part of the user's environment?
The purpose of this thesis was the development of a framework called Mixed Reality Embodiment Platform. This initial system implements fundamental functionalities to be used as a basis for future extensions to the framework. We also provide a first application that enables user studies to evaluate the framework and contribute to the aforementioned research questions.
In this work, a framework is developed and used to create an evaluation scheme for text processing tools. The evaluation scheme is developed using a model-dependent software evaluation approach; the focus of the model-dependent part is the text-processing process, which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data, a German court document is used containing two incidents of extortion racketeering which happened in 2011 and 2012. The evaluation of six different tools shows that one tool offers excellent results for the given dataset when compared to manual results: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also offer good results with minor drawbacks. The biggest drawback for some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be enhanced with self-written code, which allows users with development experience to apply additional methods.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved by using continuous depth and color data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. The accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
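Since the thesis builds on particle swarm optimization, a minimal generic PSO sketch may help orient the reader. This is the textbook form of the technique, not the modified variant of Khan et al.; the parameter values and the sphere-function example are illustrative assumptions:

```python
# Minimal, generic particle swarm optimization sketch (textbook PSO,
# not the modified variant of Khan et al.; all names are illustrative).
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = np.random.uniform(-1, 1, (n_particles, dim))   # particle positions
    vel = np.zeros((n_particles, dim))                   # particle velocities
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Example: minimize the sphere function in 6 dimensions (cf. a 6DOF pose
# vector of translation and rotation parameters).
print(pso(lambda x: float(np.sum(x**2)), dim=6))
```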
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics that is able to model materials that were previously very challenging to animate [1, 2]. Apart from simulating singular materials, the simulation of multiple materials that interact with each other introduces new challenges. This is the focus of this thesis. It will be shown that the self-collision capabilities of the MPM can naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It will furthermore be shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be achieved if multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward and simple way and takes advantage of the rasterization pipeline to resolve write-conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
Knowledge-based authentication methods are vulnerable to the shoulder surfing phenomenon.
The widespread usage of these methods, combined with a failure to address their limitations, could result in the user's information being compromised. A user authentication method ought to be effortless to use and efficient, yet secure.
The problem we face concerning the security of PIN (Personal Identification Number) or password entry is shoulder surfing, in which a direct or indirect malicious observer could identify sensitive user information. To tackle this issue we present TouchGaze, which combines gaze signals and touch capabilities as an input method for entering the user's credentials. Gaze signals are primarily used to enhance targeting, and touch for selecting. In this work, we have designed three different PIN entry methods, all of which have similar interfaces. For the evaluation, these methods were compared based on efficiency, accuracy, and usability. The results uncovered that although gaze-based methods require extra time for the user to become familiar with, they are considered more secure. In regard to efficiency, they have an error margin similar to the traditional PIN entry methods.
With the appearance of modern virtual reality (VR) headsets on the consumer market, there has been the biggest boom in the history of VR technology. Naturally, this was accompanied by an increasing focus on the problems of current VR hardware. Control in VR, in particular, has always been a complex topic.
One possible solution is the Leap Motion, a hand tracking device that was initially developed for desktop use, but with the last major software update it can be attached to standard VR headsets. This device allows very precise tracking of the user’s hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion to provide a natural method of interaction between the user and the VR environment. After that, subject tests are performed to evaluate their performance and compare them to traditional VR controllers.
This thesis proposes the use of MSR (Mining Software Repositories) techniques to identify software developers with exclusive expertise about specific APIs and programming domains in software repositories. A pilot Tool for finding such “Islands of Knowledge” in Node.js projects is presented and applied in a case study to the 180 most popular npm packages. It is found that on average each package has 2.3 Islands of Knowledge, which is possibly explained by the finding that npm packages tend to have only one main contributor. In a survey, the maintainers of 50 packages are contacted and asked for opinions on the results produced by the Tool. Together with their responses, this thesis reports on experiences made with the pilot Tool and how future iterations could produce even more accurate statements about programming expertise distribution in developer teams.
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually. The approach taken for optimization is then generalized into a recipe that programmers can use as a guide when approaching large and complex GPU programs. In addition, the per-warp job-queue, a design pattern used for load balancing among threads in a CUDA thread block, is discussed in detail.
Object recognition is a well-investigated area in image-based computer vision and several methods have been developed. Approaches based on Implicit Shape Models have recently become popular for recognizing objects in 2D images, which separate objects into fundamental visual object parts and spatial relationships between the individual parts. This knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing methods based on Implicit Shape Model approaches to the task of 3D object recognition.
This thesis explores the possibilities of probabilistic process modelling for Computer Supported Cooperative Work (CSCW) systems in order to predict the behaviour of the users present in the CSCW system. Toward this objective, the applicability, advantages, limitations and challenges of probabilistic modelling are examined in the context of CSCW systems. Finally, as the primary goal, seven models are created and examined to show the feasibility of probabilistic process discovery and of predicting user behaviour in CSCW systems.
The industry standard Decision Model and Notation (DMN) has enabled a new way to formalize business rules since 2015. Here, rules are modeled in so-called decision tables, which are defined by input columns and output columns. Furthermore, decisions are arranged in a graph-like structure (DRD level), which creates dependencies between them. With a given input, the decisions can then be requested by appropriate systems. Thereby, activated rules produce output for future use. However, modeling mistakes produce erroneous models, and these errors can occur in the decision tables as well as at the DRD level. Following the Design Science Research Methodology, this thesis introduces an implementation of a verification prototype for the detection and resolution of these errors during the modeling phase. The basics presented provide the needed theoretical foundation for the development of the tool. This thesis further presents the architecture of the tool and the implemented verification capabilities. Finally, the created prototype is evaluated.
Predictive Process Monitoring is becoming more prevalent as an aid for organizations to support their operational processes. However, most software applications available today require extensive technical know-how from the operator and are therefore not suitable for most real-world scenarios. Therefore, this work presents a prototype implementation of a Predictive Process Monitoring dashboard in the form of a web application. The system is based on the PPM Camunda Plugin presented by Bartmann et al. (2021) and allows users to easily create metrics, visualizations to display these metrics, and dashboards in which visualizations can be arranged. A usability test with test users of different computer skills is conducted to confirm the application's user-friendliness.
Geographic cluster-based routing in ad-hoc wireless sensor networks is a current field of research. Various algorithms for routing in wireless ad-hoc networks based on position information already exist, among them algorithms that use the traditional beaconing approach as well as algorithms that work beaconless (no information about the environment is required besides a node's own position and the destination). Geographic cluster-based routing with guaranteed message delivery can be carried out on overlay graphs as well. Until now, however, the required planar overlay graphs have not been constructed reactively.
This thesis proposes a reactive algorithm, the Beaconless Cluster Based Planarization (BCBP) algorithm, which constructs a planar overlay graph and noticeably reduces the number of messages required for that. Based on an algorithm for cluster-based planarization, it beaconlessly constructs a planar overlay graph in a unit disk graph (UDG). A UDG is a model for a wireless network in which every participant has the same sending radius. Evaluation of the algorithm shows it to be more efficient than the non-beaconless variant. Another result of this thesis is the Beaconless LLRAP (BLLRAP) algorithm, for which planarity but not continued connectivity could be proven.
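To make the network model concrete, here is a minimal sketch of the unit disk graph construction; it illustrates only the UDG model defined above, not the BCBP or BLLRAP algorithms, and the node coordinates are assumptions:

```python
# Sketch of the unit disk graph (UDG) model: two nodes are connected
# iff their distance is at most the common sending radius. This only
# illustrates the network model, not the planarization algorithms.
import itertools
import math

def unit_disk_graph(positions, radius=1.0):
    """positions: dict mapping node id -> (x, y). Returns the UDG edge set."""
    edges = set()
    for (u, pu), (v, pv) in itertools.combinations(positions.items(), 2):
        if math.dist(pu, pv) <= radius:
            edges.add((u, v))
    return edges

nodes = {"a": (0.0, 0.0), "b": (0.8, 0.0), "c": (2.0, 0.0)}
print(unit_disk_graph(nodes))  # {('a', 'b')}: node 'c' is out of range
```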
In this thesis the possibilities for real-time visualization of OpenVDB files are investigated. The basics of OpenVDB and its possibilities, as well as NanoVDB and its GPU port, were studied. A system was developed using PNanoVDB, the graphics API port of OpenVDB. Techniques were explored to improve and accelerate a single-ray approach to ray tracing. To prove real-time capability, two single-scattering approaches were also implemented. One of these was selected, further investigated and optimized to achieve interactive real-time rendering. It is important to give artists immediate feedback on their adjustments, as well as the possibility to change all parameters, to ensure a user-friendly creation process. In addition to the optical rendering, corresponding benchmarks were collected to compare different improvement approaches and to prove their relevance. Attention was paid to rendering times and memory consumption on the GPU to ensure optimal use. When rendering OpenVDB files, a special focus was put on the integrability and extensibility of the program, to allow easy integration into an existing real-time renderer like U-Render.
Constructing a business process model manually is a highly complex and error-prone task which takes a lot of time and deep insight into the organizational structure, its operations and business rules. To improve the output of business analysts dealing with this process, researchers have introduced different techniques to support them during construction with helpful recommendations. These supporting recommendation systems vary in what they recommend in the first place as well as in the calculations taking place under the hood to recommend the most fitting element to the user. After a broad introduction into the field of business process modeling and its basic recommendation structures, this work takes a closer look at diverse proposals and descriptions published in current literature regarding implementation strategies to effectively and efficiently assist modelers during their business process model creation. A critical analysis of the selected literature points out the strengths and weaknesses of the approaches and of the studies describing them. As a result, the final concept matrix in this work gives a precise and helpful overview of the key features and recommendation methods used and implemented in previous research studies, to pinpoint an entry into future work without the downsides already spotted by fellow researchers.
Currently, more than 850 biological databases exist. The majority of biological knowledge is not in these databases but rather contained as free text in the scientific literature. For systems biology tasks it is often necessary to integrate and extract data from heterogeneous databases and free text, as well as to analyse the information in the context of experimental data. ONDEX is an integration framework which aims to address these challenges by combining features of database integration, text mining and sequence analysis with methods for graph-based data analysis and visualisation. The main topics of this diploma thesis are the redesign of the ONDEX backend, the development of a data exchange format, the development of a query environment and the provision of Web services for data integration, data exchange and queries. These Web services allow backend workflow control from both local and remote workstations.
Public electronic procurement (eProcurement), here electronic sourcing (eSourcing) in particular, is almost certainly on the agenda when eGovernment experts meet. Not surprisingly, eProcurement is the first high-impact service to be addressed in the European Union's recent Action Plan. This is mainly due to the fact that public procurement makes up almost 20% of Europe's GDP and therefore holds huge saving potential. To some extent this potential lies in the common European market, since effective cross-border eSourcing solutions can open many doors, both for buyers and suppliers. To achieve this, systems, processes and tools need to be adaptable and transferable, as well as able to communicate with each other. In one word, they need to be interoperable. In many relevant domains, interoperability has reached a very positive level: standards have been established and workflows put in place. In other domains, however, there is still a long road ahead. As a consequence, it is crucial to define requirements for such interoperable eSourcing systems and to identify the progress in research and practice.
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts. Unfortunately, the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, most SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. Thus, the training data is assumed to be the result of any segmentation algorithm, or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. In order to account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM. Therefore, the probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more variations of shape examples. To assess the quality of the models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data. The results show that especially WRPCA is a powerful tool to handle corrupted parts and yields reasonable models, which have a higher quality than the initial segmentations.
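As background for the reconstruction step, the standard (unweighted) Robust PCA decomposition can be stated as a convex program; this is textbook material, and the WRPCA used in the thesis is a weighted variant of this formulation:

```latex
% Standard Robust PCA: split the data matrix M into a low-rank part L
% (the clean shape statistics) and a sparse part S (gross segmentation
% errors). \|L\|_* is the nuclear norm, \lambda balances the two terms.
\min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad M = L + S
```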
Using semantic data from general-purpose programming languages does not provide the unified experience one would want for such an application. Static error checking is lacking, especially with regard to the static typing of the data. Based on the previous work of λ-DL, which integrates semantic queries and concepts as types into a typed λ-calculus, this work takes its ideas a step further to meld them into a real-world programming language. This thesis explores how λ-DL's features can be extended and integrated into an existing language, researches an appropriate extension mechanism and produces Semantics4J, a JastAdd-based Java language extension for type-safe programming with semantic OWL data, together with examples of its usage.
In scientific data visualization, huge amounts of data are generated, which implies the task of analyzing them in an efficient way. This includes the reliable detection of important parts and a low expenditure of time and effort. It is especially important for the large seismic volume datasets that are required for the exploration of oil and gas deposits. Since the generated data is complex and a manual analysis is very time-intensive, a semi-automatic approach could on the one hand reduce the time required for the analysis and on the other hand offer more flexibility than a fully automatic approach.
This master's thesis introduces an algorithm which is capable of locating regions of interest in seismic volume data automatically by detecting anomalies in local histograms. Furthermore, the results are visualized and a variety of tools for the exploration and interpretation of the detected regions are developed. The approach is evaluated in experiments with synthetic data and in interviews with domain experts on the basis of real-world data. Finally, further improvements to integrate the algorithm into the seismic interpretation workflow are suggested.
The mitral valve is one of the four valves of the human heart. It is located in the left heart chamber and its function is to control the blood flow from the left atrium to the left ventricle. Pathologies can lead to malfunctions of the valve, so that blood can flow back into the atrium. Patients with faulty mitral valve function may suffer from fatigue and chest pain. The functionality can be surgically restored, which is often a long and exhausting intervention. Thorough planning is necessary to ensure a safe and effective surgery. This can be supported by creating pre-operative segmentations of the mitral valve, while a post-operative analysis can determine the success of an intervention. This work combines existing and new ideas to propose a new approach for (semi-)automatically creating such valve models. The manual part guarantees a high-quality model and reliability, whereas the automatic part contributes to saving valuable labour time.
The main contributions of the automatic algorithm are an estimated semantic separation of the two leaflets of the mitral valve and an optimization process that is capable of finding a coaptation line and area between the leaflets. The segmentation method can perform a fully automatic segmentation of the mitral leaflets if the annulus ring is already given. The intermediate steps of this process are integrated into a manual segmentation method, so that a user can guide the whole procedure. The quality of the valve models generated by the method proposed in this work is measured by comparing them to completely manually segmented models. This shows that commonly used methods to measure the quality of a segmentation are too general and do not suffice to reflect the real quality of a model. Consequently, the work at hand introduces a set of measurements that can qualify a mitral valve segmentation in more detail and with respect to anatomical landmarks. Besides the intra-operative support for a surgeon, a segmented mitral valve provides additional benefits: the ability to obtain and objectively describe the valve anatomy patient-specifically may form the basis for future medical research in this field, and automation allows large data sets to be processed with reduced expert dependency. Further, simulation methods that use the segmented models as input may predict the outcome of a surgery.
FinTech is deemed an underexplored phenomenon in both academic and practical environments. Between (1) “Sustainable FinTech”, the application of information technology as innovation in established financial services providers' business operations, and (2) “Disruptive FinTech”, the provision of financial products and services by non-incumbents, which in most cases are information technology entrepreneurs, the former receives more attention. In order to contribute to the Disruptive FinTech category, this thesis strives to examine an Entrepreneurial Strategy framework applied to technology players taking part in the Vietnamese financial market.
This Master Thesis is an exploratory study to determine whether it is feasible to construct a subjectivity lexicon using Wikipedia. The key hypothesis is that all quotes in Wikipedia are subjective and all regular text is objective. The degree of subjectivity of a word, also known as its ''Quote Score'', is determined based on the ratio of the word's frequency in quotations to its frequency outside quotations. The proportion of words in the English Wikipedia which are within quotations is found to be much smaller compared to those which are not in quotes, resulting in a right-skewed distribution and a low mean value of Quote Scores.
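A minimal sketch of how such a Quote Score could be computed from token counts; the smoothing constant and the toy token lists are assumptions for illustration, not details from the thesis:

```python
# Illustrative computation of a "Quote Score" as described above: the
# ratio of a word's frequency inside quotations to its frequency outside.
# The smoothing constant and the toy token lists are assumptions.
from collections import Counter

def quote_scores(quoted_tokens, unquoted_tokens, smoothing=1.0):
    in_q, out_q = Counter(quoted_tokens), Counter(unquoted_tokens)
    vocab = set(in_q) | set(out_q)
    return {w: (in_q[w] + smoothing) / (out_q[w] + smoothing) for w in vocab}

quoted = ["terrible", "idea", "great"]
unquoted = ["idea", "idea", "article", "great"]
# Higher score = word appears relatively more often inside quotes.
print(quote_scores(quoted, unquoted))
```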
The methodology used to generate the subjectivity lexicon from the text corpus of English Wikipedia is designed in such a way that it can be scaled and reused to produce similar subjectivity lexica for other languages. This is achieved by abstaining from domain- and language-specific methods, apart from using readily available English dictionary packages to detect and exclude stopwords and non-English words in the Wikipedia text corpus.
The subjectivity lexicon generated from English Wikipedia is compared against other lexica, namely MPQA and SentiWordNet. It is found that words which are strongly subjective tend to have high Quote Scores in the subjectivity lexicon generated from English Wikipedia. There is a large observable difference between the distribution of Quote Scores for words classified as strongly subjective and the distribution for words classified as weakly subjective or objective. However, weakly subjective and objective words cannot be differentiated clearly based on Quote Score. In addition, a questionnaire is commissioned as an exploratory approach to investigate whether the subjectivity lexicon generated from Wikipedia could be used to extend the word coverage of existing lexica.
The status of Business Process Management (BPM) recommender systems is not quite clear, as research shows. Recommenders became familiar to the world during the rise of technological evolution in the past decade, and ever since then several BPM recommender systems have come about. However, not a lot of research has been conducted in this field: it is not well known how broad the range of technologies used is, nor how they are used. Therefore, this master's thesis aims at surveying the existing BPM recommender systems. The recommendations come in different shapes: they can be position-based, where an element is to be placed before or after another element or to autocomplete a missing link, or they can be textual, to fill the labels of the elements. The literature review for BPM recommender systems took place under the guidance of a literature review framework which suggests five consecutive stages. The first stage is defining a scope for the research. The second is conceptualizing the topic by choosing key terms for the literature search. The third is the search itself. The fourth is choosing analysis features over which the literature is to be synthesized and compared. Finally, the framework recommends defining the research agenda to describe the reason for the literature review. By applying this methodology, this master's thesis surveyed 18 BPM recommender systems. The survey found that there
are not many different technologies for implementing the recommenders. It also found that the majority of the recommenders suggest nodes that are yet to come in the model, which is called forward recommending, and that textual recommendations for BPM labels are scarcely used. Finally, 18 recommenders are fewer than expected for a developing field, so the survey also found a shortage in the number of BPM recommender systems. The results indicate shortcomings in several aspects of the field of BPM recommender systems; on this basis, this master's thesis recommends future work that builds on these results.
Data visualization is an effective way to explore data. It helps people gain valuable insight into the data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job. Users have to manually explore all possible visualizations and decide upon ones that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and appealing appearance. To address these challenges we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a data set, creates a list of combinations of data attributes for scoring, and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated on these criteria. Only those visualizations that received a sufficient score are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results presented.
Tracking is an integral part of many modern applications, especially in areas like autonomous systems and Augmented Reality. There is a wide array of approaches for performing tracking; one that has become a subject of research just recently is the utilization of Neural Networks. In the scope of this master thesis, an application is developed which uses such a Neural Network for the tracking process. This also requires the creation of training data as well as the creation and training of a Neural Network. Subsequently, the usage of Neural Networks for tracking is analyzed and evaluated. This includes several aspects: the quality of the tracking for different degrees of freedom is checked, as is the impact of the Neural Network on the application's performance. Additionally, the amount of required training data is investigated, along with the influence of the network architecture and the importance of providing depth data as part of the network's input. This should provide insight into how relevant this approach could be for adoption in future products.
Digital transformation is a prevailing trend in the world, especially in dynamic Asia. Vietnam has recorded remarkable changes in its economy as domestic enterprises have made new strides in the digital transformation process. MB Bank, one of the prestigious financial groups in Vietnam, also takes advantage of digital transformation as an opportunity to break through and become a large-scale technology enterprise, with benefits such as improving the customer experience, increasing the customer base, increasing customer satisfaction, enhancing competitiveness, and building trust and loyalty among customers. However, in MB's transformation process there are also many challenges that require the bank to adopt appropriate policies. It can be said that MB Bank is a typical case study of digital transformation in the banking sector in Vietnam.
Current political issues are often reflected in social media discussions, gathering politicians and voters on common platforms. As these can affect the public perception of politics, the inner dynamics and backgrounds of such debates are of great scientific interest. This thesis takes user-generated messages from an up-to-date dataset of considerable relevance as time series and applies a topic-based analysis of inspiration and agenda setting to them. The Institute for Web Science and Technologies of the University of Koblenz-Landau has collected Twitter data generated by candidates of the 2019 European Parliament Election. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and media on online debates. An algorithm to cluster tweets into topical threads is introduced. Subsequently, Sequential Association Rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
Topic models are a popular tool to extract concepts from large text corpora. These text corpora tend to contain hidden meta groups, and the size relation of these groups is frequently imbalanced. Their presence is often ignored when applying a topic model. Therefore, this thesis explores the influence of such imbalanced corpora on topic models.
The influence is tested by training LDA on samples with varying size relations. The samples are generated from data sets containing large group differences (i.e. language differences) and small group differences (i.e. political orientation). The predictive performance on those imbalanced corpora is judged using perplexity.
The experiments show that the presence of groups in training corpora can influence the prediction performance of LDA. The impact varies due to various factors, including language-specific perplexity scores. The group-related prediction performance changes when the relative group sizes are varied; the actual change varies between data sets.
LDA is able to distinguish between different latent groups in document corpora if differences between groups are large enough, e.g. for groups with different languages. The proportion of group-specific topics is under-proportional to the share of the group in the corpus and relatively smaller for minorities.
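A hedged sketch of the experimental idea described above, using scikit-learn's LDA implementation; the toy two-language corpus and the 90/10 size relation are assumptions for illustration, not the data sets used in the thesis:

```python
# Sketch: train LDA on a corpus with an imbalanced group relation and
# compare held-out perplexity per group. The toy documents and the
# 90/10 split are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

majority = ["economy growth market trade"] * 90   # dominant group
minority = ["fútbol liga partido gol"] * 10       # minority group (other language)

vec = CountVectorizer()
X = vec.fit_transform(majority + minority)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

# Perplexity per group: lower means the model predicts that group better.
print("majority:", lda.perplexity(vec.transform(majority)))
print("minority:", lda.perplexity(vec.transform(minority)))
```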
Tractography on HARDI data
(2011)
Diffusion weighted imaging is an important modality in clinical imaging and the only possibility to gain insight into the human brain noninvasively and in vivo. The applications of this imaging technique are diversified. It is used to study the brain, its structure, its development and the functionality of its different areas. Further important fields of application are neurosurgical planning, the examination of pathologies, and the investigation of Alzheimer's disease, strokes, and multiple sclerosis. This thesis gives a brief introduction to MRI and diffusion MRI. Based on this, the most widely used data representation in clinical diffusion MRI, the diffusion tensor, is introduced. As the diffusion tensor suffers from severe limitations, newer techniques subsumed under the term HARDI (high angular resolution diffusion imaging) are introduced and discussed in detail. Further, an extensive introduction to tractography, i.e. approaches that aim at reconstructing neuronal fibers, is given. Based on the knowledge from the theoretical part, established tractography algorithms are redesigned to handle HARDI data and thus improve the reconstruction of neuronal fibers. Among these algorithms, a novel approach is presented that successfully reconstructs fibers on phantom data as well as on human brain data. Further, a novel global classification approach is presented to cluster voxels according to their diffusion properties.
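For background (standard DTI material, not specific to this thesis): the diffusion tensor is a symmetric 3×3 matrix per voxel whose eigenvalues λ₁ ≥ λ₂ ≥ λ₃ describe the diffusion strength along its principal axes; it can encode only a single dominant fiber direction per voxel, which is the key limitation motivating HARDI. A common scalar derived from the tensor is the fractional anisotropy:

```latex
% Fractional anisotropy from the eigenvalues of the diffusion tensor,
% with \bar{\lambda} = (\lambda_1+\lambda_2+\lambda_3)/3.
FA = \sqrt{\tfrac{3}{2}}\;
  \frac{\sqrt{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}}
       {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```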
Artificial neural networks are a popular field of research in artificial intelligence. The increasing size and complexity of huge models entail certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to choose efficient architectures for different tasks. It proves challenging to solve these problems, and with a lack of insightful representations of neural networks, this state of affairs becomes entrenched. With these difficulties in mind, a novel 3D visualization technique is introduced. Attributes of trained neural networks are estimated by utilizing established methods from the area of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the importance of different parts of the neural network. A combination of the importance values with various methods like edge bundling, ray tracing, 3D impostors and a special transparency technique results in a 3D model representing a neural network. The validity of the extracted importance estimations is demonstrated and the potential of the developed visualization is explored.
Willingness to pay and willingness to accept on a two-sided platform - The use case of DoBeeDo
(2019)
It is widely known that especially for technology-based start-ups, entrepreneurs need to set up the boundaries of the business and define the product/service to offer in order to minimize the risk of failure. The goal of this thesis is to not only emphasize the importance of the business model development and evaluation but also show an example customer validation process for an emerging start-up named DoBeeDo, which is a mobile app operating on a two-sided market. During the process of customer validation a survey has been conducted to evaluate the interest of the target groups as well as the fit of their expectations using the Willingness to Pay and Willingness to Accept measures. The paper includes an analysis and evaluation of the gathered results and assesses whether the execution of the Customer Development Model can be continued.
“Did I say something wrong?” A word-level analysis of Wikipedia articles for deletion discussions
(2016)
This thesis focuses on gaining linguistic insights into textual discussions on a word level. It was of special interest to distinguish messages that constructively contribute to a discussion from those that are detrimental to it. Thereby, we wanted to determine whether “I”- and “You”-messages are indicators for either of the two discussion styles. These messages are nowadays often used in guidelines for successful communication. Although their effects have been successfully evaluated multiple times, a large-scale analysis has never been conducted. Thus, we used Wikipedia Articles for Deletion (short: AfD) discussions together with the records of blocked users and developed a fully automated creation of an annotated data set. In this data set, messages were labelled either constructive or disruptive. We applied binary classifiers to the data to determine characteristic words for both discussion styles. Thereby, we also investigated whether function words like pronouns and conjunctions play an important role in distinguishing the two. We found that “You”-messages were a strong indicator for disruptive messages, which matches their attributed effects on communication. However, we found “I”-messages to be indicative of disruptive messages as well, which is contrary to their attributed effects. The importance of function words could neither be confirmed nor refuted. Other characteristic words for either communication style were not found. Yet, the results suggest that a different model might better represent disruptive and constructive messages in textual discussions.