Use of Big Data in Marketing: Theoretical Foundations, Fields of Application, and Best Practices
(2015)
The increasing digitalization of everyday life and the omnipresent data generation that accompanies it offer companies, and marketing departments in particular, the chance to obtain information about their customers and products in previously unknown abundance. Extracting such information from huge volumes of data, made possible by new technologies, has become established under the term Big Data.
This thesis analyzes this development with regard to its potential for the business discipline of marketing. To this end, the theoretical foundations of the use of Big Data in marketing are identified, and fields of application and best-practice solutions are derived from them. The investigation is based on a literature review on the topic of Big Data marketing, which includes various studies and surveys as well as expert opinions and future forecasts. The literature is first analyzed with respect to the theoretical foundations of the Big Data construct.
Subsequently, the suitability of Big Data solutions for use in companies is examined before their application in marketing is specified and analyzed. It was found that, based on its theoretical characteristics, Big Data is well suited for use in marketing, above all because of the detailed information it provides on customers' behavioral patterns and purchase decisions. Furthermore, potential fields of application were identified, particularly in the areas of customer orientation and market research. With regard to best-practice solutions, a rough guideline for integrating Big Data into the organizational structure of a company was developed. In conclusion, Big Data is highly relevant to marketing and will significantly shape it in the future.
This thesis conducts a text and network analysis of criminological files, with a specific focus on the field of money laundering. The analysis identified the most important concepts present in the text, which were classified into eleven different classes. The relationships between those concepts were analysed using ego networks, key entity identification, and clustering. Some of the statements made about money laundering could be validated by the findings of this analysis and their interpretation. Specific concepts such as banks, organizations, and foreign subsidiaries were identified. Combining these concepts with the statements in chapter 1.4.3 on the circular process of money laundering, it can be stated that different organizations and individuals appearing in the criminological files placed money through various banks, organizations, and investments in the legal financial market. Finally, this thesis attempts to assess the benefits of the tools used for this kind of research process; an estimate of the applicability of ORA and AutoMap for such research is given at the end.
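To illustrate the kind of network analysis described, the sketch below builds a toy concept graph with networkx, ranks key entities by degree centrality, and extracts an ego network. All concepts and edges are invented for illustration; they are not taken from the thesis data.

```python
# Minimal sketch of ego-network and key-entity analysis (hypothetical data).
import networkx as nx

# Toy concept network: nodes are extracted concepts, edges connect
# concepts that co-occur in the criminological files.
G = nx.Graph()
G.add_edges_from([
    ("bank", "organization"),
    ("bank", "foreign subsidiary"),
    ("organization", "investment"),
    ("investment", "financial market"),
    ("bank", "individual"),
])

# Key entity identification via degree centrality.
centrality = nx.degree_centrality(G)
key_entities = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("key entities:", key_entities)

# Ego network (radius 1) around the most central concept.
ego = nx.ego_graph(G, key_entities[0], radius=1)
print("ego network nodes:", sorted(ego.nodes))
```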
In this thesis we present an approach to track an RGB-D camera in 6DOF and construct 3D maps. We first acquire, register, and synchronize RGB and depth images. After preprocessing, we extract FAST features and match them between two consecutive frames. By depth projection we regain the z-value for the inlier correspondences. Afterwards we estimate the camera motion by least-squares 3D point-set alignment of the correspondence set. This local motion estimate is incrementally applied to a global transformation. Additionally, we present methods to build maps based on point cloud data acquired by an RGB-D camera. For map creation we use the OctoMap framework and optionally create a colored point cloud map. The system is evaluated with a widely used RGB-D benchmark.
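The least-squares point-set alignment could, for instance, be computed with the standard SVD-based (Kabsch/Umeyama) solution sketched below. This is a generic illustration assuming given inlier correspondences, not necessarily the exact formulation used in the thesis; the test points and motion are invented.

```python
# Sketch: rigid motion from matched 3D points (Kabsch/Umeyama via SVD).
import numpy as np

def estimate_rigid_motion(P, Q):
    """Return R, t minimizing sum ||R @ p_i + t - q_i||^2 (P, Q: n x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known motion (90 degrees about z plus a shift).
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
Q = P @ R_true.T + np.array([0.5, 0., 0.])
R, t = estimate_rigid_motion(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))        # True [0.5 0.  0. ]
```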
In this work, a framework is developed for creating an evaluation scheme for text-processing tools. The evaluation scheme follows a model-dependent software evaluation approach, where the model-dependent part focuses on the text-processing process derived from the Conceptual Analysis Process developed in the GLODERS project. A German court document covering two incidents of extortion racketeering from 2011 and 2012 serves as input data. The evaluation of six different tools shows that one tool delivers very good results for the given dataset when compared to manual results: it is able to identify and visualize relations between concepts without any additional manual work. Other tools also deliver good results with minor drawbacks. The biggest drawback for some tools is the unavailability of models for the German language; they can perform automated tasks only on English documents. Nonetheless, some tools can be extended with self-written code, which allows users with development experience to apply additional methods.
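A common way to compare tool output against manual results is a simple precision/recall computation over extracted relations, as in the hypothetical sketch below; the relation tuples are invented and not taken from the court document.

```python
# Toy evaluation: tool-extracted relations vs. a manual gold standard.
gold = {("Org_A", "extorts", "Shop_1"), ("Org_A", "extorts", "Shop_2")}
tool = {("Org_A", "extorts", "Shop_1"), ("Org_A", "meets", "Shop_2")}

tp = len(gold & tool)                    # true positives
precision = tp / len(tool)               # share of output that is correct
recall = tp / len(gold)                  # share of gold that is found
print(round(precision, 2), round(recall, 2))  # 0.5 0.5
```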
Path planning remains a central issue in current research on autonomous mobile robots.
This master's thesis deals with various path planning algorithms for the navigation of such mobile systems. The goal is not only to determine a collision-free trajectory from one point to another: the path should also be optimal and comply with all vehicle-imposed constraints. Autonomous driving in an unknown, dynamic environment poses a particular challenge, because closed-loop control is required and the planner must therefore itself react dynamically.
Two families of algorithms are presented. First, path planners based on the common graph-search algorithm A*: A* itself, Anytime Repairing A*, Lifelong Planning A*, D* Lite, Field D*, and Hybrid A*. Second, algorithms based on the probabilistic planning algorithm Rapidly-exploring Random Tree (RRT, RRT*, Lifelong Planning RRT*), together with several extensions and heuristics. In addition, methods for collision avoidance and path smoothing are presented. Finally, these algorithms are evaluated and compared with each other.
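All of the A*-family planners named above share the same best-first core, which the following minimal grid-based sketch illustrates; it is a plain A* with a Manhattan heuristic, not one of the more elaborate variants evaluated in the thesis.

```python
# Minimal A* on a 4-connected grid ('#' marks obstacles).
import heapq

def astar(grid, start, goal):
    """Return the length of a shortest collision-free path, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start)]                 # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):        # stale queue entry
            continue
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))                    # 5
```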
Code package managers like Cabal track dependencies between packages. But packages rarely use all of the functionality that their dependencies provide. This leads to unnecessary compilation of unused parts and to speculative conflicts between package versions where no real conflict exists. In two case studies we show how relevant these two problems are. We then describe how both could be avoided by tracking dependencies not between packages but between individual code fragments.
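The idea of fragment-level dependencies can be sketched as follows: instead of recording that one package depends on another, record which individual definitions are actually referenced. The toy example below approximates this for Python-style sources by scanning import statements; the module and function names are invented, and the thesis itself targets Haskell packages managed by Cabal.

```python
# Sketch: extract fragment-level (definition-level) dependencies
# from "from <module> import <names>" statements.
import re

SRC = """\
from parser import parse_expr
from pretty import render, indent
"""

IMPORT = re.compile(r"^from\s+(\w+)\s+import\s+([\w, ]+)", re.MULTILINE)

deps = {}  # module -> set of referenced fragments
for module, names in IMPORT.findall(SRC):
    deps.setdefault(module, set()).update(n.strip() for n in names.split(","))

print(deps)  # {'parser': {'parse_expr'}, 'pretty': {'render', 'indent'}}
```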
This master's thesis investigates the topic of intercultural web design. Two websites from different countries are compared as examples. On the basis of cultural dimensions, cultural differences are examined on each website. The analysis particularly focuses on the extent to which the respective website designers and operators take their users' cultural differences into account and pursue a cross-cultural web design. The analysis illustrates which cultural, and particularly intercultural, aspects of the countries were considered in the design of the websites. The investigation led to the conclusion that such considerations were not consistently implemented on all websites. Hence, this thesis offers suggestions for improving the aspects that matter most in intercultural web design.
The lasting hype around the mobile internet and the related technology of mobile applications shows no sign of abating. The immense economic potential of this market leads businesses and ventures to continuously seek new ways of monetization. The underlying causes of this phenomenon, however, are rarely examined. Scientific research in the field of the "ubiquitous mobile" has not yet developed a clear overall picture of the cause-and-effect chains. Attempts to derive causes from studies of related mass media such as the computer or the internet have been discussed controversially. By combining the research streams of media usage motives and customer retention, this paper presents a new research model. Based on a quantitative survey in the German-speaking area, the data show that the motives of mobility, information gathering, and entertainment are the most important drivers of customer satisfaction with mobile applications. The paper also highlights a significant correlation between customer satisfaction and the other determinants of customer retention.
The increasing anthropogenic demand for chemicals has created serious environmental problems, with repercussions for the health of ecosystems, especially aquatic ones. As a result, the awareness of the public and of decision makers regarding the risks of chemical pollution has increased over the past half-century, prompting a large number of studies in the field of ecological toxicology (ecotoxicology). However, the majority of ecotoxicological studies are laboratory based, and the few studies extrapolating toxicological effects to the field are limited to local and regional levels. Chemical risk assessment on large spatial scales remains largely unexplored, and therefore the potential large-scale effects of chemicals may be overlooked.
To answer ecotoxicological questions, multidisciplinary approaches that transcend classical chemical and toxicological concepts are required. For instance, the current models for toxicity predictions - which are mainly based on the prediction of toxicity for a single compound and species - can be expanded to simultaneously predict the toxicity for different species and compounds. This can be done by integrating chemical concepts such as the physicochemical properties of the compounds with evolutionary concepts such as the similarity of species. This thesis introduces new, multidisciplinary tools for chemical risk assessments, and presents for the first time a chemical risk assessment on the continental scale.
After a brief introduction of the main concepts and objectives of the studies, this thesis starts by presenting a new method for assessing the physiological sensitivity of macroinvertebrate species to heavy metals (Chapter 2). To compare the sensitivity of species to different heavy metals, toxicity data were standardized to account for differing laboratory conditions. The resulting sensitivity rankings did not differ significantly between heavy metals, allowing physiological sensitivity to be aggregated into a single ranking.
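The comparison of sensitivity rankings across metals can be illustrated with rank statistics, as in the sketch below; the species values are invented, and the thesis works with standardized toxicity data rather than these toy numbers.

```python
# Toy comparison of species sensitivity rankings for two metals.
from scipy.stats import rankdata, spearmanr

# Hypothetical standardized sensitivities (lower = more sensitive)
# for five macroinvertebrate taxa.
cadmium = [0.2, 1.5, 0.8, 3.0, 0.1]
copper = [0.3, 1.2, 1.0, 2.5, 0.2]

print(rankdata(cadmium))                 # ranking under cadmium
print(rankdata(copper))                  # ranking under copper
rho, p = spearmanr(cadmium, copper)
print(round(rho, 2), round(p, 3))        # similar rankings -> high rho
```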
Furthermore, the toxicological data for macroinvertebrates were used as input to develop and validate prediction models for heavy metal toxicity, which are currently lacking for a wide array of species (Chapter 3). Apart from the toxicity data, the phylogenetic information of species (their evolutionary relationships) and the physicochemical parameters of the heavy metals were used. The constructed models had good explanatory power for the acute sensitivity of species to heavy metals, with the majority of the explained variance attributed to phylogeny. The integration of evolutionary concepts (relatedness and similarity of species) with the chemical parameters used in ecotoxicology thus improved prediction models for species lacking experimental toxicity data. The ultimate goal of the prediction models developed in this thesis is to provide accurate predictions of toxicity for a wide range of species and chemicals, which is a crucial prerequisite for conducting chemical risk assessment.
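Schematically, such a prediction model can be thought of as a regression of log-transformed sensitivity on a phylogenetic predictor and a physicochemical descriptor, as in the sketch below. All data are simulated for illustration; the thesis fits its models to real toxicity databases.

```python
# Sketch: linear model of sensitivity from phylogeny + metal chemistry.
import numpy as np

rng = np.random.default_rng(0)
n = 40
phylo = rng.uniform(0, 1, n)     # e.g., a phylogenetic eigenvector score
chem = rng.uniform(-1, 1, n)     # e.g., a metal softness index
log_lc50 = 1.5 * phylo - 0.8 * chem + rng.normal(0, 0.1, n)  # simulated

X = np.column_stack([np.ones(n), phylo, chem])   # intercept + predictors
coef, *_ = np.linalg.lstsq(X, log_lc50, rcond=None)
pred = X @ coef
ss_res = ((log_lc50 - pred) ** 2).sum()
ss_tot = ((log_lc50 - log_lc50.mean()) ** 2).sum()
print(np.round(coef, 2), round(1 - ss_res / ss_tot, 3))  # coefficients, R^2
```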
Such an assessment was conducted for the first time on the continental scale (Chapter 4), making use of a dataset of 4,000 sites distributed throughout 27 European countries and 91 respective river basins. Organic chemicals were likely to exert acute risks at one in seven sites analyzed, while chronic risk was prominent at almost half of the sites. The calculated risks are potentially underestimates, owing to the limited number of chemicals routinely analyzed in monitoring programmes and to a series of other uncertainties related to the limit of quantification, the presence of mixtures, and sublethal effects not covered by direct toxicity.
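A site-level screen of this kind boils down to risk quotients: measured concentration divided by an effect threshold, with a quotient of one or more flagging potential risk. The sketch below uses invented thresholds and concentrations purely for illustration.

```python
# Sketch: per-site acute risk quotients (all values hypothetical, ug/L).
acute_threshold = {"chlorpyrifos": 0.1, "diuron": 0.2}
sites = {
    "site_A": {"chlorpyrifos": 0.05, "diuron": 0.5},
    "site_B": {"chlorpyrifos": 0.001, "diuron": 0.01},
}

for site, measured in sites.items():
    # Risk quotient of the single most critical chemical at the site.
    rq = max(c / acute_threshold[chem] for chem, c in measured.items())
    print(site, "acute risk" if rq >= 1 else "no acute risk", round(rq, 2))
```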
Furthermore, chemical risk was related to agricultural and urban areas in the upstream catchments. The analysis of ecological data indicated chemical impacts on the ecological status of the river systems; however, it is difficult to separate the effects of chemical pollution from those of the other stressors that river systems are exposed to. To test the hypothesis of multiple stressors and to investigate the relative importance of organic toxicants, a dataset for German streams is used in Chapter 5. In that study, the risk from abiotic stressors (habitat degradation, organic chemicals, and nutrient enrichment) and biotic stressors (invasive species) was investigated. The results indicated that almost all sites were influenced by more than one stressor. Stream size and ecoregion influenced the distribution of risks: for example, the risks from habitat degradation, organic chemicals, and invasive species increased with stream size, whereas organic chemicals and nutrients were more likely to affect lowland streams. To successfully mitigate the effects of pollutants in river systems, the co-occurrence of stressors has to be considered. Overall, successfully applying integrated water management strategies requires a framework that addresses multiple environmental stressors on large spatial scales. Furthermore, properly addressing current research needs in ecotoxicology requires a multidisciplinary approach that integrates fields such as toxicology, ecology, chemistry, and evolutionary biology.
Software systems are often developed as a set of variants to meet diverse requirements. Two common approaches to this are "clone-and-own" and software product lines. Both approaches have advantages and disadvantages. In previous work, we and our collaborators proposed an idea that combines both approaches to manage variants, similarities, and cloning by using a virtual platform and cloning-related operators.
In this thesis, we present an approach for aggregating the essential metadata to enable a propagate operator, which implements a form of change propagation. To this end, we developed a system for annotating code similarities extracted throughout the history of a software repository. The annotations express similarity maintenance tasks, which can then either be executed automatically by propagate or must be performed manually by the user. In this work we outline the automated metadata extraction process and the system for annotating similarities; we explain how the implemented system can be integrated into the workflow of an existing version control system (Git); and, finally, we present a case study using the 101haskell corpus of variants.
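One ingredient of such metadata extraction is mining the repository history for files that repeatedly change together, which hints at maintained clones. The sketch below approximates this with plain `git log`; it is a simplification of the extraction process, not the thesis's actual system, and must be run inside a Git repository.

```python
# Sketch: count co-changing file pairs across a repository's history.
import subprocess
from collections import defaultdict
from itertools import combinations

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@%H"],
    capture_output=True, text=True, check=True,
).stdout

co_changes = defaultdict(int)  # (file_a, file_b) -> co-change count
files = []
for line in log.splitlines() + ["@end"]:   # sentinel flushes last commit
    if line.startswith("@"):
        for pair in combinations(sorted(set(files)), 2):
            co_changes[pair] += 1
        files = []
    elif line:
        files.append(line)

for (a, b), n in sorted(co_changes.items(), key=lambda kv: -kv[1])[:5]:
    print(n, a, b)
```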