Bachelor Thesis
Wikipedia is the largest free online encyclopaedia and can be expanded by anyone. A social network exists among the users who create content on a specific Wikipedia language edition. In this social network, users are categorised into different roles: normal users, administrators, and functional bots. Within the network, a user can post reviews, suggestions, or simple messages to the "talk page" of another user. Each language edition in the Wikipedia domain has this type of social network.
In this thesis, characteristics of the three roles are analysed in order to learn how they function in one Wikipedia language network and to apply them to another Wikipedia network to identify bots. Timestamps of created posts are analysed, and noticeable characteristics concerning continuous messages, message rates, and irregular user behaviour are discovered. Through this process we show that the roles differ with respect to these characteristics.
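The timestamp-based characteristics described above can be sketched as follows. This is a minimal illustration, not the thesis's actual analysis code; the window and gap parameters are assumptions:

```python
from datetime import datetime, timedelta

def message_rate(timestamps, window=timedelta(days=1)):
    """Mean number of posts per window over the observed span."""
    if len(timestamps) < 2:
        return float(len(timestamps))
    span = max(timestamps) - min(timestamps)
    windows = max(span / window, 1.0)  # timedelta / timedelta -> float
    return len(timestamps) / windows

def max_burst(timestamps, gap=timedelta(minutes=5)):
    """Longest run of consecutive posts separated by at most `gap`;
    long uninterrupted runs are a plausible signal for bot activity."""
    ts = sorted(timestamps)
    best = run = 1
    for a, b in zip(ts, ts[1:]):
        run = run + 1 if b - a <= gap else 1
        best = max(best, run)
    return best
```

Comparing these statistics per role (normal user, administrator, bot) across two language networks is the kind of analysis the thesis performs at scale.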
The Internet of Things (IoT) is still one of the most relevant topics in economics and research, powered by the increasing demand for innovative services. Cost reductions in the manufacturing of IoT hardware and the development of completely new communication channels have led to billions of devices being connected to the internet. In order to master this new IoT landscape, however, a standardized solution to these challenges must be developed: the IoT Architecture.
This thesis examines the structure, purpose, and requirements of IoT Architecture Models in the global IoT landscape and provides an overview of selected models. For that purpose, a structured literature analysis is conducted within this thesis, including an analysis of three existing research approaches that try to frame this topic and a tool-supported evaluation of IoT Architecture literature with over 200 accessed documents.
Furthermore, a coding of the literature covering 30 different IoT Architecture Models is conducted with the help of the specialised coding tool ATLAS.ti 8. In a final step, these Architecture Models are categorized and compared to each other, showing that the environment of IoT and its architectures grows ever more complex the further the research goes.
Social media platforms such as Twitter or Reddit allow users almost unrestricted access to publish their opinions on recent events or discuss trending topics. While the majority of users approach these platforms innocently, some groups have set their mind on spreading misinformation and influencing or manipulating public opinion. These groups disguise themselves as native users from various countries, spread frequently manufactured articles and strongly polarizing opinions in the political spectrum, and may become providers of hate speech or extreme political positions. This thesis aims to implement an AutoML pipeline for identifying second-language speakers from English social media texts. We investigate style differences of texts across topics and across the platforms Reddit and Twitter, and analyse linguistic features. We employ feature-based models with two datasets: one from Reddit, which mostly includes English conversation from European users, and one from Twitter, newly created by collecting English tweets on selected trending topics in different countries. The pipeline classifies the language family, native language, and origin (native or non-native English speaker) of a given textual input. We evaluate the resulting classifications by comparing the prediction accuracy, precision, and F1 scores of our classification pipeline with traditional machine-learning processes. Lastly, we compare the results from each dataset and find differences in language use across topics and platforms. We obtained high prediction accuracy for all categories on the Twitter dataset and observed high variance in features such as average text length, especially for Balto-Slavic countries.
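A feature-based model of this kind starts from simple stylometric measurements of each text. The following is an illustrative sketch; the feature set and the function-word list are assumptions, not the thesis's actual pipeline:

```python
import re
from statistics import mean

# Common English function words; their relative frequencies are a
# classic stylometric signal for native-language identification.
FUNCTION_WORDS = ["the", "a", "of", "to", "in", "is", "that", "it"]

def style_features(text):
    """Map a post to a small stylometric feature vector:
    average token length, type-token ratio, and function-word rates.
    A downstream classifier is trained on vectors like this one."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return [0.0] * (2 + len(FUNCTION_WORDS))
    avg_len = mean(len(t) for t in tokens)
    ttr = len(set(tokens)) / len(tokens)  # vocabulary richness
    rates = [tokens.count(w) / len(tokens) for w in FUNCTION_WORDS]
    return [avg_len, ttr] + rates
```

An AutoML pipeline would then search over classifiers and hyperparameters on top of such vectors instead of fixing one model by hand.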
The Internet of Things (IoT) is a concept in which connected physical objects are integrated into the virtual world to become active partakers in business and everyday processes (Uckelmann, Harrison and Michahelles, 2011; Shrouf, Ordieres and Miragliotta, 2014). It is expected to have a major impact on businesses (Council, Nic and Intelligence, 2008), but small and medium enterprises' business models are threatened if they do not adopt the new concept (Sommer, 2015). This thesis therefore aims to showcase a sample implementation of connected devices in a small enterprise, demonstrating its added benefits for the business.
Design Science Research (DSR) is used to develop a prototype based on a use case provided by a carpentry workshop. The prototype comprises a hardware sensor and a web application which the wood shop can use to improve its processes. The thesis documents the iterative process of developing the prototype from the ground up to usable hardware and software.
This contribution provides an example of how IoT can be used and implemented at a small business.
Deformable Snow Rendering
(2019)
Accurate snow simulation is key to capturing snow's iconic visuals. Intricate methods exist that attempt to grasp snow behaviour in a holistic manner, but their computational complexity prevents them from reaching real-time performance. This thesis presents three GPU-based techniques that focus on the deformation of a snow surface in real time. The approaches are examined with regard to their ability to scale with an increasing number of deformation actors and their visual portrayal of snow deformation. The findings indicate that the approaches maintain real-time performance well into several hundred individual deformation actors. However, each approach has individual restrictions that handicap the visual results. An experimental approach is to combine the techniques at a reduced deformation-actor count to benefit from the detailed, merged deformation pattern.
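The core of such a deformation technique is a per-cell rule applied to a height field. Below is a minimal CPU sketch of the idea; the falloff shape and parameters are assumptions, and a real implementation would evaluate the same rule per cell in a compute shader:

```python
import math

def deform(height, actors, radius=2.0, depth=1.0):
    """Stamp a smooth depression into a height grid for each actor.
    `height` is a list of rows; `actors` are (x, y) grid positions.
    Overlapping dents merge by keeping the deepest value per cell."""
    for y, row in enumerate(height):
        for x, h in enumerate(row):
            for ax, ay in actors:
                d = math.hypot(x - ax, y - ay)
                if d < radius:
                    # cosine falloff: deepest under the actor, zero at the rim
                    dent = depth * 0.5 * (1.0 + math.cos(math.pi * d / radius))
                    row[x] = min(row[x], h - dent)
    return height
```

Because every cell is independent, the loop parallelises trivially across GPU threads, which is what lets such techniques scale to hundreds of deformation actors.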
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts: people can hear and interpret human speech and read the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-point-based feature extraction. The thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The presented approach is based on 2D feature points detected in the face; no neutral face is needed as a reference. The system extracts two types of facial parameters: distances between the feature points, and angles between lines connecting the feature points. Both types of parameters are implemented and tested, and the parameters that provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass Support Vector Machine classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The system presented is implemented in C++ and is provided with an interface to the Robot Operating System (ROS).
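The two parameter types, distances and angles between facial feature points, can be illustrated as follows. The landmark names and the specific parameters are hypothetical examples, not necessarily those used in the thesis:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2D feature points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def angle(p, q, r):
    """Angle in degrees at vertex q between the lines q->p and q->r."""
    a1 = math.atan2(p[1] - q[1], p[0] - q[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    deg = abs(math.degrees(a1 - a2)) % 360.0
    return min(deg, 360.0 - deg)

def mouth_parameters(landmarks):
    """Hypothetical parameters: mouth width normalised by eye distance
    (for scale invariance), and the angle at the left mouth corner."""
    width = distance(landmarks["mouth_l"], landmarks["mouth_r"]) \
        / distance(landmarks["eye_l"], landmarks["eye_r"])
    corner = angle(landmarks["mouth_top"], landmarks["mouth_l"],
                   landmarks["mouth_r"])
    return width, corner
```

Vectors of such distances and angles are what the multiclass SVM receives as input, which is why no neutral reference face is required.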
This thesis connects the winemaker's goal of perfect and profitable wine making with an innovative technological application of the Internet of Things. The winemaker's work can thereby be supported and enriched, enabling optimization of the management and planning of his business that was unthinkable until recent years, including close monitoring of the state of different areas of his vineyard, down to the single grapevine. The thesis shows by example how to measure, transmit, store, and make data available, demonstrated with live temperature, air humidity, and soil humidity values from the vineyard. A modular architecture was designed for the presented system, which allows the use of current sensors as well as similar low-voltage sensors that will be developed in the future.
By using IoT devices in the vineyard, the winemaker advances to a new quality of forecasting precision, starting from live data of his vineyard. More importantly, the winemaker can take immediate action when unforeseen severe weather conditions occur; the immediate use of current data is enabled by a cloud infrastructure. The system employs an open service infrastructure and, in contrast to other published commercial approaches, the described solution is based on open source.
As a stand-alone part of this work, a physical prototype for measuring relevant parameters in the vineyard was designed from scratch and developed until it fulfilled the set of specifications. The features and requirements for a functioning, autonomously transmitting data collection device were derived and described, and their fulfilment by the prototype was demonstrated. Through literature research and supporting live interviews with winemakers, theory and practical application were synchronized and validated.
The development of the prototype followed the general principles of electronic device development, in particular the rules of Design Science Research and the principles of Quality Function Deployment. Characteristic principles such as the re-use of proven designs and the material price of the device's building blocks (e.g. housing, Arduino, PCB) were also taken into consideration. Parts reduction, reduced complexity, and simplified assembly, testing, and field service were integrated into the development process through the modular design of the functional vineyard device components, partly with reference to the innovative electrical cabinet construction system Modular-3.
The software architecture is based on three layers, including the TTN infrastructure. The front end is realized as a rich web client using a WordPress plugin; WordPress was chosen for its wide adoption across the internet, enabling fast and easy user familiarization. Relevant quality aspects, such as functionality, extensibility, fulfilment of requirements, usability, and durability of the device and the software, have been tested and discussed.
The prototype was characterized and successfully tested in the laboratory and in the field under different conditions, in order to measure and analyse the fulfilment of all requirements by the selected electronic design and layout.
The presented solution may serve as a basis for future development and application in this special showcase and with similar technologies. An outlook on future work and applications concludes this thesis.
The Apple ][ was one of the first three complete computer systems on the market. From April 1977 onward, it sold several million units over a period of roughly 16 years. This 8-bit home computer was developed by Steve Wozniak and Steve Jobs, who thereby paved the way for the Macintosh and the company known today as Apple.
This thesis describes the implementation of a software emulator for the complete Apple ][ computer system on a single Atmel AVR microcontroller. The biggest challenge is that the microcontroller's clock rate is only slightly higher than that of the hardware to be emulated. This requires an efficient emulation of the CPU and memory management, which is presented together with the runtime environment for the emulation. Furthermore, the hardware realization of the emulator with display and keyboard is explained in detail.
This thesis thus presents the successful development of a portable Apple ][ emulator, from the software and hardware through to a working prototype.
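To illustrate the kind of work such an emulator performs, the following is a minimal fetch-decode-execute loop for a tiny subset of the Apple ]['s 6502 CPU, written as a Python sketch; the thesis's actual implementation is highly optimised AVR code, and real 6502 emulation handles far more opcodes, flags, and addressing modes:

```python
def run(program):
    """Interpret a tiny 6502 subset: LDA #imm, ADC #imm, STA abs, BRK.
    Returns the 64 KiB memory image after the program halts."""
    mem = bytearray(0x10000)
    mem[:len(program)] = program
    a, pc = 0, 0                  # accumulator and program counter
    while True:
        op = mem[pc]; pc += 1     # fetch
        if op == 0xA9:            # LDA #imm: load accumulator
            a = mem[pc]; pc += 1
        elif op == 0x69:          # ADC #imm (carry flag ignored here)
            a = (a + mem[pc]) & 0xFF; pc += 1
        elif op == 0x8D:          # STA abs: store to little-endian address
            addr = mem[pc] | (mem[pc + 1] << 8); pc += 2
            mem[addr] = a
        elif op == 0x00:          # BRK: stop this sketch
            return mem
```

The emulator's central difficulty, doing this dispatch fast enough on a chip barely faster than the original 1 MHz 6502, is what motivates the efficient CPU and memory emulation described above.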
During olive oil production, large amounts of olive mill wastewater (OMW) are generated within a short period of time. OMW has a high nutrient content and could serve as fertilizer when applied on land. However, its fatty and phenolic constituents have adverse effects on soil properties. It is still unknown how seasonal fluctuations in temperature and precipitation influence the fate and effect of OMW components on soil properties in a long-term perspective. An appropriate application season could mitigate the negative consequences of OMW while preserving its beneficial effects. To investigate this, 14 L OMW m⁻² were applied to different plots of an olive plantation in winter, spring, and summer respectively. Hydrological soil properties (water drop penetration time, hydraulic conductivity, dynamic contact angle), physicochemical parameters (pH, EC, soluble ions, phenolic compounds, organic matter), and biological degradation (bait-lamina test) were measured to assess the soil state after OMW application. After one rainy season following OMW application, the soil quality of the summer treatments had significantly decreased compared to the control. This was particularly apparent in a three-fold lower biodegradation performance, ten-fold higher soil water repellency, and four-fold higher content of phenolic compounds. The soil properties of the winter treatments were comparable to the control, which demonstrates the recovery potential of the soil ecosystem. The spring treatments showed a response intermediate between the summer and winter treatments, although no precipitation followed the OMW application in this case. Significant accumulation or leaching effects to deeper soil were not observed. Therefore, the direct application of legally restricted OMW amounts to soil is considered acceptable during the moist seasons.
Further research is needed to quantify the effect of spring treatments and to gain further insight into the composition and kinetics of organic OMW constituents in the soil.
In this study, the author evaluates four visualization methods (FlexViz, the Jambalaya applet, Experimental jOWL TouchGraph, and Plone ontology) from the pool of currently available web-based visualization methods.
Further, this study attempts to answer the following research questions: 1. What are the requirements for ontology visualization? 2. How is a comparative analysis and evaluation carried out? 3. How can a chosen method be tested? Based on the results of the evaluation, the visualization method FlexViz was found to be the most suitable for the given scenario. FlexViz was deployed and integrated within the Plone CMS, and logical parts of the VCD ontology were tested. Finally, FlexViz was analyzed from different stakeholder perspectives. The results showed that FlexViz is a reliable tool for visualizing, understanding, and analyzing a developed ontology because of its user-friendly and interactive interface, but that it performs poorly with regard to the editing, extension, and development process.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost, as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
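The key idea behind such a generator, deriving every object deterministically from a seed, can be sketched as follows. The star classes, orbit formula, and life probability here are illustrative assumptions, not the thesis's scientific models:

```python
import random

def generate_system(galaxy_seed, x, y):
    """Deterministically generate the star system at galaxy cell (x, y).
    Seeding a PRNG per cell means the galaxy never has to be stored:
    the same coordinates always reproduce the same system."""
    rng = random.Random(f"{galaxy_seed}:{x}:{y}")  # str seeds are stable
    return {
        "class": rng.choice("OBAFGKM"),
        "planets": [
            {"orbit_au": round(0.4 + 0.3 * 2 ** i * rng.uniform(0.8, 1.2), 2),
             "has_life": rng.random() < 0.01}   # rare intelligent life
            for i in range(rng.randint(0, 8))
        ],
    }
```

Because generation is pure and repeatable, the game can discard systems the player leaves and regenerate them identically on return.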
This bachelor thesis delivers a comprehensive overview of the Internet of Things (IoT). With the help of a first literature review, important characteristics, architectures, and properties were identified. The main aim of this bachelor thesis is to determine whether the use of IoT in food transport, with due regard to cold-chain compliance, can provide advantages for companies in reducing food waste. For this purpose, a second literature review was carried out covering food transport systems both with and without the use of IoT. Based on this review, a theoretical 'ideal' system for food transport in refrigerated trucks is derived, and the respective technologies used are discussed. The findings of several authors have shown that significant improvements can often be achieved in surveillance, transport in general, or the traceability of food, and that food waste can ultimately be reduced. However, benefits can also be gained from new non-IoT-based technologies. Thus, the main finding of this bachelor thesis is that a theoretical 'ideal' transport system contains a sensible combination of technologies with and without IoT. This system includes a Wireless Sensor Network (WSN) for real-time food monitoring, with an alarm function that triggers when the temperature exceeds a maximum. Real-time monitoring with GPS, coupled with a monitoring center to avoid traffic jams, is another component. Smart, energy-efficient packaging and the use of the new supercooling technology make the system significantly more efficient in reducing food waste. This highlights that, when choosing a transport system for refrigerated food that is as efficient and profitable as possible, companies need not rely on IoT alone. On this basis, it is advisable to combine the systems and technologies used so far with IoT in order to avoid as much food waste as possible.
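The WSN alarm function mentioned above could, in its simplest form, look like the following sketch. The temperature limit and grace period are assumed values, not figures from the thesis:

```python
from datetime import datetime, timedelta

MAX_TEMP_C = 7.0               # hypothetical cold-chain limit for the cargo
GRACE = timedelta(minutes=10)  # brief door-opening spikes are tolerated

def check_alarms(readings):
    """Raise an alarm when the temperature stays above the limit for
    longer than the grace period. `readings` are (timestamp, temp_c)
    pairs in chronological order, e.g. from WSN sensor nodes."""
    alarms, over_since = [], None
    for t, temp in readings:
        if temp > MAX_TEMP_C:
            over_since = over_since or t
            if t - over_since >= GRACE:
                alarms.append(t)
        else:
            over_since = None
    return alarms
```

In a deployed system these alarms would be pushed to the monitoring center alongside the GPS position of the truck.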
This research examines information audit methodologies and information capturing methods for enterprise social software, which are an elementary part of the audit process. Information auditing lacks a standardized definition and methodology because the scope of the audit process is diverse and depends on the organization undertaking the audit. The benefits of information auditing, and the potential challenges of Enterprise 2.0 the audit can overcome, are comprehensive and provide a major incentive for managers to conduct an audit. Information asset registers, the starting point for information auditing, do not specifically focus on social software assets. This research project therefore combines asset registers from different areas to create a new register suitable for the requirements of Enterprise 2.0. The necessary adaptations caused by the new character of the assets are minor. However, the case study applying the asset register for the first time reveals several problematic areas for information auditors completing the register. Rounding off the thesis, a template is developed for setting up new workspaces on enterprise social software systems with appropriate metadata, taking into account the meaningful metadata discovered in the asset register.
The goal of this thesis is to create a recommender system (RS) for business processes based on the existing ProM plugin RegPFA. To accomplish this task, an interface must first be created that sets up and expands a database, receiving as input probabilistic finite automata (PFA) created by RegPFA in tsml format. Secondly, a Java program must be designed that uses this database to recommend the process elements that are most likely to follow a given sequence of process elements.
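The recommendation step can be conveyed with a strongly simplified, order-1 frequency model. The thesis itself works with probabilistic finite automata learned by RegPFA (and implements the tool in Java); this Python sketch only illustrates the idea of recommending likely successors:

```python
from collections import Counter, defaultdict

class ProcessRecommender:
    """Order-1 simplification of a PFA-based recommender: count which
    process element follows each element in the logged traces, then
    recommend the most probable successors for a given sequence."""

    def __init__(self):
        self.successors = defaultdict(Counter)

    def train(self, traces):
        for trace in traces:
            for current, nxt in zip(trace, trace[1:]):
                self.successors[current][nxt] += 1

    def recommend(self, sequence, k=3):
        """Top-k next elements with their estimated probabilities."""
        counts = self.successors[sequence[-1]]
        total = sum(counts.values())
        return [(elem, n / total) for elem, n in counts.most_common(k)]
```

A true PFA additionally carries hidden states, so its predictions can depend on more history than just the last element.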
In this thesis we exercise a wide variety of libraries, frameworks, and other technologies that are available for the Haskell programming language. We show various applications of Haskell in real-world scenarios and contribute implementations and taxonomy entities to the 101companies system. That is, we cover a broad range of the 101companies feature model and define related terms and technologies. The implementations illustrate how different language concepts of Haskell, such as a very strong type system, polymorphism, higher-order functions, and monads, can be used effectively in the development of information systems. In this context we demonstrate both the advantages and the limitations of different Haskell technologies.
Companies try to utilise Knowledge Management (KM) to gain more efficiency and effectiveness in business. The major problem is that most KM projects are rarely, if ever, based on sound analyses or established theories about KM, and there is often a big gap between the expectations and the real outcome of such KM initiatives. The research question to be answered is therefore: What challenges arise in KM projects, which KM requirements can be derived from them, and which recommendations support the goal of meeting the requirements for KM? As a theoretical foundation, a set of KM frameworks is examined. Subsequently, KM challenges from the literature are analysed, and best practices from case studies are used to provide recommendations for action on these challenges. The main outcome of this thesis is a best-practice guideline which allows Chief Knowledge Officers (CKOs) and KM project managers to examine the challenges mentioned in this thesis closely and to find a suitable method to master them in an optimal way. This guideline shows that KM can be influenced positively and negatively in a variety of ways, that mastering KM in a company is a big and far-reaching venture, and that technology, or Information Technology (IT), is only a part of the big picture.
Real-time graphics applications are tending to become more realistic, and approximating real-world illumination becomes more feasible due to improvements in graphics hardware. Using a wide variety of algorithms and ideas, graphics processing units (GPUs) can simulate complex lighting situations, rendering computer-generated imagery with complicated effects such as shadows, refraction, and reflection of light. Reflections in particular improve realism, because they make shiny materials, e.g. brushed metals, wet surfaces like puddles, or polished floors, appear more realistic and reveal information about their properties such as roughness and reflectance. Moreover, reflections can become more complex depending on the view: a wet surface such as a street during rain, for example, will reflect lights depending on the distance of the viewer, resulting in a streakier reflection that looks more stretched the farther the viewer is located from the light source. This bachelor thesis aims to give an overview of the state of the art in rendering reflections. Understanding light is a basic requirement for understanding reflections, and therefore a physical model of light and its reflection is covered in section 2, followed by the motivational section 2.2, which gives visually appealing examples of reflections from the real world and the media. Regarding rendering techniques, the main principle is first explained in section 3, followed by a short overview of a wide variety of approaches that try to generate correct reflections in section 4. This thesis then describes the implementation of three major algorithms that produce plausible local reflections. The developed framework is described in section 5, after which the three algorithms, common methods in most current game and graphics engines, are covered: screen-space reflections (SSR), parallax-corrected cube mapping (PCCM), and billboard reflections (BBR).
After a description of their functional principles, the algorithms are analysed with regard to their visual quality and the possibilities of their real-time application, and finally compared to each other to investigate their respective advantages and disadvantages. In conclusion, the gained experience is summarized by listing the advantages and disadvantages of each technique and giving suggestions for improvements. A short outlook follows, attempting to sketch upcoming real-time rendering techniques for the creation of reflections as specular effects.
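All three techniques start from the same physical building block: mirroring the view direction about the surface normal. A minimal sketch of that shared step:

```python
def reflect(d, n):
    """Mirror incident direction d about the unit surface normal n:
    r = d - 2 (d . n) n. The techniques differ only in how they look
    up what this reflected ray hits."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))
```

SSR marches this reflected ray through the depth buffer, PCCM intersects it with a proxy volume before sampling a cube map, and BBR intersects it with billboard planes.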
The identification of experts for a specific technology or framework produces a large benefit for collaborative software projects, as it reduces the communication overhead otherwise required to identify an expert on the fly. This thesis therefore describes a tool and approach that can be used to identify experts with a specific skill set. It mainly focuses on the skills and expertise of developers who use the Django framework; by adding more rules to our framework, the approach could easily be extended to different technologies or frameworks. The thesis closes with a case study on an open-source project.
As part of this bachelor thesis, a back office for the electronic version of the European Accident Statement was created. A mobile client running on a Windows Mobile phone, as well as a police client, had already been created in other theses. These access the back office to obtain data for a given license plate, such as car data (make, model, year of construction, and images of a 3D model of the car) or the personal data of the respective car owner. The mobile client also sends the accident file to the back office so that the data about an accident can be stored there and processed further. The goal of this thesis was to develop an extensible, modular system that can later be supplemented with further modules to provide new functions. Each of these modules can store arbitrary data in a database and also query and modify this data, without the relational schema of the database having to be changed.
Texture-based text detection in digital images using wavelet features and support vector machines
(2010)
In this bachelor thesis, a new texture-based approach for the detection of text in digital images is presented. The procedure can essentially be divided into two main tasks, the detection of text blocks and the detection of individual words, whereby the individual words are extracted from the detected text blocks. Roughly, the developed method works with multiple support vector machines, which classify possible text regions of an image into real text regions using wavelet-based features. The possible text regions are defined by edge projections with different orientations. The results of the approach are the X/Y coordinates, width, and height of rectangular regions of an image which contain individual words. This knowledge can be processed further, for example by optical character recognition software, to obtain the important and useful text information.
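The edge-projection step used to propose candidate regions can be illustrated as follows. The binary edge map, the threshold, and the grouping rule are illustrative assumptions; the thesis additionally uses projections at different orientations and wavelet-based SVM verification:

```python
def projection_profiles(edges):
    """Row and column sums of a binary edge map. Runs of high values
    mark candidate text lines (rows) and word boundaries (column gaps)."""
    rows = [sum(r) for r in edges]
    cols = [sum(c) for c in zip(*edges)]
    return rows, cols

def text_rows(edges, threshold=2):
    """Group consecutive rows whose edge density reaches `threshold`
    into (top, bottom) bands, i.e. candidate text blocks."""
    rows, _ = projection_profiles(edges)
    bands, start = [], None
    for y, v in enumerate(rows):
        if v >= threshold and start is None:
            start = y
        elif v < threshold and start is not None:
            bands.append((start, y - 1))
            start = None
    if start is not None:
        bands.append((start, len(rows) - 1))
    return bands
```

Each band would then be split into words via the column profile and passed to the SVMs for classification into real text regions.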