Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost, as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
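The core idea of such a simulation, deterministic generation from a seed so that the same galaxy can be recreated without ever storing it, can be sketched in Python as follows. All attribute names, value ranges and probabilities here are illustrative assumptions, not the thesis' actual model:

```python
import random

def generate_system(galaxy_seed, star_index):
    """Deterministically generate one star system from a global seed.

    Attribute names, value ranges and probabilities are illustrative
    assumptions, not taken from the thesis itself.
    """
    rng = random.Random(galaxy_seed * 1_000_003 + star_index)  # per-star stream
    star = {"temperature_k": rng.uniform(2400, 30000), "planets": []}
    for p in range(rng.randint(0, 8)):
        star["planets"].append({
            # orbit spacing loosely inspired by the Titius-Bode rule
            "orbit_au": round(0.3 * 1.7 ** p * rng.uniform(0.8, 1.2), 2),
            "has_life": rng.random() < 0.001,  # rare, to motivate exploration
        })
    return star

# The same seed always yields the same system, so the galaxy need not be stored.
assert generate_system(42, 7) == generate_system(42, 7)
```

Because every star is derived purely from the galaxy seed and its own index, systems can be generated lazily as the player reaches them.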
The development of a game engine is considered a non-trivial problem [3]. The architecture of such simulation software must be able to manage large numbers of simulation objects in real time while dealing with "crosscutting concerns" [3, p. 36] between subsystems. The use of object-oriented paradigms to model simulation objects in class hierarchies has been reported as incompatible with the constantly changing demands of game development [2, p. 9], resulting in anti-patterns and eventual messy refactoring [13].
Alternative architectures using data-oriented paradigms revolving around object composition and aggregation have been proposed as a result [13, 9, 1, 11].
This thesis describes the development of such an architecture with the explicit goals of being simple, being inherently compatible with data-oriented design, and making it possible to reason about performance characteristics. Concepts are formally defined to help analyze the problem and evaluate results. A functional implementation of the architecture is presented together with use cases common to simulation software.
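The composition-based idea behind such an entity-component-system architecture can be sketched minimally as follows; all names are illustrative and the thesis' actual design will differ in detail. Entities are plain identifiers, components live in per-type stores, and systems iterate only over entities that have the components they need:

```python
# Minimal entity-component-system sketch (illustrative, not the thesis' design):
# entities are ids, components live in per-type dictionaries, and a system
# processes exactly the entities possessing all components it requires.

positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)

def create_entity(eid, pos=None, vel=None):
    """Compose an entity from optional components instead of a class hierarchy."""
    if pos is not None:
        positions[eid] = pos
    if vel is not None:
        velocities[eid] = vel

def movement_system(dt):
    # Only entities with BOTH a position and a velocity are touched.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

create_entity(1, pos=(0.0, 0.0), vel=(1.0, 2.0))
create_entity(2, pos=(5.0, 5.0))   # static object: no velocity component
movement_system(dt=0.5)
```

Storing each component type contiguously is what makes this style compatible with data-oriented design: a system's memory access pattern is predictable from the component stores it reads.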
Companies try to utilise Knowledge Management (KM) to gain efficiency and effectiveness in business. The major problem is that most of these KM projects are not, or only rarely, based on sound analyses or established theories about KM. Often there is a big gap between the expectations and the real outcome of such KM initiatives. The research question to be answered is therefore: What challenges arise in KM projects, which KM requirements can be derived from them, and which recommendations support the goal of meeting those requirements? As a theoretical foundation, a set of KM frameworks is examined. Subsequently, KM challenges from the literature are analysed, and best practices from case studies are used to provide recommendations for action on these challenges. The main outcome of this thesis is a best-practice guideline which allows Chief Knowledge Officers (CKOs) and KM project managers to examine the challenges mentioned in this thesis closely and to find a suitable method to master them in an optimal way. This guideline shows that KM can be positively and negatively influenced in a variety of ways, that mastering KM in a company is a big and far-reaching venture, and that technology, in particular Information Technology (IT), is only one part of the big picture.
Wikipedia is the biggest free online encyclopaedia and can be expanded by anyone. The users who create content on a specific Wikipedia language edition form a social network. In this social network, users are categorised into different roles: normal users, administrators and functional bots. Within the network, a user can post reviews, suggestions or simple messages to the "talk page" of another user. Each language edition in the Wikipedia domain has this type of social network.
In this thesis, characteristics of the three roles are analysed in order to learn how they function in one Wikipedia language network, and these characteristics are then applied to another Wikipedia network to identify bots. Timestamps of created posts are analysed, and noticeable characteristics concerning continuous messages, message rates and irregular user behaviour are discovered. Through this process we show that differences between the roles exist for the mentioned characteristics.
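One timestamp-based characteristic of this kind can be sketched as follows. Bots tend to post at near-constant intervals, so the variation of their inter-post gaps is small; the measure used here (coefficient of variation) and the threshold are illustrative assumptions, not the thesis' actual features:

```python
from statistics import pstdev

def gap_variation(timestamps):
    """Regularity of inter-post gaps as the coefficient of variation.

    Near-zero values indicate clock-like, bot-typical posting; large values
    indicate bursty, human-typical posting. Illustrative measure only.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    return pstdev(gaps) / mean

bot_like   = [0, 60, 120, 180, 240]    # one post per minute, like a clock
human_like = [0, 40, 400, 430, 4000]   # irregular bursts and long pauses

assert gap_variation(bot_like) < 0.1 < gap_variation(human_like)
```

In practice such a feature would be computed per user over a whole edit history and combined with message-rate features before thresholding.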
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records-management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system) chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors actually face the industry's concerns: both tools provide functionality for archiving data arising from online collaboration as well as business records-management capabilities, but the integration of those topics is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
How entrepreneurs become successful with their business ideas has been a topic of discussion within entrepreneurial research circles for many years. Business success has been related to psychological characteristics, and many research findings have explained how people become entrepreneurs and, furthermore, how they find success with their business ventures. People have always wanted to know what defines an entrepreneur and, more importantly, what is needed to become a successful one. Curious minds began to search for the recipe for success. Personality traits have been researched for years, leading to the discovery of many diverse and distinct forms which are still recognized today. Researchers also compared the behavioural approach and the analysis of demographic characteristics to success.
This raises the question of what an entrepreneur needs to become successful, which is made more difficult by the fact that entrepreneurship, and who an entrepreneur is, can be difficult to define exactly. Entrepreneurs are diverse and distinct individuals who each possess their own characteristics, behaviour and reasons for starting a business. Many people assume that an entrepreneur is simply someone who starts their own business. While partially correct, this only scratches the surface of what it is to be an entrepreneur.
This thesis answers the following three research questions: 1. Which personality traits were found in the almost sixty years of research? 2. Which triggers do entrepreneurs need to start a new business? 3. Which factors make an entrepreneur successful with their business idea? In reading the thesis, one may also gain a better understanding of what entrepreneurship is, what defines an entrepreneur and what one needs to become a successful entrepreneur.
In this thesis, the literature-reviewing method developed by Wolfswinkel et al. (2011) was used to identify suitable literature and references for the main chapters. The five stages of this grounded-theory method for reviewing literature were applied in an iterative fashion. In step one, inclusion and exclusion criteria for the literature were defined. Then literature matching the criteria of step one was searched for. In step three, the sample was refined. In step four, the literature found was analyzed and coded before the results of this research were presented in the following text. Many entrepreneurial personality traits were discovered and confirmed in the studies conducted over the last sixty years; these include, but are not limited to, need for achievement, locus of control and risk-taking propensity. Motivation is clearly the main trigger to start one's own business. The studies also revealed reasons why an entrepreneur may become successful beyond possessing the right combination of traits: for example, having human, financial and social capital, entrepreneurial behaviour, and the right economic conditions to start a business that will survive for more than five years.
In this thesis we present an approach to track an RGB-D camera in 6DOF and construct 3D maps. We first acquire, register and synchronize RGB and depth images. After preprocessing, we extract FAST features and match them between two consecutive frames. By depth projection we regain the z-value for the inlier correspondences. Afterwards we estimate the camera motion by least-squares 3D point set alignment of the correspondence set. This local motion estimate is incrementally applied to a global transformation. Additionally, we present methods to build maps based on point cloud data acquired by an RGB-D camera. For map creation we use the OctoMap framework and optionally create a colored point cloud map. The system is evaluated with the widespread RGB-D benchmark.
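The least-squares 3D point set alignment step can be sketched with the standard SVD-based (Kabsch) solution; this is a common way to solve that subproblem, shown here with NumPy and noise-free correspondences for illustration, not necessarily the thesis' exact formulation:

```python
import numpy as np

def align(P, Q):
    """Least-squares rigid alignment: find R, t minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 for corresponding 3D points.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# A known rotation about z plus a translation is recovered exactly.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = align(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [1.0, 2.0, 3.0])
```

In the visual-odometry loop, each per-frame estimate (R, t) would be composed onto the accumulated global pose, which is why drift grows without loop closure.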
As part of this bachelor thesis, a back office for the electronic version of the European Accident Statement was developed. A mobile client running on a Windows Mobile phone, as well as a police client, had already been created in previous work. These access the back office to obtain data such as vehicle data (make, model, year of construction and images of a 3D model of the car) for a given licence plate, or the personal data of the respective car owner. The mobile client also sends the accident file to the back office so that the data about an accident can be stored there and processed further. The goal of this work was to develop an extensible, modular system which can later be supplemented with further modules in order to provide new functions. Each of these modules can store arbitrary data in a database and also query and modify it again, without the relational schema of the database having to be changed.
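Module storage that never requires a schema change can be sketched with a generic key-value (entity-attribute-value) table; the table layout, names and sample data below are illustrative assumptions, not the thesis' actual database design:

```python
import sqlite3

# One generic table holds all module data, so adding a new module needs no
# ALTER TABLE. Table and column names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE module_data (
    module    TEXT,
    record_id TEXT,
    attribute TEXT,
    value     TEXT,
    PRIMARY KEY (module, record_id, attribute))""")

def store(module, record_id, **attrs):
    """Store or update arbitrary attributes for one record of a module."""
    db.executemany(
        "INSERT OR REPLACE INTO module_data VALUES (?, ?, ?, ?)",
        [(module, record_id, k, v) for k, v in attrs.items()])

def load(module, record_id):
    """Reassemble one record of a module as a dictionary."""
    rows = db.execute(
        "SELECT attribute, value FROM module_data WHERE module=? AND record_id=?",
        (module, record_id))
    return dict(rows.fetchall())

# A hypothetical "vehicle" module stores its own attributes without any
# schema change; a later "accident" module could do the same.
store("vehicle", "KO-XY-123", make="Example AG", year="2009")
assert load("vehicle", "KO-XY-123") == {"make": "Example AG", "year": "2009"}
```

The trade-off of this pattern is well known: maximal flexibility for the modules, at the cost of weaker typing and more expensive queries than dedicated per-module tables.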
In this thesis we exercise a wide variety of libraries, frameworks and other technologies that are available for the Haskell programming language. We show various applications of Haskell in real-world scenarios and contribute implementations and taxonomy entities to the 101companies system. That is, we cover a broad range of the 101companies feature model and define related terms and technologies. The implementations illustrate how different language concepts of Haskell, such as a strong static type system, polymorphism, higher-order functions and monads, can be effectively used in the development of information systems. In this context we demonstrate both advantages and limitations of different Haskell technologies.
Next word prediction is the task of suggesting the most probable word a user will type next. Current approaches are based on the empirical analysis of corpora (large text files) resulting in probability distributions over the different sequences that occur in the corpus. The resulting language models are then used for predicting the most likely next word. State-of-the-art language models are based on n-grams and use smoothing algorithms like modified Kneser-Ney smoothing in order to reduce the data sparsity by adjusting the probability distribution of unseen sequences. Previous research has shown that building word pairs with different distances by inserting wildcard words into the sequences can result in better predictions by further reducing data sparsity. The aim of this thesis is to formalize this novel approach and implement it by also including modified Kneser-Ney smoothing.
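The distance-based word pairs can be sketched as follows; a pair at distance d corresponds to inserting d-1 wildcard words between the two words. This toy model combines raw relative frequencies instead of applying modified Kneser-Ney smoothing, and all names are illustrative:

```python
from collections import Counter, defaultdict

def train(tokens, max_distance=3):
    """Count word pairs (w, w') at distances 1..max_distance.

    A pair at distance d is equivalent to the pattern "w <wildcard>^(d-1) w'".
    Plain counts stand in for the smoothed estimates used in the thesis.
    """
    pairs = defaultdict(Counter)  # (distance, w) -> Counter of following words
    for d in range(1, max_distance + 1):
        for w, nxt in zip(tokens, tokens[d:]):
            pairs[(d, w)][nxt] += 1
    return pairs

def predict(pairs, history, max_distance=3):
    """Score candidates using each of the last few words at its own distance."""
    scores = Counter()
    for d, w in enumerate(reversed(history[-max_distance:]), start=1):
        total = sum(pairs[(d, w)].values()) or 1
        for cand, n in pairs[(d, w)].items():
            scores[cand] += n / total            # relative frequency per distance
    return scores.most_common(1)[0][0] if scores else None

corpus = "the cat sat on the mat the cat sat on the hat".split()
model = train(corpus)
assert predict(model, ["sat", "on", "the"]) in {"mat", "hat"}
```

Each distance contributes independent evidence, which is exactly how the wildcard pairs reduce data sparsity: a trigram never seen as a whole can still be scored from its constituent word pairs.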