Document Type
- Bachelor Thesis (273)
Has Fulltext
- yes (273)
Keywords
- Computervisualistik (8)
- OpenGL (7)
- Android (5)
- Computergrafik (5)
- Graphik (4)
- Line Space (4)
- Smartphone (4)
- Android <Systemplattform> (3)
- Augmented Reality (3)
- C++ (3)
- Compute Shader (3)
- Computergraphik (3)
- Computerspiel (3)
- Enterprise 2.0 (3)
- GPGPU (3)
- Informatik (3)
- Linespace (3)
- OpenGL Shading Language (3)
- Path Tracing (3)
- Programmierung (3)
- Routing (3)
- Simulation (3)
- Virtual Reality (3)
- Akkreditierung (2)
- App (2)
- Applikation (2)
- Bildverarbeitung (2)
- Business Intelligence (2)
- Datenstruktur (2)
- E-Government (2)
- Entwicklung (2)
- Global Illumination (2)
- Grafikkarte (2)
- Immersion (2)
- Informationsmanagement (2)
- Instant Messaging (2)
- Internet of Things (2)
- Netzwerk (2)
- Online-Befragung (2)
- Physiksimulation (2)
- Qualität (2)
- Ray tracing (2)
- Raytracing (2)
- Realistische Computergrafik (2)
- Rechnernetz (2)
- Rendering (2)
- Rollenspiel (2)
- Router (2)
- Shader (2)
- Spiel (2)
- Steuerung (2)
- Supply Chain Management (2)
- Umfrage (2)
- VOXEL (2)
- 3D (1)
- 3D-Kartierung (1)
- API (1)
- Ad-Hoc Routing (1)
- Advanced Encryption Standard (1)
- Advertising (1)
- Agenten (1)
- Agents (1)
- Aggregation (1)
- Aktualisierung (1)
- Android 2 (1)
- Android 3 (1)
- Android 4 (1)
- Android development (1)
- Androidentwicklung (1)
- Anforderung (1)
- Annotation (1)
- App <Programm> (1)
- Archivierung (1)
- Assembly (1)
- Astrophysik (1)
- Audiodatei (1)
- Automatische Klassifikation (1)
- Autonomes Fahrzeug (1)
- Autorensystem (1)
- BGV A1 (1)
- BI (1)
- BPM (1)
- BPMN (1)
- Balanced Scorecard (1)
- Bebauungsdichte (1)
- Beleuchtung (1)
- Beschleunigungsensor (1)
- Betriebssystem (1)
- Bewegungsablauf (1)
- Bildanalyse (1)
- Blickpunktabhängig (1)
- Boppard (1)
- Bounding Volume Hierarchy (1)
- Brechung (1)
- Browser (1)
- Business Process Modeling (1)
- C-Programmierung (1)
- CCRDMT (1)
- CIDR (1)
- CSCW (1)
- Chiffrierung (1)
- Cloud Computing (1)
- Cold Chain (1)
- Coloskopie (1)
- Compute-Shader (1)
- Computer Graphics (1)
- Computer Supported Cooperative Work (1)
- Computeranimation (1)
- Congestion Control (1)
- Constitutive Model (1)
- Container Entity Modell (1)
- Content Management (1)
- Corruption (1)
- Crowdsourcing (1)
- Customer Relationship Management (1)
- Cybersickness (1)
- Darmpolyp (1)
- Data Warhousing (1)
- Data manipulation (1)
- Data-Warehouse-Konzept (1)
- Datenbanksystem (1)
- Datenschutz, Datensicherheit, Apps, Informatik im Kontext (1)
- Denkmal (1)
- Design Thinking (1)
- Diagramm (1)
- Diffie-Hellman-Algorithmus (1)
- Digitale Bilder (1)
- Distance Vector Routing (1)
- Distanz Vektor Routing (1)
- Drahtloses lokales Netz (1)
- Dynamische Netzwerkanalyse (1)
- E-Learning (1)
- E-Partizipation (1)
- ECSA (1)
- EMIL-S (1)
- Echtzeit Raytracing (1)
- Echtzeitsystem (1)
- Echtzeitsysteme (1)
- Ecore (1)
- Einstellung (1)
- Electronic Government (1)
- Elektrischer Servomotor (1)
- Elektronisches Wasserzeichen (1)
- Emergenzen (1)
- Empfehlungssystem (1)
- Enterprise Collaboration Sytems (1)
- Enterprise Information Management (1)
- Entity Component System Architecture (1)
- Entwurfssprache (1)
- Erstgebärende (1)
- Erweiterte Realität (1)
- Eulerian grid (1)
- Europäischer Schadensbericht (1)
- Evaluation (1)
- Experiment (1)
- Extended Reality (1)
- Fabric Simulation (1)
- Fahrerassistenzsysteme (1)
- Faktorenanalyse (1)
- Fast Convolution (1)
- Fehlerverhütung (1)
- Fell (1)
- Fell Rendering (1)
- Fertilitaet (1)
- Finanzberatung (1)
- FlexViz (1)
- Flow-Erlebnis (1)
- Fluidsimulation (1)
- Food Transportation System (1)
- Foodstuff (1)
- Forschungslandkarte (1)
- Fourier (1)
- Frame-Semantik (1)
- Framework (1)
- Freiheitsgrad (1)
- Fresnel (1)
- Föderiertes System (1)
- GPS (1)
- GPU (1)
- GSM-Standard (1)
- Game Design (1)
- Gamification (1)
- Geburt (1)
- Geburtshaus (1)
- Geocaching (1)
- Germany (1)
- Gerstner (1)
- Geschichte (1)
- Geschäftsprozess (1)
- Geschäftsprozessmodell (1)
- Geste (1)
- Gesteninterpretation (1)
- Gewinnspiel (1)
- Gewinnspielsendung (1)
- Glint Center Estimation Algorithmus (1)
- Globale Beleuchtung (1)
- Good Government (1)
- Grafik (1)
- Grafikprogrammierung (1)
- Graphicsprogramming (1)
- Grundschule (1)
- Haare (1)
- Halbschatten (1)
- Haskell (1)
- Hausgeburt (1)
- Health (1)
- Hebamme (1)
- Historisierung (1)
- HoloLens (1)
- IP 6 (1)
- IPv6 (1)
- Inflation (1)
- Infobroker (1)
- Information Asset Register (1)
- Information Audit (1)
- Information Capturing Methods (1)
- Information Flow Ontology (1)
- Information Supply Chain (1)
- Informationsmanagementsystem (1)
- Informationssystem (1)
- Innenarchitektur (1)
- Innovation (1)
- Innovationsmanagement (1)
- Insurance (1)
- Integriertes Informationssystem (1)
- Interaktives Fernsehen (1)
- Internet (1)
- Internetregulierung (1)
- Internetworking (1)
- Jabber (1)
- Jabber <Programm> (1)
- Join (1)
- Jump'n Run-Spiel (1)
- KI-Strategie (1)
- Kartesisches Euler-Gitter (1)
- Kartesisches Gitter (1)
- Kinect (1)
- Kinematik (1)
- Kleine und mittlere Unternehmen (1)
- Kletterer (1)
- Kollaboration (1)
- Kollaborationsplattform (1)
- Kollisionserkennung (1)
- Kommunikationsprotokoll (1)
- Konturfindung (1)
- Konzept (1)
- Konzeption (1)
- Kooperatives Informationssystem (1)
- Korruption (1)
- Korrutionsprävention (1)
- Kunstgeschichte (1)
- Künstliche Intelligenz (1)
- Lastenheft (1)
- Layout (1)
- Layout <Mikroelektronik> (1)
- Leap-Motion (1)
- Lehrer (1)
- Lernspiel (1)
- Lernumgebung (1)
- Light Injection (1)
- Lokalisation (1)
- MMORPG (1)
- Machine-Learning (1)
- Machinelles lernen (1)
- Mann-Whitney (1)
- Material Point Method (1)
- Material point method (1)
- Materialmodell (1)
- Materie-Punkt-Methode (1)
- Merkmalsdetektion (1)
- Metriken (1)
- Micro Finance (1)
- Migration (1)
- Mikrocontroller AVR (1)
- Mikrozensus (1)
- Mining Software Repositories (1)
- Mister X (1)
- Mittelrhein (1)
- Modell (1)
- Modellierungstool (1)
- Modellierungswerkzeug (1)
- Modulnetze (1)
- Montage (1)
- Motion Sickness (1)
- Musik (1)
- N-Body Simulation (1)
- N-Körper Simulation (1)
- NPO (1)
- Native language identification (1)
- Network (1)
- Nonprofit-Organisation (1)
- Norm (1)
- Nvidia CUDA (1)
- OCR (1)
- OWL <Informatik> (1)
- Objektverfolgung (1)
- Offener Innovationsprozess (1)
- Office-Prozess (1)
- Onlinesucht (1)
- Ontologie (1)
- Ontologie <Wissensverarbeitung> (1)
- OpenCL (1)
- OpenWrt (1)
- Optimierung (1)
- Optische Zeichenerkennung (1)
- Par (1)
- Parallelmanipulator (1)
- Partikel (1)
- Partikelsystem (1)
- Pathtracing (1)
- Perinatalbogen (1)
- Pfadnachverfolgung (1)
- Pflichtenheft (1)
- Politik (1)
- Politische Kommunikation (1)
- Politolinguistik (1)
- Pre-Integration (1)
- Prevention (1)
- Privatvermögen (1)
- Probabilistic finite automata (1)
- Prozedurale Synthese (1)
- Prozessanalyse (1)
- Prozesskette (1)
- Prozessmanagement (1)
- Prävention (1)
- Psychohygiene (1)
- Pulsweitenmodulation (1)
- Pupil Center Estimation Algorithmus (1)
- Qualitätsmodell (1)
- Qualitätssicherung (1)
- RDF (1)
- RIPng (1)
- Rangtest (1)
- Ray Tracer (1)
- Ray-Casting (1)
- Real-Life Game (1)
- Real-Life Spiel (1)
- Rechnernetze (1)
- Recommender System (1)
- Reflections (1)
- Reflektionen (1)
- Reflexion (1)
- Registratur (1)
- Relationales Datenbanksystem (1)
- Requirements-Engineering (1)
- Research Map (1)
- Rettungsroboter (1)
- Reverberation (1)
- Robbie (1)
- Route Aggregation (1)
- Routen-Aggregation (1)
- SOA-Webservice (1)
- SVG (1)
- Schnee (1)
- Schnelle Faltung (1)
- Schwangerschaft (1)
- Sensor (1)
- Sensorensteuerung (1)
- Serielle Schnittstelle (1)
- Serviceroboter (1)
- Sich langsam verändernde Dimensionen (1)
- Slowly Changing Dimensions (1)
- Social Network Analysis (1)
- Social Software (1)
- Soziale Online Systeme (1)
- Soziologie (1)
- Soziooekonomisches Panel (1)
- Spannender Baum (1)
- Spanning Tree (1)
- Specular (1)
- Staat (1)
- Stadt (1)
- Stakeholder (1)
- Stakeholder-Beteiligung (1)
- Stakeholderanalyse (1)
- Starrkörper (1)
- Staukontrollmechanismen (1)
- Sterne (1)
- Stewart-Plattform (1)
- Stoffsimulation (1)
- Strassenkreuzung (1)
- Suchmaschine (1)
- Supernetting (1)
- System Dynamics (1)
- Systemanalyse (1)
- TCP/IP (1)
- TGraph (1)
- TQM (1)
- TRIZ (1)
- Tablet PC (1)
- Taxonomie (1)
- Templerhaus (1)
- Temporale Datenhaltung (1)
- Tesseract (1)
- Text (1)
- Texterkennung (1)
- Total Quality Management (1)
- Touchscreen (1)
- Tracing (1)
- Transfer function (1)
- Transferfunktion (1)
- Transformation (1)
- Translation (1)
- Ultraschall (1)
- University (1)
- Unternehmensnetzwerke (1)
- VR (1)
- VST (1)
- Validierung (1)
- Verteilte Systeme (1)
- Videospiel (1)
- Videoübertragung (1)
- Virtual Networks (1)
- Virtuelle Realität (1)
- Virtuelle Welt (1)
- Visualisierung (1)
- Volumen (1)
- Volumen-Rendering (1)
- Volumendaten (1)
- Vorbeugung (1)
- Voxelisierung (1)
- WLAN Fingerprinting (1)
- Wasseroberfläche (1)
- Wearables (1)
- Web 2.0 (1)
- Web 2.0- basierte Stakeholderplattform (1)
- Web Analytics (1)
- Webserver (1)
- Wellen (1)
- Werbeform (1)
- Werbung (1)
- WiFi Fingerprinting (1)
- Wiederhall (1)
- Wikipedia (1)
- Wilcoxon-Rangsummentest (1)
- Wirtschaftsprojekt (1)
- Wissensbasis (1)
- World of Warcraft (1)
- Wrapping (1)
- XMPP (1)
- Zufriedenheit (1)
- accelerometer (1)
- advertisements (1)
- amtliche Haushaltsbefragungen (1)
- android (1)
- archiving (1)
- area light (1)
- attitude (1)
- bait-lamina test (1)
- biodegradation (1)
- collaboration plattform (1)
- computer science (1)
- concept (1)
- deutsche Hochschulen (1)
- development (1)
- diagram (1)
- experiment (1)
- factor analysis (1)
- fur rendering (1)
- game (1)
- general purpose (1)
- governance (1)
- grUML (1)
- graphics card (1)
- indirect lighting (1)
- indirektes Licht (1)
- information flow regulation (1)
- interactive application (1)
- interaktiv (1)
- interaktive Applikation (1)
- interaktiven (1)
- jOWL (1)
- jump'n run game (1)
- knowledge base (1)
- line space (1)
- mann-whitney (1)
- menschliches Auge (1)
- metasearch engine (1)
- metrics (1)
- midwife (1)
- model (1)
- modulares System (1)
- multidimensional transfer function (1)
- multidimensionale Transferfunktion (1)
- nichtamtliche Haushaltsbefragungen (1)
- official household surveys (1)
- online survey (1)
- ontology (1)
- out-of-hospital birth (1)
- perinatal (1)
- phenolic compounds (1)
- physics simulation (1)
- policy modelling (1)
- politische Sprachverwendung (1)
- politolinguistische Lexikanalyse (1)
- politolinguistische Wahlprogrammanalyse der Alternative für Deutschland (1)
- pregnancy (1)
- procedural content generation (1)
- prozedural (1)
- quality (1)
- quality assurance (1)
- ray casting (1)
- raycasting (1)
- raytracing (1)
- reflection (1)
- refraction (1)
- rigid body (1)
- search engine (1)
- sensor control (1)
- shader (1)
- snow (1)
- soft shadow (1)
- soil water repellency (1)
- survey (1)
- taxonomy (1)
- tool-integration (1)
- unofficial household surveys (1)
- video game (1)
- video games (1)
- virtuelle Netze (1)
- volume rendering (1)
- volume visualization (1)
- volumenrendering (1)
- voxelization (1)
- water re-use (1)
- water surface (1)
- waves (1)
- Übersetzung (1)
Institute
- Institut für Computervisualistik (100)
- Fachbereich 4 (68)
- Institut für Management (41)
- Institut für Wirtschafts- und Verwaltungsinformatik (31)
- Institut für Informatik (14)
- Institut für Softwaretechnik (8)
- Institute for Web Science and Technologies (3)
- Fachbereich 6 (1)
- Fachbereich 7 (1)
- Institut für Evangelische Theologie (1)
- Institut für Germanistik (1)
- Institut für Integrierte Naturwissenschaften, Abt. Geographie (1)
- Institut für Kunstwissenschaft (1)
Wikipedia is the largest free online encyclopaedia and can be edited by anyone. The users who create content on a specific Wikipedia language edition form a social network in which users fall into different roles: normal users, administrators and functional bots. Within this network, a user can post reviews and suggestions or send simple messages to the "talk page" of another user. Every language edition of Wikipedia has this type of social network.
In this thesis, characteristics of the three roles are analysed in order to learn how they behave in one Wikipedia language network, and the findings are applied to another Wikipedia network to identify bots. Timestamps of created posts are analysed to reveal noticeable characteristics such as continuous messaging, message rates and irregular user behaviour. Through this process we show that the roles differ in the mentioned characteristics.
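The timestamp analysis described above can be sketched as a simple rate-and-regularity check. This is a minimal illustration; the function names and thresholds are assumptions, not the criteria actually used in the thesis:

```python
from statistics import mean, pstdev

def interval_stats(timestamps):
    """Inter-post intervals (seconds) for one user's post timestamps."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return mean(gaps), pstdev(gaps)

def looks_like_bot(timestamps, max_mean_gap=60.0, max_gap_spread=5.0):
    """Flag users whose posts arrive quickly and at a near-constant rate --
    the kind of regularity pattern associated with functional bots.
    Thresholds are illustrative."""
    if len(timestamps) < 3:
        return False
    avg, spread = interval_stats(timestamps)
    return avg <= max_mean_gap and spread <= max_gap_spread

# A bot posting every 30 s vs. a human with irregular gaps:
bot = [0, 30, 60, 90, 120]
user = [0, 400, 4000, 4100, 90000]
print(looks_like_bot(bot), looks_like_bot(user))   # True False
```

A real analysis would of course work on the full talk-page history and calibrate the thresholds per language network.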
The present thesis deals with the realization of a stepper motor driver on an 8-bit microcontroller by Atmel. The focus is on the development of a current control that allows microstepping in addition to the basic modes of operation such as full and half step. For this purpose, a PI controller is derived using physical and control engineering principles and implemented on the microcontroller. In this context, essential knowledge for the practical implementation is discussed. In addition, the development of the hardware, which is of great significance for the current measurement, is documented.
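The discrete PI update at the heart of such a current control can be sketched as follows. The gains, the output clamp (standing in for the limited PWM duty cycle) and the toy coil model are illustrative assumptions, not the controller actually derived in the thesis:

```python
class PIController:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki*sum(e)*dt."""
    def __init__(self, kp, ki, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measured):
        e = setpoint - measured
        self.integral += e * self.dt
        u = self.kp * e + self.ki * self.integral
        # Clamp to the duty-cycle range and undo the integration step when
        # saturated (simple anti-windup, important on a limited 8-bit output).
        if u > self.u_max:
            u, self.integral = self.u_max, self.integral - e * self.dt
        elif u < self.u_min:
            u, self.integral = self.u_min, self.integral - e * self.dt
        return u

# Toy first-order model of the coil current reacting to the duty cycle:
ctrl = PIController(kp=0.5, ki=2.0, dt=0.01)
current = 0.0
for _ in range(5000):
    u = ctrl.update(setpoint=1.0, measured=current)
    current += (5.0 * u - current) * 0.1   # plant: gain 5, first-order lag
print(round(current, 3))                   # settles near the setpoint 1.0
```

The integral term is what removes the steady-state error that a pure P controller would leave on the coil current.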
The present thesis gives an overview of the general conditions for the programming of graphics cards. For this purpose, the most important Application Programming Interfaces (APIs) available on the market are presented and compared. Subsequently, two standard algorithms from the field of data processing, prefix sum and radix sort, are presented and examined with regard to their implementation with parallel programming on the GPU. Both algorithms were implemented using the OpenGL API and OpenGL compute shaders. Finally, the execution times of the two algorithms were compared.
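The prefix sum lends itself to the data-parallel pattern used in compute shaders. The following CPU sketch of a Hillis-Steele scan illustrates the idea generically (the inner loop stands in for the parallel threads of one dispatch); it is not the thesis's shader code:

```python
def hillis_steele_scan(data):
    """Inclusive prefix sum in the Hillis-Steele pattern: log2(n) passes,
    each pass adding the element `offset` positions to the left."""
    out = list(data)
    offset = 1
    while offset < len(out):
        prev = list(out)                 # double buffering, as on the GPU
        for i in range(len(out)):        # one "thread" per element
            if i >= offset:
                out[i] = prev[i] + prev[i - offset]
        offset *= 2
    return out

print(hillis_steele_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# [3, 4, 11, 11, 15, 16, 22, 25]
```

Radix sort then uses such a scan in every pass to compute the output position of each key from its digit counts.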
To meet the growing demands in the automotive industry, car manufacturers constantly reduce their depth of production and shift value-adding processes to suppliers. This requires companies to work together more closely and promotes the creation of complex logistics networks. To meet the requirements for information exchange, a consortium of automobile manufacturers launched the project RFID-based Automotive Network (RAN) in 2009. The initiative aims at creating a standardized architecture for efficient material flow management along the entire supply chain. The core component of this architecture is the Informationbroker, an information unit which automatically communicates data captured via Auto-ID technology to supply chain participants. In cooperation with IBS AG, a software company and consortium partner in the project, the thesis focuses on the exchange of goods data.
At first, theoretical foundations are laid by describing the characteristics of a supply chain and explaining standardization efforts and related processes. The chapter on the supply chain focuses on trends in the automotive industry to create a link to the project. The topic of standardization provides in-depth information on electronic data exchange standards in order to create a transition to the Informationbroker concept. In the analytical part, reference projects with a similar problem are presented and set in relation to RAN. Based on project documents, system requirements are defined and models are created to illustrate the problem. Rich Pictures are used to describe the as-is and target state.
Based on these models, the flow of goods-related data between two companies is depicted, and the role of the Informationbroker in the information exchange is clarified. The thesis aims at establishing an understanding of the challenges of the project and of how the concepts proposed by the initiative can lead to an optimization of an automotive supply chain.
Innovation can help a forward-looking company rise very quickly; innovative products and services can win new customer segments and keep a company ahead of the competition. For their innovation process, companies can choose between open and closed innovation. Here we focus on open innovation and on how companies open up their innovation processes for their own benefit. They use information and innovation systems to define their innovation process and to track innovative ideas and their stage of development. Open innovation processes in an organization always have pros and cons. We look at examples from the business world to illustrate how well or badly an open innovation process can work for a company. In this bachelor thesis we point out the essential criteria for an open innovation process and present companies that have used open innovation processes, in some cases successfully and in others not.
This thesis presents a system for creating and displaying stereoscopic video panoramas. In addition to the theoretical foundations, the structure and operation of this system are explained.
Special cameras are used that can record panoramas and are synchronised for playback. A renderer is then implemented which displays the panoramas stereoscopically by means of a virtual reality headset. For this purpose, separate recordings are made for the two eyes and played back separately. Finally, the resulting video panorama is compared with a panorama from an existing system.
Social media platforms such as Twitter or Reddit allow users almost unrestricted access to publish their opinions on recent events or discuss trending topics. While the majority of users approach these platforms innocently, some groups have set their mind on spreading misinformation and influencing or manipulating public opinion. These groups disguise themselves as native users from various countries to spread manufactured articles and strongly polarizing opinions across the political spectrum, possibly becoming providers of hate speech or extreme political positions. This thesis aims to implement an AutoML pipeline for identifying second-language speakers from English social media texts. We investigate style differences of texts across topics and across the platforms Reddit and Twitter, and analyse linguistic features. We employ feature-based models with datasets from Reddit, which include mostly English conversation from European users, and from Twitter, which was newly created by collecting English tweets from selected trending topics in different countries. The pipeline classifies the language family, native language and origin (native or non-native English speaker) of a given textual input. We evaluate the resulting classifications by comparing prediction accuracy, precision and F1 scores of our classification pipeline to traditional machine learning processes. Lastly, we compare the results from each dataset and find differences in language use across topics and platforms. We obtained high prediction accuracy for all categories on the Twitter dataset and observed high variance in features such as average text length, especially for Balto-Slavic countries.
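A feature-based model of the kind used in such a pipeline starts from simple stylometric features. The feature set and the function-word list below are illustrative assumptions, not the features of the thesis:

```python
def stylometric_features(text):
    """Tiny feature vector in the spirit of feature-based NLI models:
    average token length, type-token ratio, and function-word rate."""
    tokens = text.lower().split()
    if not tokens:
        return [0.0, 0.0, 0.0]
    # Illustrative stand-in for a proper function-word lexicon:
    function_words = {"the", "a", "an", "of", "in", "to", "and", "is"}
    avg_len = sum(len(t) for t in tokens) / len(tokens)
    ttr = len(set(tokens)) / len(tokens)          # vocabulary richness
    fw_rate = sum(t in function_words for t in tokens) / len(tokens)
    return [avg_len, ttr, fw_rate]

print(stylometric_features("the cat sat on the mat"))
```

Vectors like these, computed per author, are what a downstream classifier would use to separate language families and native languages.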
The Internet of Things (IoT) is a concept in which connected physical objects are integrated into the virtual world to become active partakers of businesses and everyday processes (Uckelmann, Harrison and Michahelles, 2011; Shrouf, Ordieres and Miragliotta, 2014). It is expected to have a major impact on businesses (Council, Nic and Intelligence, 2008), but small and medium enterprises’ business models are threatened if they do not adopt the new concept (Sommer, 2015). Thus, this thesis aims to showcase a sample implementation of connected devices in a small enterprise, demonstrating its added benefits for the business.
Design Science Research (DSR) is used to develop a prototype based on a use case provided by a carpentry. The prototype comprises a hardware sensor and a web application which the wood shop can use to improve its processes. The thesis documents the iterative process of developing the prototype from the ground up to usable hardware and software.
This contribution provides an example of how IoT can be used and implemented at a small business.
Compared to conventional computer graphics (perspective projection), ray tracing offers decisive advantages, which mainly stem from the comparatively high physical correctness of the method. Its weakness, however, is the immense computational cost.
A ray tracer is comparatively compute-intensive because at least one ray must be shot for every pixel and intersected with all objects in the scene. On top of that come the rays that arise when rays are reflected off objects (recursion). To reduce this computational cost and additionally produce a better image, an adaptive sampler is to support the ray tracer. During rendering, the adaptive sampler observes the progressive image generation and excludes from further computation those pixels for which shooting additional rays is no longer worthwhile.
Unlike a purely progressive ray tracer, the adaptive sampler stops computing as the image converges. It should thus produce a better image faster and increase performance. Overall, the adaptive sampler is expected to pay off for certain scenes, among them scenes with purely diffusely lit image regions and scenes whose image regions differ in computational cost. A normal ray tracer cannot judge how useful its shots are; it can only shoot more rays in the hope of effectively improving the image.
There are, however, many scenarios in which a linearly increasing number of shots per pixel does not yield a uniform improvement of the image: some image regions already look good while others are still very noisy. One therefore wants to exclude image regions that have already converged, where further shots no longer make a noticeable difference, and spend the computing power where it is still needed.
It is important that pixels which have not yet converged far enough are not unintentionally excluded from the computation too early. The adaptive sampler should keep working until every pixel permanently shows no further change, meaning that the probability of a significant colour change of a pixel from shooting one more ray (in RenderGin, several rays per pixel when there are several light sources) is small enough. No probability is computed internally; instead, the ray tracer is given a kind of memory: it stores the changes in the lit image and their history in dedicated memory images. The "memory" of the old image (the state of the image in the last iteration over the pixels) represents the short-term memory; it is exact. The long-term memory is represented by three images: the first holds the number of rays shot per pixel; the second is a boolean image indicating for each pixel whether it should still be included in the computation; the third records how often each pixel underwent a colour change smaller than the required maximum distance of a pixel to itself (before and after shooting another ray).
With these three images it is possible to take additional quantitative information into account alongside the qualitative information from comparing the new and the old image.
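The per-pixel bookkeeping described above could be sketched like this. The class, the change threshold and the required run length are illustrative assumptions, not RenderGin's actual data layout:

```python
class AdaptivePixel:
    """One pixel's state: short-term memory (last colour) plus long-term
    counters (rays shot, consecutive sub-threshold changes, active flag)."""
    def __init__(self, eps=0.01, stable_needed=8):
        self.eps = eps                  # max allowed colour change
        self.stable_needed = stable_needed
        self.last = None                # short-term memory: previous colour
        self.samples = 0                # long-term: rays shot so far
        self.stable_runs = 0            # long-term: consecutive small changes
        self.active = True              # long-term: still being sampled?

    def accumulate(self, colour):
        if not self.active:
            return
        self.samples += 1
        if self.last is not None:
            change = abs(colour - self.last)
            self.stable_runs = self.stable_runs + 1 if change < self.eps else 0
            if self.stable_runs >= self.stable_needed:
                self.active = False     # pixel converged: stop shooting rays
        self.last = colour

p = AdaptivePixel(eps=0.01, stable_needed=3)
for colour in [0.50, 0.505, 0.506, 0.5065, 0.9]:
    p.accumulate(colour)
print(p.active, p.samples)   # False 4 -- later samples are ignored
```

Requiring several consecutive small changes, instead of a single one, is what guards against excluding a pixel too early.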
This thesis answers the question of whether the desired effects occur and whether a performance gain is possible when the sampler is integrated into the existing structure of RenderGin. The adaptive sampler was implemented as a plug-in for the software RenderGin by Numenus GmbH. RenderGin is a real-time-capable, progressive ray tracer distinguished by its performance; image generation runs entirely on the CPU, and the graphics card is only needed to display the generated image.
The plug-in was implemented in Microsoft Visual Studio 2010 using the RenderGin SDK of Numenus GmbH.
Since software influences nearly every aspect of everyday life, the security of software systems is more important than ever. Evaluating the security of a software system still poses a significant challenge in practice, mostly due to the lack of metrics that map security properties of source code onto numeric values. It is commonly assumed that the occurrence of security vulnerabilities and the quality of the software design are directly correlated, but there is currently no clear evidence for this. Proof of such a correlation could help optimize the measurement of program security by making quality measurements applicable to its evaluation. To this end, this work evaluates fifty open-source Android applications using three security and seven quality metrics, and considers the correlations between the metrics. The quality metrics range from simple code metrics to high-level metrics such as object-oriented anti-patterns, which together provide a comprehensive picture of quality. Two visibility metrics, along with a metric that computes the minimal permission request of a mobile application, were selected to capture security. Across the evaluation projects, a clear correlation was found between most quality metrics; by contrast, no significant correlations were found for the security metrics. This work discusses the correlations and their causes as well as further recommendations based on the findings.
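Relating two metrics across the evaluated projects comes down to a correlation coefficient such as Pearson's r. The metric names and values below are made-up illustrations, not data from the evaluation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two metric value series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two quality metrics that move together vs. an unrelated security metric
# (five hypothetical projects):
loc = [10, 20, 30, 40, 50]
wmc = [12, 24, 33, 41, 55]
print(round(pearson(loc, wmc), 3))   # close to 1
```

A value near ±1 indicates the strong agreement the thesis found between quality metrics; values near 0 correspond to its negative result for the security metrics.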
Online handwriting recognition of Chinese characters on Android-capable mobile devices
(2014)
Using mobile dictionaries or translators requires input, which has to be processed and recognized first. Chinese characters are better suited to handwritten input than to a keyboard, because they consist mostly of pictograms and ideograms.
This thesis deals with the implementation of a prototypical recognition system on a mobile device. The recognition runs online, i.e. while writing, which saves the user time because suggestions are made at runtime.
Basics and an overview of the current state of the art in online handwriting recognition are given. An approach is chosen and implemented such that recognition is fast and needs little memory. The implementation is tested, showing that fast recognition is possible on small devices. Suggestions for extensions and improvements, including future work, are given.
This bachelor thesis deals with the conception, implementation and evaluation of a Jump'n'Run game and the influence of achievement systems on players. In the game Age of Tunes you play Bardur, the beardless bard, and must try to free the cursed magical creatures of the world Harmonica. The emphasis of the thesis was the clean conception and gradual development of the game, appealing graphic quality, the integration of opponents, a mini-game and the consideration of the effects of an achievement system on players. In a final evaluation, the game and player behavior with regard to the achievements were evaluated.
There are high- and low-cost systems for gaze tracking. Low-cost systems usually come with low-resolution cameras; the image quality is poor, so the gaze-detection algorithms have to work more precisely. But how can they be tested and analysed correctly when the image quality is bad and no reference point is known? The idea of this work is to generate synthetic eye images in which the reference points are known, because they are largely set manually, and then to test and analyse the algorithms with these synthetic images. By switching on features like Gaussian noise or a second glint-like reflection point, the synthetic images can be stepwise approximated to reality. The experiments should lead to an improvement of the algorithms used in a low-resolution system environment.
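Generating such a synthetic test image with known ground truth can be sketched as follows. Image size, grey values and the noise model are illustrative assumptions, not the generator used in the thesis:

```python
import random

def synthetic_eye(width=32, height=32, pupil=(16, 16), radius=5,
                  glint=None, noise_sigma=0.0, seed=0):
    """Grey-value test image (rows of floats, indexed img[y][x]) with a dark
    pupil at a known position -- the ground-truth reference against which
    detection algorithms can be evaluated. Optional Gaussian noise and a
    bright glint approximate a low-cost camera."""
    rng = random.Random(seed)           # seeded, so images are reproducible
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            dist2 = (x - pupil[0]) ** 2 + (y - pupil[1]) ** 2
            v = 40.0 if dist2 <= radius ** 2 else 200.0  # dark pupil, light iris
            if glint and (x, y) == glint:
                v = 255.0                                # specular reflection
            v += rng.gauss(0.0, noise_sigma)
            row.append(min(255.0, max(0.0, v)))
        img.append(row)
    return img

img = synthetic_eye(glint=(20, 12), noise_sigma=2.0)
```

Because the pupil centre and glint position are parameters, the error of a detection algorithm can be measured exactly, which is impossible on real low-resolution footage.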
Digitalisation and further media development are core processes of the current digital age. For companies to benefit from technical progress, their employees must have, or be expected to acquire, the relevant skills. Companies therefore face the task of not being overwhelmed by the mass of innovations and opportunities and, in the best case, of using them to improve their own performance.
Small and medium-sized enterprises represent 99% of all enterprises in Germany. However, it has not been established how the majority of small enterprises and their employees participate in this development. The research question therefore has two parts: "Is the promotion of employees' skills taken care of in micro-enterprises?" and "Where are the opportunities and challenges for companies of this size?"
To answer the research question, a qualitative research method was used: the guideline-based interview. The interviewed companies were all in the media and IT sector. The recorded and transcribed data thus provided a real insight into the current situation in micro-enterprises.
The interview responses showed that companies with very few employees depend on their employees more than others do. The commitment of the employees is thus decisive for the success of the company itself. It is the management's task to promote this commitment and ensure employee satisfaction.
Companies that pay more attention to employee development are therefore more recommendable for career starters who need or want to develop themselves and broaden their horizon of experience.
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time. The implementation makes use of the OpenCL standard. The focus of research in this thesis is to analyse the performance of this combination of components.
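The per-particle update that such a simulation performs, in the OpenCL case once per work item, can be sketched on the CPU. The integration scheme and constants are illustrative assumptions, not the thesis's kernels:

```python
def step(particles, dt=0.01, gravity=-9.81, drag=0.1):
    """One Euler step of a simple explosion particle system. Each particle
    is (position, velocity, remaining lifetime); expired particles are
    dropped. On the GPU this loop body would run once per work item."""
    alive = []
    for (px, py, pz), (vx, vy, vz), life in particles:
        k = 1.0 - drag * dt                       # simple air drag
        vx, vy, vz = vx * k, vy * k + gravity * dt, vz * k
        pos = (px + vx * dt, py + vy * dt, pz + vz * dt)
        life -= dt
        if life > 0:
            alive.append((pos, (vx, vy, vz), life))
    return alive

ps = [((0.0, 0.0, 0.0), (1.0, 10.0, 0.0), 1.0),    # flying debris particle
      ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.005)]   # almost burnt out
ps = step(ps)
print(len(ps))   # 1 -- the short-lived particle has expired
```

The performance question the thesis examines is essentially how many such updates per frame survive the added cost of ray tracing the result.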
This thesis deals with quality assurance of model-based SRS (software requirements specifications), in particular SRS models and SRS diagrams. The interesting thing about model-based SRS is that they are generated by a documentation generator from the following input data: the SRS model, SRS diagrams and texts external to the model. To assure the quality of the documentation, the quality of its four factors must therefore be assured: the SRS model, the SRS diagrams, the external texts and the documentation generator. The thesis' goal is to define a notion of quality for SRS models and diagrams and to show an approach to automated quality testing, measurement and assessment for the modelling tool Innovator.
Deformable Snow Rendering
(2019)
Accurate snow simulation is key to capturing snow's iconic visuals. Intricate methods exist that attempt to grasp snow behaviour in a holistic manner, but their computational complexity prevents them from reaching real-time performance. This thesis presents three GPU-based techniques that focus on the deformation of a snow surface in real time. The approaches are examined with respect to their ability to scale with an increasing number of deformation actors and their visual portrayal of snow deformation. The findings indicate that the approaches maintain real-time performance well into several hundred individual deformation actors. However, each approach has individual restrictions handicapping the visual results. An experimental approach combines the techniques at a reduced deformation actor count to benefit from the detailed, merged deformation pattern.
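A heightfield is a common basis for such real-time snow deformation. The following sketch stamps a depression under each deformation actor; the falloff and the omission of displaced material at the rim are simplifying assumptions, not one of the thesis's three techniques:

```python
def deform(height, actors, depth=0.2, radius=1):
    """Stamp a depression of up to `depth` into the snow height grid around
    each actor's cell, with a simple Manhattan-distance falloff. On the GPU
    this would be one pass over the heightfield texture per frame."""
    h = [row[:] for row in height]
    rows, cols = len(h), len(h[0])
    for ar, ac in actors:
        for r in range(max(0, ar - radius), min(rows, ar + radius + 1)):
            for c in range(max(0, ac - radius), min(cols, ac + radius + 1)):
                falloff = 1.0 - (abs(r - ar) + abs(c - ac)) / (2 * radius + 1)
                h[r][c] = max(0.0, h[r][c] - depth * falloff)
    return h

snow = [[1.0] * 5 for _ in range(5)]
print(deform(snow, [(2, 2)])[2][2])   # deepest at the actor's cell
```

Because each actor only touches a small neighbourhood, the cost grows linearly with the actor count, which is why such approaches scale to hundreds of actors.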
This thesis describes the integration of a business intelligence solution into an existing social software. First, the terms business intelligence and social software, their structure and their components are explained. Then the as-is situation of the target group is analysed through interviews, whose results are transformed into a list of requirements in the target concept. Finally, the identified requirements are checked and tested against the final installation to determine whether the target group's expectations and their ideas of business intelligence can be realized.
The result of this work is to be a business intelligence solution installed in a social software. It should give an overview of what is already possible with the latest version of the software and critically show strengths and weaknesses that should be considered in future versions.
This bachelor thesis addresses the question of whether a jump'n run game with sensor control is useful on Android devices. To this end, a game was developed that is controlled partly with and partly without sensors at different levels. In a second version, the game is controlled entirely by sensors, so that the control schemes can later be compared. It is explained how the game was planned, designed and examined, and it is checked whether games with sensor control already exist. The engine used to develop the game is also introduced. Finally, an elaborate user test evaluates the playability of the game in terms of its controls.
A systematic literature review of beaconless algorithms for wireless ad-hoc and sensor networks
(2014)
Wireless sensor networks (WSN) are computer networks consisting of sensor nodes. In position-based protocols, nodes periodically broadcast messages containing their position information; these messages are called beacons. The use of beacons, however, has downsides, which led to the emergence of beaconless algorithms. Despite their importance in the research area of computer networks, and to the best of our knowledge, beaconless algorithms have not been systematically surveyed.
The aim of this work is a comprehensive, systematic review of beaconless algorithms since the turn of the millennium. Relevant articles are categorized based on their similarities, and their differences are compared. This work thus serves as a reference on the current state of research on beaconless algorithms and as a basis for finding research gaps.
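As an illustration of how beaconless algorithms replace beacons: in contention-based forwarding, each receiver of a broadcast data packet computes a reply delay from the progress it offers toward the destination, so the best-placed forwarder answers first and suppresses the rest. The delay function below is a generic sketch with illustrative constants, not taken from any specific surveyed algorithm:

```python
def contention_delay(progress, max_progress, t_max=0.05):
    """Contention timer of a typical beaconless forwarding scheme: the more
    progress (e.g. metres closer to the destination) a receiver offers, the
    shorter it waits before volunteering as the next hop."""
    progress = max(0.0, min(progress, max_progress))
    return t_max * (1.0 - progress / max_progress)

# A node 80 m closer to the destination replies before one only 20 m closer:
near, far = contention_delay(80, 100), contention_delay(20, 100)
print(near < far)   # True
```

No position beacons are needed: neighbour positions are only evaluated on demand, when a packet actually has to be forwarded.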