Refine
Year of publication
Document Type
- Master's Thesis (193)
Keywords
- Augmented Reality (3)
- Computersimulation (3)
- Datenschutz (3)
- Internet of Things (3)
- virtual reality (3)
- Beschaffung (2)
- E-Partizipation (2)
- E-participation (2)
- Simulation (2)
- Sport (2)
Institute
- Institut für Computervisualistik (46)
- Fachbereich 4 (34)
- Institut für Management (33)
- Institut für Wirtschafts- und Verwaltungsinformatik (27)
- Institute for Web Science and Technologies (20)
- Institut für Informatik (16)
- Institut für Softwaretechnik (6)
- Fachbereich 1 (1)
- Fachbereich 3 (1)
- Fachbereich 6 (1)
- Institut für Integrierte Naturwissenschaften, Abt. Biologie (1)
- Institut für Kulturwissenschaft (1)
- Institut für Psychologie (1)
- Institut für Sportwissenschaft (1)
- Institut für Umweltwissenschaften (1)
- Institut für naturwissenschaftliche Bildung (1)
- Zentrum für Fernstudien und Universitäre Weiterbildung (1)
In the realm of education, the timely identification of students who need further support to succeed in their courses plays a pivotal role in fostering academic success and preventing potential setbacks. This thesis aims to contribute to this critical area by focusing on the development of predictive models for the early detection of at-risk students in their academic journey. The primary dataset used for this thesis is provided by Kaggle and encompasses diverse student information, including demographic and socio-economic factors as well as academic performance categorized into three different classes; its imbalanced nature poses a significant challenge.
The primary objectives of this thesis are thus to address the problem of imbalanced data, to explore and assess the performance of multiple classification methods such as logistic regression, decision trees, random forests, support vector machines (SVM), and neural networks, and to create a comprehensive end-to-end processing pipeline comprising the systematic steps of balancing the data, model training, and evaluation. Additionally, the developed pipeline is tested on two additional datasets to assess its generalizability and robustness. This research aims to provide a comprehensive understanding of how to address the challenges of imbalanced data and how different classification and regression methods can be optimally applied to the early detection of at-risk students. The findings are expected to aid educational institutions in supporting their students and enhancing academic success through timely interventions.
Key findings demonstrate the robustness of the SVM-SMOTE balancing technique across the datasets used in this study: it consistently achieved the best results when combined with various models, with the combinations of Random Forest with SVM-SMOTE and Decision Tree with SVM-SMOTE achieving notable accuracy rates. This emphasizes the adaptability of the balancing techniques employed and provides a strong foundation for predictive intervention in educational settings.
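To make the pipeline concrete, the following minimal sketch (using scikit-learn and imbalanced-learn on a synthetic stand-in for the Kaggle data) combines SVM-SMOTE balancing with a Random Forest classifier, the combination highlighted above:

```python
# Minimal sketch of a balancing-plus-classification pipeline; the synthetic
# three-class imbalanced dataset stands in for the Kaggle student data.
from imblearn.over_sampling import SVMSMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42)

# SVM-SMOTE synthesizes minority-class samples near the SVM decision border,
# so the Random Forest trains on a rebalanced class distribution.
pipeline = Pipeline([
    ("balance", SVMSMOTE(random_state=42)),
    ("model", RandomForestClassifier(n_estimators=300, random_state=42)),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```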
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT's ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models like BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e., different ways of instructing ChatGPT, this work also analyzes how well it understands complex linguistic nuances in the given texts using qualitative metrics. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence, not just left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its context understanding. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the remaining models, while the fine-tuned models excelled in the five-label case. Moreover, ChatGPT has shown impressive knowledge of the language. Chain-of-Thought (CoT), prompting with step-by-step instructions, was the most effective technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT's limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
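As an illustration of such prompting (the model name, prompt wording, and example review are assumptions, not details from the thesis), a three-label chain-of-thought request could be issued through OpenAI's Python client roughly as follows:

```python
# Hedged sketch: classify a course review into three sentiment labels using
# a chain-of-thought style prompt. Model name and prompt wording are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review = "The lectures were well structured, but the workload felt unfair."
prompt = (
    "Classify the sentiment of the following student course review as "
    "Negative, Neutral, or Positive. Think step by step: first identify "
    "the sentiment-bearing phrases, then weigh them, then answer with "
    f"exactly one label.\n\nReview: {review}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output helps when computing metrics
)
print(response.choices[0].message.content)
```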
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including in the field of education. Applications of AI are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding ChatGPT integration into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags, and sarcastic text is then detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embeddings are computed with the Word2Vec model to capture semantic relationships between words. The preprocessed text and extracted features are fed into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. Results indicate that the proposed framework outperforms conventional sentiment analysis models. The HR-RAN technique achieved a precision of 98.98%, recall of 99.23%, F-measure of 99.10%, accuracy of 98.88%, and specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
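To illustrate just the word-embedding step mentioned in the methodology (the toy corpus below stands in for the preprocessed review sentences), a Word2Vec model can be trained with gensim along these lines:

```python
# Minimal Word2Vec sketch with gensim; the tokenized corpus is a placeholder
# for the preprocessed university review sentences.
from gensim.models import Word2Vec

sentences = [
    ["chatgpt", "helps", "students", "draft", "essays"],
    ["universities", "debate", "chatgpt", "policies"],
    ["students", "discuss", "ai", "tools"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
vector = model.wv["chatgpt"]          # 100-dimensional embedding
print(model.wv.most_similar("chatgpt", topn=2))
```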
This thesis explores and examines the effectiveness and efficacy of traditional machine learning (ML), advanced neural network (NN), and state-of-the-art deep learning (DL) models for identifying mental distress indicators in social media discourse from Reddit and Twitter, as these platforms are used intensively by teenagers. Different NLP vectorization techniques like TF-IDF, Word2Vec, GloVe, and BERT embeddings are employed with ML models such as Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM), followed by NN models such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM), to methodically analyze their impact as feature representations. DL models such as BERT, DistilBERT, MentalRoBERTa, and MentalBERT are fine-tuned end-to-end for the classification task. This thesis also compares different text preprocessing techniques such as tokenization, stopword removal, and lemmatization to assess their impact on model performance. Systematic experiments with different configurations of vectorization and preprocessing techniques across the model types and categories have been conducted to find the most effective configurations and to gauge the strengths, limitations, and ability of the models to detect and interpret mental distress indicators in text. The analysis of the results reveals that the MentalBERT DL model significantly outperformed all other model types and categories: its specific pretraining on mental health data together with rigorous end-to-end fine-tuning gave it an edge in detecting nuanced linguistic mental distress indicators in complex contextual text. These insights confirm the high potential of ML and NLP technologies for building AI systems that can intervene in the domain of mental health analysis. This thesis lays the foundation for future work, demonstrating the need for a collaborative approach of different domain experts as well as the exploration of next-generation large language models to develop robust and clinically approved mental health AI systems.
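One of the simpler configurations compared in such a study, TF-IDF features feeding a linear SVM, can be sketched as follows (the tiny labeled corpus is illustrative only, not the Reddit/Twitter data):

```python
# Hedged sketch of a TF-IDF + SVM distress classifier; the tiny labeled
# corpus is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

posts = [
    "I can't sleep and everything feels hopeless",
    "had a great day hiking with friends",
    "no one would notice if I disappeared",
    "excited about the new semester",
]
labels = [1, 0, 1, 0]  # 1 = distress indicator, 0 = neutral

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["feeling really low lately"]))
```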
In the Master's thesis „Analyse des Managements invasiver gebietsfremder Arten am Beispiel des Roten Amerikanischen Sumpfkrebses (Procambarus clarkii) während und im Anschluss an notwendige Sanierungsarbeiten am Hochwasserrückhaltebecken Breitenauer See östlich von Heilbronn" (analysis of the management of invasive alien species using the example of the red swamp crayfish during and after necessary remediation work at the Breitenauer See flood retention basin east of Heilbronn), the occurrence of the invasive red swamp crayfish at the Breitenauer See was mapped comprehensively. The nearby river Sulm, with a known population of the signal crayfish, and the Nonnenbach system, with a known population of the stone crayfish, were surveyed as well. The focus was on answering three core questions. First, it was examined whether and how a permanent IAS (invasive alien species) management of the red swamp crayfish at the Breitenauer See can be carried out sustainably in order to avoid unacceptable ecological effects. The second question concerned the effectiveness of the risk management measures taken during the draining of the Breitenauer See. Finally, it was examined how the red swamp crayfish behaves when the water body it inhabits falls dry.
Artificial neural networks are a popular field of research in artificial intelligence. The increasing size and complexity of huge models entail certain problems. The lack of transparency of the inner workings of a neural network makes it difficult to choose efficient architectures for different tasks. It proves challenging to solve these problems, and with a lack of insightful representations of neural networks, this state of affairs becomes entrenched. With these difficulties in mind, a novel 3D visualization technique is introduced. Attributes of trained neural networks are estimated by utilizing established methods from the area of neural network optimization. Batch normalization is used with fine-tuning and feature extraction to estimate the importance of different parts of the neural network. A combination of the importance values with various methods like edge bundling, ray tracing, 3D impostors, and a special transparency technique results in a 3D model representing a neural network. The validity of the extracted importance estimations is demonstrated and the potential of the developed visualization is explored.
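A common proxy for such importance estimation is the learned scale (gamma) of batch-normalization layers; the following PyTorch sketch, with a placeholder network rather than the thesis's models, reads those values out per layer:

```python
# Hedged sketch: use |gamma| of BatchNorm layers as a channel-importance
# proxy for visualization. The network below is a stand-in model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)

importance = {}
for name, module in model.named_modules():
    if isinstance(module, nn.BatchNorm2d):
        # Channels whose learned scale is near zero contribute little output.
        importance[name] = module.weight.detach().abs()

for name, score in importance.items():
    print(name, score.mean().item())
```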
Digital transformation is a prevailing trend in the world, especially in dynamic Asia. Vietnam has recorded remarkable changes in its economy as domestic enterprises have made new strides in the digital transformation process. MB Bank, one of the prestigious financial groups in Vietnam, also takes advantage of digital transformation as an opportunity to break through and become a large-scale technology enterprise, with benefits such as improving the customer experience, increasing the customer base and customer satisfaction, enhancing competitiveness, and building customer trust and loyalty. However, MB Bank's transformation process also involves many challenges that require the bank to adopt appropriate policies to handle them. It can be said that MB Bank is a typical case study of digital transformation in the banking sector in Vietnam.
Digital Transformation Maturity of Vietnam Aviation Industry: The Effect of Organizational Readiness
(2023)
The paper studies digital transformation maturity in the context of the aviation industry in Vietnam. Digital transformation can mean enhancing existing processes, finding new opportunities within existing business domains, or finding new opportunities outside existing business domains. In the post-Covid-19 era, digital transformation will play a vital role in the recovery, with digital technology supporting the communication and implementation of new projects and changes.
Digital transformation and digital transformation maturity are sometimes used interchangeably, but they are two different concepts. This paper explains the differences and applies digital transformation maturity as a scale for the digital transformation discussed in the report.
Due to the lack of empirical research on the relationship between digital transformation maturity and organizational readiness, the study explores four components of organizational readiness: digital leadership, digital culture, digital capabilities, and digital partnering.
FinTech is deemed an underexplored phenomenon in both academic and practical environments. Between (1) "Sustainable FinTech" – the application of information technology as innovation in established financial services providers' business operations – and (2) "Disruptive FinTech" – the provision of financial products and services by non-incumbents, in most cases information technology entrepreneurs – the former receives more attention. In order to contribute to the Disruptive FinTech category, this thesis strives to examine the Entrepreneurial Strategy framework applied to technology players in the Vietnamese financial market.
Challenges of Implementing Innovation Strategies at Large Organizations: A case of Lotte Group
(2023)
For many decades, one of the most important focuses of research has been on determining whether there is a correlation between the size of an organization and its level of innovation. Unlike small companies, large companies often have well-established structures that are hard to change, and change management seems to be much more difficult, especially where innovation is concerned. Nevertheless, there are many examples proving the opposite: some large organizations like Apple and Amazon consistently show great innovation efforts and keep changing in a very positive way. Therefore, the aim of this thesis is to discuss how large organizations can implement innovation despite having several drawbacks compared to SMEs. Through a qualitative research approach, the researcher explores essential information on the innovation strategies that large companies use in order to innovate and how they can overcome existing challenges, by studying the working processes of Lotte Group – one of the biggest companies in Korea.
The paper explores which factors influence the entrepreneurial intention among students in the construction industry, specifically among students of Hanoi Construction University and Hanoi Architecture University, and examines the impact of those factors. Based on this research, the study also proposes some solutions for entrepreneurship in the construction field in Vietnam for future study. The Theory of Planned Behavior is used as the theoretical framework for this study. Both qualitative and quantitative methods are employed. A questionnaire is administered to students of the two universities mentioned above; then, an exploratory factor analysis (EFA) is performed to test the validity of the constructs. The research findings identify the factors influencing entrepreneurial intention and their impact, and propose some solutions to improve entrepreneurship in the construction field in Vietnam.
Predictive Process Monitoring is becoming more prevalent as an aid for organizations to support their operational processes. However, most software applications available today require extensive technical know-how from the operator and are therefore not suitable for most real-world scenarios. Therefore, this work presents a prototype implementation of a Predictive Process Monitoring dashboard in the form of a web application. The system is based on the PPM Camunda Plugin presented by Bartmann et al. (2021) and allows users to easily create metrics, visualizations to display these metrics, and dashboards in which visualizations can be arranged. A usability test with test users of different computer skill levels is conducted to confirm the application's user-friendliness.
In this thesis the possibilities for real-time visualization of OpenVDB files are investigated. The basics of OpenVDB and its capabilities, as well as NanoVDB and its GPU port, were studied. A system was developed using PNanoVDB, the graphics API port of OpenVDB. Techniques were explored to improve and accelerate a single-ray approach to ray tracing. To prove real-time capability, two single-scattering approaches were also implemented. One of these was selected, further investigated, and optimized to achieve interactive real-time rendering.
It is important to give artists immediate feedback on their adjustments, as well as the possibility to change all parameters, to ensure a user-friendly creation process.
In addition to the optical rendering, corresponding benchmarks were collected to compare different improvement approaches and to prove their relevance. Attention was paid to the rendering times and memory consumption on the GPU to ensure optimal use. When rendering OpenVDB files, a special focus was put on the integrability and extensibility of the program to allow easy integration into an existing real-time renderer like U-Render.
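To make the single-scattering idea concrete, here is a generic volume ray-marching sketch in NumPy; it is not the thesis's PNanoVDB implementation and, for simplicity, assumes unoccluded light at every sample:

```python
# Generic single-scattering ray march through a density volume (NumPy sketch;
# the actual thesis work uses PNanoVDB on the GPU).
import numpy as np

def march(density, origin, direction, light, step=0.05, sigma_t=4.0):
    """Return the radiance from (simplified) single scattering along one ray."""
    pos = origin.astype(float)
    transmittance, radiance = 1.0, 0.0
    for _ in range(256):
        i, j, k = (pos / step).astype(int)
        if not (0 <= i < density.shape[0] and 0 <= j < density.shape[1]
                and 0 <= k < density.shape[2]):
            break
        d = density[i, j, k]
        # Light scattered toward the camera at this sample; a full single-
        # scattering solution would also attenuate along the light direction.
        radiance += transmittance * d * light * step
        transmittance *= np.exp(-sigma_t * d * step)
        pos += direction * step
    return radiance

volume = np.random.rand(32, 32, 32)  # placeholder for an OpenVDB grid
print(march(volume, np.zeros(3), np.array([1.0, 0.0, 0.0]), light=1.0))
```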
The growing numbers of breeding rooks (Corvus frugilegus) in the city of Landau (Rhineland-Palatinate, Germany) increase the potential for conflict between rooks and humans, which is mainly associated with noise and faeces. Therefore, the aim of this work is a better understanding of the breeding tree selection of the rook in order to develop options for action and management in the future.
Part I of this thesis provides general background information on the rook and includes mapping of the rookeries in the Anterior Palatinate and South Palatinate including Landau in the year 2020. That mapping revealed that the number of rural colonies has decreased, while the number of urban colonies has increased in the study area in the last few years. In line with current literature, tree species and tree size were important criteria for breeding tree selection. However, the mapping showed that additional factors must be important as well.
Therefore, as rooks seem to often breed along traffic axes, Part II of this thesis examines how temperature, artificial light and noise, which are all linked to traffic axes, affect the breeding tree selection of the rook in the city of Landau. The following three hypotheses are developed: (1) manually selected breeding trees (Bm) have a warmer microclimate than manually selected non-breeding trees (Nm) or randomly selected non-breeding trees (Nr), (2) Bm are exposed to a higher light level than Nm or Nr and (3) Bm are exposed to a higher noise level than Nm or Nr. To test these hypotheses, 15 Bm, 13 Nm and 16 Nr are investigated.
The results show that Bm were exposed to more noise than both types of non-breeding trees (mean noise: Bm 36.52481 dB, Nm 31.27229 dB, Nr 29.17417 dB), with the difference between Bm and Nr being significant. In addition, there was a tendency for Bm to be exposed to less light (mean 0.356 lx) than Nm (0.4107692 lx), and significantly less light than Nr (1.995 lx), while temperature did not differ between the groups (Bm 16.90549 °C, Nm 16.93118 °C, Nr 17.28639 °C).
This study shows for the first time that rooks prefer trees for breeding which are exposed to low light levels and high noise levels, i.e. more intense traffic noise. It can only be speculated that the cause of this is lower predation pressure at such sites. The fact that temperature does not seem to have any influence on breeding tree selection may be due to only small temperature differences at nest height, which might be compensated by breeding behaviour. Consequently, in the long term one management approach could be to divert traffic from inner-city areas, especially around schools and hospitals, to bypasses. If tree genera suitable for rooks, such as plane trees, are planted along the bypasses, those sites could provide suitable alternative habitats to inner-city breeding locations, which become less attractive for breeding due to noise reduction. In the short term, in addition to locally implemented repellent measures, the most effective approach is to strengthen rook acceptance among the population. However, further research is needed to verify the results of this thesis and to gain further insights into rook breeding site selection in order to develop effective management measures.
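Group comparisons of this kind can in principle be reproduced with a standard nonparametric test; the sketch below, with invented sample values rather than the thesis data, compares noise levels across the three tree groups using SciPy:

```python
# Hedged sketch: compare noise exposure of breeding (Bm) and non-breeding
# (Nm, Nr) trees with a Kruskal-Wallis test. Values are illustrative only.
from scipy import stats

bm_noise = [36.1, 37.0, 35.8, 38.2, 36.5]  # dB, manually selected breeding trees
nm_noise = [31.5, 30.8, 32.0, 31.1, 30.9]  # dB, manually selected non-breeding trees
nr_noise = [29.0, 28.7, 29.8, 29.3, 29.1]  # dB, randomly selected non-breeding trees

h, p = stats.kruskal(bm_noise, nm_noise, nr_noise)
print(f"H = {h:.2f}, p = {p:.4f}")  # small p suggests group differences
```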
The application of artificial intelligence to digital games has become more and more successful in recent years. A drawback is that it needs lots of computing power to achieve good results; the more complex the game, the more computing power is needed. In this thesis a strategy-learning system is implemented which is based on crowd-learned heuristics, given in a wiki. The research is done according to the Design Science Research Methodology. The implemented system is applied to the game Dominion; to this end, an ontology for Dominion is designed. A mapping language is defined and implemented in the system, which allows the mapping of information in the wiki to an ontology. Furthermore, metrics to rate the found strategies are defined. Using the system, users can enter a mapping for the information transfer and apply it. They can also select cards from Dominion, for which the system determines and rates strategies. Finally, the system is evaluated by Dominion players, who rate the strategies found by the system and the defined metrics.
The purpose of this thesis is to focus on the critical research challenges and topics surrounding UI/UX design principles, with an emphasis on cross-cultural concepts from the perspective of e-learning platforms. To this end, we first consider the cultural dimensions based on Hofstede's framework, with the aim of identifying important cultural values. As the second objective of the research, a set of criteria, the so-called usability heuristics by Nielsen, facilitates the detection of usability problems in the design of user interfaces (UI). The usability heuristics comprise ten variables that influence the interaction between the user and a product or system. By examining these topics more closely, we will be able to uncover a matrix of relationships between Nielsen's heuristic evaluation and Geert Hofstede's cultural framework. Finally, we discuss the potential of cultural values to influence user interfaces for e-learning platforms. Indeed, there are some features in e-learning platforms that are less discussed because of culture, even though they can be integrated into the platforms very practically.
In the Master's thesis by Benjamin Waldmann, entitled „Flusskrebse in Deutschland – Aktueller Stand der Verbreitung heimischer und invasiver gebietsfremder Flusskrebse in Deutschland; Überblick über die erfolgten Schutzmaßnahmen und den damit verbundenen Erfahrungen; Vernetzung der Akteure im Flusskrebsschutz" (crayfish in Germany: current state of the distribution of native and invasive alien crayfish; overview of the conservation measures taken and the experience gained; networking of stakeholders in crayfish conservation), distribution maps for Germany were presented for the first time for all native and non-native crayfish species (decapods). The thesis was based on extensive research and queries on the distribution of the species in the federal states, directed at the responsible authorities, institutions, species experts, and private individuals. The raw data were quality-assured, processed, and displayed in a geographic information system, so that nationwide distribution maps could be created for each species on a ten-kilometre grid (UTM grid in the ETRS89 reference system). In addition, also based on extensive research and queries, the various options for conservation measures for native crayfish populations were presented and evaluated, and recommendations were derived from them. Particular attention was paid to the management of invasive alien crayfish species and to dealing with the animal disease crayfish plague (Aphanomyces astaci). Finally, recommendations were made for networking the stakeholders in crayfish conservation, and the contact persons in the individual federal states were listed.
Rudolf Virchow's Ideas of a Humane, Civic Nursing Care in the Era of the Prussian Kulturkampf
On the disputes in the Prussian House of Representatives between Rudolf Virchow, the Ministry of Culture, and the Centre Party over the Catholic nursing orders – by Hilmar Conrad
The physician and liberal member of the Prussian House of Representatives Rudolf Virchow is known to many for the sentence "Omnis cellula e cellula"1 from his work 'Die Zellularpathologie'. With this principle and the underlying research, he demonstrated that cells are, in morphology and function, the smallest unit of the human body.2 The later professor of medicine was born on 13 October 1821 in Schivelbein and died on 5 September 1902 in Berlin. The year 1848 can be regarded as a fateful year for Virchow in two respects. He was a military physician at the time and was sent by the Prussian government to Upper Silesia in the winter of 1847/48, where a typhus epidemic was raging; his task was the scientific analysis of the events. On his 80th birthday he said about this journey: "Those 16 days in Upper Silesia, 1848, were the decisive event of my life."3 Revolution had broken out in Berlin, and once back there he took part in the fighting for democracy.4 In the report on his journey he concluded that preventing such an epidemic in Upper Silesia would require democratization and education of the population; this was the beginning of social medicine. From 1859 he was a Berlin city councillor and from 1861 a member of the Prussian House of Representatives for the left-liberal German Progress Party (Deutsche Fortschrittspartei), which he had founded together with fellow campaigners. From 1880 to 1893 he was a member of the German Reichstag.5 For him, "medicine [...] is a social science, and politics is nothing more than medicine on a grand scale."6
What makes this person so fascinating is that he belonged to two groups that spoke influentially about nursing care: politicians and physicians.
On 9 November 1869, at the congress of the German women's associations in Berlin, Rudolf Virchow called for nursing care organized outside the churches:
"Let us organize entirely outside the church organization, let us organize entirely within civil society as such, according to purely human tasks, without any further ulterior purpose."7
What Virchow demanded here is a turn toward the 'humane'8, as he called it and as he also explained in several debates in the House of Representatives. This is not surprising, as he was significantly involved in the parliamentary Kulturkampf and was a vehement advocate of secularization.9 The scientifically relevant problem here is to present Virchow's central statements on nursing care in the context of the debates conducted in the Prussian House of Representatives. The scholarly literature deals only superficially with what was said before and after his contributions, even though these were debates; and herein lies the relevance of this thesis. The interest lies, of course, in tracing Virchow's ideas about nursing care on the basis of his contributions in the Prussian House of Representatives, but also in reconstructing how the debates on nursing care in which Virchow took part unfolded as a whole.
Accordingly, the aim of this thesis is to work out which views Rudolf Virchow held on the nursing care of his time, how he defended them in the Prussian House of Representatives, how the views of his political opponents and of the Prussian government related to them, and, consequently, how the debates on nursing care in the Prussian House of Representatives proceeded.
1 Virchow, quoted in Schipperges (1994): Rudolf Virchow, p. 58.
2 Schipperges (2008): Virchow, Rudolf (Ludwig Carl), p. 257.
3 Andree (2006): Rudolf Virchow (1821–1902) im Spannungsfeld von Glauben, Kirche und Staat, p. 99.
4 Cf. ibid., pp. 99–100.
5 Cf. Schipperges (2008): Virchow, Rudolf (Ludwig Carl), pp. 257–258.
6 Schipperges (1994): Rudolf Virchow, p. 113. Italics in the original.
7 Virchow (1879): Die berufsmässige Ausbildung zur Krankenpflege, auch ausserhalb der kirchlichen Organisationen, p. 49. Letter-spaced in the original.
8 Cf. Virchow (1875): Rede im Preußischen Abgeordnetenhaus, Berlin, 8 May 1875, p. 207.
9 Cf. Virchow (1873): Rede im Preußischen Abgeordnetenhaus, Berlin, 17 January 1873, p. 359.
Advanced Auditing of Inconsistencies in Declarative Process Models using Clustering Algorithms
(2021)
To have a compliant business process in an organization, it is essential to ensure a consistent process. Whether a process is consistent or not depends on the business rules of the process: if the process adheres to these business rules, it is compliant and efficient. For huge processes, this is quite a challenge, and an inconsistency in a process can quickly lead to a non-functional process, which is a severe problem for organizations. This thesis presents a novel auditing approach for handling inconsistencies from a post-execution perspective. The tool identifies run-time inconsistencies and visualizes them in heatmaps. These plots aim to help modelers observe the most problematic constraints and make the right remodeling decisions. Many variables can be set in the tool to produce different heatmap representations, helping modelers grasp all perspectives of the problem. The heatmaps sort and show the run-time inconsistency patterns so that modelers can decide which constraints are highly problematic and should be remodeled. The tool can be applied to real-life data sets within a reasonable run-time.
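A minimal version of such a heatmap, with hypothetical constraint names and violation counts, could be plotted with seaborn as follows:

```python
# Hedged sketch: visualize run-time inconsistencies per constraint and trace
# as a heatmap. Constraint names and counts are made up for illustration.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

violations = pd.DataFrame(
    {"trace_1": [4, 0, 2], "trace_2": [7, 1, 0], "trace_3": [5, 0, 3]},
    index=["Response(A,B)", "Precedence(C,D)", "NotCoexist(B,E)"],
)

sns.heatmap(violations, annot=True, cmap="Reds")
plt.title("Run-time constraint violations per trace")
plt.tight_layout()
plt.show()
```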
Point Rendering
(2021)
In this thesis different methods for rendering point data are shown and compared with each other. The methods can be divided into two categories. On the one hand, visual methods are introduced that strictly deal with the display of point primitives. The main problem here lies in the depiction of surfaces, since point data, unlike traditional triangle meshes, doesn't contain any connectivity information. On the other hand, data structures are shown that enable real-time rendering of large point clouds. Point clouds often contain large amounts of data, since they are mostly generated through 3D scanning processes such as laser scanning and photogrammetry.
In this thesis, the performance of the IceCube project's photon propagation code (clsim) is optimized. The process of GPU code analysis and performance optimization is described in detail. When run on the same hardware, the new version achieves a speedup of about 3x over the original implementation. Comparing the unmodified code on hardware currently used by IceCube (NVIDIA GTX 1080) against the optimized version run on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes made to the code are shown, and their performance impact as well as the implications for simulation accuracy are discussed individually.
The approach taken for optimization is then generalized into a recipe. Programmers can use it as a guide when approaching large and complex GPU programs. In addition, the per-warp job queue, a design pattern used for load balancing among threads in a CUDA thread block, is discussed in detail.
Social networks play a decisive role in the everyday lives of school students. In this Master's thesis, a concept for displaying profile suggestions within the social network "InstaHub", a tool programmed specifically for computer science classes on the topic of databases, was developed. One hurdle was that the established social networks disclose little to no information about how profile or friendship suggestions are computed. Therefore, the nature of relationships between people in non-internet-based and internet-based social networks, as well as the reasons for relationships between people in these networks, were first set out. Based on observations of suggestions in other social networks and on the user data stored in InstaHub, an algorithm for profile suggestions in InstaHub was designed and implemented together with a suitable visualization. The second part of the thesis was a teaching unit for upper secondary education on the dangers of generating and processing personal data. In this unit, the profile suggestions in InstaHub, which build on data InstaHub collects about its users, serve as an entry point into the topic. The focus is then widened from social networks to other online services and to the processing and sharing of such data.
This thesis focuses on approximate inference in assumption-based argumentation frameworks. Argumentation provides a significant idea in the computerization of theoretical and practical reasoning in AI, where agents engage in arguments to perform scientific reasoning. The fundamental approach in this field is the abstract argumentation framework developed by Dung. Assumption-based argumentation can be regarded as an instance of abstract argumentation with structured arguments. When facing a large amount of data, a challenge of reasoning in assumption-based argumentation is how to construct arguments and resolve attacks over a given claim with minimal computational cost and acceptable accuracy at the same time. This thesis proposes and investigates approximate methods that randomly select and construct samples of frameworks based on graphical dispute derivations to solve this problem. The presented approach aims to improve reasoning performance and achieve an acceptable trade-off between computation time and accuracy. The evaluation shows that for reasoning in assumption-based argumentation, the running time is in general reduced, at the cost of slightly lower accuracy, by randomly sampling and constructing inference rules for potential arguments over a query.
The Material Point Method (MPM) has proven to be a very capable simulation method in computer graphics that is able to model materials that were previously very challenging to animate [1, 2]. Apart from simulating singular materials, the simulation of multiple materials that interact with each other introduces new challenges. This is the focus of this thesis. It will be shown that the self-collision capabilities of the MPM can naturally handle multiple materials interacting in the same scene on a collision basis, even if the materials use distinct constitutive models. This is then extended by porous interaction of materials as in [3], which also integrates easily with MPM. It will furthermore be shown that regular single-grid MPM can be viewed as a subset of this multi-grid approach, meaning that its behavior can also be achieved if multiple grids are used. The porous interaction is generalized to arbitrary materials and freely changeable material interaction terms, yielding a flexible, user-controllable framework that is independent of specific constitutive models. The framework is implemented on the GPU in a straightforward and simple way and takes advantage of the rasterization pipeline to resolve write conflicts, resulting in a portable implementation with wide hardware support, unlike other approaches such as [4].
"Liaison is a complex phenomenon whose phenomenology is still the subject of research and debate today. In the classical orthoepic or descriptive literature, as in the most recent research, liaison is considered a multi-parametric phenomenon, and all linguistic levels are involved: phonology, prosody and syllabification, morphology, syntax, lexicon and semantics, diachrony, orthography and the differentiation of styles [...] all the dimensions of external variation: variation in time, in geographical space and in social space, variation in the stylistic space of discourse genres"
(Eychenne/Laks 2017:1).
This quotation describes liaison as a highly complex phenomenon influenced by many parameters. How do learners deal with such a phenomenon? Which liaisons do they realize, and how frequently? Which errors occur? What are the reasons for these errors? What effect does a longer stay abroad in a French-speaking country have on a learner's production of liaisons? Are there differences between the acquisition of liaison by children with French as their first language (L1) and by learners of French as a foreign language (L2)?
All of these questions are addressed in the course of this thesis. After compiling some basic facts about liaison, a corpus of French speech recordings of German university students is therefore analyzed. The results are then presented and compared first with results of children with French as L1 and subsequently with findings of other studies on learners of French.
The industry standard Decision Model and Notation (DMN) has enabled a new way to formalize business rules since 2015. Rules are modeled in so-called decision tables, which are defined by input columns and output columns. Furthermore, decisions are arranged in a graph-like structure (the DRD level), which creates dependencies between them. With a given input, the decisions can then be requested by appropriate systems, and activated rules produce output for further use. However, modeling mistakes produce erroneous models, and such errors can occur in the decision tables as well as at the DRD level. Following the Design Science Research Methodology, this thesis introduces an implementation of a verification prototype for the detection and resolution of these errors during the modeling phase. The basics presented provide the theoretical foundation needed for the development of the tool. This thesis further presents the architecture of the tool and the implemented verification capabilities. Finally, the created prototype is evaluated.
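One typical table-level check, detecting rules whose input ranges overlap, can be sketched as follows (the single-input interval representation and the rules are assumptions for illustration):

```python
# Hedged sketch: detect overlapping rules in a one-input DMN decision table.
# Rules are modeled as half-open numeric intervals; real DMN inputs can also
# be enumerations or compound conditions.
from itertools import combinations

rules = {
    "rule1": (0, 10),    # e.g. amount in [0, 10)
    "rule2": (8, 20),    # overlaps rule1 on [8, 10)
    "rule3": (20, 30),
}

def overlaps(a, b):
    # Two half-open intervals intersect iff each starts before the other ends.
    return a[0] < b[1] and b[0] < a[1]

for (n1, r1), (n2, r2) in combinations(rules.items(), 2):
    if overlaps(r1, r2):
        print(f"Overlap between {n1} and {n2}")
```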
On-screen interactive presentations have gained immense popularity in the domain of attentive interfaces recently. These attentive screens adapt their behavior according to the user's visual attention. This thesis aims to introduce an application that enables these attentive interfaces to change their behavior not just according to gaze data but also according to facial features and expressions. The modern era requires new ways of communication and publication for advertisement, and ads need to be targeted to people's interests, age, and gender. When advertising, it is important to get a reaction from the user, but not every user is interested in providing feedback. In such a context, more advanced techniques are required that collect users' feedback effortlessly. The main problem this thesis intends to resolve is to apply advanced techniques of gaze and face recognition to collect data about users' reactions to different ads played on interactive screens. We aim to create an application that enables attentive screens to detect a person's facial features, expressions, and eye gaze. With eye gaze data we can determine interests, and with facial features, age and gender can be estimated. All this information helps in optimizing the advertisements.
Blockchain in Healthcare
(2020)
The underlying characteristics of blockchain can facilitate data provenance, data integrity, data security, and data management, and it has the potential to transform the healthcare sector. Since the introduction of Bitcoin in the fintech industry, blockchain technology has been gaining a lot of traction, and its purpose is not limited to finance. This thesis highlights the inner workings of blockchain technology and its application areas, together with possible existing solutions. Blockchain could lay the path for a new revolution in conventional healthcare systems. We present how individual sectors within the healthcare industry could use blockchain and which solutions exist. We also present our own concept, based on the Hyperledger framework, to improve the existing paper-based prescription management system. The results of this work suggest that healthcare can benefit from blockchain technology by enabling new ways of treating patients.
Since the invention of the U-Net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. It has been proven that such architectures can also be successfully applied in other domains, winning numerous competitions in recent years. The transfer learning technique has also created an opportunity to push state-of-the-art benchmarks to a higher level. Using this approach is beneficial for the medical domain, as collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with Deep Learning and make three main contributions and release experimental results that have practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on a dataset of cervical spine MRI scans. Second, we use transfer learning from models trained on the ImageNet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on the bias-field-corrected and raw MRI data. All code to reproduce the results is publicly available online.
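As a sketch of this transfer-learning setup (using the segmentation_models_pytorch package as one possible realization; the exact architectures in the thesis may differ), an ImageNet-pretrained encoder can be plugged into a U-Net like this:

```python
# Hedged sketch: U-Net with an ImageNet-pretrained encoder, as one way to
# realize the transfer-learning comparison. Channel/class counts are assumed.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # backbone architecture
    encoder_weights="imagenet",     # set to None for random initialization
    in_channels=1,                  # single-channel MRI slices
    classes=2,                      # e.g. background vs. spine structure
)

x = torch.randn(1, 1, 256, 256)     # dummy MRI slice
with torch.no_grad():
    mask_logits = model(x)          # shape: (1, 2, 256, 256)
print(mask_logits.shape)
```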
Constituent parsing attempts to extract syntactic structure from a sentence. Such parsing systems are helpful in many NLP applications such as grammar checking, question answering, and information extraction. This thesis is about implementing a constituent parser for the German language using neural networks. In the past, recurrent neural networks have been used in building parsers as well as many other NLP applications. Here, self-attention neural network modules are used intensively to understand sentences effectively. With multi-layered self-attention networks, constituent parsing achieves a 93.68% F1 score. This is improved even further by using both character and word embeddings as the input representation: an F1 score of 94.10% was the best achieved by the constituent parser using only the dataset provided. With the help of external datasets such as German Wikipedia, pre-trained ELMo models are used along with self-attention networks, achieving a 95.87% F1 score.
This thesis is devoted to the topic of challenges and solutions for human resources management (HRM) in international organizations. The aim is to investigate methodological approaches to the assessment of HRM challenges and solutions, to apply them in practice, and to develop ways of improving the HRM of a particular enterprise. The practical research question investigated is: "Is the Ongoing Professional Development – Strategic HRM (OPD-SHRM) model a better solution for the HRM system of PrJSC "Philip Morris Ukraine"?"
To achieve the aim of this work and to answer the research question, we have studied theoretical approaches to explaining and assessing HRM in section 1, analyzed HRM system of an international enterprise in section 2, and then synthesized theory and practice to find intersection points in section 3.
Research findings indicate that the main challenge of HRM is to balance individual and organizational interests, and implementation of OPD-SHRM is one of the solutions. Switching the focus from satisfaction toward success will bring both tangible and intangible benefits for individuals and the organization. In the case of PrJSC "Philip Morris Ukraine", the maximum forecasted increase is 330% in net profit, 350% in labor productivity, and 26% in the Employee Development and Engagement Index.
Current political issues are often reflected in social media discussions, gathering politicians and voters on common platforms. As these can affect the public perception of politics, the inner dynamics and backgrounds of such debates are of great scientific interest. This thesis takes user-generated messages from an up-to-date dataset of considerable relevance as time series and applies a topic-based analysis of inspiration and agenda setting to it. The Institute for Web Science and Technologies of the University Koblenz-Landau has collected Twitter data generated by candidates of the European Parliament Election 2019. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and the media on online debates. An algorithm to cluster tweets into topical threads is introduced; subsequently, sequential association rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
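The sequential association rules mentioned above can be illustrated with a small pure-Python counter over topical threads; the actor sequences below are invented, and support and confidence follow their standard definitions:

```python
# Hedged sketch: count simple sequential rules "X is later followed by Y"
# over per-thread actor sequences. The sequences below are invented.
from collections import Counter
from itertools import combinations

threads = [
    ["media", "politician", "citizen"],
    ["media", "citizen"],
    ["politician", "media", "citizen"],
]

pair_counts, antecedents = Counter(), Counter()
for seq in threads:
    seen = set()
    for x, y in combinations(seq, 2):   # ordered pairs: x occurs before y
        if (x, y) not in seen:
            pair_counts[(x, y)] += 1
            seen.add((x, y))
    for actor in set(seq):
        antecedents[actor] += 1         # threads in which the actor appears

for (x, y), c in pair_counts.items():
    print(f"{x} -> {y}: support={c}/{len(threads)}, "
          f"confidence={c / antecedents[x]:.2f}")
```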
Software architectures that have grown in an uncontrolled way are typically characterized by missing or poorly traceable structures. Reasons for this can be, for example, inadequate definitions or slow erosion; this is also known as a "Big Ball of Mud". In the long run, such architectural deficiencies not only increase development costs but can ultimately prevent changes altogether.
A software architecture therefore requires continuous further development in order to counteract such effects. A good software architecture supports software development and increases productivity. At the source-code level, established procedures for the controlled improvement of quality already exist. In contrast, there are no generally applicable procedures for improving a software architecture that can be applied independently of the use case. This is where the present thesis comes in.
Previous work either addresses only parts of the problem or, although procedures for making architectural decisions exist, operates at a highly abstract level without practical examples. This thesis presents a lightweight procedure for the targeted improvement of a software architecture. The procedure is based on a generic problem-solving process, from which a process for solving problems of a software architecture has been developed. To limit the scope, the thesis focuses on architectural problems arising from required variability and from external dependencies.
The scientific methodology underlying this thesis follows Design Science Research (DSR). Over several iterations, a procedure was developed that is aimed at software developers with two to three years of experience and knowledge of the fundamentals of software development and software architecture. Five steps, including references to relevant literature, then guide users through the process of systematically improving a software architecture.
Belief revision is the subarea of knowledge representation which studies the dynamics of the epistemic states of an agent. In the classical AGM approach, contraction, as part of belief revision, deals with the removal of beliefs from knowledge bases. This master's thesis presents the study and implementation of concept contraction in the Description Logic EL. Concept contraction deals with the following situation: given two concepts C and D such that C is subsumed by D, how can concept C be changed so that it is no longer subsumed by D, while staying as similar as possible to C? This approach to belief change differs from other related work because it deals with contraction at the level of concepts rather than TBoxes and ABoxes in general. The main contribution of the thesis is the implementation of concept contraction. The implementation provides insight into the complexity of contraction in EL, which is tractable since the main inference task in EL is also tractable. The implementation consists of the design of five algorithms that are necessary for concept contraction. The algorithms are described, illustrated with examples, and analyzed in terms of time complexity. Furthermore, we propose a new approach for a selection function adapted to concept contraction. The selection function uses metadata about the concepts in order to select the best ones from an input set; the metadata is modeled in a framework that we have designed, based on standard metadata frameworks. As an important part of concept contraction, the selection function is responsible for selecting the best concepts that are as similar as possible to concept C. Lastly, we have successfully implemented concept contraction in Python, and the results are promising.
Constructing a business process model manually is a highly complex and error-prone task which takes a lot of time and deep insight into the organizational structure, its operations, and its business rules. To improve the output of business analysts dealing with this process, researchers have introduced different techniques that support them during construction with helpful recommendations. These supporting recommendation systems vary in what they recommend in the first place as well as in the calculations taking place under the hood to recommend the element that fits best. After a broad introduction to the field of business process modeling and its basic recommendation structures, this work takes a closer look at diverse proposals and descriptions published in current literature regarding implementation strategies to effectively and efficiently assist modelers during business process model creation. A critical analysis of the selected literature points out the strengths and weaknesses of its approaches, studies, and descriptions. As a result, the final concept matrix in this work gives a precise and helpful overview of the key features and recommendation methods used and implemented in previous research studies, to pinpoint an entry into future work without the downsides already spotted by fellow researchers.
Commonsense reasoning can be seen as a process of identifying dependencies amongst events and actions. Understanding the circumstances surrounding these events requires background knowledge with sufficient breadth to cover a wide variety of domains. In recent decades, there has been a lot of work on extracting commonsense knowledge, and a number of these projects provide their collected data as semantic networks such as ConceptNet and CausalNet. In this thesis, we undertake the Choice Of Plausible Alternatives (COPA) challenge, a problem set with 1000 questions written in multiple-choice format with a premise and two alternative choices for each question. Our approach differs from previous work by using shortest paths between concepts in a causal graph, with the edge weight as the causality metric. We use CausalNet as the primary network, implement a few design choices to explore the strengths and drawbacks of this approach, and propose an extension using ConceptNet by leveraging its commonsense knowledge base.
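A minimal version of this scoring idea, choosing the alternative with the cheaper causal path from the premise, can be sketched with networkx (the toy graph and weights are invented; the thesis derives them from CausalNet):

```python
# Hedged sketch: pick the more plausible COPA alternative via shortest-path
# cost in a causal graph. Edge weights (lower = stronger causality) and the
# toy graph are invented stand-ins for CausalNet data.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("rain", "wet street", 0.2),
    ("rain", "umbrella", 0.5),
    ("wet street", "slip", 0.4),
    ("sun", "umbrella", 1.5),
])

def path_cost(graph, source, target):
    try:
        return nx.shortest_path_length(graph, source, target, weight="weight")
    except nx.NetworkXNoPath:
        return float("inf")

premise, alternatives = "rain", ["slip", "sun"]
best = min(alternatives, key=lambda alt: path_cost(G, premise, alt))
print(best)  # "slip": rain -> wet street -> slip has the cheaper path
```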
Implementation of Agile Software Development Methodology in a Company – Why? Challenges? Benefits?
(2019)
The software development industry is evolving day by day, and the introduction of agile software development methodologies was a tremendous structural change for companies. Agile transformation provides many opportunities and benefits to existing and newly founded companies; along with the benefits, however, it also brings many unseen challenges. New entrants have the advantage of being flexible and able to cope with environmental, consumer, and cultural changes, while existing companies are bound to rigid structures.
The goal of this research is to gain deep insight into agile software development methodology, the agile manifesto, and the principles behind it; the prerequisites a company must know for implementing agile software development; the benefits a company can achieve by implementing it; and the significant challenges a company can face during an agile implementation.
The research objectives of this study lead to strong motivational research questions. These research questions cover the cultural aspects of company agility, the values and principles of agile, and the benefits and challenges of agile implementation. The project management triangle shows how the benefits of cost, time, and quality can be achieved by implementing agile methodologies. Six significant areas are explored, showing the different challenges a company can face during the implementation of an agile software development methodology. In the end, after an in-depth systematic literature review, a conclusion is drawn, followed by open topics for future work and recommendations on the implementation of agile software development methodology in a company.
Business Process Querying (BPQ) is a discipline in the field of Business Process Management which helps experts to understand existing process models and accelerates the development of new ones. Its queries can fetch and merge these models, answer questions regarding the underlying process, and conduct compliance checking in return. Many languages have been deployed in this discipline but two language types are dominant: Logic-based languages use temporal logic to verify models as finite state machines whereas graph-based languages use pattern matching to retrieve subgraphs of model graphs directly. This thesis aims to map the features of both language types to features of the other to identify strengths and weaknesses. Exemplarily, the features of Computational Tree Logic (CTL) and The Diagramed Modeling Language (DMQL) are mapped to one another. CTL explores the valid state space and thus is better for behavioral querying. Lacking certain structural features and counting mechanisms it is not appropriate to query structural properties. In contrast, DMQL issues structural queries and its patterns can reconstruct any CTL formula. However, they do not always achieve exactly the same semantic: Patterns treat conditional flow as sequential flow by ignoring its conditions. As a result, retrieved mappings are invalid process execution sequences, i.e. false positives, in certain scenarios. DMQL can be used for behavioral querying if these are absent or acceptable. In conclusion, both language types have strengths and are specialized for different BPQ use cases but in certain scenarios graph-based languages can be applied to both. Integrating the evaluation of conditions would remove the need for logic-based languages in BPQ completely.
Data visualization is an effective way to explore data. It helps people to gain valuable insight into data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job. Users have to manually explore all possible visualizations and decide upon ones that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and an appealing appearance. To address these challenges, we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a dataset, creates a list of combinations of data attributes for scoring, and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated on these criteria. Only those visualizations that receive sufficiently high scores are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results are presented.
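The two-module structure can be pictured with a minimal sketch. The criteria names, judge functions, and threshold below are illustrative assumptions, since the abstract does not enumerate the ten criteria of the actual metric:

```python
# Hypothetical sketch of the detection-and-ranking pipeline described above.
# Criteria, judges, and threshold are placeholders, not taken from TAG²S².
from itertools import combinations

CRITERIA = ["relevance", "readability", "focus"]  # stand-ins for the ten criteria

def detect_candidates(columns):
    """Module 1: enumerate attribute combinations worth charting."""
    return list(combinations(columns, 2))

def score_chart(chart, judges):
    """Module 2: sum per-criterion scores (each judge returns a value in [0, 1])."""
    return sum(judges[name](chart) for name in CRITERIA)

def rank(columns, judges, threshold=2.0):
    """Keep only charts whose total utility clears the threshold, best first."""
    scored = [(score_chart(c, judges), c) for c in detect_candidates(columns)]
    return sorted((s for s in scored if s[0] >= threshold), reverse=True)
```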
Clubs such as the Scouts rely on the work of their volunteer members, who have a variety of tasks to accomplish. Sudden changes in their organizational teams and offices frequently cause planning steps to be lost and leave inexperienced members in charge of planning. Since these special requirements are not covered by existing tools, ScOuT, a planning tool for organizational administration, is designed and developed in this work to support clubs with regard to the problems mentioned. The focus was on identifying and applying suitable guidelines and heuristic methods to create a usable interface. The developed product was evaluated empirically through a user survey in terms of usability.
The result of this study shows that including these guidelines and methods alone already achieves a high degree of the desired goal. From this it can be concluded that, with the help of user-specific concept ideas and the application of suitable guidelines and methods, a solid basis for a usable application to support clubs can be created.
The goal of simulations in computer graphics is to reproduce realistic material phenomena. To this end, internal and external forces are accumulated in each time step. From these, new velocities are calculated, which ultimately change the positions of geometry or particles. Position Based Dynamics (PBD) omits the velocity layer and works directly on the positions. Constraints are a set of rules defining the simulated material; these rules must not be violated throughout the simulation. If a violation occurs, the offending positions are changed so that the constraints are fulfilled again. In this work a PBD framework is implemented that allows simulations of solids and fluids. Constraints are solved using GPU implementations of Gauss-Seidel and Gauss-Jacobi solvers. The results are physically plausible simulations that are real-time capable.
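The core loop can be illustrated with a minimal CPU sketch of one PBD time step, using a single distance constraint and a Gauss-Seidel-style projection; the thesis' GPU solvers and full constraint set are not reproduced here, and all names are illustrative:

```python
# Minimal PBD step sketch: predict positions, project distance constraints
# sequentially (Gauss-Seidel style), then recover velocities from positions.
import numpy as np

def pbd_step(x, v, inv_mass, edges, rest_len, gravity, dt, iterations=10):
    x_pred = x + dt * (v + dt * gravity)            # explicit position prediction
    for _ in range(iterations):
        for (i, j), d0 in zip(edges, rest_len):
            delta = x_pred[i] - x_pred[j]
            dist = np.linalg.norm(delta)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0.0:
                continue
            # Constraint C(x) = |x_i - x_j| - d0; move positions along its gradient
            corr = ((dist - d0) / (dist * w)) * delta
            x_pred[i] -= inv_mass[i] * corr
            x_pred[j] += inv_mass[j] * corr
    v_new = (x_pred - x) / dt                       # velocities follow from positions
    return x_pred, v_new
```

A Gauss-Jacobi variant would accumulate all corrections first and apply them at once, which maps more naturally to the GPU but typically converges more slowly.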
The erosion of the closed innovation paradigm in conjunction with increasing competitive pressure has boosted the interest of both researchers and organizations in open innovation. Despite such rising interest, several companies remain reluctant to open their organizational boundaries to practice open innovation. Among the many reasons for this reservation are the considerable complexity of transitioning toward open innovation and a lack of understanding of the procedures required for such endeavors. Hence, this thesis sets out to investigate how organizations can open their boundaries to successfully transition from closed to open innovation by analyzing the current literature on open innovation. In doing so, the transitional procedures are structured and classified into a model comprising three phases, namely unfreezing, moving, and institutionalizing of changes. Procedures of the unfreezing phase lay the foundation for a successful transition to open innovation, while procedures of the moving phase depict how the change occurs. Finally, procedures of the institutionalizing phase contribute to the sustainability of the transition by employing governance mechanisms and performance measures. Additionally, the individual procedures are characterized along with their corresponding barriers and critical success factors. As a result of this structured depiction of the transition process, a guideline is derived. This guideline comprises the actions commonly employed by successful practitioners of open innovation and may serve as a baseline for parties interested in the paradigm. With the derivation of the guideline and the concise depiction of the individual transitional phases, this thesis reduces the overall complexity and increases the comprehensibility of the transition and its implications for organizations.
With the appearance of modern virtual reality (VR) headsets on the consumer market, VR technology has seen the biggest boom in its history. Naturally, this has been accompanied by an increasing focus on the shortcomings of current VR hardware; control in VR in particular has always been a complex topic.
One possible solution is the Leap Motion, a hand-tracking device that was initially developed for desktop use but, since its last major software update, can be attached to standard VR headsets. This device allows very precise tracking of the user's hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion, providing a natural method of interaction between the user and the VR environment. Subsequently, user tests are performed to evaluate the interfaces' performance and to compare them to traditional VR controllers.
Data flow models in the literature are often very fine-grained, which carries over to the data flow analyses performed on them and thus decreases the analyses' understandability. Since a data flow model that abstracts from the majority of the modeled program's implementation details allows for potentially easier-to-understand data flow analyses, this master's thesis deals with the specification and construction of a highly abstracted data flow model and the application of data flow analyses to this model. The model and the analyses performed on it were developed in a test-driven manner so that a wide range of possible data flow scenarios could be covered. As a concrete data flow analysis, a static security check in the form of a detection of insufficient user input sanitization was performed. To date, there is no data flow model on a similarly high level of abstraction. The proposed solution is therefore unique and enables developers without expertise in data flow analysis to perform such analyses.
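Such a check can be pictured as a reachability query over an abstract data flow graph: flag every path from a user input to a sensitive sink that does not pass through a sanitizer. The sketch below is an illustrative reconstruction under assumed node roles, not the thesis' actual model:

```python
# Illustrative taint-style check: BFS from input sources that stops at
# sanitizing nodes; any sink still reached is an unsanitized flow.
from collections import deque

def find_unsanitized_flows(edges, sources, sanitizers, sinks):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    tainted, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ in sanitizers or succ in tainted:
                continue
            tainted.add(succ)
            queue.append(succ)
    return tainted & set(sinks)

# "input -> render" is flagged; "input -> escape -> query" is not.
assert find_unsanitized_flows(
    edges=[("input", "escape"), ("escape", "query"), ("input", "render")],
    sources={"input"}, sanitizers={"escape"}, sinks={"query", "render"},
) == {"render"}
```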
Despite the inception of new technologies at a breakneck pace, many analytics projects fail, mainly due to the use of incompatible development methodologies. As big data analytics projects differ from software development projects, the methodologies used in software development cannot be applied to analytics projects in the same fashion. Traditional agile project management approaches do not consider the complexities involved in analytics. In this thesis, the challenges involved in generalizing the application of agile methodologies are evaluated, and suitable agile frameworks that are more compatible with analytics projects are explored and recommended. The standard practices and approaches currently applied in the industry for analytics projects are discussed with regard to enablers and success factors for agile adoption. Finally, after a comprehensive discussion and analysis of the problem and its complexities, a framework is recommended that copes best with the discussed challenges and complexities and is generally well suited for most data-intensive analytics projects.
This thesis describes a teaching unit composed of the three areas "mathematical relations", "databases in social networks", and "data protection". A separate lesson plan is presented for each area.
In addition, a program for visualizing the relations of the social network Instahub was designed as part of this thesis; it is presented following the description of the teaching unit.
This thesis adapts the simulation game "Datenschutz 2.0", developed by Dietz and Oppermann, to students' present-day everyday life, enables its use in upper secondary education (Sekundarstufe II), and resolves the game's technical and legal issues. The topic of data protection addressed by the game is anchored in the Rhineland-Palatinate computer science curriculum for upper secondary education, where the term appears in the unit "assessing data collection under the aspect of data protection". In the game, however, no data is collected; instead, the data traces students themselves leave behind are examined. In the basic course, this form of data protection is found in the proposed unit "explaining and observing data security with consideration of cryptographic methods" under the topic of communication in computer networks. In the advanced course, data security appears in the unit and topic of the same name, as well as in the unit "assessing data collection under the aspect of data protection" within the topic of the interaction between computer systems, the individual, and society.
The Internet of Things (IoT) is a fast-growing technological concept which aims to integrate various physical and virtual objects into a global network to enable interaction and communication between those objects (Atzori, Iera and Morabito, 2010). The application possibilities are manifold and may transform society and economy similarly to the usage of the internet (Chase, 2013). Furthermore, the Internet of Things occupies a central role in the realisation of visionary future concepts such as the Smart City or Smart Healthcare. In addition, the utilisation of this technology promises opportunities for the enhancement of various sustainability aspects, and thus for the transformation to a smarter, more efficient and more conscious use of natural resources (Maksimovic, 2017). The action principle of sustainability is gaining increasing attention in societal and academic discourse, owing to the partly harmful consumption and production patterns of the last century (McWilliams et al., 2016). With regard to sustainability, the advancing application of IoT technology also poses risks. Following the precautionary principle, these risks should be considered early (Harremoës et al., 2001). Risks of IoT for sustainability include the massive amounts of energy and raw materials required for the manufacturing and operation of IoT objects, and furthermore the disposal of those objects (Birkel et al., 2019). The exact relations in the context of IoT and sustainability are insufficiently explored to this point and do not constitute a central element within the discussion of this technology (Behrendt, 2019). Therefore, this thesis aims to develop a comprehensive overview of the relations between IoT and sustainability.
To achieve this aim, this thesis utilises the methodology of Grounded Theory in combination with a comprehensive literature review. The analysed literature primarily consists of research contributions in the field of Information Technology (IT). Based on this literature, aspects, solution approaches, effects and challenges in the context of IoT and sustainability were elaborated. The analysis revealed two central perspectives in this context: IoT for Sustainability (IoT4Sus) describes the utilisation of IoT-generated information to enhance sustainability aspects. In contrast, Sustainability for IoT (Sus4IoT) focuses on the sustainability aspects of the applied technology and highlights methods to reduce the negative impacts associated with the manufacturing and operation of IoT. The elaborated aspects and relations are illustrated in the comprehensive CCIS Framework. This framework represents a tool for capturing relevant aspects and relations in this context and thus supports awareness of the link between IoT and sustainability. Furthermore, the framework suggests an action principle to optimise the performance of IoT systems regarding sustainability.
The central contribution of this thesis is the provision of the CCIS Framework and the information it contains regarding the aspects and relations of IoT and sustainability.