With the arrival of modern virtual reality (VR) headsets on the consumer market, VR technology has experienced the largest upswing in its history. At the same time, the shortcomings of current VR hardware have moved increasingly into the foreground. Input and interaction in VR in particular have always been a complex topic.
One possible solution is the Leap Motion: a hand-tracking device originally developed for desktop use which, since its last major software update, can be mounted on common VR headsets. The device tracks both hands and all fingers very accurately, making it possible to replicate them fully in the virtual world and to use them for interaction.
The goal of this thesis is to design virtual user interfaces that can be operated with the Leap Motion, enabling natural interaction between the user and the VR environment. A demo application is then used to conduct user tests in order to evaluate their performance and compare them with conventional VR controllers.
The purpose of this thesis is to explore the sentiment distributions of Wikipedia concepts.
We analyse the sentiment of the entire English Wikipedia corpus, which includes 5,669,867 articles and 1,906,375 talks, by using a lexicon-based method with four different lexicons.
Also, we explore the sentiment distributions from a time perspective using the sentiment scores obtained from our selected corpus. The results obtained have been compared not only between articles and talks but also among four lexicons: OL, MPQA, LIWC, and ANEW.
Our findings show that, among the four lexicons, MPQA has the highest and ANEW the lowest sensitivity to emotional expressions. Wikipedia articles show more sentiment than talks according to OL, MPQA, and LIWC, whereas Wikipedia talks show more sentiment than articles according to ANEW. Moreover, sentiment exhibits a trend over time, and each lexicon has its own bias with respect to texts describing different things.
Moreover, our research provides three interactive widgets for visualising sentiment distributions for Wikipedia concepts regarding the time and geolocation attributes of concepts.
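The lexicon-based scoring used in such an analysis can be illustrated with a minimal sketch. The word lists below are invented stand-ins, not entries from OL, MPQA, LIWC, or ANEW, and the scoring scheme is a deliberately simple assumption:

```python
# Minimal lexicon-based sentiment scoring: count matches of positive and
# negative lexicon entries and normalize by the number of tokens.
POSITIVE = {"good", "great", "notable", "improved"}   # stand-in lexicon
NEGATIVE = {"bad", "poor", "disputed", "vandalism"}   # stand-in lexicon

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; 0 means neutral or no lexicon hits."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(sentiment_score("a good and notable article"))       # positive
print(sentiment_score("disputed content and vandalism"))   # negative
```

Real lexicons differ mainly in which words they list and whether they assign graded rather than binary polarity, which is exactly where the sensitivity differences between MPQA and ANEW come from.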
To construct a business process model manually is a highly complex and error-prone task which takes a lot of time and deep insights into the organizational structure, its operations and business rules. To improve the output of business analysts dealing with this process, different techniques have been introduced by researchers to support them during construction with helpful recommendations. These supporting recommendation systems vary in their way of what to recommend in the first place as well as their calculations taking place under the hood to recommend the most fitting element to the user. After a broad introduction into the field of business process modeling and its basic recommendation structures, this work will take a closer look at diverse proposals and descriptions published in current literature regarding implementation strategies to effectively and efficiently assist modelers during their business process model creation. A critical analysis of presentations in the selected literature will point out strengths and weaknesses of their approaches, studies and descriptions of those. As a result, the final concept matrix in this work will give a precise and helpful overview about the key features and recommendation methods used and implemented in previous research studies to pinpoint an entry into future works without the downsides already spotted by fellow researchers.
The thesis develops and evaluates a hypothetical model of the factors that influence user acceptance of weblog technology. Previous acceptance studies are reviewed, and the various models employed are discussed. The eventual model is based on the technology acceptance model (TAM) by Davis et al. It conceptualizes and operationalizes a quantitative survey conducted by means of an online questionnaire, strictly from a user perspective. Finally, it is tested and validated by applying methods of data analysis.
This master thesis is an exploratory study of whether it is feasible to construct a subjectivity lexicon using Wikipedia. The key hypothesis is that all quotes in Wikipedia are subjective and all regular text is objective. The degree of subjectivity of a word, its "Quote Score", is determined by the ratio of the word's frequency inside quotations to its frequency outside quotations. The proportion of words in the English Wikipedia that appear within quotations is found to be much smaller than the proportion that do not, resulting in a right-skewed distribution and a low mean Quote Score.
The methodology used to generate the subjectivity lexicon from text corpus in English Wikipedia is designed in such a way that it can be scaled and reused to produce similar subjectivity lexica of other languages. This is achieved by abstaining from domain and language-specific methods, apart from using only readily-available English dictionary packages to detect and exclude stopwords and non-English words in the Wikipedia text corpus.
The subjectivity lexicon generated from English Wikipedia is compared against other lexica, namely MPQA and SentiWordNet. Words that are strongly subjective tend to have high Quote Scores in the generated lexicon. There is a large observable difference between the distribution of Quote Scores for words classified as strongly subjective and the distributions for words classified as weakly subjective or objective. However, weakly subjective and objective words cannot be clearly differentiated based on Quote Score alone. In addition, a questionnaire is commissioned as an exploratory approach to investigate whether a subjectivity lexicon generated from Wikipedia could be used to extend the word coverage of existing lexica.
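The Quote Score computation defined above can be sketched on a toy corpus. Quotations are marked here with plain double quotes, and the additive smoothing is an illustrative choice to avoid division by zero, not a detail taken from the thesis:

```python
import re
from collections import Counter

def quote_scores(corpus: str, alpha: float = 1.0) -> dict:
    """Per-word Quote Score: frequency inside quotations divided by
    frequency outside, with additive smoothing alpha (an assumption)."""
    in_q, out_q = Counter(), Counter()
    # re.split with a capture group alternates unquoted/quoted segments.
    parts = re.split(r'"([^"]*)"', corpus)
    for i, part in enumerate(parts):
        counter = in_q if i % 2 == 1 else out_q  # odd indices were quoted
        counter.update(part.lower().split())
    vocab = set(in_q) | set(out_q)
    return {w: (in_q[w] + alpha) / (out_q[w] + alpha) for w in vocab}

scores = quote_scores('the facts are clear "terrible awful decision" the facts remain')
# "terrible" occurs only inside quotes, so it scores higher than "facts"
```

A word that appears mostly inside quotations thus ends up with a score well above 1, matching the hypothesis that quoted language is the subjective part of the corpus.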
The content aggregator platform Reddit has established itself as one of the most popular websites in the world. However, scientific research on Reddit is hindered by the fact that Reddit allows (and even encourages) user anonymity, i.e., user profiles do not contain personal information such as gender. Inferring the gender of users at large scale could enable the analysis of gender-specific areas of interest, reactions to events, and behavioral patterns. In this direction, this thesis proposes a machine learning approach to estimating the gender of Reddit users. By exploiting specific conventions in parts of the website, we obtain a ground truth of more than 190 million comments from labeled users. This data is then used to train machine learning classifiers and to gain insights into the gender balance of particular subreddits and the platform in general. Comparing a variety of classification algorithms, we find that a character-level convolutional neural network achieves an F1 score of 82.3% on the task of predicting a user's gender from their comments. The score surpasses the 85% mark for frequent users with more than 50 comments. Furthermore, we discover that female users are less active on Reddit: they write fewer comments and post in fewer subreddits on average than male users.
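As a rough illustration of the signal a character-level model consumes, the following sketch extracts character n-gram counts from a comment. The classifier described above is a convolutional network that learns directly from raw characters, so explicit n-gram counting is a simplification, not the thesis' method:

```python
from collections import Counter

def char_ngrams(comment: str, n: int = 3) -> Counter:
    """Character n-gram counts: the kind of sub-word signal
    (slang, spelling habits, punctuation) a character-level
    model can exploit where word-level features cannot."""
    text = f" {comment.lower()} "  # pad so word boundaries form n-grams
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

feats = char_ngrams("Cakeday!")
```

Such features are robust to the creative spelling common on Reddit, which is one motivation for working at the character level rather than the word level.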
Since the invention of the U-net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. It has been shown that such architectures can also be successfully applied in other domains, as demonstrated by numerous challenge wins in recent years. The transfer learning technique has additionally created an opportunity to push state-of-the-art benchmarks to a higher level. Using this approach is particularly beneficial for the medical domain, where collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with Deep Learning and make three main contributions and release experimental results that have practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on the dataset of the cervical spine MRI scans. Second, we use transfer learning from models trained on the Imagenet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on the bias field corrected and raw MRI data. All code to reproduce results is publicly available online.
Despite the inception of new technologies at a breakneck pace, many analytics projects fail, mainly due to the use of incompatible development methodologies. Because big data analytics projects differ from software development projects, the methodologies used for software development cannot be applied to analytics projects in the same fashion. Traditional agile project management approaches do not consider the complexities involved in analytics. In this thesis, the challenges involved in generalizing the application of agile methodologies are evaluated, and suitable agile frameworks that are more compatible with analytics projects are explored and recommended. The standard practices and approaches currently applied in the industry for analytics projects are discussed with respect to enablers and success factors for agile adoption. Finally, after a comprehensive discussion and analysis of the problem and its complexities, a framework is recommended that copes best with the discussed challenges and complexities and is generally well suited for most data-intensive analytics projects.
Particle swarm optimization is an optimization technique based on the simulation of swarms.
In this thesis, a modified algorithm introduced by Khan et al. in 2010 is used to estimate the local camera pose in 6DOF. The pose estimation is based on continuous color and depth data provided by an RGB-D sensor. Data are captured from different poses and registered into a joint model. The accuracy and computation time of the implementation are compared with current algorithms and evaluated in different configurations.
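The particle swarm principle behind such a pose estimator can be sketched in plain Python. This toy version minimizes the sphere function rather than a 6DOF camera pose objective, and all parameter values are illustrative defaults, not the ones used in the thesis:

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization: each particle is pulled
    towards its own best-known position and the swarm's global best."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the optimum is at the origin.
best, best_val = pso(lambda p: sum(x * x for x in p), dim=3)
```

For pose estimation, the same loop would run over 6-dimensional pose candidates with an image-alignment error as the objective; only the objective function changes.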
Current political issues are often reflected in social media discussions, gathering politicians and voters on common platforms. As these discussions can affect the public perception of politics, the inner dynamics and backgrounds of such debates are of great scientific interest. This thesis treats user-generated messages from an up-to-date dataset of considerable relevance as time series and applies a topic-based analysis of inspiration and agenda setting to them. The Institute for Web Science and Technologies of the University of Koblenz-Landau has collected Twitter data generated by candidates of the European Parliament Election 2019. This work processes and analyzes the dataset for various properties, focusing on the influence of politicians and media on online debates. An algorithm to cluster tweets into topical threads is introduced. Subsequently, sequential association rules are mined, yielding a wide array of potential influence relations between both actors and topics. The elaborated methodology can be configured with different parameters and is extensible in functionality and scope of application.
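The rule-mining step can be illustrated with a deliberately simplified sketch that only considers directly adjacent topic pairs within a thread; the thesis' clustering and sequential association rule mining are more elaborate, and the topic names here are invented:

```python
from collections import Counter

def sequential_rules(sequences, min_support=2, min_conf=0.5):
    """Mine rules 'a -> b' meaning: topic b tends to follow topic a.
    Support counts adjacent pairs; confidence approximates
    P(b follows | a occurred)."""
    pair_counts = Counter()
    item_counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
        item_counts.update(seq)
    rules = {}
    for (a, b), n in pair_counts.items():
        conf = n / item_counts[a]
        if n >= min_support and conf >= min_conf:
            rules[(a, b)] = conf
    return rules

threads = [["media", "climate", "election"],
           ["media", "climate"],
           ["climate", "election"]]
rules = sequential_rules(threads)
```

A rule such as media -> climate with high confidence would be read as a potential influence relation: whenever the media topic appears, the climate topic tends to follow in the same thread.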
The output of eye tracking Web usability studies can be visualized for analysts as screenshots of the Web pages overlaid with gaze data. However, these screenshot visualizations become corrupted whenever fixations are recorded on fixed Web page elements at different scroll positions: the gaze data are not gathered on the fixed elements they targeted, but are instead scattered across the scroll positions at which they were recorded. This problem motivated us to find an approach that links gaze data to their intended fixed elements and gathers them in one position on the screenshot. The approach builds upon creating the screenshot during the recording session: images of the viewport are captured at the visited scroll positions and finally stitched into one Web page screenshot. Additionally, the fixed elements in the Web page are identified and linked to their fixations. For the evaluation, we compared the interpretation of our enhanced screenshot against the video visualization, which does not suffer from the problem. The results revealed that both visualizations deliver equally accurate interpretations. However, interpreting eye tracking Web usability studies using the enhanced screenshots outperforms the video visualizations in terms of speed and places lower temporal demands on the interpreters.
Geographic cluster-based routing is a current approach to developing efficient routing algorithms for wireless ad-hoc networks. A number of algorithms already exist that can route messages through a wireless ad-hoc network based solely on position information. Among them are algorithms that rely on classical beaconing as well as beaconless algorithms, which require no information about the environment other than their own position and the position of the destination. Geographic routing with guaranteed delivery can also be performed on overlay graphs. So far, however, the required overlay graphs have not been constructed reactively.
This thesis presents a reactive algorithm, the Beaconless Cluster Based Planarization algorithm (BCBP), for constructing a planar overlay graph. It significantly reduces the number of messages required to construct a planar overlay graph, and consequently also for cluster-based geographic routing. Based on an algorithm for cluster-based planarization, it constructs a planar overlay graph beaconlessly in a unit disk graph (UDG). A UDG is a model of a wireless network in which all nodes have the same transmission radius.
The evaluation of the algorithm shows that it is considerably more efficient than the beacon-based variant. A further result of this thesis is another beaconless algorithm, Beaconless LLRAP (BLLRAP), for which planarity, but not connectivity, could be proven.
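The unit disk graph model used as the network abstraction can be sketched concisely; the node positions and radius below are illustrative:

```python
import math

def unit_disk_graph(positions, radius):
    """Build a UDG: two nodes are connected iff their Euclidean
    distance is at most the shared transmission radius."""
    nodes = list(positions)
    edges = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if math.dist(positions[u], positions[v]) <= radius:
                edges.add(frozenset((u, v)))
    return edges

pos = {"a": (0, 0), "b": (0.8, 0), "c": (3, 0)}
links = unit_disk_graph(pos, radius=1.0)  # only a and b are within range
```

Planarization algorithms such as BCBP then remove crossing edges from this graph (or from a clustered overlay built on it) so that geographic routing with delivery guarantees becomes possible.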
This thesis focuses on approximate inference in assumption-based argumentation frameworks. Argumentation plays a significant role in the computerization of theoretical and practical reasoning in AI, where arguments are exchanged to perform scientific reasoning. The fundamental approach in this field is the abstract argumentation framework developed by Dung. Assumption-based argumentation can be regarded as an instance of abstract argumentation with structured arguments. When facing large amounts of data, a challenge of reasoning in assumption-based argumentation is how to construct arguments and resolve attacks over a given claim with minimal computational cost while maintaining acceptable accuracy. This thesis proposes and investigates approximate methods that randomly select and construct samples of frameworks based on graphical dispute derivations to solve this problem. The presented approach aims to improve reasoning performance and achieve an acceptable trade-off between computation time and accuracy. The evaluation shows that for reasoning in assumption-based argumentation, the running time is generally reduced, at the cost of slightly lower accuracy, by randomly sampling and constructing inference rules for potential arguments over a query.
Mobile payment has been a payment option in the market for a long time and was predicted to become a widely used payment method. Over the years, however, the market penetration of mPayments has been relatively low, despite them having all the characteristics required of a convenient payment method. The primary reason cited for this has been a lack of customer acceptance, mainly caused by the lack of perceived security on the part of the end-user. Although biometric authentication is not a new technology, it is experiencing a revival in the light of present-day terror threats and increased security requirements in various industries. The application of biometric authentication to mPayments is analysed here, and a suitable biometric authentication method for use with mPayments is recommended. The issues of enrolment and of the human and technical factors to be considered are discussed, and the STOF business model is applied to a BiMoP (biometric mPayment) application.
This thesis examines the effectiveness of traditional machine learning (ML), neural network (NN), and state-of-the-art deep learning (DL) models for identifying mental distress indicators in social media discourse from Reddit and Twitter, as these platforms are heavily used by teenagers. Different NLP vectorization techniques, such as TF-IDF, Word2Vec, GloVe, and BERT embeddings, are employed with ML models such as Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM), followed by NN models such as Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM), to methodically analyse their impact as feature representations. DL models such as BERT, DistilBERT, MentalRoBERTa, and MentalBERT are fine-tuned end-to-end for the classification task. The thesis also compares different text preprocessing techniques, such as tokenization, stopword removal, and lemmatization, to assess their impact on model performance. Systematic experiments with different configurations of vectorization and preprocessing techniques across the model types and categories were conducted to find the most effective configurations and to gauge the models' strengths, limitations, and capability to detect and interpret mental distress indicators in text. The analysis of the results reveals that the MentalBERT DL model significantly outperformed all other models: its specific pretraining on mental health data and rigorous end-to-end fine-tuning gave it an edge in detecting nuanced linguistic indicators of mental distress in complex contextual text. These insights confirm the high potential of ML and NLP technologies for building AI systems that can intervene in the domain of mental health analysis.
This thesis lays the foundation for future work, demonstrating the need for a collaborative approach involving different domain experts, as well as for exploring next-generation large language models, to develop robust and clinically approved mental health AI systems.
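Of the vectorization techniques examined for this kind of text classification, TF-IDF is the most transparent and can be sketched directly; this minimal version omits the sublinear scaling and normalization that library implementations typically add, and the example documents are invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency for a list of
    token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency
    idf = {t: math.log(n / df[t]) for t in df}   # rare terms score high
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] / len(doc) * idf[t] for t in tf})
    return vectors

docs = [["i", "feel", "hopeless"], ["i", "feel", "fine"], ["great", "day"]]
vecs = tfidf(docs)
# "hopeless" is rarer across documents than "feel", so it is weighted higher
```

This weighting is exactly why TF-IDF can surface distinctive distress vocabulary, while embedding-based representations such as BERT additionally capture the context those words appear in.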
This thesis examines online attention towards researchers and their research topics. The included studies compare the attention dynamics towards winners of important research prizes with those towards researchers who did not receive a prize. Web signals such as Wikipedia page views, edits to Wikipedia articles, and Google Trends were used as a proxy for online attention. We found that Wikipedia articles about winners' research topics were created close in time to the article about the winner. A possible explanation could be that these research topics are more closely related to the winners, which would support the thesis that winners introduced their field of research. In addition, online attention towards winners' research topics grew after the day on which the article about the researcher was created. From this it can be inferred that winners' topics are more popular than the topics of researchers who did not receive a prize. Furthermore, it was shown that Nobel Prize winners receive less online attention before the announcement than the list of nominees based on the Thomson Reuters Citation Laureates. Moreover, the popularity of prize winners declined faster than that of researchers who did not receive a prize. Finally, it was demonstrated that predicting winners based on attention dynamics towards researchers is problematic.
Tractography on HARDI data
(2011)
Diffusion-weighted imaging is an important modality in clinical practice. It currently represents the only way to gain insight into the human brain non-invasively and in vivo. The fields of application of this technique are very diverse: it is used to study the brain, its structure, its development, and the functioning of its various areas. Furthermore, this modality plays an important role in surgical planning for the brain and in the study of stroke, Alzheimer's disease, and multiple sclerosis. This thesis gives a short introduction to MRI-based imaging and describes how diffusion-weighted images are formed. Building on this, the diffusion tensor, the most widespread data representation in diffusion imaging, is presented. Since representing diffusion as a diffusion tensor imposes considerable limitations, new methods of data representation are presented and discussed. These new methods are summarized under the term HARDI (high angular resolution diffusion imaging). Furthermore, a detailed introduction to tractography, the reconstruction of nerve fiber tracts in the brain, is given. Based on this theoretical background, established tractography algorithms are transferred from diffusion tensor data to HARDI data, decisively improving the reconstruction of fiber tracts. A completely new method is presented that is able to reconstruct fiber tracts both on a phantom dataset and on a human brain dataset. Furthermore, a novel global approach for classifying voxels based on their diffusion properties is presented.
This thesis optimizes the speed of the photon propagation simulation code of the IceCube project (clsim). The process of GPU code analysis and performance optimization is described in detail. When both codes are executed on the same hardware, a speedup of about 3x over the original implementation is achieved. Comparing the unmodified code on the hardware currently used by IceCube (NVIDIA GTX 1080) with the optimized version running on a recent GPU (NVIDIA A100), a speedup of about 9.23x is observed. All changes to the code are presented, and their effect on the runtime and accuracy of the simulation is discussed.
The path taken for the optimization is then generalized into a scheme that programmers can use as a guide for optimizing large and complex GPU programs. In addition, the per-warp job queue, a design pattern for load balancing within a CUDA thread block, is discussed in detail.
Source code package managers such as Cabal manage, among other things, the dependencies between packages. However, software projects rarely use all of the functionality that their dependencies provide. This leads to the unnecessary compilation of unused code fragments and to supposed version conflicts where no conflicts actually exist. In two case studies we show how relevant these two problems are. We then describe how they can be avoided by determining dependencies not between whole packages but between individual code fragments.
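The idea of determining dependencies at the level of individual code fragments rather than whole packages can be sketched in Python. The thesis' case studies concern Cabal and Haskell; this is an illustrative analogue using Python's `ast` module to record exactly which names a module imports:

```python
import ast

def imported_names(source: str) -> set:
    """Collect the individual names a module actually imports,
    rather than treating whole packages as dependencies."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            for alias in node.names:
                names.add(f"{node.module}.{alias.name}")
        elif isinstance(node, ast.Import):
            for alias in node.names:
                names.add(alias.name)
    return names

src = "from os.path import join\nimport json\n"
used = imported_names(src)  # contains 'os.path.join' and 'json'
```

With this kind of information, a package manager could in principle skip compiling fragments no one imports and report version conflicts only when two versions disagree on a fragment that is actually used.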