With the arrival of modern virtual reality (VR) headsets on the consumer market, VR technology has experienced its largest upswing to date. At the same time, the shortcomings of current VR hardware are moving ever more into the foreground. Controlling applications in VR, in particular, has always been a complex topic.
One possible solution is the Leap Motion: a hand-tracking device originally developed for desktop use which, since its last major software update, can be mounted on common VR headsets. The device tracks both hands and all fingers very accurately, making it possible to replicate them fully in the virtual world and to use them for interaction.
The goal of this thesis is to design virtual user interfaces that can be operated with the Leap Motion, enabling natural interaction between the user and the VR environment. A demo application is then used to conduct user tests in order to evaluate the performance of these interfaces and to compare them with conventional VR controllers.
The purpose of this thesis is to explore the sentiment distributions of Wikipedia concepts.
We analyse the sentiment of the entire English Wikipedia corpus, which includes 5,669,867 articles and 1,906,375 talks, by using a lexicon-based method with four different lexicons.
Also, we explore the sentiment distributions from a time perspective using the sentiment scores obtained from our selected corpus. The results obtained have been compared not only between articles and talks but also among four lexicons: OL, MPQA, LIWC, and ANEW.
Our findings show that among the four lexicons, MPQA has the highest and ANEW the lowest sensitivity to emotional expressions. Wikipedia articles express more sentiment than talks according to OL, MPQA, and LIWC, whereas talks express more sentiment than articles according to ANEW. Moreover, sentiment shows trends over time, and each lexicon has its own bias toward texts describing different topics.
Moreover, our research provides three interactive widgets for visualising sentiment distributions for Wikipedia concepts regarding the time and geolocation attributes of concepts.
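The lexicon-based method used above can be illustrated with a minimal sketch. The tiny lexicon and the scoring rule below are simplified assumptions for illustration, not the actual OL, MPQA, LIWC, or ANEW procedures:

```python
# Minimal lexicon-based sentiment scoring sketch.
# The tiny example lexicon below is illustrative only; real lexicons
# such as OL or MPQA contain thousands of entries.

POSITIVE = {"good", "great", "excellent", "helpful", "accurate"}
NEGATIVE = {"bad", "poor", "wrong", "misleading", "vandalism"}

def sentiment_score(text: str) -> float:
    """Return (pos - neg) / total matched words, or 0.0 if nothing matches."""
    tokens = [t.strip(".,;:!?\"'()").lower() for t in text.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    matched = pos + neg
    return (pos - neg) / matched if matched else 0.0

print(sentiment_score("This article is good and the sources are excellent."))  # 1.0
print(sentiment_score("The edit was wrong and misleading."))                   # -1.0
```

Averaging such per-text scores over articles and talks, and over time buckets, yields distributions of the kind compared in this thesis.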
Constructing a business process model manually is a highly complex and error-prone task that requires considerable time as well as deep insight into the organizational structure, its operations, and its business rules. To support business analysts in this process, researchers have introduced various techniques that provide helpful recommendations during model construction. These recommendation systems differ both in what they recommend in the first place and in the calculations taking place under the hood to suggest the most fitting element to the user. After a broad introduction to business process modeling and its basic recommendation structures, this work takes a closer look at the implementation strategies proposed in current literature for effectively and efficiently assisting modelers during business process model creation. A critical analysis of the selected literature points out the strengths and weaknesses of the respective approaches, studies, and descriptions. As a result, the final concept matrix in this work gives a precise and helpful overview of the key features and recommendation methods used in previous research, providing an entry point for future work that avoids the downsides already identified by fellow researchers.
The thesis develops and evaluates a hypothetical model of the factors that influence user acceptance of weblog technology. Previous acceptance studies are reviewed, and the various models they employ are discussed. The eventual model is based on the technology acceptance model (TAM) by Davis et al. It is conceptualized and operationalized in a quantitative survey conducted by means of an online questionnaire, strictly from a user perspective. Finally, the model is tested and validated by applying methods of data analysis.
This master's thesis is an exploratory study to determine whether it is feasible to construct a subjectivity lexicon using Wikipedia. The key hypothesis is that all quotes in Wikipedia are subjective and all regular text is objective. The degree of subjectivity of a word, also known as its "Quote Score", is determined by the ratio of the word's frequency in quotations to its frequency outside quotations. The proportion of words in the English Wikipedia that occur within quotations is found to be much smaller than the proportion outside quotations, resulting in a right-skewed distribution and a low mean Quote Score.
The methodology used to generate the subjectivity lexicon from the English Wikipedia text corpus is designed so that it can be scaled and reused to produce similar subjectivity lexica for other languages. This is achieved by abstaining from domain- and language-specific methods, apart from using readily available English dictionary packages to detect and exclude stopwords and non-English words in the Wikipedia text corpus.
The subjectivity lexicon generated from English Wikipedia is compared against other lexica, namely MPQA and SentiWordNet. It is found that strongly subjective words tend to have high Quote Scores in the generated lexicon. There is a large observable difference between the distribution of Quote Scores for words classified as strongly subjective and the distribution for words classified as weakly subjective or objective. However, weakly subjective and objective words cannot be clearly differentiated based on Quote Score alone. In addition, a questionnaire is commissioned as an exploratory approach to investigate whether a subjectivity lexicon generated from Wikipedia could extend the coverage of existing lexica.
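The Quote Score idea (a word's frequency inside quotations relative to its frequency outside) can be sketched as follows. The normalization `inside / (inside + outside)` and the naive `"..."` quote matching are simplifying assumptions for illustration, not the thesis's exact procedure:

```python
import re
from collections import Counter

def quote_scores(text: str) -> dict:
    """Compute a per-word Quote Score: freq inside quotes / (inside + outside).

    A score near 1.0 suggests the word occurs mostly in quotations
    (hypothesized subjective); near 0.0, mostly in regular text (objective).
    """
    inside, outside = Counter(), Counter()
    # re.split with a capturing group alternates unquoted/quoted segments.
    parts = re.split(r'"([^"]*)"', text)
    for i, part in enumerate(parts):
        target = inside if i % 2 == 1 else outside
        target.update(w.lower() for w in re.findall(r"[a-z']+", part, re.I))
    return {w: inside[w] / (inside[w] + outside[w])
            for w in set(inside) | set(outside)}

text = 'He said "freedom is wonderful" and then described the freedom of the press.'
scores = quote_scores(text)
print(scores["wonderful"], scores["freedom"], scores["the"])  # 1.0 0.5 0.0
```

Run over the full Wikipedia corpus instead of one sentence, this yields the right-skewed Quote Score distribution described above.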
The content aggregator platform Reddit has established itself as one of the most popular websites in the world. However, scientific research on Reddit is hindered by the fact that Reddit allows (and even encourages) user anonymity, i.e., user profiles do not contain personal information such as gender. Inferring the gender of users at large scale could enable the analysis of gender-specific areas of interest, reactions to events, and behavioral patterns. To this end, this thesis proposes a machine learning approach to estimating the gender of Reddit users. By exploiting specific conventions in parts of the website, we obtain a ground truth of more than 190 million comments by labeled users. This data is then used to train machine learning classifiers and to gain insights into the gender balance of particular subreddits and the platform in general. Comparing a variety of classification algorithms, we find that a character-level convolutional neural network achieves an F1 score of 82.3% on the task of predicting a user's gender from their comments. The score surpasses the 85% mark for frequent users with more than 50 comments. Furthermore, we discover that female users are less active on Reddit than male users: they write fewer comments and post in fewer subreddits on average.
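Text classification from comments alone can be illustrated with a far simpler baseline than the character-level CNN used in this thesis: a multinomial Naive Bayes classifier over character trigrams. The toy comments and labels below are fabricated for illustration and are not Reddit data; this is a sketch of the general task, not the thesis's model:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Split text into overlapping character n-grams."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class NaiveBayes:
    """Multinomial Naive Bayes over character n-grams, add-one smoothing."""

    def fit(self, texts, labels):
        self.counts = {}          # label -> Counter of n-grams
        self.totals = Counter()   # label -> total n-gram count
        self.priors = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            grams = char_ngrams(text)
            self.counts.setdefault(label, Counter()).update(grams)
            self.totals[label] += len(grams)
            self.vocab.update(grams)
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        v = len(self.vocab)
        for label in self.counts:
            lp = math.log(self.priors[label] / sum(self.priors.values()))
            for g in char_ngrams(text):
                lp += math.log((self.counts[label][g] + 1) / (self.totals[label] + v))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Toy, fabricated training comments (NOT real Reddit data).
texts = ["my husband and i love this", "my wife thinks this game rocks",
         "as a mom of two i agree", "dad here, can confirm"]
labels = ["f", "m", "f", "m"]
clf = NaiveBayes().fit(texts, labels)
```

Character-level features sidestep tokenization entirely, which is also part of the appeal of the character-level CNN compared here.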
Since the invention of the U-Net architecture in 2015, convolutional networks based on its encoder-decoder approach have significantly improved results in image analysis challenges. Such architectures have also been applied successfully in other domains, winning numerous competitions in recent years. In addition, the transfer learning technique has created an opportunity to push state-of-the-art benchmarks to a higher level. This approach is particularly beneficial in the medical domain, where collecting datasets is generally a difficult and expensive process.
In this thesis, we address the task of semantic segmentation with deep learning, make three main contributions, and release experimental results of practical value for medical imaging.
First, we evaluate the performance of four neural network architectures on a dataset of cervical spine MRI scans. Second, we use transfer learning from models trained on the ImageNet dataset and compare it to randomly initialized networks. Third, we evaluate models trained on bias-field-corrected and raw MRI data. All code to reproduce the results is publicly available online.
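Segmentation quality in comparisons like these is commonly measured with overlap metrics; the abstract does not name the exact metric used, so the Dice coefficient below is a standard example rather than the author's evaluation code:

```python
def dice_coefficient(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 lists.

    Dice = 2 * |intersection| / (|pred| + |target|);
    returns 1.0 when both masks are empty (perfect trivial agreement).
    """
    assert len(pred) == len(target)
    intersection = sum(p and t for p, t in zip(pred, target))
    size = sum(pred) + sum(target)
    return 2.0 * intersection / size if size else 1.0

pred   = [0, 1, 1, 1, 0, 0]
target = [0, 0, 1, 1, 1, 0]
print(dice_coefficient(pred, target))  # 2*2 / (3+3) = 0.666...
```

A Dice score of 1.0 means pixel-perfect agreement between prediction and ground truth; comparing scores across architectures and initializations is the usual way to rank such models.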
Despite the inception of new technologies at a breakneck pace, many analytics projects fail, mainly due to the use of incompatible development methodologies. Because big data analytics projects differ from software development projects, the methodologies used in software development cannot be applied to analytics projects in the same fashion. Traditional agile project management approaches do not account for the complexities involved in analytics. In this thesis, the challenges of generalizing the application of agile methodologies are evaluated, and suitable agile frameworks that are more compatible with analytics projects are explored and recommended. The standard practices and approaches currently applied in the industry for analytics projects are discussed with respect to enablers and success factors for agile adoption. Finally, after a comprehensive discussion and analysis of the problem and its complexities, a framework is recommended that copes best with the identified challenges and complexities and is generally well suited for most data-intensive analytics projects.
Particle swarm optimization is an optimization technique based on the simulation of swarms.
In this thesis, a modified algorithm introduced by Khan et al. in 2010 is used to estimate the local camera pose in 6DOF. The pose estimation is based on continuous color and depth data provided by an RGB-D sensor. Data is captured from different poses and registered into a joint model. The accuracy and computation time of the implementation are compared with current algorithms and evaluated in different configurations.
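The underlying principle can be sketched with a generic global-best PSO minimizing a toy 2-D objective. This is not the modified 6DOF pose-estimation variant of Khan et al.; all parameter values are illustrative defaults:

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=42):
    """Minimize `objective` with a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + attraction to personal best + attraction to global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum at the origin.
best, best_val = pso(lambda p: sum(x * x for x in p), dim=2)
```

For pose estimation, each particle would instead encode a 6DOF pose hypothesis, and the objective would measure how well the RGB-D data aligns under that pose; the swarm dynamics stay the same.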