Refine
Year of publication
- 2016 (34)
Document Type
- Doctoral Thesis (18)
- Master's Thesis (8)
- Bachelor Thesis (4)
- Part of Periodical (2)
- Conference Proceedings (1)
- Habilitation (1)
Language
- English (34)
Keywords
- soil water repellency (2)
- Ad hoc network (1)
- Computational geometry (1)
- Articles for Deletion (1)
- Beaconless (1)
- Endangered animals (1)
- Big Five (1)
- Core Self-Evaluations (1)
- Daphnia (1)
- Distributed Algorithm (1)
- Wireless sensor system (1)
- Wireless mesh network (1)
- E-Participation (1)
- Planar graph (1)
- Enterprise Architecture Framework (1)
- Evacuation modeling (1)
- Five Factor model (1)
- Flow decomposition (1)
- Function Words (1)
- Geographic routing (1)
- Geometric spanner (1)
- Graph (1)
- I-messages (1)
- IBM Bluemix (1)
- IT-Security (1)
- Complexity / algorithm (1)
- Local algorithm (1)
- Allis shad (1)
- Metamodel (1)
- Neighborhood graph (1)
- Network topology (1)
- Optimization (1)
- Planar graphs (1)
- Process (1)
- Quasi unit disk graph (1)
- Reactive algorithm (1)
- Reference Model (1)
- Rhine (1)
- Robust Principal Component Analysis (1)
- Routing (1)
- Security Requirements (1)
- Semantic Web (1)
- Semantic Web Data (1)
- Southern Amazonia (1)
- Statistical Shape Model (1)
- Unit disk graph (1)
- Distributed algorithm (1)
- Migratory fish (1)
- Wikipedia (1)
- Wireless sensor network (1)
- You-messages (1)
- Zooplankton (1)
- Connected graph (1)
- bait-lamina test (1)
- biodegradation (1)
- cross-cultural psychology (1)
- decomposition (1)
- description logic (1)
- ecotoxicology (1)
- emic-etic (1)
- energetics (1)
- field experiment (1)
- flows over time (1)
- fluid disturbances (1)
- fungicide (1)
- greenhouse gases (1)
- internet of things (1)
- iot development platforms (1)
- kinematics (1)
- land use change (1)
- media competence model (1)
- modeling (1)
- olive mill wastewater (1)
- pesticide (1)
- phenolic compounds (1)
- privacy and personal data (1)
- privacy competence model (1)
- reasoning (1)
- risk (1)
- security awareness (1)
- soils (1)
- stream (1)
- swarming (1)
- swimming behaviour (1)
- thermal analysis (1)
- virtual reality (1)
- water re-use (1)
- zooplankton (1)
Institute
- Fachbereich 7 (7)
- Institut für Informatik (6)
- Institut für Computervisualistik (5)
- Institut für Wirtschafts- und Verwaltungsinformatik (3)
- Institute for Web Science and Technologies (3)
- Fachbereich 4 (2)
- Fachbereich 8 (2)
- Institut für Management (2)
- Mathematisches Institut (2)
- Institut für Psychologie (1)
“Did I say something wrong?” A word-level analysis of Wikipedia articles for deletion discussions
(2016)
This thesis focuses on gaining linguistic insights into textual discussions at the word level. Of special interest was distinguishing messages that constructively contribute to a discussion from those that are detrimental to it. Specifically, we wanted to determine whether “I”- and “You”-messages are indicators of either of the two discussion styles. Such messages are nowadays often recommended in guidelines for successful communication, and although their effects have been evaluated multiple times, a large-scale analysis had never been conducted. We therefore used Wikipedia Articles for Deletion (AfD) discussions together with the records of blocked users and developed a fully automated pipeline for creating an annotated data set in which messages are labelled as either constructive or disruptive. We applied binary classifiers to the data to determine characteristic words for both discussion styles, and also investigated whether function words such as pronouns and conjunctions play an important role in distinguishing the two. We found that “You”-messages were a strong indicator of disruptive messages, which matches their attributed effects on communication. However, we found “I”-messages to be indicative of disruptive messages as well, which is contrary to their attributed effects. The importance of function words could neither be confirmed nor refuted, and no other characteristic words for either communication style were found. The results nevertheless suggest that a different model might better represent disruptive and constructive messages in textual discussions.
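The classification step described above can be sketched with a minimal bag-of-words classifier. The sketch below uses a multinomial Naive Bayes model with Laplace smoothing; the toy messages, labels, and query are hypothetical illustration data, and the thesis itself does not prescribe this particular classifier.

```python
import math
from collections import Counter

# Toy training data (hypothetical): messages labelled constructive/disruptive.
train = [
    ("i think we should keep the article", "constructive"),
    ("i would merge it with the main page", "constructive"),
    ("you clearly have no idea what you are doing", "disruptive"),
    ("you should stop editing if you cannot read", "disruptive"),
]

def fit(data):
    """Multinomial Naive Bayes: per-label word counts, priors, vocabulary."""
    counts = {}          # label -> Counter of words
    priors = Counter()   # label -> number of messages
    vocab = set()
    for text, label in data:
        words = text.split()
        counts.setdefault(label, Counter()).update(words)
        priors[label] += 1
        vocab.update(words)
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    """Pick the label with the highest smoothed log-likelihood."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        n = sum(counts[label].values())
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict("you have no idea", *model))
```

Pronouns like “you” dominate the decision here, which is exactly the kind of function-word signal the analysis above investigates.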
In order to enhance a company’s appeal to potential employees and improve the satisfaction of its current workforce, it is necessary to offer a variety of work-life balance measures. But as their implementation incurs time and financial costs, the measures need to be prioritized. To derive a recommendation for companies, this study is guided by the following questions: whether some work-life balance measures have more impact on employee satisfaction than others, how large the relative impact of work-life balance measures on job satisfaction is in comparison to other work and private life variables, whether there is a relation between the effectiveness of measures and their use, and whether the measures that are most important from the employees’ perspective differ from what companies offer.
These questions are formulated as eight research hypotheses, which are examined in a quantitative research design with online survey data from 289 employees of fifteen different German companies. The ranking of measures by their effectiveness for job satisfaction, as well as the investigation of their relative impact in comparison to other variables, is performed using multiple regression analysis, whilst the differences between employees’ expectations and the availability of offers are examined with t-tests.
Support in childcare, support in voluntary activities and teambuilding events have a significantly higher impact on job satisfaction than other work-life balance measures, and their potential use exceeds their actual use, which leads to the conclusion that companies still have potential to improve their employees’ satisfaction by implementing these measures. In addition, flexible work hours, flexible work locations and free time and overtime accounts are the most important measures from the employees’ point of view and are already widely offered by the surveyed companies. In general, the overall use of the available measures and the quantity of offered measures matter more for job satisfaction than the specific kind of measure. Work-life balance measures are also more important for the job satisfaction of younger people.
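The two statistical tools named above, regression analysis and t-tests, can be sketched in a few lines. The sketch below fits a one-predictor ordinary least squares line and computes Welch's t statistic; the data values are made up for illustration and are not the survey data from the study.

```python
# Minimal sketch of the statistical tools used in the study: a one-predictor
# ordinary least squares fit and Welch's t statistic. The data below are
# hypothetical illustration values, not the survey data.

def ols(x, y):
    """Return (intercept, slope) of the least squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

# Hypothetical: number of used measures vs. job satisfaction (scale 1-10).
measures = [0, 1, 2, 3, 4, 5]
satisfaction = [4.0, 5.0, 5.5, 6.5, 7.0, 8.0]
a, b = ols(measures, satisfaction)
print(f"satisfaction ~ {a:.2f} + {b:.2f} * measures")
```

The study's actual analysis uses multiple regression over many predictors; the single-predictor case shown here only illustrates the mechanics.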
We are entering the 26th year since the World Wide Web (WWW) became reality. Since the birth of the WWW in 1990, the Internet, and with it websites, have changed the way businesses compete, shifting products, services and even entire markets.
Gathering and analysing visitor traffic on websites can therefore provide crucial information for understanding customer behavior and numerous other aspects.
Web Analytics (WA) tools offer a wide range of functionality, which calls for complex decision-making in information management. Website operators implement Web Analytics tools such as Google Analytics to analyse their websites, identifying web usage in order to optimise website design and management. The gathered data leads to emergent knowledge, which provides new marketing opportunities and can be used to improve business processes and understand customer behavior to increase profit. Moreover, Web Analytics plays a significant role in measuring performance and has therefore become an important component of web-based environments for making business decisions.
However, many small and medium-sized enterprises try to keep up with the web business competition but do not have the equivalent resources in manpower and knowledge to keep pace; some therefore forgo Web Analytics entirely.
This research project aims to develop a Web Analytics framework to assist small and medium-sized enterprises (SMEs) in making better use of Web Analytics. By identifying the business requirements of SMEs and connecting them to the functionality of Google Analytics, a Web Analytics framework with accompanying guidelines is developed, which guides SMEs in using Google Analytics to achieve actionable outcomes.
Recent estimates have confirmed that inland waters emit a considerable amount of CH4 and CO2 to the atmosphere at regional and global scales. However, these estimates are based on extrapolated measured data, lack data from inland waters in arid and semi-arid regions and from carbon sources such as wastewater treatment plants (WWTPs), and insufficiently resolve the spatiotemporal variability of these emissions.
In this study, we analyzed monthly hydrological, meteorological and water quality data from three irrigation and drinking water reservoirs in the lower Jordan River basin and estimated their atmospheric CO2 emission rates. We investigated the effect of WWTPs on surrounding aquatic systems in terms of CH4 and CO2 emissions by presenting seasonally resolved data for the dissolved concentrations of both gases in the effluents and in the receiving streams at nine WWTPs in Germany.
We investigated the spatiotemporal variability of CH4 and CO2 emissions from aquatic ecosystems by using simple low-cost tools for measuring CO2 flux and bubble release rates from freshwater systems. Our estimates showed that reservoirs in semi-arid regions are oversaturated with CO2 and act as net sources to the atmosphere. The magnitude of the observed fluxes at the three water reservoirs in Jordan is comparable to that of tropical reservoirs (3.3 g CO2 m⁻² d⁻¹). The CO2 emission rate from these reservoirs is linked to changes in water surface area, which result from water management practices. WWTPs were shown to discharge a considerable amount of CH4 (30.9±40.7 kg yr⁻¹) and CO2 (0.06±0.05 Gg yr⁻¹) to their surrounding streams, and emission rates of CH4 and CO2 from these streams are significantly enhanced by WWTP effluents, by up to 1.2 and 8.6 times, respectively.
Our results showed that both diffusive flux and bubble release rate varied in time and space; both emission pathways should be included, and their variability adequately resolved, in future sampling and measurement strategies. We conclude that future emission measurements and estimates for inland waters should consider water management practices, carbon inputs from WWTPs, and the spatial and temporal variability of emissions.
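The diffusive flux mentioned above is commonly estimated with the thin boundary layer model, F = k · (C_w − C_eq): the gas transfer velocity times the difference between the dissolved and the air-equilibrium concentration. The sketch below applies this standard formula with hypothetical numbers; it is not a calculation from the study's data.

```python
# Sketch of the thin boundary layer model for diffusive gas flux across the
# air-water interface: F = k * (C_w - C_eq). All numbers below are
# hypothetical illustration values, not measurements from the study.

def diffusive_flux(k_m_per_day, c_water, c_equilibrium):
    """Flux in mmol m^-2 d^-1, from gas transfer velocity k (m/d) and the
    dissolved vs. air-equilibrium concentrations (mmol/m^3)."""
    return k_m_per_day * (c_water - c_equilibrium)

# An oversaturated reservoir (C_w > C_eq) acts as a net CO2 source:
flux = diffusive_flux(k_m_per_day=1.0, c_water=50.0, c_equilibrium=20.0)
print(flux)  # positive flux: emission to the atmosphere
```

A positive flux corresponds to emission, which matches the oversaturated, net-source reservoirs described above.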
This habilitation thesis collects works addressing several challenges in handling uncertainty and inconsistency in knowledge representation. In particular, it contains works that introduce quantitative uncertainty based on probability theory into abstract argumentation frameworks. The formal semantics of this extension is investigated and its application to strategic argumentation in agent dialogues is discussed. Moreover, both the computational properties and the meaningfulness of approaches to analyzing inconsistencies are investigated, in classical logics as well as in logics for uncertain reasoning. Finally, the thesis addresses the implementation challenges of various kinds of knowledge representation formalisms that employ some notion of inconsistency tolerance or uncertainty.
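One common way to combine probability theory with abstract argumentation is the "constellation" reading: every argument appears independently with some probability, and the acceptance probability of an argument is the total probability of the subframeworks in which it is accepted. The sketch below enumerates subframeworks and uses grounded semantics; the framework, the probabilities, and the choice of semantics are a hypothetical toy example, not taken from the thesis.

```python
from itertools import combinations

# Constellation-style probabilistic argumentation sketch: each argument is
# present independently with a given probability; we compute how likely a
# target argument is in the grounded extension. Toy, hypothetical framework.

args = {"a": 0.9, "b": 0.5, "c": 0.8}     # argument -> appearance probability
attacks = {("b", "a"), ("c", "b")}        # b attacks a, c attacks b

def grounded(present):
    """Grounded extension of the subframework induced by `present`."""
    atts = {(x, y) for (x, y) in attacks if x in present and y in present}
    ext = set()
    changed = True
    while changed:
        changed = False
        for arg in present - ext:
            attackers = {x for (x, y) in atts if y == arg}
            # accept arg if every attacker is counter-attacked by ext
            if all(any((e, x) in atts for e in ext) for x in attackers):
                ext.add(arg)
                changed = True
    return ext

def acceptance_probability(target):
    """Sum the probability of every subframework that accepts `target`."""
    total = 0.0
    names = list(args)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            present = set(subset)
            p = 1.0
            for a in names:
                p *= args[a] if a in present else 1 - args[a]
            if target in grounded(present):
                total += p
    return total

print(round(acceptance_probability("a"), 4))
```

Here "a" is accepted whenever its attacker "b" is absent or defeated by "c", so the exhaustive enumeration sums those cases; this brute-force approach is exponential and only meant to make the semantics concrete.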
The provision of electronic participation services (e-participation) is a complex socio-technical undertaking that needs comprehensive design and implementation strategies. E-participation service providers, in most cases administrations and governments, struggle with changing requirements that demand more transparency, better connectivity and increased collaboration among different actors, while at the same time less staff is available. As a result, recent research assesses only a minority of e-participation services as successful. The challenge is that the e-participation domain lacks comprehensive approaches for designing and implementing (e-)participation services. Enterprise Architecture (EA) frameworks have evolved in information systems research as an approach to guide the development of complex socio-technical systems. This approach can guide the design and implementation of such services if the collection of organisations with the commonly held goal of providing participation services is understood as an E-Participation Enterprise (EE). However, research and practice in the e-participation domain have not yet exploited EA frameworks. Consequently, the problem scope that motivates this dissertation is the existing gap in research on deploying EA frameworks in e-participation design and implementation. The research question driving this research is: What methodical and technical guidance do architecture frameworks provide that can be used to design and implement better and more successful e-participation?
This dissertation presents a literature study showing that existing approaches have not yet covered the challenges of comprehensive e-participation design and implementation. Accordingly, the research moves on to investigate established EA frameworks such as the Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate for their use. While the application of these frameworks to e-participation design and implementation is possible, an integrated approach has so far been lacking. The synthesis of the literature review and of practical insights into the design and implementation of e-participation services from four projects shows the challenges of adapting architecture frameworks to this domain, but also the potential of combining the different approaches. Consequently, the research moves on to develop the E-Participation Architecture Framework (EPART-Framework). To this end, the dissertation applies design science research, including literature review and action research, and an initial EPART-Framework version is tested in two independent settings. The results yield the EPART-Framework presented in this dissertation.
The EPART-Framework comprises the EPART-Metamodel with six EPART-Viewpoints, which frame the stakeholder concerns: the Participation Scope, the Participant Viewpoint, the Participation Viewpoint, the Data & Information Viewpoint, the E-Participation Viewpoint, and the Implementation & Governance Viewpoint. The EPART-Method supports the stakeholders in designing the EE and implementing e-participation, and stores its output in an architecture description and a solution repository. It consists of five consecutive phases accompanied by requirements management: Initiation, Design, Implementation and Preparation, Participation, and Evaluation. The EPART-Framework fills the gap between the e-participation domain and the enterprise architecture framework domain. The evaluation gives reasonable evidence that the framework is a valuable addition in academia and in practice to improve e-participation design and implementation. At the same time, it shows opportunities for future research to extend and advance the framework.
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems have to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, like automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the obstacles which have to be overcome to reach the afore-mentioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive but has to be performed only once; as soon as its result is used to answer many queries, the expensive compilation step pays off. In the thesis at hand, I develop a normal form, called the linkless normal form, suitable for knowledge compilation of description logic knowledge bases. From a computational point of view, the linkless normal form has very favourable properties, which are introduced in this thesis.
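The linkless normal form itself is beyond a short sketch, but the compile-once, query-many idea behind knowledge compilation can be illustrated with a much simpler propositional normal form: negation normal form (NNF), where negations are pushed inward once so that later processing never has to handle negated subformulas. The formula encoding below is a made-up example, not the representation used in the thesis.

```python
# Illustration of the compile-once idea behind knowledge compilation:
# transform a propositional formula into negation normal form (NNF) once,
# then reuse the compiled form for many queries. (The thesis develops a
# stronger normal form, the linkless normal form, for description logics;
# this is only a simplified propositional analogue.)

# Formula encoding: ("var", name) | ("not", f) | ("and", f, g) | ("or", f, g)

def to_nnf(f, negate=False):
    """Push negations inward until they sit only on variables."""
    tag = f[0]
    if tag == "var":
        return ("not", f) if negate else f
    if tag == "not":
        return to_nnf(f[1], not negate)          # double negation cancels
    # De Morgan: negation swaps "and" and "or"
    op = {"and": "or", "or": "and"}[tag] if negate else tag
    return (op, to_nnf(f[1], negate), to_nnf(f[2], negate))

# not (A and (not B))  ==>  (not A) or B
f = ("not", ("and", ("var", "A"), ("not", ("var", "B"))))
print(to_nnf(f))
```

As with any knowledge compilation, the transformation is paid once; every subsequent query operates on the normalized structure.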
For the second problem, I focus on changes occurring at the instance level of description logic knowledge bases. I introduce three change operators of interest for such knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that in all three cases, the resulting knowledge base is guaranteed to be consistent and the changes performed to the knowledge base are minimal, which allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues, I suggest adapting techniques used successfully in other logics to obtain promising methods for description logic knowledge bases.
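As a much simplified illustration of a consistency-preserving, minimal-change insertion at the instance level, the sketch below uses a toy instance base of concept assertions with pairwise disjointness constraints; the operator, the constraints, and the example individuals are all hypothetical and far simpler than the operators defined in the thesis.

```python
# Toy illustration of a consistency-preserving insertion operator on an
# instance base ("ABox") of (individual, concept) assertions. Disjointness
# constraints make some assertion pairs inconsistent; inserting a new
# assertion removes only the assertions it directly conflicts with, keeping
# the change minimal. Hypothetical simplification of the thesis' operators.

disjoint = {frozenset({"Cat", "Dog"}), frozenset({"Alive", "Dead"})}

def conflicts(c1, c2):
    """True iff the two concepts are declared disjoint."""
    return frozenset({c1, c2}) in disjoint

def insert(abox, individual, concept):
    """Add (individual, concept); drop only assertions it conflicts with."""
    kept = {(i, c) for (i, c) in abox
            if not (i == individual and conflicts(c, concept))}
    return kept | {(individual, concept)}

abox = {("felix", "Cat"), ("felix", "Alive")}
abox = insert(abox, "felix", "Dog")   # conflicts with ("felix", "Cat") only
print(sorted(abox))
```

The operator keeps the unrelated assertion ("felix", "Alive") untouched, mirroring the minimal-change requirement described above.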
Reactive local algorithms are distributed algorithms which suit the needs of battery-powered, large-scale wireless ad hoc and sensor networks particularly well. By avoiding both unnecessary wireless transmissions and proactive maintenance of neighborhood tables (i.e., beaconing), such algorithms minimize communication load and overhead, and scale well with increasing network size. This way, resources such as bandwidth and energy are saved, and the probability of message collisions is reduced, which leads to an increase in the packet reception ratio and a decrease of latencies.
Currently, the two main application areas of this algorithm type are geographic routing and topology control, in particular the construction of a node's adjacency in a connected, planar representation of the network graph. Geographic routing enables wireless multi-hop communication in the absence of any network infrastructure, based on geographic node positions. The construction of planar topologies is a requirement for efficient, local solutions for a variety of algorithmic problems.
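A classic local rule for constructing such a planar representation is the Gabriel graph test: an edge (u, v) is kept iff no third node lies strictly inside the circle whose diameter is the segment uv, and each node can evaluate this from neighbor positions alone. The sketch below implements that test (the thesis itself works with the partial Delaunay triangulation, a different structure); the coordinates are made-up examples.

```python
# A simple local planarization rule: the Gabriel graph test. An edge (u, v)
# survives iff no other node w lies strictly inside the circle whose
# diameter is the segment uv. Each node can evaluate this using only the
# positions of its neighbors. (The thesis focuses on the partial Delaunay
# triangulation; this is the classic, simpler test.)

def gabriel_edge(u, v, others):
    """True iff (u, v) survives the Gabriel graph test against `others`."""
    mx, my = (u[0] + v[0]) / 2, (u[1] + v[1]) / 2       # circle center
    r2 = ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) / 4  # radius squared
    return all((w[0] - mx) ** 2 + (w[1] - my) ** 2 >= r2 for w in others)

u, v = (0.0, 0.0), (2.0, 0.0)
print(gabriel_edge(u, v, [(1.0, 0.1)]))   # witness inside the disk: dropped
print(gabriel_edge(u, v, [(1.0, 1.5)]))   # no witness: edge kept
```

Because the test needs only the positions of u's and v's neighbors, it fits the local, per-node computation model the paragraph above describes.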
This thesis contributes to reactive algorithm research in two ways: on an abstract level, and through the introduction of novel algorithms:
For the very first time, reactive algorithms are considered as a whole and as an individual research area. A comprehensive survey of the literature is given which lists and classifies known algorithms, techniques, and application domains. Moreover, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept unambiguously distinguishes reactive from conventional, beacon-based, topology control algorithms, serves as a taxonomy for existing and prospective algorithms of this kind, and facilitates in-depth investigations of the principal power of the reactive approach, beyond analysis of concrete algorithms.
Novel reactive local topology control and geographic routing algorithms are introduced under both the unit disk and quasi unit disk graph model. These algorithms compute a node's local view on connected, planar, constant stretch Euclidean and topological spanners of the underlying network graph and route messages reactively on these spanners while guaranteeing the messages' delivery. All previously known algorithms are either not reactive, or do not provide constant Euclidean and topological stretch properties. A particularly important partial result of this work is that the partial Delaunay triangulation (PDT) is a constant stretch Euclidean spanner for the unit disk graph.
To conclude, this thesis provides a basis for structured and substantial research in this field and shows the reactive approach to be a powerful tool for algorithm design in wireless ad hoc and sensor networking.
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks, which can store large amounts of data without a single entity being in control of ensuring the data's security. These characteristics also apply to the Semantic Web, which aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore important as well and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanism implementing these requirements; instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements of confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the basics of implementing each requirement and provide additional features as well. Confidentiality is usually achieved with data encryption; this thesis not only provides an approach for encrypting Semantic Web data, it also allows searching in the resulting ciphertext without decrypting it first. Integrity of data is typically implemented with digital signatures; instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage; this thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. Each of these three security mechanisms is presented in detail, evaluated against a set of requirements, and compared with the state of the art and related work.
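The generic pipeline behind graph signing is: canonicalize the graph, hash the canonical form, sign the digest. The sketch below uses a naive canonicalization (sorting triple strings, which suffices only for graphs without blank nodes), SHA-256, and an HMAC standing in for a real public-key signature; none of these concrete choices are prescribed by the thesis, whose framework is precisely what makes such components configurable.

```python
import hashlib
import hmac

# Sketch of the canonicalize -> hash -> sign pipeline for graph signing.
# Canonicalization here is a naive sort of triple strings (no blank-node
# handling), the digest is SHA-256, and an HMAC stands in for a real
# public-key signature. All three choices are illustrative assumptions.

def canonicalize(triples):
    """Naive canonical form: sorted N-Triples-like lines."""
    return "\n".join(sorted(f"{s} {p} {o} ." for (s, p, o) in triples))

def sign(triples, key: bytes) -> str:
    """HMAC-SHA256 over the digest of the canonical graph serialization."""
    digest = hashlib.sha256(canonicalize(triples).encode()).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

g1 = [("<ex:alice>", "<ex:knows>", "<ex:bob>"),
      ("<ex:bob>", "<ex:age>", '"42"')]
g2 = list(reversed(g1))                # same graph, different triple order

key = b"shared-secret"
assert sign(g1, key) == sign(g2, key)  # order does not affect the signature
print(sign(g1, key)[:16])
```

The key point the sketch makes concrete is that the signature must depend on the graph, not on an arbitrary serialization order, which is why the canonicalization step comes first.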
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis and especially medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts, and the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, the majority of SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. The training data is thus assumed to be the result of an arbitrary segmentation algorithm or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. To account for these errors, areas with a low probability of being a boundary should be excluded from the training of the SSM; these probabilities are estimated with the help of image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more shape variations. To assess the quality of the resulting models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data.
The results show that WRPCA in particular is a powerful tool for handling corrupted parts and yields reasonable models of higher quality than the initial segmentations.
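The imputation idea above can be sketched in its simplest form: coordinates of a training shape flagged as unreliable are replaced by the (weighted) mean of the corresponding coordinates in the reliable shapes. The 1-D shape vectors and boundary-probability weights below are hypothetical, and the actual imputation and WRPCA methods in the thesis are considerably more sophisticated.

```python
# Toy illustration of the imputation idea: coordinates flagged as corrupted
# (weight 0) are replaced by the weighted mean of the corresponding
# coordinates of the reliable training shapes. Hypothetical 1-D shape
# vectors; far simpler than the imputation and WRPCA methods of the thesis.

def impute(shapes, weights):
    """shapes: list of coordinate lists; weights: 1.0 reliable, 0.0 corrupted.
    Returns shapes with corrupted coordinates replaced by the weighted mean."""
    n = len(shapes[0])
    means = []
    for j in range(n):
        num = sum(w[j] * s[j] for s, w in zip(shapes, weights))
        den = sum(w[j] for w in weights)
        means.append(num / den)
    return [[s[j] if w[j] > 0 else means[j] for j in range(n)]
            for s, w in zip(shapes, weights)]

shapes  = [[1.0, 2.0, 3.0], [1.2, 9.9, 3.1], [0.8, 2.2, 2.9]]
weights = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]   # 9.9 flagged as corrupted
print(impute(shapes, weights)[1])
```

The outlier coordinate is pulled back toward the population statistics while the reliable coordinates of the same shape are preserved, which is the behavior the reconstruction step above relies on.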