Filter
Year of publication
- 2016 (34) (remove)
Document type
- Dissertation (18)
- Master's thesis (8)
- Bachelor's thesis (4)
- Journal issue (2)
- Conference publication (1)
- Habilitation (1)
Language
- English (34) (remove)
Keywords
- soil water repellency (2)
- Ad-hoc-Netz (1)
- Algorithmische Geometrie (1)
- Articles for Deletion (1)
- Beaconless (1)
- Bedrohte Tiere (1)
- Big Five (1)
- Core Self-Evaluations (1)
- Daphnia (1)
- Distributed Algorithm (1)
- Drahtloses Sensorsystem (1)
- Drahtloses vermaschtes Netz (1)
- E-Participation (1)
- Ebener Graph (1)
- Enterprise Architecture Framework (1)
- Evacuation modeling (1)
- Five Factor model (1)
- Flow decomposition (1)
- Function Words (1)
- Geographic routing (1)
- Geometric spanner (1)
- Graph (1)
- I-messages (1)
- IBM Bluemix (1)
- IT-Security (1)
- Komplexität / Algorithmus (1)
- Local algorithm (1)
- Maifisch (1)
- Metamodel (1)
- Nachbarschaftsgraph (1)
- Netzwerktopologie (1)
- Optimization (1)
- Planar graphs (1)
- Einbringen (1)
- Quasi unit disk graph (1)
- Reactive algorithm (1)
- Reference Model (1)
- Rhein (1)
- Robust Principal Component Analysis (1)
- Routing (1)
- Security Requirements (1)
- Semantic Web (1)
- Semantic Web Data (1)
- Southern Amazonia (1)
- Statistical Shape Model (1)
- Unit disk graph (1)
- Verteilter Algorithmus (1)
- Wanderfische (1)
- Wikipedia (1)
- Wireless sensor network (1)
- You-messages (1)
- Zooplankton (1)
- Zusammenhängender Graph (1)
- bait-lamina test (1)
- biodegradation (1)
- cross-cultural psychology (1)
- decomposition (1)
- description logic (1)
- ecotoxicology (1)
- emic-etic (1)
- energetics (1)
- field experiment (1)
- flows over time (1)
- fluid disturbances (1)
- fungicide (1)
- greenhouse gases (1)
- internet of things (1)
- iot development platforms (1)
- kinematics (1)
- land use change (1)
- media competence model (1)
- modeling (1)
- olive mill wastewater (1)
- pesticide (1)
- phenolic compounds (1)
- privacy and personal data (1)
- privacy competence model (1)
- reasoning (1)
- risk (1)
- security awareness (1)
- soils (1)
- stream (1)
- swarming (1)
- swimming behaviour (1)
- thermal analysis (1)
- virtual reality (1)
- water re-use (1)
- zooplankton (1)
Institute
- Fachbereich 7 (7)
- Institut für Informatik (6)
- Institut für Computervisualistik (5)
- Institut für Wirtschafts- und Verwaltungsinformatik (3)
- Institute for Web Science and Technologies (3)
- Fachbereich 4 (2)
- Fachbereich 8 (2)
- Institut für Management (2)
- Mathematisches Institut (2)
- Institut für Psychologie (1)
- Institut für Umweltwissenschaften (1)
Five personality traits commonly known as the “Big Five” have been widely acknowledged as universal. Yet most available psychological instruments are not necessarily transferable to other cultures: they stem from “W.E.I.R.D.” (Western, Educated, Industrialized, Rich, Democratic) populations and lack the combined emic-etic approach that is necessary for a transcultural perspective. This intercontinental congress brings together experts from Kenya and Germany, thinking outside the box and collecting ideas for a science-based partnership between East Africa and Europe. The main topics are psychological constructs that have proven relevant for Human Resources Management. The Five-Factor Model, core self-evaluations, coping processes and acculturation, as well as globalization effects and gender issues, are discussed.
Current estimates confirm that inland waters release substantial amounts of methane (CH4) and carbon dioxide (CO2), both regionally and globally. However, these estimates are based on extrapolated measurements, insufficiently resolve the spatio-temporal variability, and lack data from arid and semi-arid regions as well as from carbon sources such as wastewater treatment plants.
For the present study, we analysed monthly hydrological, meteorological and water-quality data from three reservoirs in the lower Jordan River region that are used for drinking-water production and irrigation, and used them to estimate the reservoirs' CO2 emission rates. We examined the effect of wastewater treatment plants on surrounding waters with respect to CH4 and CO2 emissions by analysing seasonally resolved concentrations of both dissolved gases in the effluents and receiving streams of nine treatment plants in Germany. Using low-cost methods that measure the CO2 gas-transfer rate and the ebullition (gas-bubble) rate, we investigated the spatial and temporal variability of CH4 and CO2 emissions from freshwater ecosystems.
Our estimates show that reservoirs in semi-arid regions are supersaturated with CO2 and thus release CO2 to the atmosphere, i.e. they are a net source.
The gas-transfer rates observed at the three Jordanian reservoirs are of the same order of magnitude as those of tropical reservoirs (3.3 g CO2 m-2 day-1). The CO2 emission rate depends on changes of the water surface area caused by reservoir operation. Wastewater treatment plants release considerable amounts of CH4 (30.9 ± 40.7 kg yr-1) and CO2 (0.06 ± 0.05 Gg yr-1) into the surrounding rivers and streams, whose emission rates are thereby increased 1.2-fold for CH4 and 8.6-fold for CO2. Our results show that both the diffusive and the ebullitive emission rates are spatially and temporally variable, so future studies should measure both at the required resolution.
We conclude that future measurements and estimates of emissions from inland waters should also take into account water management, the carbon input from wastewater treatment plants, and the spatial and temporal variability of the emissions.
Agriculture covers one third of the world's land area and has become a major source of water pollution due to its heavy reliance on chemical inputs, namely fertilisers and pesticides. Several thousand tonnes of these chemicals are applied worldwide annually and partly reach freshwaters. Despite their widespread use and relatively unspecific modes of action, fungicides are the least studied group of pesticides. It remains unclear whether the taxonomic groups used in pesticide risk assessment are protective of non-target freshwater fungi. Fungi and bacteria are the main microbial decomposers converting allochthonous organic matter (litter) into a more nutritious food resource for leaf-shredding macroinvertebrates. This process of litter decomposition (LD) is central to aquatic ecosystems because it fuels local and downstream food webs with energy and nutrients. Effects of fungicides on decomposer communities and LD have mainly been analysed under laboratory conditions, with limited representation of the multiple factors that may moderate effects in the field.
In this thesis a field study was conducted in a German vineyard area to characterise recurrent episodic exposure to fungicides in agricultural streams (chapter 2) and its effects on decomposer communities and LD (chapter 3). Additionally, potential interaction effects of nutrient enrichment and fungicides on decomposer communities and LD were analysed in a mesocosm experiment (chapter 4).
In the field study, event-driven water sampling (EDS) and passive sampling with Empore™ styrene-divinylbenzene reverse phase sulfonated disks (SDB disks) were used to assess exposure to 15 fungicides and 4 insecticides. A total of 17 streams were monitored during 4 rainfall events within the local fungicide application period in 2012. EDS concentrations exceeded the time-weighted average concentrations provided by the SDB disks by a factor of 3, though high variability among compounds was observed. Most compounds were detected in more than half of the sites, and mean and maximum peak (EDS) concentrations were below 1 and 3 µg/l, respectively. In addition, SDB disk sampling rates and a free software solution for deriving sampling rates under time-variable exposure are provided.
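The time-weighted average (TWA) concentration from a passive sampler is commonly estimated as C_TWA = m / (R_s · t), where m is the analyte mass accumulated on the disk, R_s the sampling rate, and t the deployment time. A minimal sketch of that calculation, with the threefold EDS-to-TWA ratio reported above; the numbers are illustrative, not taken from the study:

```python
def twa_concentration(mass_ng, sampling_rate_l_per_day, days):
    """Time-weighted average concentration (ng/L) from a passive sampler.

    C_TWA = m / (R_s * t): accumulated analyte mass divided by the
    volume of water effectively sampled over the deployment period.
    """
    return mass_ng / (sampling_rate_l_per_day * days)

# Illustrative values only (not measured in the study):
c_twa = twa_concentration(mass_ng=140.0, sampling_rate_l_per_day=0.5, days=14)
c_peak_eds = 3 * c_twa  # an event-driven peak exceeding the TWA threefold
print(c_twa, c_peak_eds)  # 20.0 60.0
```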
Several biotic endpoints related to decomposers and LD were measured in the same sampling sites as the fungicide monitoring, coinciding with the major litter input period. Our results suggest that polar organic fungicides in streams change the structure of the fungal community. Causality of this finding was supported by a subsequent microcosm experiment. Whether other effects observed in the field study, such as reduced fungal biomass, increased bacterial density or reduced microbial LD can be attributed to fungicides remains speculative and requires further investigation. By contrast, neither the invertebrate LD nor in-situ measured gammarid feeding rates correlated with water-borne fungicide toxicity, but both were negatively associated with sediment copper concentrations. The mesocosm experiment showed that fungicides and nutrients affect microbial decomposers differently and that they can alter community structure, though longer experiments are needed to determine whether these changes may propagate to invertebrate communities and LD. Overall, further studies should include representative field surveys in terms of fungicide pollution and physical, chemical and biological conditions. This should be combined with experiments under controlled conditions to test for the causality of field observations.
This study had two main aims. The first was to investigate the quality of lesson plans. Two important features of lesson plans were used as a basis for determining their quality: adaptability to preconditions and cognitive activation of students. The former refers to how the planning teacher considers the diversity of students' pre-existing knowledge and skills. The latter refers to how the planning teacher sequences deep learning tasks and laboratory activities to promote the cognitive activation of students.
The second aim of the study was to explore teachers' thinking about and explanation of externally generated feedback data on their students' performance. The emphasis here was on understanding how the teachers anticipate planning differentiated lessons to accommodate the variation in students' learning outcomes revealed by the feedback data.
The study followed a qualitative approach with multiple sources of data. Concept maps, questionnaires, an online lesson planning tool, standardized tests, and semi-structured interviews were the main data collection instruments. Participants were four physics teachers teaching different grade levels. For the purpose of generating feedback for the participant teachers, a test was administered to 215 students. The teachers were asked to plan five lessons for their ongoing practice. The analysis showed that the planned lessons were not adapted to the diversity in students' pre-existing knowledge and skills, and that they had limitations with regard to the cognitive activation of students. The interview data further revealed that the participant teachers do not normally consider differentiating lessons to accommodate differences in students' learning, and place little emphasis on the cognitive activation of students. The analysis of the planned lessons showed variation in the teachers' approaches to integrating laboratory activities into the sequence of the lessons, ranging from complete absence through a demonstrative to an investigative approach. Moreover, the findings from the interviews indicated differences between the participant teachers' espoused theories (i.e. what they said during the interviews) and their theories-in-use (i.e. what is evident from the planned lessons). The interview data also demonstrated that the teachers did not interpret the data, identify learning needs, or draw meaningful information from the data for adapting (or differentiating) instruction. They attributed their students' poor performance to task difficulty, students' ability, and students' motivation and interest. The teachers attempted to use the item-level and subscale data only to compare the relative position of their class with the reference group.
However, they did not read beyond the data, e.g. by identifying students' learning needs and planning differentiated instruction based on individual students' performance.
We are entering the 26th year since the World Wide Web (WWW) became reality. Since the birth of the WWW in 1990, the Internet and therewith websites have changed the way businesses compete, shifting products, services and even entire markets.
Gathering and analysing visitor traffic on websites can therefore provide crucial information for understanding customer behavior and numerous other aspects.
Web Analytics (WA) tools offer a wide range of functionality, which calls for complex decision-making in information management. Website operators implement Web Analytics tools such as Google Analytics to analyse their websites for the purpose of identifying web usage, in order to optimise website design and management. The gathered data leads to emergent knowledge, which provides new marketing opportunities and can be used to improve business processes and understand customer behavior to increase profit. Moreover, Web Analytics plays a significant role in measuring performance and has therefore become an important component of web-based environments for making business decisions.
However, many small and medium-sized enterprises try to keep up with the web business competition but do not have the equivalent resources in manpower and knowledge to stand the pace; some therefore forgo Web Analytics entirely.
This research project aims to develop a Web Analytics framework to assist small and medium-sized enterprises in making better use of Web Analytics. By identifying business requirements of SMEs and connecting them to the functionality of Google Analytics, a Web Analytics framework with accompanying guidelines is developed, which guides SMEs on how to proceed in using Google Analytics to achieve actionable outcomes.
Statistical Shape Models (SSMs) are one of the most successful tools in 3D image analysis, and especially in medical image segmentation. By modeling the variability of a population of training shapes, the statistical information inherent in such data is used for the automatic interpretation of new images. However, building a high-quality SSM requires manually generated ground truth data from clinical experts. Unfortunately, the acquisition of such data is a time-consuming, error-prone and subjective process. Due to this effort, most SSMs are based on a limited set of ground truth training data, which makes the models less statistically meaningful. On the other hand, image data itself is abundant in clinics from daily routine. In this work, methods for automatically constructing a reliable SSM without the need for manual image interpretation by experts are proposed. The training data is thus assumed to be the result of an arbitrary segmentation algorithm, or may originate from other sources, e.g. non-expert manual delineations. Depending on the algorithm, the output segmentations will contain errors to a higher or lower degree. To account for these errors, areas with a low probability of being a true boundary should be excluded from the training of the SSM; these probabilities are estimated with image-based approaches. By including many shape variations, the corrupted parts can be statistically reconstructed. Two approaches for reconstruction are proposed: an imputation method and Weighted Robust Principal Component Analysis (WRPCA). This allows the inclusion of many data sets from clinical routine, covering far more variations of shape examples. To assess the quality of the resulting models, which are robust against erroneous training shapes, an evaluation compares their generalization and specificity ability to a model built from ground truth data.
The results show that WRPCA in particular is a powerful tool for handling corrupted parts and yields reasonable models of higher quality than the initial segmentations.
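As a rough illustration of the PCA machinery underlying an SSM (plain PCA, not the WRPCA method of the thesis): each aligned training shape is flattened into a landmark vector, the mean shape is subtracted, and the eigenvectors of the data covariance give the principal modes of shape variation; new shapes are synthesized as the mean plus a weighted sum of modes. All names and dimensions below are illustrative.

```python
import numpy as np

def build_ssm(shapes):
    """Plain-PCA statistical shape model (illustrative sketch, not WRPCA).

    shapes: (n_samples, n_landmarks * dim) array of aligned shape vectors.
    Returns the mean shape, the principal modes (rows), and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the PCA eigenmodes directly.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values**2 / (shapes.shape[0] - 1)
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Generate a new shape as mean + linear combination of the first modes."""
    return mean + coeffs @ modes[: len(coeffs)]

# Illustrative toy data: 5 "shapes" of 4 landmarks in 2D (8 numbers each)
rng = np.random.default_rng(0)
shapes = rng.normal(size=(5, 8))
mean, modes, variances = build_ssm(shapes)
new_shape = synthesize(mean, modes, np.array([0.5, -0.2]))
```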
Six and Gimmler have identified concrete capabilities that enable users to use the Internet in a competent way. Their media competence model can be used for the didactical design of media usage in secondary schools. However, the special challenge of security awareness is not addressed by the model. In this paper, the important dimension of risk and risk assessment will be introduced into the model. This is especially relevant for the risk to the protection of personal data and privacy. This paper applies the method of IT risk analysis to select those dimensions of the Six/Gimmler media competence model that are appropriate for describing privacy-aware Internet usage. Privacy-risk-aware decisions for or against Internet usage are made visible by the trust model of Mayer et al. The privacy extension of the competence model will lead to a measurement of the existing privacy awareness in secondary schools, which, in turn, can serve as the basis for a didactically well-reasoned design of Informatics modules in secondary schools. This paper provides the privacy-extended competence model, while empirical measurement and module design are planned for further research activities.
The largest 19th-century population of the anadromous Allis shad (A. alosa) was found in the River Rhine and must be considered extinct today. To facilitate the return of A. alosa to the River Rhine, an EU LIFE project was initiated in 2007. The overall objective of this thesis was to assist aquaculture and stocking measures at the River Rhine, as well as to support the restoration and conservation of populations of Allis shad in Europe.
By culturing the free-swimming nematode T. aceti in a solution of cider vinegar, we developed a cost-effective live food organism for the larviculture of fish. As indicated by experiments with C. maraena, T. aceti cannot be regarded as an alternative to Artemia nauplii. However, it can be considered a suitable supplemental feed in the early rearing of C. maraena, providing essential fatty acids and thereby optimizing growth.
Mass-marking practices with oxytetracycline, as applied in the restocking of Allis shad, were also evaluated. In experiments with D. rerio we demonstrated that water hardness can detrimentally affect mortality during marking and must be considered crucial in the development of marking protocols for freshwater fish.
In order to become independent of wild spawners, an ex-situ broodstock facility for Allis shad was established in 2011. Upon examination of two complete year classes of this broodstock, we found a high prevalence of various malformations, which could be traced back to distinct cysts developing one month post hatch. Despite applying a variety of clinical tests, we could not identify any infectious agents causing these malformations. The observed malformations are probably a consequence of suboptimal feeding practices or of the physico-chemical rearing environment.
The decline of stocks of A. alosa in Europe has largely been explained by the increase of river temperatures as a consequence of global warming. By investigating the temperature physiology of larval Allis shad, we demonstrated that A. alosa ranks among the most thermo-tolerant species in Europe and that correlations between rising temperatures and the disappearance of this species have to be understood in a synecological context, integrating a variety of stressors other than temperature. By capturing and examining juvenile and adult Allis shad from the River Rhine, we demonstrated the first natural reproduction of A. alosa in the River Rhine in nearly 100 years, and the success of stocking measures within the framework of the LIFE project.
This thesis addresses techniques for the interactive, physically based rendering of hair for computer-generated imagery (CGI). Techniques for simulating and approximating the interaction of light with hair are derived and presented. Furthermore, it is described how hair can be rendered interactively despite such computationally intensive algorithms. Methods for computing shadows in hair, as well as approaches for efficiently rendering hair as transparent geometry, are also presented. A main focus of the thesis is the DBK buffer, which was designed, implemented and evaluated as part of this work. The DBK buffer makes it possible to render thousands of transparent hairs very efficiently without depending on features of the latest GPU generation or on large amounts of video memory. In addition, a comprehensive evaluation of the described techniques with respect to visual quality, performance and memory footprint was carried out. It showed that hair can be rendered physically based not only at interactive but even at real-time frame rates.
Confidentiality, integrity, and availability are often listed as the three major requirements for achieving data security and are collectively referred to as the C-I-A triad. Confidentiality of data restricts data access to authorized parties only, integrity means that the data can only be modified by authorized parties, and availability states that the data must always be accessible when requested. Although these requirements are relevant for any computer system, they are especially important in open and distributed networks. Such networks are able to store large amounts of data without having a single entity in control of ensuring the data's security. These characteristics apply to the Semantic Web as well, since it aims at creating a global and decentralized network of machine-readable data. Ensuring the confidentiality, integrity, and availability of this data is therefore also important and must be achieved by corresponding security mechanisms. However, the current reference architecture of the Semantic Web does not yet define any particular security mechanisms implementing these requirements. Instead, it only contains a rather abstract representation of security.
This thesis fills this gap by introducing three different security mechanisms, one for each of the identified security requirements confidentiality, integrity, and availability of Semantic Web data. The mechanisms are not restricted to the very basics of implementing each of the requirements and provide additional features as well. Confidentiality is usually achieved with data encryption. This thesis not only provides an approach for encrypting Semantic Web data, it also allows searching the resulting ciphertext data without decrypting it first. Integrity of data is typically implemented with digital signatures. Instead of defining a single signature algorithm, this thesis defines a formal framework for signing arbitrary Semantic Web graphs, which can be configured with various algorithms to achieve different features. Availability is generally supported by redundant data storage. This thesis expands the classical definition of availability to compliant availability, which means that data must only be available as long as the access request complies with a set of predefined policies. This requirement is implemented with a modular and extensible policy language for regulating information flow control. This thesis presents each of these three security mechanisms in detail, evaluates them against a set of requirements, and compares them with the state of the art and related work.
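As a toy illustration of why signing a graph needs a canonical form first (the thesis's formal framework is far more general): the same set of RDF-style triples can be serialized in any order, so a naive hash of a serialization would differ between equivalent serializations. Sorting the triples before hashing yields one order-independent digest, which could then be signed. All identifiers below are illustrative.

```python
import hashlib

def graph_digest(triples):
    """Order-independent digest of a set of (subject, predicate, object) triples.

    Canonicalization here is simply lexicographic sorting, which suffices
    for ground triples; real frameworks must also handle blank nodes.
    """
    canonical = "\n".join(sorted("\t".join(t) for t in triples))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

g1 = [("ex:alice", "foaf:knows", "ex:bob"),
      ("ex:bob", "foaf:name", "Bob")]
g2 = list(reversed(g1))  # the same graph, serialized in a different order

assert graph_digest(g1) == graph_digest(g2)  # digest ignores triple order
```

A signature over `graph_digest(...)` would then verify regardless of how the graph was serialized.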
How entrepreneurs become successful with their business ideas has been a topic of discussion within entrepreneurial research circles for many years. Business success has been related to psychological characteristics, and many research findings have explained how people become entrepreneurs and, furthermore, how they find success with their business ventures. People have always wanted to know what defines an entrepreneur and, more importantly, what is needed to become a successful one. Curious minds began to search for the recipe for success. Personality traits have been researched for years, leading to the identification of many diverse and distinct forms which are still recognized today. Researchers have also related the behavioral approach and the analysis of demographic characteristics to success.
This raises the question of what an entrepreneur needs to become successful, which is made more difficult by the fact that entrepreneurship, and who an entrepreneur is, can be difficult to define exactly. Entrepreneurs are diverse and distinct individuals who each possess their own characteristics, behavior and reasons for starting a business. Many people assume that an entrepreneur is simply someone who starts their own business. While partially correct, this only scratches the surface of what it is to be an entrepreneur.
This thesis answers the following three research questions: 1. Which personality traits were found in almost sixty years of research? 2. What triggers do entrepreneurs need to start a new business? 3. Which factors make an entrepreneur successful with their business idea? By reading the thesis, one may also gain a better understanding of what entrepreneurship is, what defines an entrepreneur, and what one needs to become a successful entrepreneur.
In this thesis, the literature review method developed by Wolfswinkel et al. (2011) was used to identify suitable literature and references for the main chapters. The five stages of this grounded-theory method for reviewing the literature were applied in an iterative fashion: in step one, inclusion and exclusion criteria for the literature were defined; in step two, literature matching these criteria was searched; in step three, the sample was refined; and in step four, the literature found was analysed and coded before the results were presented in the following text. Many entrepreneurial personality traits were discovered and confirmed in the studies conducted over the last sixty years; these include, but are not limited to, need for achievement, locus of control, and risk-taking propensity. Motivation is clearly the main trigger for starting one's own business. The studies also revealed reasons why an entrepreneur may become successful beyond possessing the right combination of traits, for example: having human, financial and social capital, entrepreneurial behavior, and the right economic conditions to start a business that will survive for more than five years.
This thesis presents novel approaches for integrating context information into probabilistic models. Data from social media is typically associated with metadata, which includes context information such as timestamps, geographical coordinates or links to user profiles. Previous studies showed the benefits of using such context information in probabilistic models, e.g. improved predictive performance. In practice, however, probabilistic models which account for context information still play a minor role in data analysis, for multiple reasons: existing probabilistic models are often complex, their implementation is difficult, implementations are not publicly available, or the parameter estimation is computationally too expensive for large datasets. Additionally, existing models are typically created for a specific type of content and context and lack the flexibility to be applied to other data.
This thesis addresses these problems by introducing a general approach for modelling multiple, arbitrary context variables in probabilistic models and by providing efficient inference schemes and implementations.
In the first half of this thesis, the importance of context and the potential of context information for probabilistic modelling are shown theoretically and in practical examples. In the second half, topic models serve as the example for introducing a novel approach to context modelling based on document clusters and adjacency relations in the context space. These models can cope with areas of sparse observations and allow for the first time the efficient, explicit modelling of arbitrary context variables, including cyclic and spherical context (such as temporal cycles or geographical coordinates). Using the novel three-level hierarchical multi-Dirichlet process presented in this thesis, the adjacency of context clusters can be exploited, and multiple contexts can be modelled and weighted at the same time. Efficient inference schemes are derived which yield interpretable model parameters that allow analysing the relation between observations and context.
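For intuition about what such context-dependent topic models generate (a plain LDA-style sketch, not the hierarchical multi-Dirichlet process of the thesis): each context cluster could supply its own Dirichlet prior over topics, each document draws a topic mixture from its cluster's prior, and each word position draws a topic and then a term. Dimensions, priors and cluster names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

n_topics, vocab_size = 3, 10
# Per-topic word distributions (each row sums to 1)
topic_word = rng.dirichlet(np.full(vocab_size, 0.5), size=n_topics)

def generate_document(cluster_alpha, n_words):
    """Generate one document given its context cluster's Dirichlet prior."""
    theta = rng.dirichlet(cluster_alpha)           # document-topic mixture
    topics = rng.choice(n_topics, size=n_words, p=theta)
    return [rng.choice(vocab_size, p=topic_word[z]) for z in topics]

# Two context clusters with different topic preferences (illustrative):
cluster_priors = {"cluster_a": np.array([5.0, 1.0, 1.0]),
                  "cluster_b": np.array([1.0, 1.0, 5.0])}
doc = generate_document(cluster_priors["cluster_a"], n_words=20)
```

Inference then reverses this process, estimating the priors and mixtures from observed documents and their context.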
The application of pesticides to agricultural areas can result in transport to adjacent non-target environments. In particular, surface water systems are likely to receive agricultural pesticide input. When pesticides enter aquatic environments, they may pose a substantial threat to the ecological integrity of surface water systems. To minimize the risk to non-target ecosystems the European Union prescribes an ecotoxicological risk assessment within the registration procedure of pesticides, which consists of an effect and an exposure assessment.
This thesis focuses on the evaluation of the exposure assessment and the implications to the complete regulatory risk assessment, and is based on four scientific publications. The main part of the thesis focuses on evaluation of the FOCUS modelling approach, which is used in regulatory risk assessment to predict pesticide surface water concentrations. This was done by comparing measured field concentrations (MFC) of agricultural insecticides (n = 466) and fungicides (n = 417) in surface water to respective predicted environmental concentrations (PEC) calculated with FOCUS step 1 to step 4 at two different levels of field relevance. MFCs were extracted from the scientific literature and were measured in field studies conducted primarily in Europe (publications 1 and 3).
In addition, an alternative fugacity-based multimedia mass-balance model, which needs fewer input parameters and less computing effort, was used to calculate PECs for the same insecticide MFC dataset and compared to the FOCUS predictions (publication 3). Furthermore, FOCUS predictions were also conducted for veterinary pharmaceuticals in runoff from an experimental plot study, to assess the FOCUS predictions for a different class of chemicals with a different relevant entry pathway (publication 2).
In publication 4, the FOCUS step-3 approach was used to determine relevant insecticide exposure patterns. These patterns were analysed for different monitoring strategies and the implications for the environmental risk assessment (publication 4).
The outcome of this thesis showed that the FOCUS modelling approach is neither protective nor appropriate in predicting insecticide and fungicide field concentrations. Up to one third of the MFCs were underpredicted by the model calculations, which means that the actual risk might be underestimated. Furthermore, the results show that a higher degree of realism even reduces the protectiveness of model results and that the model predictions are worse for highly hydrophobic and toxic pyrethroids.
In addition, the absence of any relationship between measured and predicted concentrations calls the general model performance into question (publications 1 and 3). Further analyses revealed that deficiencies in the protectiveness and predictiveness of the environmental exposure assessment might be even higher than shown in this thesis, because actual short-term peak concentrations are only detectable with an event-related sampling strategy (publication 4). However, it was shown that the PECs of a much simpler modelling approach are much more appropriate for predicting insecticide MFCs, especially for calculations with a higher field relevance (publication 3). The FOCUS approach also failed to predict concentrations of veterinary pharmaceuticals in runoff water (publication 2). In conclusion, the findings of this thesis show that there is an urgent need to improve the exposure predictions conducted in the environmental risk assessment of pesticides, as a group of highly relevant environmental chemicals, to ensure that their increasing use does not lead to further harmful effects in aquatic ecosystems.
The establishment of aquatic alien species can strongly affect the community and food web structure of the invaded systems and thus represents a major threat to native biodiversity. One of the most important aquatic invasive species in European rivers is the Ponto-Caspian amphipod Dikerogammarus villosus. The species invaded most of the major European waterways within two decades, often associated with a decline of many other macroinvertebrate species, including other amphipods. Based on laboratory results, predation by the so-called ‘killer shrimp’ is often regarded as the key driver of the observed displacement effects, but recent studies indicated a minor relevance of predation by D. villosus in the field. To allow the determination of exact predator-prey interactions from field samples, I established 22 group-specific rDNA primers for freshwater taxa suitable for prey species identification in dietary samples (Chapter II) and an approach for the screening of D. villosus gut contents using 16 of these primers. Combining genetic gut content analyses, with one of these primers, and stable isotope analyses, I examined the importance of intraguild predation (IGP) by D. villosus, which is often assumed to be the key driver of the displacement of native amphipod species, at an invasion front of the species in Switzerland (Chapter III). The results of this study revealed a low importance of IGP during this particular D. villosus invasion and indicated an overall sparsely predacious feeding behaviour of the species. As the feeding behaviour of D. villosus is supposed to differ between habitats and this study was conducted at only a few sampling sites of one river, I also investigated the role of predation by D. villosus at multiple sites of the River Rhine system, covering a broad range of microhabitats (Chapter IV). In keeping with the results from the invasion front, the results of this study strongly indicated a sparsely predacious but rather flexible feeding behaviour of D. villosus, even within the same microhabitat.
However, established populations of D. villosus have changed aquatic food webs and can be expected to affect aquatic-terrestrial energy fluxes. In Chapter V of my thesis, I present a field study investigating the impact of D. villosus on the diet of two riparian spider taxa. The results of this study indicate an effect of D. villosus on the terrestrial food web via cross-ecosystem resource flow.
In conclusion, D. villosus influences terrestrial food webs by altering cross-ecosystem resource fluxes, but it is rather an opportunistic omnivore than a predator in the field.
To increase a company's attractiveness to applicants and to improve employee satisfaction, it is indispensable nowadays to offer a wide range of work-life balance measures. However, the time and financial costs caused by their introduction call for a prioritization of the measures. To develop such a recommendation for companies, this study examines whether certain work-life balance measures have a greater influence on job satisfaction than others, how large the relative effect of these measures on changes in job satisfaction is compared with other work-related and private variables, whether there is a relationship between a measure's effectiveness and its use, and whether there are differences between employees' expectations and what companies offer.
These questions are formulated in eight research hypotheses, which are tested in a quantitative design using online-questionnaire data from 289 employees of fifteen different German companies. A multiple regression analysis is used to rank the measures by their influence on job satisfaction and to examine their relative effect compared with other variables, while t-tests are conducted to determine the differences between employees' expectations and the availability of the offers.
Support with childcare, support for voluntary work, and team-building events have a significantly greater influence on job satisfaction than other measures, and their hypothetical use exceeds their actual use, which suggests a high potential of these measures for improving job satisfaction through their introduction. Moreover, from the employees' perspective, flexible working hours and workplaces as well as leisure-time and overtime accounts are the most important measures, and they are already offered across the board in the surveyed companies. In general, the use of the available measures and the number of measures offered matter more for job satisfaction than the type of measure. Furthermore, work-life balance measures play a greater role in job satisfaction for younger people than for older people.
The aim of this thesis is to examine various existing cloud-based Internet of Things (IoT) development platforms and, in particular, to investigate one platform (IBM Watson IoT) in detail on the basis of a use-case scenario. IoT is an emerging technology with the vision of connecting the virtual world (e.g. clouds, social networks) and the physical world (e.g. devices, cars, refrigerators, people, animals) through internet technology. For example, the IoT concept of "smart cities", which aims to improve productivity and business development as well as a city's social and cultural offerings, can be realized through the use of sensors, actuators, clouds, and mobile devices (IEEE, 2015). A sensor (e.g. a temperature sensor) in a building (physical world) can send real-time data to the IoT cloud platform (virtual world), where they are monitored, stored, and analysed, or can trigger an action (e.g. switching on the cooling system of a building when the temperature exceeds a certain threshold). Although IoT creates many opportunities in various domains (e.g. transportation, healthcare, manufacturing), it also brings challenges such as standardization, interoperability, scalability, security, and privacy. This report covers IoT concepts and related key issues.
The focus of this investigation is the comparison of different cloud-based IoT platforms in order to understand the business aspects and technical functions they offer. The cloud-based IoT platforms of IBM, Google, Microsoft, PTC, and Amazon were examined.
The Design Science Research (DSR) method was used to structure this work; the IoT-A modelling method was used to model the real-time IoT system.
The comparison of different cloud-based IoT development platforms shows that all examined platforms provide basic IoT functions, such as connecting IoT devices to the cloud-based IoT platform, collecting data from IoT devices, data storage, and data analytics. However, the IBM IoT platform has an advantage over the other platforms due to its integrated runtime system, which also makes it developer-friendly. For this reason, IBM Watson IoT for Bluemix was selected for further investigation of its capabilities. The offering of IBM Watson IoT for Bluemix includes analytics, risk management, and connection and information management. A use case was implemented to assess the capabilities of the IBM Watson IoT platform, and digital artifacts (i.e. applications) were developed to evaluate IBM's IoT solution. The results show that IBM offers a scalable IoT platform that is friendly to both developers and operators. The included cognitive, contextual, and predictive analytics provide promising functionality that yields insights based on the IoT data transmitted by sensors or other IoT devices.
Conversion of natural vegetation into cattle pastures and croplands results in altered emissions of greenhouse gases (GHG), such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O). Their increasing atmospheric concentration is considered the main driver of climate change. Despite successful private initiatives, e.g. the Soy Moratorium and the Cattle Agreement, Brazil ranked as the world's second largest emitter of GHG from land use change and forestry, and the third largest emitter from agriculture, in 2012. N2O is the major GHG, in particular for the agricultural sector, as its natural emissions are strongly enhanced by human activities (e.g. fertilization and land use changes). Since denitrification, the main process of N2O production, is sensitive to external changes (e.g. precipitation events), Brazil is particularly predisposed to high soil-derived N2O fluxes.
In this study, we followed a bottom-up approach based on a country-wide literature survey, our own measurement campaigns, and modeling at the plot and regional scale, in order to quantify the scenario-specific development of GHG emissions from soils in the two federal states Mato Grosso and Pará. In general, N2O fluxes from Brazilian soils were found to be low and not particularly dynamic. Moreover, the expected responses to precipitation events did not occur. These findings suggest that elaborate model simulations in daily time steps are too sophisticated for regional applications. Hence, an extrapolation approach was used, first to estimate the influence of four different land use scenarios (alternative futures) on GHG emissions and then to derive mitigation strategies for Southern Amazonia. The results suggest that intensification of agricultural areas (mainly cattle pastures) and, consequently, avoided deforestation are essential for GHG mitigation.
The outcomes of this study provide a very good basis for (a) further research on the understanding of underlying processes causing low N2O fluxes from Brazilian soils and (b) political attempts to avoid new deforestation and keep GHG emissions low.
In recent years, head-mounted displays (HMDs) and their ability to create virtual realities comparable to the real world have moved into the focus of press coverage and consumers. The reasons for this are constant improvements in available computing power, the miniaturisation of components, and ever-shrinking power consumption. These trends originate in the general technical progress driven by advancements in the smartphone sector. This gives more people than ever access to the components required to create such virtual realities. At the same time, however, there is only limited research using the current generation of HMDs, especially research comparing the virtual and the real world against each other. The approach of this thesis is to examine the process of navigating both real and virtual spaces using modern hardware and software. One key area is spatial and peripheral perception, without which it would be difficult to navigate a given space. The influence of prior real and virtual experiences on these will be another key aspect. The final area of focus is the influence on the emotional state and how it compares to the real world. To research these influences, an experiment using the Oculus Rift DK2 HMD will be conducted in which subjects are guided through a real space as well as a virtual model of it. Data will be gathered quantitatively using surveys, and the findings will be discussed based on a statistical evaluation. During these tests, the differing perception of distances and room size will be compared, along with how they change depending on the current reality. Furthermore, the influence of prior spatial activities, both in the real and the virtual world, will be examined. Lastly, it will be checked how real these virtual worlds feel and whether they are sufficiently sophisticated to trigger the same emotional responses as the real world.
Reactive local algorithms are distributed algorithms that are particularly well suited to the requirements of large, battery-powered wireless ad hoc and sensor networks. By avoiding superfluous message transmissions and dispensing with the proactive maintenance of neighborhood tables (i.e. beaconing), such algorithms minimize communication overhead and scale well with growing network size. In this way, resources such as bandwidth and energy are conserved, message collisions occur less frequently, which increases the packet reception rate, and latencies are reduced.
Currently, this class of algorithms is mainly employed for geographic routing and for topology control, in particular for determining a node's adjacency list in connected, intersection-free (planar) representations of the network graph. The former enables wireless multi-hop communication based on geographic node positions without additional network infrastructure, whereas the latter is a sufficient basis for efficient local solutions to a number of algorithmic problems.
This dissertation provides new insights into the research field of reactive algorithms, on an abstract level on the one hand and through the introduction of new algorithms on the other.
First, this work considers reactive algorithms for the first time as a whole and as an independent field of research. An extensive literature study is presented that systematically lists, classifies, and categorizes the algorithms, techniques, and application areas known from the literature. Furthermore, the mathematical concept of O- and Omega-reactive local topology control is introduced. This concept makes it possible for the first time to clearly distinguish reactive from conventional, beacon-based, distributed topology control algorithms. Moreover, it serves as a classification scheme for existing as well as future algorithms of this kind. Finally, this concept enables fundamental statements about the power of the reactive principle that reach beyond the design and analysis of algorithms.
Second, this work introduces new reactive local algorithms for topology control and geographic routing, where wireless networks are modeled as unit disk or quasi unit disk graphs. For a given node, these algorithms compute the local view of connected, planar, Euclidean or topological spanners with constant stretch with respect to the network graph and route messages reactively along the edges of these spanners with guaranteed message delivery. All previously known methods are either not reactive or do not guarantee constant Euclidean or topological stretch. An essential partial result of this work is the proof that the partial Delaunay triangulation (PDT) is a Euclidean spanner with constant stretch for unit disk graphs.
The insights gained in this dissertation form the basis for fundamental and structured research in this area and show that the reactive principle is an important tool in algorithm design for wireless ad hoc and sensor networks.
This thesis provides an overview of the current topics and influences of mobile components on Enterprise Content Management (ECM). Through a literature review, the core topics of enterprise mobility and ECM were identified and projected onto the context of using mobile apps within an ECM environment. An analysis of three ECM systems and their mobile software led to an understanding of the functionalities and capabilities mobile systems provide in the ECM environment. These findings lead to a better understanding of the usage of mobile Enterprise Content Management and its preparation. The thesis focuses on the most important topics that need to be considered for the usage and adoption of mobile apps in ECM.
In Part I: "The flow-decomposition problem", we introduce and discuss the flow-decomposition problem. Given a flow F, this problem consists of decomposing the flow into a set of paths optimizing specific properties of those paths. We introduce different types of decompositions, such as integer decompositions and alpha-decompositions, and provide two formulations of the set of feasible decompositions.
We show that the problem of minimizing the longest path in a decomposition is NP-hard, even for fractional solutions. Then we develop an algorithm based on column generation which is able to solve the problem.
Tight upper bounds on the optimal objective value help to improve the performance.
To find upper bounds on the optimal solution of the shortest longest path problem, we develop several heuristics and analyze their quality. On pearl graphs we prove constant approximation ratios of 2 and 3, respectively, for the heuristics. A numerical study on random pearl graphs shows that the solutions generated by the heuristics are usually much better than this worst-case bound.
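The core notion underlying Part I, decomposing a given flow into weighted source-sink paths, can be illustrated by the classical greedy scheme; a minimal sketch follows. It assumes an acyclic, flow-conserving s-t flow given as edge values and is only a textbook baseline, not the column-generation method or the optimizing decompositions developed in the thesis.

```python
# Greedy decomposition of an acyclic, flow-conserving s-t flow into paths.
# Illustrative baseline only; optimizing path properties (e.g. minimizing
# the longest path) is the NP-hard problem treated in the thesis.

def decompose_flow(flow, source, sink):
    """flow: dict mapping edge (u, v) -> positive flow value.
    Returns a list of (path, value) pairs whose values sum to the flow value."""
    flow = dict(flow)  # work on a copy
    paths = []
    while True:
        # Follow positive-flow edges from the source until the sink is reached
        # (flow conservation guarantees interior nodes have an outgoing edge).
        path, node = [source], source
        while node != sink:
            nxt = next((v for (u, v), f in flow.items()
                        if u == node and f > 1e-12), None)
            if nxt is None:
                return paths  # no flow left leaving the source: done
            path.append(nxt)
            node = nxt
        # Subtract the bottleneck value along the path; at least one edge drops to 0.
        edges = list(zip(path, path[1:]))
        bottleneck = min(flow[e] for e in edges)
        for e in edges:
            flow[e] -= bottleneck
        paths.append((path, bottleneck))

flow = {('s', 'a'): 2.0, ('a', 't'): 2.0, ('s', 'b'): 1.0, ('b', 't'): 1.0}
paths = decompose_flow(flow, 's', 't')  # two paths of total value 3
```

Since each round zeroes at least one edge, the greedy scheme produces at most |E| paths; choosing *which* paths to extract so that, say, the longest one is as short as possible is exactly where the hardness discussed above arises.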
In Part II: "Construction and analysis of evacuation models using flows over time", we consider two optimization models in the context of evacuation planning. The first model is a parameter-based quickest flow model with time-dependent supply values. We give a detailed description of the network construction and of how different scenarios are modeled by scenario parameters. In a second step we analyze the effect of the scenario parameters on the evacuation time. Understanding how the different parameters influence the evacuation time allows us to provide better advice for evacuation planning and allows us to predict evacuation times without solving additional optimization problems. To understand the effect of the time-dependent supply values, we consider the quickest path problem with time-dependent supply values and provide a solution algorithm. The results from this consideration are generalized to approximate the behavior of the evacuation times in the context of quickest flow problems.
The second model we consider is a path-based model for evacuation in the presence of a dynamic cost function. We discuss the challenges of this model and provide ideas for how to approach the problem from different angles. We relate the problem to the flow-decomposition problem and consider the computation of evacuation paths with dynamic costs for large capacities. For the latter method we provide heuristics to find paths and compare them to the optimal solutions by applying the methods to two evacuation scenarios. An analysis shows that the paths generated by the heuristic yield close to optimal solutions and in addition have several desirable properties for evacuation paths which are not given for the optimal solution.
Science education has been facing important challenges in recent years: the decline in students' interest in scientific topics and, moreover, the decrease in students pursuing science beyond their compulsory studies (Bennett, Hogarth, & Lubben, 2003). As a result, research has focused on examining different approaches that attempt to improve the situation. One of these approaches has been the use of context-based problem-solving tasks (Kölbach & Sumfleth, 2011; Bennett, Hogarth, & Lubben, 2003). While research into context-based problem-solving tasks suggests that they are very motivating for students, it is still unclear how they influence motivation. Following an experimental pretest-posttest design, two studies examined the effects of the context-based task characteristics of contextualization, complexity, and transparency on students' motivational variables, performance, and metacognitive experiences.
Results from both studies suggest that the task characteristic of contextualization directly influences how students' interest is triggered and maintained throughout the task. On the other hand, the task characteristics of complexity and transparency had different effects on the other dependent variables of effort, difficulty, and solution correctness.
Moreover, data shows that other motivational variables such as anxiety and success expectancies are strongly influenced by the interaction of the parameters under study. The dissertation concludes that appropriate design and use of context-based task characteristics can benefit students’ learning processes and outcomes.
While virtual reality has been around for decades, it has gained new life in recent years. The release of the first consumer hardware devices allows fully immersive and affordable VR for the user at home. This availability led to a new focus of research on technical problems as well as psychological effects. The concepts of presence, describing the feeling of being in the virtual place, and body ownership, along with their impact, have been central topics in research for a long time and are still not fully understood.
To enable further research in the area of mixed reality, we want to introduce a framework that integrates the user's body and surroundings inside a visually coherent virtual environment. As one of two main aspects, we want to merge real and virtual objects into a shared environment in such a way that they are no longer visually distinguishable. To achieve this, the main focus is not supposed to be on high graphical fidelity but on a simplified representation of reality. The essential question is: what level of visual realism is necessary to create a believable mixed reality environment that induces a sense of presence in the user? The second aspect considers the integration of virtual persons. Can characters be recorded and replayed in such a way that they are perceived as believable entities of the world and therefore act as a part of the user's environment?
The purpose of this thesis was the development of a framework called the Mixed Reality Embodiment Platform. This initial system implements fundamental functionalities to be used as a basis for future extensions of the framework. We also provide a first application that enables user studies to evaluate the framework and contribute to the aforementioned research questions.
While reading this sentence, you probably gave (more or less deliberately) instructions to approximately 100 to 200 muscles of your body. A sceptical face or a smile, your fingers scrolling through the text or holding a printed version of this work, holding your head, sitting, and much more.
All these processes take place almost automatically, so they seem to be no real achievement. In the age of digitalization, it is a declared goal to transfer human (psychological and physiological) behavior to machines (robots). However, it turns out that it is indeed laborious to reproduce human facial expressions or walking in robots. To optimize this transfer, a deeper understanding of a muscle's operating principle is needed (and, of course, an understanding of the human brain, which will, however, not be part of this thesis).
A human skeletal muscle can be shortened voluntarily, but not lengthened; for that, it takes an antagonist. The muscle's change in length depends on the incoming stimulus from the central nervous system, the current length of the muscle itself, and certain muscle-specific quantities (parameters) such as the maximum force. Hence, a muscle can be described mathematically by a differential equation (or, more precisely, a coupled differential-algebraic system, DAE), whose structure will be revealed in the following chapters. The theory of differential equations is well elaborated, and a multitude of applicable methods exist that may not be known to muscle modelers. The purpose of this work is to link the methods from applied mathematics to the actual application in biomechanics.
The first part of this thesis addresses stability theory. Let us remember the prominent example from middle school physics, in which the resting position of a ball was obviously less susceptible towards shoves when lying in a bowl rather than balancing at the tip of a hill. Similarly, a dynamical (musculo-skeletal) system can attain equilibrium states that react differently towards perturbations.
We are going to compute and classify these equilibria.
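The ball-in-a-bowl intuition above can be made computational: linearize the system at an equilibrium and inspect the eigenvalues of the Jacobian. The sketch below does this for a hypothetical planar toy system (a damped oscillator), not the muscle DAE of the thesis, using the trace-determinant criterion that holds for two-dimensional systems.

```python
# Classifying an equilibrium of a planar system x' = f(x) via the Jacobian.
# In 2D, all eigenvalues have negative real part iff trace < 0 and det > 0.

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian of f: R^2 -> R^2 at point x."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def classify_equilibrium(f, x_eq):
    """Return a stability label for the equilibrium x_eq of x' = f(x)."""
    J = jacobian(f, x_eq)
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    if det < 0:
        return "saddle (unstable)"      # real eigenvalues of opposite sign
    return "stable" if tr < 0 else "unstable"

# Damped oscillator x'' + 0.5 x' + 4 x = 0 as a first-order system:
# the origin is the "ball in a bowl" (stable) equilibrium.
damped = lambda x: [x[1], -4.0 * x[0] - 0.5 * x[1]]
label = classify_equilibrium(damped, [0.0, 0.0])  # "stable"
```

The same recipe applies to the musculo-skeletal equilibria studied in this part, with the Jacobian taken with respect to the state variables of the (much larger) model.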
In the second part, we investigate the influence of individual parameters on model equations or more exactly their solutions. This method is known as sensitivity analysis.
Take, for example, the system "car" with the quantity "pressure on the brake pedal while approaching a traffic light". A minor deviation of this quantity upward or downward may lead to an uncomfortable, abrupt stop or even to a collision, instead of a smooth stop with a sufficient gap.
The considered muscle model contains over 20 parameters that, if changed slightly, have varying effects on the solutions of the model equations at different instants of time. We will investigate the sensitivity of those parameters regarding different sub-models, as well as the whole model under different dynamic boundary conditions.
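In its simplest form, such a sensitivity is the derivative of the solution with respect to a parameter, which can be approximated by re-solving the model with a slightly perturbed parameter. The following sketch shows this finite-difference approach on a hypothetical one-parameter toy model (exponential decay), not the 20-parameter muscle model of the thesis.

```python
# Finite-difference parameter sensitivity of an ODE solution:
# approximate d x(t_end) / d p by perturbing p and re-integrating.

def solve_euler(f, x0, t_end, dt=1e-3):
    """Integrate the scalar ODE x' = f(x) with explicit Euler; return x(t_end)."""
    x, t = x0, 0.0
    while t < t_end:
        x = x + dt * f(x)
        t += dt
    return x

def sensitivity(model, p, x0, t_end, h=1e-6):
    """Central-difference approximation of d x(t_end) / d p."""
    xp = solve_euler(model(p + h), x0, t_end)
    xm = solve_euler(model(p - h), x0, t_end)
    return (xp - xm) / (2 * h)

# Toy model x' = -p * x with exact solution x(t) = x0 * exp(-p t),
# so dx/dp at t = 1 is -t * x0 * exp(-p t), i.e. -exp(-1) for p = x0 = 1.
decay = lambda p: (lambda x: -p * x)
s = sensitivity(decay, 1.0, 1.0, 1.0)
```

A large magnitude of `s` flags a parameter whose small variations strongly deform the solution, exactly the kind of diagnosis this part carries out for the sub-models and the full muscle model.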
The third and final part addresses the optimal control problem (OCP).
The muscle turns a nerve impulse (input or control) into a length change and thus a force response (output). This forward process can be computed by solving the respective DAE. The reverse direction is more difficult to handle. As an everyday example, an OCP arises in self-parking cars, where a given path is targeted and the controls are the position of the steering wheel and the gas pedal.
We present two methods of solving OCPs in muscle modeling: the first is a combination of variational calculus and optimization in function spaces, the second is a surrogate-based optimization.
“Did I say something wrong?” A word-level analysis of Wikipedia articles for deletion discussions
(2016)
This work aims to gain word-level linguistic insights into written discussions. The distinction between messages that advance discussions and those that disrupt them played a special role. A particular focus was on determining whether I-messages and you-messages are characteristic of the two types of communication. Over the years, these message types have become recommendations for successful communication. Their ascribed effect has been confirmed several times, but always in smaller studies. Therefore, in this work, a fully automatic construction of an annotated dataset was developed using the deletion discussions of the English Wikipedia and the list of blocked users. Discussion messages were marked as either conducive or detrimental to a constructive course of discussion. This dataset was then used in a binary classification to determine words characteristic of the two types of communication. It was also investigated whether function words, such as pronouns or conjunctions, allow a decision about the communication type of a message. Consistent with their ascribed negative effect on communication, you-messages were identified as detrimental in the conducted investigations. Contrary to the ascribed positive effect of I-messages, a detrimental effect was found for these as well. A clear statement about the relevance of function words could not be made on the basis of the results, and no further characteristic words could be identified. The results suggest that a different model could potentially represent textual discussions better.
Providing electronic participation services (e-participation) is a complex socio-technical undertaking that requires a careful approach. The challenge is that governments and local administrations, as the most frequent providers, receive insufficient support in holistic planning and implementation. As a consequence, the literature describes only few e-participation offerings as successful. Business informatics developed the concept of enterprise architectures to support the development of complex socio-technical systems. If the group of organizations providing e-participation services is understood as an enterprise, the principles of enterprise architectures can be applied. Nevertheless, this approach has so far received little attention in e-participation practice and research.
Motivated by this research gap, the dissertation examines existing approaches from e-participation practice and research for their particularities and weaknesses, and then analyses the usefulness of existing enterprise architecture frameworks in the context of e-participation. The literature study reviews both conceptual and procedural approaches from practice and academia for their holism and identifies the challenges. In four projects, the procedure for designing and implementing electronically supported participation services is documented and evaluated, and potential improvements are identified. Enterprise architecture frameworks (Zachman Framework, TOGAF, DoDAF, FEA, ARIS, and ArchiMate) are examined with regard to their requirements and usefulness for e-participation. The results show the challenges of adopting existing enterprise architecture frameworks, but also the potential of combining dedicated e-participation approaches with enterprise architectures. Consequently, the goal of further research is the development of an architecture framework for e-participation.
This dissertation brings together the results of the literature studies and the action research and accordingly applies design science research to the development of the E-Participation Architecture Framework (EPART-Framework). The initial version of this framework is applied and tested in two different environments. The insights gained lead to the EPART-Framework presented in this dissertation. It consists of the EPART-Metamodel, which considers e-participation from six perspectives of different stakeholders (the EPART-Viewpoints). The EPART-Method supports the design and implementation of e-participation with the help of the EPART-Framework. It describes five phases accompanied by requirements management: initiation, design, implementation, participation, and evaluation. The results of these phases are stored in the Architecture Description and the Solution Repository. The final EPART-Framework is evaluated in this work in order to derive recommendations for its application and to identify further research needs. The results give reason to assume that the EPART-Framework fills the gap between e-participation and enterprise architectures and is a valuable addition to research and practice in the field.
The publication of open source software aims to support the reuse, the distribution, and the general utilization of software. This can only be enabled by the correct usage of open source software licenses. To this end, associations provide a multitude of open source software licenses with different features, from which a developer can choose in order to regulate the interaction with his or her software. Those licenses are the core theme of this thesis.
After an extensive literature review, two general research questions are elaborated in detail. First, a usage analysis of licenses in the open source sector is conducted to identify current trends and statistics. This includes questions concerning the distribution of licenses, the consistency of their usage, their association over time, and their publication.
Afterwards, the recommendation of licenses for specific projects is investigated. To this end, a recommendation logic is presented that incorporates several influences on a suitable license choice in order to generate the most applicable recommendation possible. Besides the exact license features from which a user can choose, different methods of ranking the recommendation results are proposed. This is based on an examination of the current state of open source licensing and license suggestion. Finally, the logic is evaluated on the exemplary use case of the 101companies project.
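The core of such a feature-based recommendation can be illustrated with a minimal sketch: licenses are described by feature vectors, and candidates are ranked by how many of a user's desired features they satisfy. The feature catalogue below is hypothetical and deliberately simplified (real license properties are far more nuanced), and it is not the thesis's actual recommendation logic.

```python
# Minimal sketch of feature-based license ranking. The license traits
# below are illustrative placeholders, not legal characterizations.
LICENSES = {
    "MIT":        {"copyleft": False, "patent_grant": False, "permissive": True},
    "Apache-2.0": {"copyleft": False, "patent_grant": True,  "permissive": True},
    "GPL-3.0":    {"copyleft": True,  "patent_grant": True,  "permissive": False},
}

def recommend(desired: dict) -> list:
    """Rank licenses by the number of desired features they satisfy."""
    def score(features: dict) -> int:
        return sum(features.get(k) == v for k, v in desired.items())
    return sorted(LICENSES, key=lambda name: score(LICENSES[name]), reverse=True)

# A user who wants a permissive license with an explicit patent grant:
print(recommend({"permissive": True, "patent_grant": True}))
# → ['Apache-2.0', 'MIT', 'GPL-3.0']
```

A real recommender would weight features differently and account for compatibility with the licenses of reused components, which is where the ranking methods discussed in the thesis come in.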
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems has to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, such as automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the problems which have to be overcome to reach the aforementioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive, but it has to be performed only once; as soon as its result is used to answer many queries, the expensive compilation step becomes worthwhile. In the thesis at hand, I develop a normal form, called the linkless normal form, suitable for knowledge compilation of description logic knowledge bases. From a computational point of view, the linkless normal form has very desirable properties, which are introduced in this thesis.
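The pay-once, query-often trade-off behind knowledge compilation can be illustrated with a toy propositional example (this is not the linkless normal form itself, just the general idea): spend effort once to enumerate all models of a knowledge base, after which entailment queries reduce to a cheap scan over the precomputed models.

```python
from itertools import product

def compile_models(variables: list, clauses: list) -> list:
    """Expensive one-off step: enumerate every satisfying assignment.
    A clause is a set of literals such as {"a", "-b"}."""
    models = []
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        holds = lambda lit: not assign[lit[1:]] if lit.startswith("-") else assign[lit]
        if all(any(holds(l) for l in clause) for clause in clauses):
            models.append(assign)
    return models

def entails(models: list, clause: set) -> bool:
    """Cheap query: KB |= clause iff the clause holds in every model."""
    holds = lambda m, lit: not m[lit[1:]] if lit.startswith("-") else m[lit]
    return all(any(holds(m, l) for l in clause) for m in models)

# KB: a holds, and a implies b (clause {-a, b}).
kb = compile_models(["a", "b"], [{"a"}, {"-a", "b"}])
print(entails(kb, {"b"}))   # True: b follows from the KB
print(entails(kb, {"-a"}))  # False: ¬a does not
```

Model enumeration is exponential, which is why practical target languages (and the linkless normal form developed in the thesis) are compact compiled forms that still support efficient querying.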
For the second problem, I focus on changes occurring on the instance level of description logic knowledge bases. I introduce three change operators of interest for these knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that in all three cases the resulting knowledge base is guaranteed to be consistent and the changes performed to the knowledge base are minimal. This allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues, I suggest adapting techniques successfully used in other logics to obtain promising methods for description logic knowledge bases.
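The minimal-change intuition behind consistency-preserving insertion can be sketched in a heavily simplified setting, where an instance base is a set of (individual, concept) assertions and the only constraints are pairwise class disjointness. The actual operators in the thesis work on description logic ABoxes via a knowledge base transformation; the names below are illustrative.

```python
# Hypothetical disjointness constraints: no individual is both a Cat and a Dog.
DISJOINT = {("Cat", "Dog"), ("Dog", "Cat")}

def insert(abox: set, individual: str, concept: str) -> set:
    """Add an assertion, removing only the assertions it contradicts,
    so the result is consistent and the change is minimal."""
    conflicts = {(i, c) for (i, c) in abox
                 if i == individual and (c, concept) in DISJOINT}
    return (abox - conflicts) | {(individual, concept)}

abox = {("felix", "Cat"), ("felix", "Pet")}
abox = insert(abox, "felix", "Dog")
# Only the conflicting ("felix", "Cat") is dropped; ("felix", "Pet") survives.
print(sorted(abox))  # [('felix', 'Dog'), ('felix', 'Pet')]
```

In full description logics, detecting such conflicts requires reasoning over the TBox rather than a lookup table, which is precisely what makes well-defined minimal-change operators non-trivial.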
This thesis examines online attention toward researchers and their research topics. The included studies compare the attention dynamics toward winners of important research prizes with those toward researchers who did not receive a prize. Web signals such as Wikipedia page views, edits to Wikipedia articles, and Google Trends were used as a proxy for online attention. It was found that Wikipedia articles about the research topics of prize winners were created close in time to the article about the winner. A possible explanation could be that these research topics are more closely related to the winners, which would support the thesis that winners established their field of research. In addition, online attention toward the research topics of winners grew after the day on which the article about the researcher was created. From this it can be inferred that the topics of winners are more popular than the topics of researchers who did not receive a prize. Furthermore, it was shown that winners of the Nobel Prize receive less online attention before the announcement than the list of nominees based on the Thomson Reuters Citation Laureates. Moreover, the popularity of prize winners declined faster than that of researchers who did not receive a prize. Finally, it was demonstrated that predicting the winners based on attention dynamics toward researchers is problematic.
Olive oil production generates large amounts of olive mill wastewater (OMW) within a very short time. Owing to its high nutrient content, OMW can be used as an agricultural fertilizer, but its oily and phenolic constituents harm the soil. It is not known to what extent seasonal variations in temperature and precipitation influence the longer-term fate and effects of the wastewater components in the soil. To investigate this, 14 L OMW m⁻² were applied in winter, spring, and summer to different plots of an olive orchard. Hydrological soil properties (water drop penetration time, hydraulic conductivity, contact angle), physico-chemical parameters (pH, EC, soluble ions, phenolic compounds, organic carbon), and biological degradation (bait-lamina strips) were recorded to assess the state of the soil after application. After one rainy season, the soil quality of the plots treated in summer was significantly reduced. This was evident in particular from a threefold lower biological feeding activity, a tenfold higher hydrophobicity, and a fourfold higher content of phenolic substances compared with the control plots. Application in winter showed opposite effects, indicating the natural regeneration potential of the soil. The influence of the spring application lay between the two. No signs of translocation of OMW constituents into deeper soil layers were observed. During the wet seasons, the application of legally limited amounts of olive mill wastewater can therefore be considered acceptable. Further research is needed to quantify the influence of spring applications and to gain further insight into the composition and mobility of organic OMW constituents in the soil.
A globally problematic issue of the olive oil industry is its generation of large amounts of olive mill wastewater (OMW). The direct discharge of OMW to the soil is very common, which presents environmental problems for olive oil producing countries. Both positive and negative effects on soil have been found in earlier studies. The current study therefore hypothesized that whether beneficial or negative effects dominate depends on the prevailing conditions before and after OMW discharge to soil. As such, a better understanding of the OMW-soil interaction mechanisms becomes essential for the sustainable, safe disposal of OMW on soil and for sustainable soil quality.
A field experiment was carried out in an olive orchard in Palestine over a period of 24 months, in which OMW was applied to the soil as a single application of 14 L m⁻² under four different environmental conditions: in winter (WI), spring (SP), and summer with and without irrigation (SUmoist and SUdry). The current study investigated the effects of seasonal conditions on the OMW-soil interaction in the short and long term. The degree and persistence of soil salinization, acidification, accumulation of phenolic compounds, and soil water repellency were investigated as a function of soil depth and of the time elapsed after the OMW application. Moreover, the OMW impacts on soil organic matter (SOM) quality and quantity, total organic carbon (SOC), and water-extractable soil organic carbon (DOC), as well as specific ultraviolet absorbance (SUVA254), were investigated for each seasonal application in order to assess the degree of OMW-OM decomposition or accumulation in soil and, therefore, the persisting effects of OMW disposal to soil.
The results of the current study demonstrate that the degree and persistence of the relevant effects of OMW application on soil varied significantly between the different seasonal OMW applications, both in the short term and in the long term. The negative effects of the potentially hazardous OMW residuals in the soil were highly dependent on the dominant transport and transformation mechanisms, triggered by the ambient soil moisture and temperature, which either intensified or diminished the negative effects of OMW in the soil during and after the application season. The negative effects of OMW disposal to the soil decreased with increasing retention time of OMW in soil under conditions favoring biological activity. Moderate soil moisture and temperature allowed a considerable amount of the applied OMW to be biologically degraded, while prolonged residence under dry conditions and high temperature resulted in a less degradable organic fraction of the OMW, causing the OMW constituents to accumulate and polymerize without being degraded. Further, the rainfall during the winter season diminished the negative effects of OMW in the soil; at the same time, however, the risk of groundwater contamination by non-degraded OMW constituents is highest during the winter season.
This habilitation thesis collects works addressing several challenges in handling uncertainty and inconsistency in knowledge representation. In particular, this thesis contains works which introduce quantitative uncertainty based on probability theory into abstract argumentation frameworks. The formal semantics of this extension is investigated and its application to strategic argumentation in agent dialogues is discussed. Moreover, both the computational aspects and the meaningfulness of approaches to analyzing inconsistencies, in classical logics as well as in logics for uncertain reasoning, are investigated. Finally, this thesis addresses the implementation challenges for various kinds of knowledge representation formalisms employing any notion of inconsistency tolerance or uncertainty.
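One standard way to combine probability theory with abstract argumentation, sketched here under the so-called constellation reading, is to let each argument be present independently with a given probability and to sum the probabilities of all induced subgraphs in which a target argument is accepted (here: contained in the grounded extension). This is a generic illustration under stated assumptions, not necessarily the exact semantics developed in the thesis; the argument names and probabilities are invented.

```python
from itertools import combinations

def grounded(args: set, attacks: set) -> set:
    """Grounded extension: least fixed point of the characteristic function."""
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in attacks and c in ext for c in args)
                           for b in args if (b, a) in attacks)}
        if defended == ext:
            return ext
        ext = defended

def prob_accepted(probs: dict, attacks: set, target: str) -> float:
    """Probability that `target` lies in the grounded extension of a
    random induced subgraph (arguments present independently)."""
    args = list(probs)
    total = 0.0
    for k in range(len(args) + 1):
        for present in combinations(args, k):
            if target not in present:
                continue
            p = 1.0
            for a in args:
                p *= probs[a] if a in present else 1 - probs[a]
            sub = {(x, y) for (x, y) in attacks if x in present and y in present}
            if target in grounded(set(present), sub):
                total += p
    return total

# "a" attacks "b"; "a" is present with probability 0.5, "b" always.
# "b" is accepted exactly when its attacker is absent.
print(prob_accepted({"a": 0.5, "b": 1.0}, {("a", "b")}, "b"))  # 0.5
```

Enumerating all subgraphs is exponential in the number of arguments, which is one reason the computational side of such probabilistic extensions is itself a research topic.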
The work presented in this thesis investigated interactions of selected biophysical processes that affect zooplankton ecology at small scales. To this end, the extent of the changes in swimming behaviour and the fluid disturbances produced by swimming Daphnia in response to changing physical environments were quantified. For the first research question addressed within this context, the size and energetics of hydrodynamic trails produced by Daphnia swimming in non-stratified still water were characterized and quantified as a function of the organisms' size and their swimming patterns.
The results revealed that neither the size nor the swimming pattern of Daphnia affects the width of the induced trails or the dissipation rates. Nevertheless, as the size and swimming velocity of the organisms increased, trail volume increased in proportion to the cube of the Reynolds number, and the biggest trail volume was about 500 times the body volume of the largest daphnids. The larger spatial extent of fluid perturbation and the prolonged decay period caused by bigger trail volumes could play a significant role in zooplankton ecology, e.g. by increasing the risk of predation.
The study also found that increased trail volume brought about a significantly enhanced total dissipated power at higher Reynolds numbers; the magnitudes of the total dissipated power varied in the range of (1.3–10) × 10⁻⁹ W.
Furthermore, this study provided strong evidence that swimming speed of Daphnia and total dissipated power in Daphnia trails exceeded those of some other selected zooplankton species.
In recognizing turbulence as an intrinsic environmental perturbation in aquatic habitats, this thesis also examined the response of Daphnia to a range of turbulent flows corresponding to turbulence levels that zooplankton generally encounter in their habitats. Results indicated that, within the range of turbulence intensities to which Daphnia are likely to be exposed in their natural habitats, increasing turbulence compelled the organisms to enhance their swimming activity and swimming speed. However, as the turbulence increased to extremely high values (10⁻⁴ m² s⁻³), Daphnia began to withdraw from their active swimming behaviour. The findings of this work also demonstrated that the threshold level of turbulence at which the animals start to retreat from largely active swimming is about 10⁻⁶ m² s⁻³. The study further illustrated that in the intermediate range of turbulence, 10⁻⁷–10⁻⁶ m² s⁻³, the kinetic energy dissipation rates in the vicinity of the organisms are consistently one order of magnitude higher than those of the background turbulent flow.
Swarming, a common and conspicuous behavioural trait observed in many zooplankton species, is considered to play a significant role in the freshwater ecology of their habitats, from food exploitation and mate encounters to predator avoidance, through the hydrodynamic flow structures the animals produce. This thesis therefore also investigated the implications of Daphnia swarms at varied abundance and swarm densities for their swimming kinematics and the induced flow field.
The results showed that Daphnia aggregated in swarms with swarm densities of (1.1–2.3) × 10³ L⁻¹, which exceeded the abundance densities (1.7–6.7 L⁻¹) by two orders of magnitude. The estimated swarm volume decreased from 52 cm³ to 6.5 cm³, and the mean neighbouring distance dropped from 9.9 to 6.4 body lengths. The findings of this work also showed that the mean swimming trajectories were primarily horizontal concentric circles around the light source. Mean flow speeds were found to be one order of magnitude lower than the corresponding swimming speeds of Daphnia. Furthermore, this study provided evidence that the flow fields produced by swarming Daphnia differed considerably between unidirectional vortex swarming at low abundances and bidirectional swimming at high abundances.
Leaf litter breakdown is a fundamental process in aquatic ecosystems, being mainly mediated by decomposer-detritivore systems that are composed of microbial decomposers and leaf-shredding, detritivorous invertebrates. The ecological integrity of these systems can, however, be disturbed by chemical stressors, among other factors. Fungicides might pose a particular risk as they can have negative effects on the involved microbial decomposers but may also affect shredders via both waterborne toxicity and their diet; the latter through toxic effects of dietary exposure resulting from fungicides' accumulation on leaf material, and by negatively affecting fungal leaf decomposers, on which shredders' nutrition heavily relies. The primary aim of this thesis was therefore to provide an in-depth assessment of the ecotoxicological implications of fungicides in a model decomposer-detritivore system using a tiered experimental approach to investigate (1) waterborne toxicity in a model shredder, i.e., Gammarus fossarum, (2) structural and functional implications in leaf-associated microbial communities, and (3) the relative importance of waterborne and diet-related effects for the model shredder.
Additionally, knowledge gaps were addressed relating to potential differences in the ecotoxicological impact of inorganic fungicides (which are also authorized for organic farming in large parts of the world) and organic fungicides, the mixture toxicity of these substances, the field relevance of their effects, and the appropriateness of current environmental risk assessment (ERA).
In the course of this thesis, major differences in the effects of inorganic and organic fungicides on the model decomposer-detritivore system were uncovered; e.g., the palatability of leaves for G. fossarum was increased by inorganic fungicides but deteriorated by organic substances. Furthermore, non-additive action of fungicides was observed, rendering mixture effects of these substances hard to predict. While the relative importance of the waterborne and diet-related effect pathways for the model shredder seems to depend on the fungicide group and the exposure concentration, it was demonstrated that neither pathway may be ignored, owing to their additive action. Finally, it was shown that effects can be expected at field-relevant fungicide levels and that current ERA may provide insufficient protection for decomposer-detritivore systems. To safeguard aquatic ecosystem functioning, this thesis thus recommends including leaf-associated microbial communities and long-term feeding studies using detritus feeders in ERA testing schemes, and it identifies several knowledge gaps whose filling seems mandatory in order to develop further reasonable refinements for fungicide ERA.