The present work scientifically investigates the thermochemical interaction behavior of various magnesia-carbon materials as a function of different influencing variables. The experimental work focuses on thermoanalytical experiments, microstructural investigations of the magnesia-carbon samples, and thermodynamic calculations and evaluations by CAT (Computer Aided Thermochemistry) using the FactSage software package.
The first topic of this work is the investigation of the influence of the mineralogical secondary phases merwinite (C3MS2), monticellite (CMS), and belite (C2S) contained in the raw material magnesia on the carbothermally induced wear of the MgO-C material. For the measurement series, the secondary phases were specially synthesized and used to produce MgO-C secondary-phase model materials. The secondary phase monticellite is unstable against carbothermic reduction: it is reduced by carbon in the MgO-C microstructure, resulting in an increased weight loss of the sample material. Merwinite is also reduced at T = 1600 °C, but this does not increase the weight loss. Belite is stable against carbothermic reduction in the MgO-C microstructure.
A further focus of the work was the investigation of the influence of the classical antioxidant aluminium on the thermochemical stability of MgO-C. At low oxygen partial pressures, the aluminium metal, or the aluminium already carbidized to Al4C3, can react with the brick's own periclase to form Mg(g), which results in increased weight loss. Even after oxidation to Al2O3 or spinel, however, aluminium is present in significant amounts in the gas phase as Al(g) and Al2O(g) and furthermore attacks the secondary phases, which likewise leads to a measurable weight loss.
The third focus of the work was the investigation of the influence of ambient pressure on the carbothermic reduction of MgO. The results show that pressure affects the carbothermic reduction of MgO in several ways. First, a decreasing ambient pressure accelerates the carbothermic reduction by shifting the thermodynamic equilibrium toward the product side. Second, it ensures a faster removal of the product gases from the reaction site and thus prevents the establishment of a local equilibrium in the microstructure. A third effect is the carbon oxidation by ambient oxygen, which intensifies with increasing pressure, since the amount of oxygen in the surroundings of the MgO-C material is determined by the ambient pressure. For the rate of thermochemical wear of magnesia-carbon materials, which is always a combination of carbon oxidation and carbothermic reduction, this means that it is influenced by these two reactions to different extents depending on the ambient pressure.
Organic substances play an essential role in the formation of stable soil structures. In this context, their physico-chemical properties, their interactions with mineral soil constituents, and soil-water interactions are particularly important. However, the underlying mechanisms contributing to soil particle cementation by swollen organic substances (hydrogels) remain unclear. Up to now, no mechanistic model is available which explains the mechanisms of interparticulate hydrogel swelling and its contribution to soil-water interactions and soil structural stability. This mainly results from the lack of appropriate testing methods to study hydrogel swelling in soil, as well as from the difficulties of adapting available methods to the soil/hydrogel system.
In this thesis, 1H nuclear magnetic resonance (NMR) relaxometry was combined with various soil micro- and macrostructural stability testing methods in order to identify the contribution of hydrogel swelling-induced soil-water interactions to the structural stability of water-saturated and unsaturated soils. In the first part, the potentials and limitations of 1H NMR relaxometry for elucidating soil structural stabilization mechanisms and various water populations were investigated. In the second part, 1H NMR relaxometry was combined with rheological measurements of soil to assess, in an isolated manner, the contribution of interparticulate hydrogel swelling and various polymer-clay interactions to soil-water interactions and soil structural stability. Finally, the effects of various organic and mineral soil fractions on soil-water interactions and soil structural stability were assessed in more detail for a natural, agriculturally cultivated soil by soil density fractionation and on the basis of the experience gained from the previous experiments.
The increasing experimental complexity over the course of this thesis made it possible to link the physico-chemical properties of interparticulate hydrogel structures with soil structural stability on various scales. The established mechanistic model explains the contribution of interparticulate hydrogels to the structural stability of water-saturated and unsaturated soils: While swollen clay particles reduce soil structural stability by acting as a lubricant between soil particles, interparticulate hydrogel structures increase soil structural stability by forming a flexible polymeric network which interconnects mineral particles more effectively than soil pore or capillary water. It was apparent that soil structural stability increases with increasing viscosity of the interparticulate hydrogel, depending on incubation time, soil texture, soil solution composition, and external factors such as moisture dynamics and agricultural management practices. The stabilizing effect of interparticulate hydrogel structures further increases in the presence of clay particles, which is attributed to additional polymer-clay interactions and the incorporation of clay particles into the three-dimensional interparticulate hydrogel network. Furthermore, the simultaneous swelling of clay particles and hydrogel structures results in competition for water and thus in a mutual restriction of their swelling in the interparticle space. Thus, polymer-clay interactions not only increase the viscosity of the interparticulate hydrogel, and thus its ability to stabilize soil structures, but also reduce the swelling of clay particles and consequently their negative effects on soil structural stability. Knowledge of these underlying mechanisms deepens our understanding of the formation of stable soil structures and enables appropriate management practices to be adopted in order to maintain a sustainable soil structure.
The additionally outlined limitations and challenges of the mechanistic model point to areas with potential for optimization and further research, respectively.
The STOR project aims at the development of a scientific component system employing models and knowledge for object recognition in images. This interim report elaborates on the requirements for such a component system, structures the application area by identifying a large set of basic operations, and shows how a set of appropriate data structures and components can be derived. A small case study exemplifies the approach.
Based on dual process models of information processing, the present research addressed how explicit disgust sensitivity is re-adapted according to implicit disgust sensitivity via self-perception of automatic behavioral cues. Contrary to preceding studies (Hofmann, Gschwendner, & Schmitt, 2009), which concluded that there was a "blind spot" for self- but not for observer perception of automatic behavioral cues, in the present research a re-adaptation process was found for both self-perceivers and observers. In Study 1 (N = 75), the predictive validity of an indirect disgust sensitivity measure was tested with a double-dissociation strategy. Study 2 (N = 117) reinvestigated the hypothesis that self-perception of automatic behavioral cues, predicted by an indirect disgust sensitivity measure, leads to a re-adaptation of explicit disgust sensitivity measures. Using a different approach from Hofmann et al. (2009), the self-perception procedure was modified by (a) feeding back the behavior several times while a small number of cues had to be rated for each feedback condition, (b) using disgust sensitivity as a domain with clearly unequivocal cues of automatic behavior (facial expression, body movements) and describing these cues unambiguously, and (c) using a specific explicit disgust sensitivity measure in addition to a general explicit disgust sensitivity measure. In Study 3 (N = 130), the findings of Study 2 were replicated, and display rules and need for closure were additionally investigated as moderators of predictive validity and cue utilization. The moderator effects suggest that both displaying a disgusted facial expression and self-perception of one's own disgusted facial expression are subject to a self-serving bias, indicating that facial expression may not be an automatic behavior. Practical implications and implications for future research are discussed.
The present work investigates the wetting characteristics of soils with regard to their dependence on environmental parameters such as water content (WC), pH, drying temperature, and wetting temperature of wettable and repellent soils from two contrasting anthropogenic sites, the former sewage disposal field Berlin-Buch and the inner-city park Berlin-Tiergarten. The aim of this thesis is to deepen the understanding of the processes and mechanisms leading to changes in soil water repellency. This helps to gain further insight into the behaviour of soil organic matter (SOM) and to identify ways to prevent or reduce the negative effects of soil water repellency (SWR). The first focus of this work is to determine whether chemical reactions are required for wetting repellent samples. This hypothesis was tested via the time and temperature dependence of sessile drop spreading on wettable and repellent samples. Additionally, diffuse reflectance infrared Fourier transform (DRIFT) spectroscopy was used to determine whether various drying regimes cause changes in the relative abundance of hydrophobic and hydrophilic functional groups in the outer layer of soil particles and whether these changes can be correlated with water content and the degree of SWR. Finally, by artificially altering the pH in dried samples through the application of acidic and alkaline reagents in a gaseous state, the influence of pH alone on the degree of SWR was investigated separately from the influence of changes in moisture status. The investigation of the two locations, Buch and Tiergarten, each exceptionally different in the nature of their respective wetting properties, leads to new insights into the varied manifestations of SWR.
The temperature, water content, and pH dependency of SWR at the two contrasting sites resulted in a hypothetical model of the nature of repellency for each site, which provides an explanation for most of the observations made in this and earlier studies: At the Tiergarten site, wetting characteristics are most likely determined by a micelle-like arrangement of amphiphiles, which depends on the concentration of water-soluble amphiphilic substances, pH, and ionic strength in the soil solution. At low pH and high ionic strength, repulsion forces between hydrophilic charged groups are minimized, allowing their aggregation with outward-oriented hydrophobic molecule moieties. At high pH and low ionic strength, stronger repulsion forces between hydrophilic functional groups lead to an aggregation of hydrophobic groups during drying, which results in a layer with outward-oriented hydrophilic moieties on the soil organic matter surface, leading to enhanced wettability. For samples from the Buch site, chemical reactions are necessary for the wetting process. The strong dependence of SWR on water content indicates that hydrolysis-condensation reactions are the controlling mechanisms. Since acid-catalyzed hydrolysis is an equilibrium reaction dependent on water content, an excess of water favours hydrolysis, leading to an increasing number of hydrophilic functional groups. In contrast, water deficiency favours condensation reactions, leading to a reduction of hydrophilic functional groups and thus a reduction of wettability. The results of the present investigation and their comparison with earlier investigations clearly show that SWR is subject to numerous antagonistically and synergistically interacting environmental factors.
The degree of influence which a single factor exerts on SWR is site-specific; e.g., it depends on the specific characteristics of the mineral constituents and of the SOM, which in turn are influenced by climate, soil texture, topography, vegetation, and the former and current use of the respective site.
Larvae of Cx. pipiens co-occurred with Cladocera, but the latter became established with a delay. Biotope structure influenced the time of species occurrence: ponds in reed-covered wetlands favoured crustacean development, while ponds in grassland biotopes favoured colonization by mosquito larvae. The mechanisms driving the negative effect of crustaceans on mosquito larvae were investigated in an experiment under artificial conditions. Crustacean communities were found to reduce both oviposition and larval development of Cx. pipiens. Crustacean communities of high taxa diversity, including both predatory and competing crustaceans, were more effective than crustacean communities dominated by single taxa. The presence of crustacean communities characterised by high taxa diversity increased the sensitivity of Cx. pipiens larvae towards Bti and prolonged the time to recolonization. In a final step, the combined approach, using Bti and crustaceans, was evaluated under field conditions. The joint application of Bti and crustaceans was found to reduce mosquito larval populations over the whole observation period, while a single application of Bti caused only a short-term reduction of mosquito larvae. A single application of crustaceans had no significant effect, because high abundances of previously established mosquito larvae impeded the propagation of crustaceans. Under the combined treatment, mosquito larvae were reduced by the Bti application, and hence crustaceans were able to proliferate without disturbance by interspecific competition. In conclusion, natural competitors were found to have a strong negative impact on mosquito larval populations. However, a time span of about two weeks has to be bridged before crustacean communities reach a level sufficient for mosquito control. The results of a combined approach, complementing the short-term effect of the biological insecticide Bti with the long-term effect of crustaceans, were promising.
Using natural competitors within an integrated control strategy could be an important tool for an effective, environmentally friendly and sustainable mosquito management.
The role of alternative resources for pollinators and aphid predators in agricultural landscapes
(2021)
The worldwide decline of insects is often associated with the loss of natural and semi-natural habitat caused by intensified land use. Many insects provide important ecosystem services to agriculture, such as pest control or pollination. To efficiently promote insects in the remaining semi-natural habitat, we need precise knowledge of their requirements for non-crop habitat. This thesis focuses on identifying the most important semi-natural habitats (forest edges, grasslands, and semi-open habitats) for pollinators and natural enemies of crop pests with respect to their food resource requirements. Special attention is given to floral resources and their spatio-temporal distribution in agricultural landscapes.
Floral resource maps may come closer to characterizing landscapes the way insects experience them than classical habitat maps do. The performance of the two map types was compared in predicting wild bees and natural enemies that consume nectar and pollen, identifying habitats of special importance in the process. For wild bees, the influence of spatio-temporal floral resource availability was analysed, as well as the habitat preferences of specific groups of bees. Understanding the dietary needs of natural enemies of crop pests requires additional knowledge of prey use. To this end, ladybird gut contents were analysed by means of high-throughput sequencing to gain insight into aphid prey use.
Results showed that wild bees were predicted better by floral resource maps than by classical habitat maps. Forest edge area, as well as floral resources in forest edges, had positive effects on the abundance and diversity of rare bees and important crop pollinators. Similar patterns held for grassland diversity. Early floral resources in particular seemed to have positive effects on wild bees. Crops and fruit trees produced a resource pulse in April that exceeded the floral resource availability of May and June tenfold. Most floral resources in forest edges appeared early in the season, with the highest floral density per area. Grasslands provided the lowest amount of floral resources but the highest diversity, which was evenly distributed over the season.
Despite natural enemies' need for floral resources, classical habitat maps performed better at predicting natural enemies of crop pests than floral resource maps. Classical habitat maps revealed a positive effect of forest edge habitat on the abundance of pest enemies, which translated into improved aphid control. Results from the gut content analysis revealed high proportions of pest aphid species and nettle aphids, as well as a broader insight into the prey spectra of ladybirds collected from sticky traps compared to individuals collected by hand. The aphid-specific primer designed for this purpose will be helpful for identifying aphid consumption by ladybirds in future studies.
The findings of this thesis show the potential of floral resource maps for understanding the interactions of wild bees with the landscape, but also indicate that natural enemies are limited by other resources. I would like to highlight the positive effects of forest edges on different groups of bees as well as on natural enemies and their contribution to pest control.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition is strictly not enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit for simulation is the possibility to model (and observe) the causes and effects of non-events, e.g. when something has broken down.
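The duality described above can be sketched in a few lines. The following toy model is illustrative only (class and node names are my own, not from the paper): a transition carrying a t-token is strictly disabled regardless of p-tokens, and a firing place moves a t-token between adjacent transitions.

```python
# Minimal sketch of a dual ("t-token") Petri net. Names are illustrative.

class DualPetriNet:
    def __init__(self):
        self.p_tokens = {}     # place -> number of ordinary p-tokens
        self.t_tokens = set()  # transitions currently carrying a t-token
        self.pre = {}          # transition -> list of input places

    def enabled(self, t):
        # Classical enabling (p-tokens on all input places) AND no t-token:
        # a marked transition is strictly not enabled.
        return (all(self.p_tokens.get(p, 0) > 0 for p in self.pre[t])
                and t not in self.t_tokens)

    def fire_place(self, p, src_t, dst_t):
        # Dually, a firing place moves a t-token between its adjacent
        # transitions, modelling a flow of non-events.
        if src_t in self.t_tokens:
            self.t_tokens.remove(src_t)
            self.t_tokens.add(dst_t)

net = DualPetriNet()
net.pre = {"t1": ["p1"]}
net.p_tokens = {"p1": 1}
net.t_tokens = {"t1"}             # t1 is "broken": marked with a t-token
print(net.enabled("t1"))          # False, despite a p-token on p1
net.fire_place("p1", "t1", "t2")  # place p1 moves the t-token to t2
print(net.enabled("t1"))          # True
```

The final state of the sketch shows the "breakdown" (the t-token) propagating away from t1, after which t1 becomes enabled again.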
The following thesis analyses the functionality and programming capabilities of compute shaders. For this purpose, chapter 2 gives an introduction to compute shaders by showing how they work and how they can be programmed. In addition, the interaction of compute shaders and OpenGL 4.3 is shown through two introductory examples. Chapter 3 describes an N-body simulation that has been implemented in order to show the computational power of compute shaders and the use of shared memory. Then it is shown in chapter 4 how compute shaders can be used for physical simulations and where problems may arise. In chapter 5, a specially conceived and implemented algorithm for detecting lines in images is described and then compared with the Hough transform. Lastly, a final conclusion is drawn in chapter 6.
With the ongoing process of building business networks in today's economy, business-to-business integration (B2B integration) has become a strategic tool for utilizing and optimizing information exchange between business partners. Industry and academia have made remarkable progress in implementing and conceptualizing different kinds of electronic inter-company relationships in recent years. Nevertheless, academic findings generally focus exclusively on certain aspects of the research object, e.g. document standards, process integration, or other descriptive criteria. Without a common framework, these results stay unrelated and their mutual impact on each other remains largely unexplained. In this paper we explore motivational factors of B2B integration in practice. In a research project using a uniform taxonomy (eXperience methodology), we classified real-world B2B integration projects from a pool of over 400 case studies using a pre-developed framework for integration scenarios. The result of our partly exploratory research shows the influence of a company's role in the supply chain and its motive to invest in a B2B solution.
The model evolution calculus
(2004)
The DPLL procedure is the basis of some of the most successful propositional satisfiability solvers to date. Although originally devised as a proof procedure for first-order logic, it has been used almost exclusively for propositional logic so far because of its highly inefficient treatment of quantifiers, based on instantiation into ground formulas. The recent FDPLL calculus by Baumgartner was the first successful attempt to lift the procedure to the first-order level without resorting to ground instantiations. FDPLL lifts to the first-order case the core of the DPLL procedure, the splitting rule, but ignores other aspects of the procedure that, although not necessary for completeness, are crucial for its effectiveness in practice. In this paper, we present a new calculus loosely based on FDPLL that lifts these aspects as well. In addition to being a more faithful lifting of the DPLL procedure, the new calculus contains a more systematic treatment of universal literals, one of FDPLL's optimizations, and so has the potential of leading to much faster implementations.
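The propositional core that the calculus lifts can be sketched briefly. The following toy solver is not the paper's calculus, just a minimal illustration of the splitting rule together with unit propagation; literals are encoded as signed integers, an encoding chosen here for brevity.

```python
# Minimal propositional DPLL: unit propagation + splitting. Illustrative only.

def simplify(clauses, lit):
    # Assume lit is true: drop satisfied clauses, shorten the others.
    out = []
    for c in clauses:
        if lit in c:
            continue               # clause satisfied
        if -lit in c:
            c = [l for l in c if l != -lit]
            if not c:
                return None        # empty clause: conflict
        out.append(c)
    return out

def dpll(clauses):
    # Unit propagation: repeatedly commit to literals in unit clauses.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = simplify(clauses, unit)
        if clauses is None:
            return False
    if not clauses:
        return True                # all clauses satisfied
    # Splitting rule: branch on a literal and try both truth values.
    lit = clauses[0][0]
    left = simplify(clauses, lit)
    if left is not None and dpll(left):
        return True
    right = simplify(clauses, -lit)
    return right is not None and dpll(right)

# (p or q) and (not p or q) and (not q or r): satisfiable
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # True
# p and not p: unsatisfiable
print(dpll([[1], [-1]]))                 # False
```

FDPLL and the model evolution calculus lift exactly this splitting step to first-order literals with variables, avoiding the ground instantiation the paragraph above criticizes.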
Ray tracing acceleration through dedicated data structures has long been an important topic in computer graphics. In general, two different approaches are proposed: spatial and directional acceleration structures. The thesis at hand presents an innovative combined approach of these two areas, which enables a further acceleration of the tracing process of rays. State-of-the-art spatial data structures are used as base structures and enhanced by precomputed directional visibility information based on a sophisticated abstraction concept of shafts within an original structure, the Line Space.
In the course of the work, novel approaches for the precomputed visibility information are proposed: a binary value that indicates whether a shaft is empty or non-empty as well as a single candidate approximating the actual surface as a representative candidate. It is shown how the binary value is used in a simple but effective empty space skipping technique, which allows a performance gain in ray tracing of up to 40% compared to the pure base data structure, regardless of the spatial structure that is actually used. In addition, it is shown that this binary visibility information provides a fast technique for calculating soft shadows and ambient occlusion based on blocker approximations. Although the results contain a certain inaccuracy error, which is also presented and discussed, it is shown that a further tracing acceleration of up to 300% compared to the base structure is achieved. As an extension of this approach, the representative candidate precomputation is demonstrated, which is used to accelerate the indirect lighting computation, resulting in a significant performance gain at the expense of image errors. Finally, techniques based on two-stage structures and a usage heuristic are proposed and evaluated. These reduce memory consumption and approximation errors while maintaining the performance gain and also enabling further possibilities with object instancing and rigid transformations.
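The binary-visibility idea can be illustrated with a toy model. The sketch below is not the thesis's Line Space structure; grid size, geometry, and all names are my own assumptions. It precomputes one bit per cell-pair "shaft" (empty or non-empty) and consults it before any traversal, so rays through empty shafts are skipped entirely:

```python
# Toy sketch of binary "shaft" visibility for empty-space skipping.
import itertools

GRID = 8
occupied = {(5, 5), (6, 2)}  # cells containing geometry (illustrative scene)

def cells_on_segment(a, b, steps=64):
    # Conservatively rasterize the segment between the centres of cells a, b.
    cells = set()
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        cells.add((int(round(x)), int(round(y))))
    return cells

# Precompute: one bit per unordered (cell, cell) shaft - empty or non-empty.
shaft_empty = {}
cells = list(itertools.product(range(GRID), repeat=2))
for a, b in itertools.combinations(cells, 2):
    shaft_empty[(a, b)] = occupied.isdisjoint(cells_on_segment(a, b))

def shaft_is_empty(a, b):
    return shaft_empty.get((a, b)) or shaft_empty.get((b, a), False)

# A ray from (0,0) towards (7,7) crosses the occupied cell (5,5): the shaft
# is non-empty, so a full traversal of the base structure is still needed.
print(shaft_is_empty((0, 0), (7, 7)))  # False
# A ray from (0,0) towards (7,0) sees an empty shaft and is skipped.
print(shaft_is_empty((0, 0), (7, 0)))  # True
```

The trade-off the thesis measures is visible even here: the precomputed table costs memory quadratic in the number of cells, in exchange for answering many visibility queries with a single lookup.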
All performance and memory values as well as the approximation errors are measured, presented, and discussed. Overall, the Line Space is shown to yield a considerable improvement in ray tracing performance at the cost of higher memory consumption and possible approximation errors. The presented findings thus demonstrate the capability of the combined approach and open up further possibilities for future work.
This research examines information audit methodologies and information capturing methods for enterprise social software, which are an elementary part of the audit process. Information auditing lacks a standardized definition and methodology because the scope of the audit process is diversified and depends on the organization undertaking the audit. The benefits of information auditing, and the potential challenges of Enterprise 2.0 that the audit can overcome, are comprehensive and provide a major incentive for managers to conduct an audit. Information asset registers, as a starting point for information auditing, do not specifically focus on social software assets. This research project therefore combines asset registers from different areas to create a new register suitable for the requirements of Enterprise 2.0. The necessary adaptations caused by the new character of the assets are minor. The case study applying the asset register for the first time, however, reveals several problematic areas for information auditors completing the register. Rounding off the thesis, a template is developed for setting up new workspaces on enterprise social software systems with appropriate metadata, taking into account the meaningful metadata identified in the asset register.
Grassland management has been increasingly intensified over the centuries since mankind started to control and modify the landscape. Species communities have always been shaped alongside management changes, leading to huge alterations in species richness and diversity, up to the point where land use intensity exceeded a critical threshold; since then, biodiversity has been increasingly lost. Today, global biodiversity, and grassland biodiversity in particular, is pushed beyond its boundaries. Policymakers and conservationists seek management options which fulfill the requirements of agronomic interests as well as biodiversity conservation, alongside the maintenance of ecosystem processes. However, there is and will always be a trade-off.
Earlier in history, the natural conditions of a landscape mainly determined regionally adapted land use. These regional adaptations shaped islands for many specialist species, and thus diverse species communities, favoring the establishment of a high β-diversity. With rising food demand, these regional and traditional management regimes became widely unprofitable, and the invention of mineral fertilizers ultimately led to a wide homogenization of grassland management and, consequently, the loss of biotic heterogeneity. In the wake of the green revolution, this immediate coherence and dependency between grassland biodiversity and traditional land use practices has become increasingly apparent. Indeed, some traditional forms of management, such as meadow irrigation, have been preserved in a few regions and thus give us the opportunity to directly investigate their long-term relevance for species communities and ecosystem processes. Traditional meadow irrigation was a common management practice to improve productivity in lowland, but also alpine, hay meadows throughout Europe until the 20th century. Nowadays, meadow irrigation is practiced only as a relic in a few remnant areas. In parts of the Queichwiesen meadows, flood irrigation goes back to the Middle Ages, which makes them predestined as a model region to study the long- and short-term effects of lowland meadow irrigation on biodiversity and ecosystem processes.
Our study pointed out the conservation value of traditional meadow irrigation for the preservation of local species communities as well as plant diversity at the landscape scale. The structurally more complex irrigated meadows led to the assumption of a higher arthropod diversity (Orthoptera, Carabidae, Araneae), which, however, could not be detected. Irrigated meadows are nevertheless a significant habitat for moisture-dependent arthropod species. In light of the agronomic potential, flood irrigation could be a way to at least reduce fertilizer costs to a certain degree and possibly prevent overfertilization pulses that are hazardous to non-target ecosystems. Still, the re-establishment of flood irrigation in formerly irrigated meadows, or even the establishment of new irrigation systems, needs ecological and economic evaluation depending on regional circumstances and specific species communities, for which this study could serve as a reference point.
Aquatic macrophytes can contribute to the retention of organic contaminants in streams, although knowledge of the dynamics and the interaction of the determining processes is very limited. The objective of the present study was thus to assess how aquatic macrophytes influence the distribution and fate of organic contaminants in small vegetated streams. In a first study performed in vegetated stream mesocosms, the peak reductions of five compounds were significantly higher in four vegetated stream mesocosms than in a stream mesocosm without vegetation. Compound-specific sorption to macrophytes was determined; the mass retention in the vegetated streams, however, did not explain the relationship between the mitigation of contaminant peaks and macrophyte coverage. A subsequent mesocosm study revealed that the mitigation of peak concentrations in the stream mesocosms was governed by two fundamentally different processes: dispersion and sorption. Again, the reductions of the peak concentrations of three different compounds were of the same order of magnitude in a sparsely and a densely vegetated stream mesocosm, respectively, but higher than in an unvegetated stream mesocosm. The peak reduction in the sparsely vegetated stream mesocosm was found to be fostered by longitudinal dispersion as a result of the spatial distribution of the macrophytes in the aqueous phase. The peak reduction attributable to longitudinal dispersion was, however, reduced in the densely vegetated stream mesocosm, which was compensated by compound-specific but time-limited and reversible sorption to macrophytes. The observations on the reversibility of the sorption processes were subsequently confirmed by laboratory experiments. The experiments revealed that sorption to macrophytes leads to compound-specific elimination from the aqueous phase during the presence of transient contaminant peaks in streams.
These sorption processes, however, were found to be fully reversible, resulting in the release of the previously adsorbed compounds once the concentrations in the aqueous phase start to decrease. Nevertheless, the results of the present thesis demonstrate that the processes governing the mitigation of contaminant loads in streams are fundamentally different from those already described for non-flowing systems. In addition, the present thesis provides knowledge on how the interaction of macrophyte-induced processes in streams contributes to mitigating loads of organic contaminants and the related risk for aquatic environments.
Digital transformation is a prevailing trend in the world, especially in dynamic Asia. Vietnam has recorded remarkable changes in its economy as domestic enterprises have made new strides in the digital transformation process. MB Bank, one of the prestigious financial groups in Vietnam, also takes advantage of digital transformation as an opportunity to break through and become a large-scale technology enterprise, improving customer experience, increasing its customer base and customer satisfaction, enhancing competitiveness, and building trust and loyalty among customers. However, MB's transformation process also entails many challenges that require the bank to adopt appropriate policies to handle them. MB Bank can thus be regarded as a typical case study of digital transformation in the banking sector in Vietnam.
The protected areas of Rwanda are facing various challenges resulting from the anthropogenic activities of the surrounding communities, especially in the area adjacent to the isolated Cyamudongo rain forest, which result in climate change, soil degradation, and loss of biodiversity. Therefore, this study aims to broaden current knowledge on the impact of sustainable agroforestry (AF) on the carbon (C) stock and on biodiversity conservation in the surroundings of the Cyamudongo isolated rain forest and the Ruhande Arboretum.
To this end, permanent sample plots (PSPs) were established mainly along four designed transects, each four km long, originating at the boundary of the Cyamudongo isolated rain forest and following the slope gradient from 1,286 to 2,015 m asl. A total of 73 PSPs were established in the Cyamudongo study area, while 3 PSPs were established in the Ruhande AF plot. ArcMap GIS 10.4 was used to design and map the sampling areas, while GPS was used for the localization of collected items. Statistical significance was analyzed with the R software, especially for wood and soil variables, while for biodiversity indicator species, MVSP Software 3.0 was used to determine the Shannon Diversity indices and similarities among species.
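The Shannon Diversity index used for the biodiversity indicator species can be illustrated with a short sketch. This is a generic implementation with hypothetical species counts, not the MVSP 3.0 routine itself:

```python
import math

def shannon_index(counts):
    """Shannon Diversity index H' = -sum(p_i * ln(p_i)) over species
    proportions p_i, computed from raw abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical abundance counts for one plot: four equally abundant species
print(round(shannon_index([10, 10, 10, 10]), 4))  # ln(4) ≈ 1.3863
```

With equal abundances the index reaches its maximum, ln(number of species); dominance by a single species drives it towards zero.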
In this study, I obtained comprehensive results demonstrating that in all study areas, the various AF tree species contribute differently to C stock and C sequestration, and that the amount of C stored and removed from the atmosphere depends on factors such as tree species, plantation density, growth stage or age of establishment, applied management practices, wood specific density (WSD), wood C concentration, and climatic conditions. The estimated quantities of C sequestered by 2-year-old and 34-year-old AF species were 13.11 t C ha-1 yr-1 (equivalent to 48 t CO2 ha-1 yr-1) in Cyamudongo and 6.85 t C ha-1 yr-1 (equivalent to 25.1 t CO2 ha-1 yr-1) in Ruhande, respectively. The estimated quantity of C stored by the Ruhande AF plot is 232.94 t ha-1. In Cyamudongo, the overall C stored by the AF systems was 823 t ha-1, comprising both young tree species established by the Cyamudongo Project (35.84 t ha-1) and AF species that existed before the Project (787.12 t ha-1). In all study areas, Grevillea robusta was found to contribute more to the overall stored C than the other species under study.
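The CO2-equivalent figures above follow from the molar-mass ratio of CO2 to elemental carbon (44.01/12.011 ≈ 3.66), which can be checked directly:

```python
CO2_PER_C = 44.01 / 12.011  # molar mass ratio of CO2 to elemental carbon

def c_to_co2(tonnes_c):
    """Convert tonnes of sequestered carbon to tonnes of CO2 equivalent."""
    return tonnes_c * CO2_PER_C

# Figures reported for the 2-year-old and 34-year-old AF species
print(round(c_to_co2(13.11), 1))  # 48.0 t CO2 ha-1 yr-1
print(round(c_to_co2(6.85), 1))   # 25.1 t CO2 ha-1 yr-1
```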
The tests revealed differences in nutrient contents (C, N, C:N ratio, K, Na, Ca, and Mg) among the various AF tree species of the Cyamudongo and Ruhande study areas. The correlations among the variables of AF tree species in the different study areas varied with tree species, age, stage of growth, and tree shape. Comparing the correlation coefficients of tree variables between young and mature AF tree species showed higher correlation variability for young species than for mature or old species recorded under the different environmental conditions of the Cyamudongo and Ruhande study areas.
The mean soil pH recorded in the Cyamudongo study area is 4.2, which is very strongly acidic. The tests revealed that soil pH, C, C:N ratio, OM, NH4+, NO3-+NO2-, PO43-, and CEC differed significantly (P < 0.05) across soil depths, whereas N did not. The pH, N, C:N ratio, CEC, NH4+, PO43-, and Al3+ showed significant differences across land uses, whereas C and NO3-+NO2- did not show any statistical difference. All tested chemical elements showed statistical differences with respect to altitude ranges. Only NH4+, PO43-, and CEC showed significant differences over time, whereas all other chemical elements did not. The bulk density of the soil differed statistically across land uses and altitude ranges. The soil pH was very strongly correlated with CEC, Mg, and Ca in cropland (CL), whereas it was strongly correlated in both AF and natural forest (NF), except for Mg, which was moderately correlated in AF. Furthermore, its correlation with K was strong in CL and moderate in AF, while it was weak in NF. Finally, the pH correlation with Na was weak in both AF and CL, whereas it was negligible in NF. The overall estimated soil C stock of the study area was 16,848 t ha-1.
The sustainable AF practices significantly changed the frequency of reptiles, amphibians, and flowering plants over time, while no statistical change was observed for ferns. In terms of species richness, 16 flowering plants, 14 ferns, 5 amphibians, and 3 reptiles were recorded and monitored. These findings add to a growing body of literature on the impact of AF on C stock, soil improvement, and biodiversity. It is recommended that further research be undertaken on the contribution of other AF tree species to the C stock in the agricultural landscapes around all protected areas of Rwanda and on their impact on soil and biodiversity.
Pelagic oxyclines, the transition zone between oxygen-rich surface waters and oxygen-depleted deep waters, are a common characteristic of eutrophic lakes during summer stratification. They can have tremendous effects on the biodiversity and ecosystem functioning of lakes and, moreover, are expected to become more frequent and more pronounced as climate warming progresses. On these grounds, this thesis endeavors to advance the understanding of the formation, persistence, and consequences of pelagic oxyclines: we test whether the formation of metalimnetic oxygen minima is intrinsically tied to a locally enhanced oxygen-consuming process, investigate the relative importance of vertical physical oxygen transport and biochemical oxygen consumption for the persistence of pelagic oxyclines, and finally assess their potential consequences for whole-lake cycling. To pursue these objectives, the present thesis relies almost exclusively on in situ measurements. Field campaigns were conducted at three lakes in Germany featuring different types of oxyclines and resolved either a short (hours to days) or a long (weeks to months) time scale. Measurements comprised temperature, current velocity, and concentrations of oxygen and reduced substances at high temporal and vertical resolution. Additionally, vertical transport was estimated by applying the eddy correlation technique within the pelagic region for the first time. The thesis revealed that the formation of metalimnetic oxygen minima does not necessarily depend on locally enhanced oxygen depletion, but can result solely from gradients and curvatures of oxygen concentration and depletion and their relative position to each other. Physical oxygen transport was found to be relevant for oxycline persistence when it considerably postponed anoxia on a long time scale. However, its influence on oxygen dynamics was minor on short time scales, although mixing and transport were highly variable.
Biochemical consumption always dominated the fate of oxygen in pelagic oxyclines. It was primarily determined by the oxidative breakdown of organic matter originating from the epilimnion, whereas in meromictic lakes, the oxidation of reduced substances dominated. Beyond that, the results of the thesis emphasize that pelagic oxyclines can be a hotspot of mineralization and, hence, short-circuit carbon and nutrient cycling in the upper part of the water column. Overall, the present thesis highlights the importance of considering physical transport as well as biochemical cycling in future studies.
Tagging systems are intriguing dynamic systems in which users collaboratively index resources with so-called tags. In order to leverage the full potential of tagging systems, it is important to understand the relationship between the micro-level behavior of the individual users and the macro-level properties of the whole tagging system. In this thesis, we present the Epistemic Dynamic Model, which aims to bridge this gap between micro-level behavior and macro-level properties by developing a theory of tagging systems. The model is based on the assumption that the combined influence of the users' shared background knowledge and the imitation of tag recommendations is sufficient to explain the emergence of the tag frequency distribution and the vocabulary growth in tagging systems. Both macro-level properties of tagging systems are closely related to the emergence of the shared community vocabulary.

With the help of the Epistemic Dynamic Model, we show that the general shape of the tag frequency distribution and of the vocabulary growth has its origin in the shared background knowledge of the users. Tag recommendations can then be used to selectively influence this general shape. In this thesis, we concentrate especially on studying the influence of recommending a set of popular tags. Recommending popular tags adds a feedback mechanism between the vocabularies of individual users that increases the inter-indexer consistency of the tag assignments. How does this influence the indexing quality in a tagging system? For this purpose, we investigate a methodology for measuring the inter-resource consistency of tag assignments. The inter-resource consistency is an indicator of indexing quality that positively correlates with the precision and recall of query results. It measures the degree to which the tag vectors of indexed resources reflect how the users perceive the similarity between resources.
With our model, we argue, and with a user experiment we show, that recommending popular tags decreases the inter-resource consistency in a tagging system. Furthermore, we show that recommending users their own previously used tags helps to increase the inter-resource consistency. Our measure of inter-resource consistency complements existing measures for the evaluation and comparison of tag recommendation algorithms, moving the focus to evaluating their influence on the indexing quality.
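The intuition behind imitation-driven tag dynamics can be conveyed by a minimal simulation. The sketch below is not the Epistemic Dynamic Model itself; it only assumes that each tag assignment either imitates an existing assignment (with probability proportional to tag frequency) or draws a fresh tag from a shared background vocabulary:

```python
import random

def simulate_tagging(n_posts, p_imitate=0.5, vocab_size=1000, seed=42):
    """Minimal imitation sketch: each post either copies an already-used
    tag (proportional to its current frequency) or draws a tag uniformly
    from a shared background vocabulary."""
    rng = random.Random(seed)
    assignments = []  # one entry per tag assignment
    freq = {}
    for _ in range(n_posts):
        if assignments and rng.random() < p_imitate:
            tag = rng.choice(assignments)  # imitation, frequency-proportional
        else:
            tag = f"tag{rng.randrange(vocab_size)}"  # background knowledge
        assignments.append(tag)
        freq[tag] = freq.get(tag, 0) + 1
    return freq

freq = simulate_tagging(5000)
# A few tags accumulate most assignments: a heavy-tailed frequency distribution
print(max(freq.values()), len(freq))
```

The frequency-proportional copying step is what produces the heavy tail; switching off imitation (p_imitate=0) yields a roughly uniform distribution over the background vocabulary instead.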
Technical products have become more than practical tools to us. Mobile phones, for example, are a constant companion in daily life. Besides purely pragmatic tasks, they fulfill psychological needs such as relatedness, stimulation, competence, popularity, or security. Their potential for the mediation of positive experience makes interactive products a rich source of pleasure. Research acknowledged this: in parallel to the hedonic/utilitarian model in consumer research, Human-Computer Interaction (HCI) researchers broadened their focus from mere task-fulfillment (i.e., the pragmatic) to a holistic view, encompassing a product's ability for need-fulfillment and positive experience (i.e., the hedonic). Accordingly, many theoretical models of User Experience (UX) acknowledge both dimensions as equally important determinants of a product's appeal: pragmatic attributes (e.g., usability) as well as hedonic attributes (e.g., beauty). In choice situations, however, people often overemphasize the pragmatic, and fail to acknowledge the hedonic. This phenomenon may be explained by justification. Due to their need for justification, people attend to the justifiability of hedonic and pragmatic attributes rather than to their impact on experience. Given that pragmatic attributes directly contribute to task-fulfillment, they are far easier to justify than hedonic attributes. People may then choose the pragmatic over the hedonic, despite a true preference for the hedonic. This can be considered a dilemma, since people choose what is easy to justify and not what they enjoy the most. The present thesis presents a systematic exploration of the notion of a hedonic dilemma in the context of interactive products.
A first set of four studies explored the assumed phenomenon. Study 1 (N = 422) revealed a reluctance to pay for a hedonic attribute compared to a pragmatic attribute. Study 2 (N = 134) demonstrated that people (secretly) prefer a more hedonic product, but justify their choice by spurious pragmatic advantages. Study 3 (N = 118) confronted participants with a trade-off between hedonic and pragmatic quality. Even though the prospect of receiving a hedonic product was related to more positive affect, participants predominantly chose the pragmatic, especially those with a high need for justification. This correlation between product choice and perceived need for justification lent further support to the notion that justification lies at the heart of the dilemma. Study 4 (N = 125) explored affective consequences and justifications provided for hedonic and pragmatic choice. Data on positive affect suggested a true preference for the hedonic - even among those who chose the pragmatic product.
A second set of three studies tested different ways to reduce the dilemma by manipulating justification. Manipulations referred to the justifiability of attributes as well as the general need for justification. Study 5 (N = 129) enhanced the respective justifiability of hedonic and pragmatic choice by ambiguous product information, which could be interpreted according to latent preferences. As expected, enhanced justifiability led to an increase in hedonic but not in pragmatic choice. Study 6 (N = 178) manipulated the justifiability of hedonic choice through product information provided by a "test report", which suggested hedonic attributes as legitimate. Again, hedonic choice increased with increased justifiability. Study 7 (N = 133) reduced the general need for justification by framing a purchase as gratification. A significant positive effect of the gratification frame on purchase rates occurred for a hedonic but not for a pragmatic product.
Altogether, the present studies revealed a desire for hedonic attributes, even in interactive products, which often are still understood as purely pragmatic "tools". But precisely because of this predominance of pragmatic quality, people may hesitate to give in to their desire for hedonic quality in interactive products - at least, as long as they feel a need for justification. The present findings provide an enhanced understanding of the complex consequences of hedonic and pragmatic attributes, and indicate a general necessity to expand the scope of User Experience research to the moment of product choice. Limitations of the present studies, implications for future research as well as practical implications for design and marketing are discussed.
This study examines the contribution of saving and credit cooperatives (SACCOS) to the improvement of members' socio-economic development in Rwanda, including its opportunities and challenges, with evidence from Umwalimu SACCO in Huye District. Saving and credit cooperatives, or credit unions, have come to be known as a remedy for social ills rooted in poverty because of their efficiency in dispensing loans and credits, enhancing social equality, and reducing poverty among low-income earners. Millions of poor and non-bankable people in developing countries have thus been provided access to formal financial services through saving and credit cooperative programs.
The target population of the study comprised 1,940 members of USACCO, from which a sample of 92 respondents was purposively selected. The study adopted a combination of correlational and descriptive research designs and employed both quantitative and qualitative approaches, using both primary and secondary data. The primary data were collected using a questionnaire and interviews, while the secondary data were collected through documentation techniques, consulting the manual of procedures, the credit policies of USACCO, and its financial reports. The data were analyzed using SPSS version 21 and presented in the form of tables, charts, and graphs designed with SPSS v. 21. The bio-characteristics of the respondents showed that the majority were women (55.4%), that most respondents were aged between 26 and 45 years, and that the majority (77.20%) were married. All respondents (100%) had attended school, the majority of them secondary school with an A2 certificate.
The study revealed that the Umwalimu SACCO services offered to its members have a positive effect on the improvement of members' welfare. USACCO services were found to have slightly affected members' income levels, assets acquired, access to education and medical care, as well as the small income-generating activities established by members in Huye District. The analysis also revealed several variables that have affected USACCO members' socio-economic status: the educational background of a member, the number of dependents, the occupation of a member, the number of loans obtained from USACCO, government programs concerning teachers' welfare, and membership duration all played a very important role in the improvement of teachers' standard of living. All these variables were found to have positive effects on teachers' socio-economic status, except the family size of respondents.
In addition, the findings showed that the majority of respondents did not find opportunities to save with other financial institutions, and other respondents did not have access to loans from other financial institutions due to complicated loan requirements. After joining USACCO, however, their deplorable status changed somewhat, both socially and economically, which contributed to the improvement of their welfare. The study therefore testified that the welfare of USACCO members, in terms of assets acquired and increased income, improved compared to the situation before joining USACCO. The study concludes that the level of improvement in teachers' living conditions depends largely on the level of loans granted by USACCO to teachers and on membership duration; if teachers' loans and savings increase, teachers' wellbeing will also improve. Finally, USACCO's financial services are a veritable instrument for improving the economic and social conditions of teachers. The study recommends that USACCO provide frequent and regular training in business management to its members, which could help members manage their loans well and reduce the loan default cases observed at USACCO. The challenges observed were the lack of physical collateral security required by USACCO, complicated loan requirement terms and conditions, and insufficient training in business management.
The belief in a just world in face of injustice: victim, observer, and perpetrator perspectives
(2021)
Injustice happens every day, whether to us, our neighbors, or people across the world. Yet believing that the world is a fair place helps us cope with this injustice and motivates us to behave fairly. Scholars have found that these functions served by the belief in a just world (BJW) are crucial for maintaining mental health. However, the conditions under which BJW is functional and those under which people give up this belief are not well studied. The current dissertation aims to examine when BJW can be shattered, the role of the external world and other internal resources in the face of injustice, and the role of BJW in predicting corrupt behavior. Three studies were conducted, corresponding to the three parties to injustice: a victim, an observer, and a perpetrator.
Study 1 examined the effects of criminal victimization on BJW and the buffering role of perceptions of justice in the criminal justice process. A cross-sectional study showed that victims of very severe crimes such as domestic violence and human trafficking had lower personal BJW than non-victims and victims of less severe crimes, and that higher informational justice perceptions reduced the effect of victimization on personal BJW. Study 2 aimed to test changes in BJW after observing severe injustice. A longitudinal study showed that after observing school rampage attacks that happened at other schools, the BJW of adolescent participants increased. Moreover, life satisfaction and perceived social support moderated the change in BJW. Study 3 examined the relationship between BJW and corrupt behavior. A cross-sectional study showed that personal BJW can predict bribery behavior.
The findings of the three studies provided evidence that BJW does not function in isolation: the external world and internal resources can reduce the threat that injustice poses to BJW. Since BJW plays an important role in predicting unfair behavior, authorities should aim to maintain the BJW of their citizens.
Due to their confinement to specific host plants or restricted habitat types, Auchenorrhyncha are suitable biological indicators for measuring the quality of chalk grassland under different management practices for nature conservation. In particular, they can be used as a tool to assess the success of restoring chalk grassland on ex-arable land. One objective of this study was to identify the factors which most effectively conserve and enhance the biological diversity of existing chalk grasslands or allow the creation of new areas of such species-rich grassland on ex-arable land. A second objective was to link Auchenorrhyncha communities to the different grassland communities occurring on chalk according to the NVC (National Vegetation Classification). Altogether, 100 chalk grassland and arable reversion sites were sampled between 1998 and 2002. Some of the arable reversion sites had been under certain grazing or mowing regimes for up to ten years by 2002. Vegetation structure and composition were recorded, and Auchenorrhyncha were sampled three times during the summer of each year using a "vortis" suction sampler. Altogether, 110 leafhopper species were recorded during the study. Two of the species, Kelisia occirrega and Psammotettix helvolus, although widespread within the area studied, had not previously been recognized as part of the British fauna. By displaying insect frequency and dominance as is commonly done for vegetation communities, it was possible to classify preferential and differential species of distinct Auchenorrhyncha communities. Linking the entomological data with vegetation communities defined by the NVC showed that different vegetation communities were reflected by distinct Auchenorrhyncha communities. Significant differences were observed down to the level of sub-communities. The data revealed a strong positive relationship between the diversity of leafhopper species and the vegetation height.
There was also a positive correlation between the species richness of Auchenorrhyncha and the diversity of plant species. In that context, it is remarkable that there was no correlation between vegetation height and botanical diversity. There is a substantial decrease in Auchenorrhyncha species richness from unimproved grassland to improved grassland and arable reversion. The decline of typical chalk grassland and general dry grassland species is especially notable. Consequently, the number of stenotopic Auchenorrhyncha species, which are confined to only a few habitat types, is drastically reduced with the improvement of chalk grassland. Improved grassland and arable reversion fields are almost exclusively inhabited by common habitat generalists. The decrease in typical chalk grassland plants due to improvement is mirrored in the decline of Auchenorrhyncha species that rely monophagously or oligophagously on specific host plants. But even where suitable host plants re-colonize arable reversion sites quickly, there is a considerable delay before the leafhoppers follow. This becomes especially obvious with polyphagous leafhoppers like Turrutus socialis or Mocydia crocea, which occur on improved grassland or arable reversion sites only at low frequency and abundance, despite the widespread occurrence or even increased dominance of their host plants. These species can be considered the most suitable indicators for measuring the success or failure of long-term grassland restoration. A time period of ten years is not sufficient to restore species-rich invertebrate communities on grassland, even if the flora indicates an early success.
Entrepreneurship plays a vital role in the scientific literature and in public debates. Especially in these high-tech and digitized times, it happens more and more frequently that young entrepreneurs with a good idea make a breakthrough and build an established company. There is a growing number of start-ups and a general trend towards self-employment. A country's economy depends on young entrepreneurs in order to remain competitive internationally. It follows that young entrepreneurs must be encouraged and supported. This support is provided at various stages of founding a company and through various fields of action, and there are by now many offers for start-up support. These networks address different fields of action along the founding process. However, a structured overview of these networks, by which a young founder could orient himself and easily gain access to the networks' offers, has been missing so far.
This work attempts to present these offers clearly on a map and to categorize and present the networks' commitment in the respective fields of action. In addition to this main objective, the following three key questions are investigated and answered in this work:
1. How can the Entrepreneurship Networks be assigned to the respective fields of action of Entrepreneurship Education?
2. What is the benefit of such a classification for potential entrepreneurs in detail?
3. Are these Entrepreneurship Networks missing an important step? Could they improve their offer? Does the value chain cover every need a young entrepreneur might have?
For this purpose, the respective fields of action of the networks are first separated from each other along the founding process and defined individually. Subsequently, a combination of quantitative and qualitative approaches was used to filter and analyze the contents of the networks' websites. The results of this investigation were transformed into a classification.
The aim of this work is to produce a map that clearly displays the existing networks in the world. The map also contains more detailed information and the classification of the networks into the respective fields of action.
Today’s agriculture relies heavily on pesticides to manage diverse pests and maximise crop yields. Despite elaborate regulation of pesticide use based on a complex environmental risk assessment (ERA) scheme, the widespread use of these biologically active compounds has been shown to be a threat to the environment. For surface waters, pesticide exposure has been observed to exceed safe concentration levels and to negatively impact stream ecology, raising the question of whether current ERA schemes ensure a sustainable use of pesticides. To answer this question, the large-scale “Kleingewässer-Monitoring” (KgM) assessed the occurrence of pesticides and related effects in 124 streams throughout Germany, Central Europe, in 2018 and 2019.
Based on five scientific publications originating from the KgM, this thesis evaluated pesticide exposure in streams, ecological effects and the regulatory implications. More than 1,000 water samples were analysed for over 100 pesticide analytes to characterise occurrence patterns (publication 1). Measured concentrations and effects were used to validate the exposure and effect concentrations predicted in the ERA (publication 2). By jointly analysing real-world pesticide application data and measured pesticide mixtures in streams, the disregard of environmental pesticide mixtures in the ERA was evaluated (publication 3). The toxic potential of mixtures in stream water was additionally investigated using suspect screening for 395 chemicals and a battery of in-vitro bioassays (publication 4). Finally, the results from the KgM stream monitoring were used to assess the capability to identify pesticide risks in governmental monitoring programmes (publication 5).
The results of this thesis reveal the widespread occurrence of pesticides in non-target stream ecosystems. The water samples contained a variety of pesticides occurring in complex mixtures, predominantly in short-term peaks after rainfall events (publications 1 & 4). The respective pesticide concentration maxima were linked to declines in vulnerable invertebrate species and exceeded regulatory acceptable concentrations in about 80% of agricultural streams, while these thresholds were nevertheless estimated to be partly insufficient to protect the invertebrate community (publication 2). The co-occurrence of pesticides in streams led to a risk underestimated by the single-substance-oriented ERA by a factor of about 3.2 in realistic worst-case scenarios, which is further exacerbated by the high frequency at which non-target organisms are exposed to pesticides (publication 3). Stream water samples taken after rainfall caused distinct effects in bioassays that were only explainable to a minor extent by the many analytes, indicating the relevance of unknown chemical or biological mixture components (publication 4). Finally, the regulatory monitoring of surface waters under the Water Framework Directive (WFD) was found to significantly underestimate pesticide risks, as about three quarters of critical pesticides and more than half of the streams at risk were overlooked (publication 5).
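The underestimation of mixture risk can be illustrated with the concentration-addition concept, in which single-substance risk quotients are summed. This is a generic sketch with hypothetical numbers, not the thesis's exact calculation:

```python
def mixture_risk_quotient(concentrations, thresholds):
    """Concentration addition: the mixture risk quotient is the sum of
    single-substance risk quotients, each being the measured concentration
    divided by its regulatory acceptable concentration (RAC)."""
    return sum(c / rac for c, rac in zip(concentrations, thresholds))

# Hypothetical sample: three pesticides, each just below its own RAC
rq = mixture_risk_quotient([0.8, 0.9, 0.7], [1.0, 1.0, 1.0])
print(round(rq, 1))  # 2.4: the mixture exceeds a quotient of 1 although no single substance does
```

A single-substance assessment would pass all three compounds individually, while the summed quotient flags the sample, which is the kind of gap the factor of ~3.2 quantifies.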
Essentially, this thesis provides a new level of validation of the ERA of pesticides in aquatic ecosystems by assessing pesticide occurrence and environmental impacts at a scale unique so far. The overall results demonstrate that the current agricultural use of pesticides leads to significant impacts on stream ecology that go beyond the level tolerated under the ERA. This thesis identified the underestimation of pesticide exposure, the potential insufficiency of regulatory thresholds, and the general inertia of the authorisation process as the main reasons why the ERA fails to meet its objectives. To achieve a sustainable use of pesticides, the thesis proposes substantial refinements of the ERA. Adequate monitoring programmes such as the KgM, which go beyond current government monitoring efforts, will continue to be needed to keep pesticide regulators constantly informed of the validity of their prospective ERA, which will always be subject to uncertainty.
TGraphBrowser
(2010)
This thesis describes the implementation of a web server that enables a browser to display graphs created with the Java Graph Laboratory (JGraLab). The user has the choice between a tabular view and a graphical presentation. In both views it is possible to navigate through the graph. Since graphs with thousands of elements may be confusing for the user, he or she is given the option to filter the displayed vertices and edges by their types. Furthermore, the number of graph elements shown can be limited by means of a GreQL query or by directly entering their ids.
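The type-based filtering described above can be sketched generically. This is illustrative Python, not JGraLab's actual Java API, and the dictionary representation of graph elements is an assumption:

```python
def filter_elements(elements, allowed_types, limit=None):
    """Generic sketch of the browser's filter: keep only graph elements
    whose type is enabled by the user, optionally capped at a limit to
    avoid overwhelming the view."""
    kept = [e for e in elements if e["type"] in allowed_types]
    return kept if limit is None else kept[:limit]

graph = [
    {"id": 1, "type": "Vertex:Package"},
    {"id": 2, "type": "Vertex:Class"},
    {"id": 3, "type": "Edge:Contains"},
]
print(filter_elements(graph, {"Vertex:Class"}))  # keeps only element 2
```

In the actual tool, the same narrowing can alternatively be achieved with a GreQL query or by entering element ids directly.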
To create 3-D surface models of real-world objects, very expensive hardware is often employed, e.g. 3-D laser range scanners. Since these cannot capture grayscale or color information, the object must additionally be photographed and the images registered with the scan in order to reproduce colored structures. In contrast, this thesis develops a method based on a calibrated stereo camera system. From the resulting sequences of two-dimensional stereo images, a textured 3-D mesh can be reconstructed. Compared to using a scanner, this method is less accurate, but cheaper, more space-saving, and faster to deploy. The main focus of the thesis is the fusion of the depth maps and the creation of a textured mesh from them.
Texture-based text detection in digital images using wavelet features and support vector machines
(2010)
In this bachelor thesis a new texture-based approach to the detection of text in digital images is presented. The procedure can essentially be divided into two main tasks: the detection of text blocks and the detection of individual words, whereby the individual words are extracted from the detected text blocks. Roughly, the developed method uses multiple support vector machines, which classify candidate text regions of an image as real text regions using wavelet-based features. The candidate text regions are defined by edge projections with different orientations. The results of the approach are the X/Y coordinates, width, and height of rectangular regions of an image that contain individual words. This knowledge can be processed further, for example by optical character recognition software, to obtain the important and useful text information.
This volume contains those research papers presented at the Second International Conference on Tests and Proofs (TAP 2008) that were not included in the main conference proceedings. TAP was the second conference devoted to the convergence of proofs and tests. It combines ideas from both areas for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. On the surface, the two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of their incompatibility and taking instead the best of what each of these software engineering domains has to offer. The first TAP conference (held at ETH Zurich in February 2007) was an attempt to provide a forum for the cross-fertilization of ideas and approaches from the testing and proving communities. For the 2008 edition we found the Monash University Prato Centre near Florence to be an ideal place providing a stimulating environment. We wish to sincerely thank all the authors who submitted their work for consideration. 
And we would like to thank the Program Committee members as well as additional referees for their great effort and professional work in the review and selection process. Their names are listed on the following pages. In addition to the contributed papers, the program included three excellent keynote talks. We are grateful to Michael Hennell (LDRA Ltd., Cheshire, UK), Orna Kupferman (Hebrew University, Israel), and Elaine Weyuker (AT&T Labs Inc., USA) for accepting the invitation to address the conference. Two very interesting tutorials were part of TAP 2008: "Parameterized Unit Testing with Pex" (J. de Halleux, N. Tillmann) and "Integrating Verification and Testing of Object-Oriented Software" (C. Engel, C. Gladisch, V. Klebanov, and P. Rümmer). We would like to express our thanks to the tutorial presenters for their contribution. It was a team effort that made the conference so successful. We are grateful to the Conference Chair and the Steering Committee members for their support. And we particularly thank Christoph Gladisch, Beate Körner, and Philipp Rümmer for their hard work and help in making the conference a success. In addition, we gratefully acknowledge the generous support of Microsoft Research Redmond, who financed an invited speaker.
While real-time applications used to be executed on highly specialized hardware and individually developed operating systems, nowadays regular off-the-shelf hardware is more often used, with a variant of the Linux kernel running on top.
Within the scope of this thesis, test methods have been developed and implemented as a real-time application to measure several performance properties of the Linux kernel with regard to its real-time capability.
These tests have been run against three different versions of the Linux kernel, and the results of the test series were then compared to each other.
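The kind of latency measurement described above can be sketched in ordinary user-space code. The following Python sketch is hypothetical (the abstract does not detail the actual test suite): it wakes up once per period and records by how much each wake-up overshoots its absolute deadline, the core quantity behind many real-time kernel benchmarks.

```python
import time

def measure_jitter(period_ns=1_000_000, iterations=1000):
    """Wake up once per period and record by how much each wake-up
    overshoots its absolute deadline (the scheduler's wake-up latency)."""
    latencies = []
    deadline = time.monotonic_ns() + period_ns
    for _ in range(iterations):
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)   # sleep until the deadline
        latencies.append(time.monotonic_ns() - deadline)
        deadline += period_ns             # next absolute deadline
    return min(latencies), sum(latencies) // len(latencies), max(latencies)

lo, avg, hi = measure_jitter(iterations=200)
print(f"wake-up latency [ns]: min={lo} avg={avg} max={hi}")
```

On a PREEMPT_RT-patched kernel the maximum latency is typically far lower and far less variable than on a stock kernel, which is exactly the kind of difference such a test series can expose.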
Terrain classification with Markov random fields based on fused camera and laser data
(2011)
A mobile system that has to navigate autonomously in an outdoor environment needs knowledge about its surrounding terrain. Laser range finders, sometimes combined with cameras, are often used to analyse terrain. Several problems, like missing or noisy data, lead to erroneous identification of the environment.
The goal of this work is to add a context-sensitive classification component and data from other sensors to a procedure based on 3-D data obtained by a laser range finder. The first extension consists of a Markov random field, which is used to model the relationships between neighbouring terrain segments and allows a segmentation of the whole terrain.
The second extension fuses laser data with pictures from cameras to obtain additional terrain features.
Safely traversing complex and unstructured environments with autonomous robots has been a problem since the beginnings of robotics and remains a challenge to this day. This study thesis presents three methods based on 3-D laser scans, height variance, principal component analysis (PCA), and depth-image processing, which enable robots to classify the surrounding terrain and assess its traversability, so that safe navigation becomes possible even in areas that cannot be traversed safely with 2-D laser scanners alone. For this purpose, 3-D laser scans are created with a 2-D laser scanner mounted on a servo-based roll-tilt unit, which is simultaneously used for mapping and navigation. The individually recorded 2-D scans are then transformed into a common 3-D coordinate system using the motion model of the roll-tilt unit, and classified with data structures common in 3-D point cloud processing (grids, etc.) and the methods mentioned above. Using servos to move the 2-D scanner also requires calibrating them and analysing their accuracy in order to obtain reliable results and to assess the quality of the 3-D scans. The outcome is three implementations that evolved successively: the height-variance method described was superseded in the course of this study thesis by a PCA-based method, which yields better results, particularly on sloped ground and at low point density. Both methods work reliably but naturally depend strongly on the accuracy of the hardware used to create the scans, which was often responsible for misclassifications.
The depth-image processing developed last aims to detect precipices, and does so reliably provided that the precipice is discernible in the depth image.
This thesis deals with the verbal categories tense and aspect in the context of analysing the English perfect. The underlying notion of time is examined from the viewpoint of etymology and from the viewpoint of fields of knowledge that lie outside the immediate scope of temporal semantics, e.g. mathematics. The category tense is scrutinised by discussing the concept of Reichenbach tense and the concept of correlation (Giering). The starting point of the discussion of the category verbal aspect is the dichotomy perfective vs. imperfective in the Slavic languages. The main part about the perfect is concerned with the possessive perfect as a cross-linguistic phenomenon (including a comparison of the English and the Slavic perfect) and focuses on the usage and the meaning of the English present perfect. There are three appendices which are an integral part of this dissertation. Appendix A deals with the systematization of English verb forms and their graphical representation. Three different visualizations are presented, two of which are original to this paper. Appendix B reproduces the target setting according to which an animated visualization of English infinitives was programmed. Appendix C represents a synopsis of approaches to the English perfect in grammars and textbooks.
Tracking is an integral part of many modern applications, especially in areas like autonomous systems and Augmented Reality. There is a wide array of approaches for performing tracking. One that has become a subject of research only recently is the use of neural networks. In the scope of this master thesis an application is developed which uses such a neural network for the tracking process. This also requires the creation of training data as well as the creation and training of a neural network. Subsequently, the use of neural networks for tracking is analyzed and evaluated. This includes several aspects: the quality of the tracking for different degrees of freedom is examined, as well as the impact of the neural network on the application's performance. Additionally, the amount of required training data, the influence of the network architecture, and the importance of providing depth data as part of the network's input are investigated. This should provide an insight into how relevant this approach could be for adoption in future products.
This bachelor thesis implements a system for camera tracking based on a particle filter. For this purpose, marker tracking is realized and the camera position is calculated from the marker position. The marker is located with a particle filter: possible marker positions, called particles, are simulated and weighted with likelihood functions. The focus lies on the evaluation of different likelihood functions for the particle filter. As part of the implementation, the likelihood functions were implemented in CUDA.
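The weighting-and-resampling loop behind such a filter can be illustrated with a minimal sketch. The following Python code is a 1-D toy example with a Gaussian likelihood (not the thesis's CUDA implementation, and the observation values are hypothetical): it shows how particles are predicted, weighted by a likelihood function, and resampled.

```python
import math
import random

random.seed(0)  # reproducible demo run

def likelihood(particle, observation, sigma=5.0):
    """Gaussian likelihood: how well a hypothesised marker
    position explains the observed marker position."""
    return math.exp(-((particle - observation) ** 2) / (2 * sigma ** 2))

def particle_filter_step(particles, observation, motion_noise=2.0):
    # 1. Predict: diffuse every particle with motion noise.
    moved = [p + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Weight: score each particle with the likelihood function.
    weights = [likelihood(p, observation) for p in moved]
    total = sum(weights) or 1.0          # guard against all-zero weights
    weights = [w / total for w in weights]
    # 3. Resample: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Track a marker drifting from 20 to 25 (hypothetical observations).
particles = [random.uniform(0.0, 100.0) for _ in range(500)]
for obs in (20.0, 22.0, 25.0):
    particles = particle_filter_step(particles, obs)
estimate = sum(particles) / len(particles)
print(f"estimated marker position: {estimate:.1f}")
```

The likelihood function is the natural candidate for GPU offloading, since it is evaluated independently for every particle, which is precisely why a CUDA implementation pays off for large particle counts.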
This thesis deals with the approach, the structure, and the variety of methods in the field of technology forecasting. Furthermore, it gives insights into the environment of Railways Diagnostics and Monitoring Technologies (RDMT), their technology management, and industry-specific requirements for technology forecasting (especially for SMEs). Combining these, a recommendation for the practical use of the research results in companies of this size and industry is developed. To ground these research elements, a literature review was conducted in the fields of innovation management and the railway environment. In addition, an interview with an expert from a medium-sized technology manufacturer in the RDMT industry was performed. The most important finding was that no single method can fulfil the requirements on its own; thus a combination of methods and organizational forms is necessary to achieve effective technology forecasting.
One of the main goals of the artificial intelligence community is to create machines able to reason with dynamically changing knowledge. To achieve this goal, a multitude of different problems have to be solved, many of which have been addressed in the various sub-disciplines of artificial intelligence, like automated reasoning and machine learning. The thesis at hand focuses on the automated reasoning aspects of these problems and addresses two of the obstacles which have to be overcome to reach the aforementioned goal, namely 1. the fact that reasoning in logical knowledge bases is intractable and 2. the fact that applying changes to formalized knowledge can easily introduce inconsistencies, which leads to unwanted results in most scenarios.
To ease the intractability of logical reasoning, I suggest adapting a technique called knowledge compilation, known from propositional logic, to description logic knowledge bases. The basic idea of this technique is to compile the given knowledge base into a normal form which allows queries to be answered efficiently. This compilation step is very expensive, but it has to be performed only once, and as soon as its result is used to answer many queries, the expensive compilation step becomes worthwhile. In the thesis at hand, I develop a normal form, called the linkless normal form, suitable for knowledge compilation of description logic knowledge bases. From a computational point of view, the linkless normal form has very nice properties, which are introduced in this thesis.
For the second problem, I focus on changes occurring on the instance level of description logic knowledge bases. I introduce three change operators interesting for these knowledge bases, namely deletion and insertion of assertions as well as repair of inconsistent instance bases. These change operators are defined such that in all three cases, the resulting knowledge base is ensured to be consistent and changes performed to the knowledge base are minimal. This allows us to preserve as much of the original knowledge base as possible. Furthermore, I show how these changes can be applied by using a transformation of the knowledge base.
For both issues I suggest adapting techniques successfully used in other logics to obtain promising methods for description logic knowledge bases.
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers’ everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results: statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose heterogeneous analyses, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute these improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve, in a theoretically founded way, the scalability of functions that abstract over repositories with high revision counts. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method by benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture.
We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them and making the implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to complement their work with ‘fake’ simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats, and the data analysis methodologies used. We show that this kind of simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
On the content:
The dissertation is embedded in an empirical-qualitative research setting in which team teaching as a teaching method forms the focus of the research. The empirical data are collected, on the one hand, through participant observation in the research field and, on the other hand, through a group interview conducted with teachers who have gathered experience with team teaching over a certain observation period. The field research underlying this work uses "theoretical sampling", which was established by Glaser and Strauss in 1967 in the context of an empirical study.
On the structure:
The introduction describes the research interest, the state of research, and the central research question of the dissertation. The second chapter presents the theoretical part, with a differentiated definition of team teaching as a teaching method. A general definition of teaching follows, so that the specific characteristics of team teaching can be worked out in detail. The empirical data are collected through participant observation in upper-level classes of a special-needs school with a focus on learning and social-emotional development. The theoretical part therefore presents this special-needs focus and the related curricular aspects. The significance of team teaching for the classroom setting in a special-needs school with a social-emotional focus is also addressed in the theoretical part.
The third chapter describes the qualitative and quantitative survey instruments and formulates hypotheses. The fourth chapter presents the methods for preparing the data material. The interpretation of the research results follows in the fifth chapter. The qualitative and quantitative evaluation of the empirical data is presented in the sixth chapter. The dissertation closes in the seventh chapter with a conclusion and an outlook.
Previous research revealed that teachers hold beliefs about gifted students combining high intellectual ability with deficits in non-cognitive domains, outlined in the so-called disharmony hypothesis. Since teachers’ beliefs about giftedness can influence which students they identify as gifted, the empirical investigation of beliefs is of great practical relevance. This dissertation comprises three research articles that investigated teacher beliefs about gifted students’ characteristics in samples of pre-service teachers using an experimental vignette approach. Chapter I starts with a general introduction into beliefs and presents the research aims of the present dissertation. The first article (Chapter II) focused on the interaction of beliefs about giftedness and gender in a sample of Australian pre-service teachers and tested whether social desirability occurred when using the vignette design. Besides evidence for beliefs in line with the disharmony hypothesis, results revealed typical gender stereotypes. However, beliefs about giftedness appeared not to be gender specific and thus to be similar for gifted girls and boys. The vignette approach was found to be an adequate design for assessing teacher beliefs. The second article (Chapter III) investigated teacher beliefs and their relationship to motivational orientations for teaching gifted students in a cross-country sample of German and Australian pre-service teachers. Motivational orientations comprise cognitive components (self-efficacy) and affective components (enthusiasm). Findings revealed beliefs in the sense of the disharmony hypothesis for pre-service teachers from both countries. Giftedness, when paired with beliefs about high maladjustment, was found to be negatively related to teachers’ self-efficacy for teaching gifted students. The third article (Chapter IV) examined the role of teachers’ belief in a just world for the formation of beliefs using a sample of Belgian pre-service teachers.
It was found that the stronger pre-service teachers’ belief in a just world was, the more they perceived gifted students’ high intellectual ability as unfair and thus neutralized that injustice by devaluing students’ non-cognitive abilities. In a general discussion (Chapter V), the findings of the three articles are combined and reflected upon. Taken together, the present dissertation showed that teacher beliefs about gifted students’ characteristics are not gender specific, are generalizable across countries, are negatively related to teacher motivation, and can be driven by fairness beliefs.
The diversity within amphibian communities in cultivated areas in Rwanda and within two selected, taxonomically challenging groups, the genera Ptychadena and Hyperolius, were investigated in this thesis. The amphibian community of an agricultural wetland near Butare in southern Rwanda comprised 15 anuran species. Rarefaction and jackknife analyses corroborated that the complete current species richness of the assemblage had been recorded, and the results of acoustic niche analysis suggested species saturation of the community. Surveys at many other Rwandan localities showed that the species recorded in Butare are widespread in cultivated and pristine wetlands. The species were readily distinguishable using morphological, bioacoustic, and molecular (DNA barcoding) features, but only eight of the 15 species could be assigned unambiguously to nominal species. The remaining species represented undescribed or currently unrecognized taxa, including three species of Hyperolius, two Phrynobatrachus species, one Ptychadena species, and one species of Amietia. The diversity of the Ridged Frogs in Rwanda was investigated in two studies (Chapters III and IV). Three species of Ptychadena were recorded in wetlands in the catchment of the Nile. They can be distinguished by morphological characters (morphometrics and qualitative features) as well as by their advertisement calls and genetics. The Rwandan species of the P. mascareniensis group was shown to differ from the topotypic population as well as from other genetic lineages in sub-Saharan Africa, and an old available name, P. nilotica, was resurrected from synonymy for this lineage. Two further Ptychadena species were identified among voucher specimens from Rwanda deposited in the collection of the RMCA, P. chrysogaster and P. uzungwensis. Morphologically they can be unambiguously distinguished from each other and the three other Rwandan species.
A key based on qualitative morphological characters was developed, which allows unequivocal identification of specimens of all species that have been recorded from Rwanda. DNA was isolated from a Rwandan voucher specimen of P. chrysogaster, and the genetic analysis corroborated the species' distinct status.
A species of Hyperolius collected in the Nyungwe National Park was compared to all other Rwandan species of the genus and to morphologically or genetically similar species from neighbouring countries. Its distinct taxonomic status was justified by morphological, bioacoustic, and molecular evidence and it was described as a new species, H. jackie. A species of the H. nasutus group collected at agricultural sites in Rwanda was described as a new species in the course of a revision of the species of the Hyperolius nasutus group. The group was shown to consist of 15 distinct species which can be distinguished from each other genetically, bioacoustically, and morphologically.
The aerial performance, i.e. parachuting, of the Disc-fingered Reed Frog, Hyperolius discodactylus, was described. It represents a novel observation of a behaviour that has been known from a number of Southeast Asian and Neotropical frog species. Parachuting frogs, including H. discodactylus, exhibit certain morphological characteristics and, while airborne, assume a distinct posture which is best suited for maneuvering in the air. Another study on the species addressed the validity of the taxon H. alticola, which had been considered either a synonym of H. discodactylus or a distinct species. Type material of both taxa was re-examined and the status of H. alticola reassessed using morphological data from historic and new collections, call recordings, and molecular data from animals collected on recent expeditions. A northern and a southern genetic clade were identified, a divide that is only weakly supported by diverging morphology of the vouchers from the respective localities. No distinction in advertisement call features could be recovered to support this split; the genetic and morphological differences between the two geographic clades are marginal, not always congruent, and more likely reflect population-level variation. It was therefore concluded that H. alticola is not a valid taxon and should be treated as a synonym of H. discodactylus.
Web programming is a huge field of different technologies and concepts. Each technology implements a web-application requirement like content generation or client-server communication. Different technologies within one application are organized by concepts, for example architectural patterns. This thesis describes an approach for creating a taxonomy of these web-programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web-application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web-application frameworks with the help of Wikipedia. Finally, the created implementations concerning web programming are documented.
Taxonomy and Systematics of Spiny-Backed Treefrogs, Genus Osteocephalus (Amphibia: Anura: Hylidae)
(2015)
The pan-Amazonian treefrog genus Osteocephalus is poorly understood on both the taxonomic and the phylogenetic level. The status of several frogs already or not yet referred to the genus is unclear, and the relationships within the genus and with respect to related genera are not understood. In this work O. cabrerai (Cochran and Goin, 1970) from Colombia and Peru is redescribed and O. festae (Peracca, 1904) from the foothills of the Andes in Ecuador is revalidated. Hyla inframaculata Boulenger, 1882, from the lower Amazon in Brazil, is reallocated to Osteocephalus, and O. elkejungingerae (Henle, 1981) from the Andean foothills in Peru is shown to be a synonym of Hyla mimetica (Melin, 1941), the valid name being O. mimeticus. Hyla vilarsi Melin, 1941 is considered a valid species in the genus Osteocephalus and revalidated from the synonymies of several other frogs. Three new species, O. castaneicola from northern Bolivia and southern Peru, O. duellmani from a sub-Andean mountain range in southern Ecuador, and O. camufatus from central Amazonian Brazil, are described. A phylogenetic analysis based on up to nine mitochondrial genes and one nuclear gene reveals the paraphyly of the genus as previously understood with respect to the genus Tepuihyla. A new taxonomy is proposed, securing the monophyly of Osteocephalus and Tepuihyla by rearranging and redefining the content of both genera. A new genus, Dryaderces, is erected for the sister group of Osteocephalus. The colouration of newly metamorphosed individuals is proposed as a morphological synapomorphy for Osteocephalus. Five monophyletic species groups within Osteocephalus are recognized, three species of Osteocephalus (O. germani, O. phasmatus, O. vilmae) and three species of Tepuihyla (T. celsae, T. galani, T. talbergae) are synonymized, and three species (Hyla helenae to Osteocephalus, O. exophthalmus to Tepuihyla, and O. pearsoni to Dryaderces gen. n.) are reallocated.
Furthermore, nine putative new species are flagged (an increase to 138% of the current diversity), an indication that species numbers are largely underestimated, with most hidden diversity centred on widespread and polymorphic nominal species. The evolutionary origin of breeding strategies within Osteocephalus is discussed in the light of this new phylogeny, and a novel type of amplexus (gular amplexus) is described.
Taktstraße
(2008)
A Taktstraße (clocked transfer line) enables the automated processing of a workpiece by means of conveyor belts, light barriers, pushers, and processing stations. In this work, a controller is developed for a given transfer line, based on Atmel's ATmega16 microcontroller. An external controller sends control commands over the TWI bus to the controller connected to the transfer line. To make the line operable, a suitable circuit board is designed, together with an LCD library as an output and information medium. The work covers all development stages required for a project within a computer science degree programme, from project planning and the acquisition of specific background knowledge, through hardware and software development, to extensive development and test phases.
Data visualization is an effective way to explore data. It helps people to gain valuable insight into the data by placing it in a visual context. However, choosing a good chart without prior knowledge in the area is not a trivial job. Users have to manually explore all possible visualizations and decide upon ones that reflect relevant and desired trends in the data, are insightful and easy to decode, and have a clear focus and an appealing appearance. To address these challenges we developed a Tool for Automatic Generation of Good viSualizations using Scoring (TAG²S²). The approach tackles the problem of identifying an appropriate metric for judging visualizations as good or bad. It consists of two modules: visualization detection, which, given a data set, creates a list of combinations of data attributes for scoring, and visualization ranking, which scores each chart and decides which ones are good or bad. For the latter, a utility metric of ten criteria was developed, and each visualization detected in the first module is evaluated on these criteria. Only those visualizations that received a sufficient score are then presented to the user. In addition to these data parameters, the tool considers user perception regarding the choice of visual encoding when selecting a visualization. To evaluate the utility of the metric and the importance of each criterion, test cases were developed and executed, and the results are presented.
The Bulletin Esskulturen emerged from the joint project "Esskulturen. Objekte, Praktiken, Semantiken" ("Food Cultures: Objects, Practices, Semantics"), funded by the Federal Ministry of Education and Research from September 2018 to August 2021 within the funding line "Sprache der Objekte" ("Language of Objects"). In each issue, an object from the Stiftung Bürgerliche Wohnkultur, Sammlung Alex Poignard (Landesmuseum Koblenz) forms the starting point for an interdisciplinary examination of various socio-cultural questions surrounding the topic of food.
Tafel-Dekorationen. Speisegestaltung und Repräsentation,
Bulletin Esskulturen, 2. Jahrgang 2020, Mappe V, Faszikel 25-30
Contents of this issue
Britta Stein, Tafeldekoration = Repräsentation? Einblicke in die Sammlung Alex Poignard
Barbara Weyandt, „Nur Verschwendung bringt Prestige ...“. Tafelaufsätze zwischen Luxus und sozialem Sinn
Stefanie Brüning, Vergängliche Tafelfreuden
Heinz Georg Held, Das stille Leben nach dem Leben. Zur Kunst-Sprache des Tafeldekors
Hans Körner, Eine Runkelrübe als Tischdekoration
Andreas Ackermann, Wein, dekorativ betrachtet. Unter besonderer Berücksichtigung des bundesrepublikanischen Staatsbanketts
Imprint
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency-directed backjumping and branch-local operation. As an instance of that framework an adaptation of DPLL is described. We show that knowledge compilation methods can be essentially improved by weaving projection elimination partially into the compilation phase.
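As a flavour of the propositional core that such adaptations build on, here is a minimal textbook DPLL satisfiability procedure. It is a generic sketch only, not the framework instance or the projection-aware variant described in the abstract; the clause representation (frozensets of integer literals, negative numbers for negated atoms) is a common convention chosen for brevity.

```python
# Minimal textbook DPLL (generic illustration, not the abstract's variant).
# Clauses are frozensets of integer literals; -n denotes the negation of atom n.
def dpll(clauses):
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}      # literal falsified, shrink the clause
                if not c:
                    return False    # empty clause derived: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return True                 # all clauses satisfied
    # Branch on some literal of the first remaining clause.
    lit = next(iter(clauses[0]))
    return (dpll(clauses + [frozenset({lit})])
            or dpll(clauses + [frozenset({-lit})]))
```

For example, `dpll([frozenset({1, 2}), frozenset({-1}), frozenset({-2})])` detects unsatisfiability by propagating the two unit clauses into a conflict.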
Szeneneditor für ein Echtzeitanimationssystem und andere XML konfigurierte und erweiterbare Systeme
(2006)
This thesis pursues the following goals: to research a representative selection and collection of examples of mobile ticketing systems that are in practical use (particularly in German-speaking countries); to compile a set of meaningful categories (characteristics) by which mobile ticketing systems can be distinguished or structured; and to set the examples against the categories in order to investigate which typical classes of mobile ticketing systems can be identified.
Education and training of the workforce have become an important competitive factor for companies because of the rapid technological changes in the economy and the corresponding ever shorter innovation cycles. Traditional training methods, however, are limited in their ability to meet a company's resulting demand for education and training, which continues to grow and accelerate. Therefore, the use of technology-based training programs (that is, courseware) is increasing, because courseware enables self-organized and self-paced learning and, through integration into daily work routines, allows optimal transfer of knowledge and skills, resulting in high learning outcomes. To realize these prospects, high-quality courseware is required, with quality defined as supporting learners optimally in achieving their learning goals. Developing high-quality courseware, however, usually requires more effort and takes longer than developing other programs, which limits the timely availability of such courseware at the required quality.
This dissertation therefore deals with the research question of how courseware has to be developed in order to produce high-quality courseware with less development effort and shorter project duration. In addition to its high quality, this courseware should be optimally aligned to the characteristics and learning goals of the learners as well as to the planned usage scenarios for the knowledge and skills being trained. The IntView Method for the systematic and efficient development of high-quality courseware was defined to answer the research question of this dissertation. It aims at increasing the probability of producing courseware in time without exceeding project schedules and budgets while developing a high-quality product optimally focused on the target groups and usage scenarios.
The IntView Method integrates into one systematic process the execution variants of all activities and activity steps required to develop high-quality courseware; these variants were identified, in a detailed analysis of existing courseware development approaches as well as of production approaches from related fields such as multimedia, web, or software engineering, as those that in combination constitute the most efficient way to develop courseware. The main part of the proposed method is therefore a systematic courseware engineering process that encompasses all courseware lifecycle phases and consolidates the activities and methods of all disciplines involved in courseware engineering, including quality assurance across the whole lifecycle. This process is defined as a lifecycle model as well as a derived process model in the form of a dependency model, in order to optimally support courseware project teams in coordinating and synchronizing their project work. In addition to the models, comprehensive, ready-to-apply enactment support materials are provided, consisting of work sheets and document templates that include detailed activity descriptions and examples.
The evaluation of the IntView Method proved that the method together with the enactment support materials enables efficient as well as effective courseware development. The projects and case studies conducted in the context of this evaluation demonstrate that, on the one hand, the method is easily adaptable to the production of different kinds of courseware or to different project contexts, and, on the other hand, that it can be used efficiently and effectively.
This thesis presents a categorisation of the prize competitions found on German television and radio. Beginning with an explanation of fundamental terminology and a look at the procedures used to select participants, the reader is introduced to the topic of the thesis. It continues with the actual presentation of the competition formats currently in existence. Finally, the thesis gives a first determination and calculation of the probability of winning.
The genus Cheilolejeunea (Spruce) Schiffn. (Lejeuneaceae, Jungermanniopsida) is represented by 23 species in continental tropical Africa. The morphological characters traditionally used to separate the taxa at both species and generic level, such as features of the stem, leaf, lobule and perianth, have been found to be unstable. The species are variably ranked in several subgenera including Cheilolejeunea (Spruce) Schiffn., Euosmolejeunea Schiffn., Strepsilejeunea (Spruce) Schiffn. and Xenolejeunea Kachroo & Schust. Although the genus has never been monographed, there are a few regional taxonomic accounts for America, Australia and China. A comprehensive revision of Cheilolejeunea species is lacking for Africa, where the existing studies are based on single subgenera, sub-regional floras, or checklist compilations, sometimes without identification keys. This study revises the taxonomy of Cheilolejeunea and the closely allied genus Leucolejeunea A. Evans in continental Africa, based on morphological data analysed using phenetic and phylogenetic methods.
Dracaena L. (Ruscaceae) is a predominantly African genus with a smaller centre of diversity in South-East Asia. The taxonomy of the 29 species occurring in Central, East and Southern Africa was revised through phenetic and phylogenetic analyses of the morphology as well as through herbarium, literature and field studies. An infrageneric classification is proposed, in which four sub-genera are recognised for the first time. A taxonomic account for the study area incorporating an identification key, distribution maps and an IUCN Red List assessment is presented. Analysis of Dracaena phytogeography reveals that the Guineo-Congolian centre of endemism is the richest with 21 species while the Maputaland-Pondoland regional mosaic and the Guinea-Congolia/Sudania regional transition zone are the poorest, having only one species each. Investigation of the ecology of Dracaena in the Kakamega Forest, Kenya, shows that it plays an important role in the forest ecology and is an indicator of forest quality.
A taxonomic revision of the genus Pteris in tropical Africa revealed 26 species. An identification key to the species is provided. Morphological characters were used to prepare a cladistic analysis of the relevant taxa. Each species was evaluated with respect to its IUCN Red List status: only Pteris mkomaziensis was considered Near Threatened, all other species Least Concern. An inventory of the ferns of Kakamega Forest, Kenya, and Budongo Forest, Uganda, revealed 85 species in Kakamega and 66 species in Budongo. Life form spectra were analysed and the ferns were evaluated for their value as bioindicators.
Business trends such as the shift toward buyers' markets, shortened product life and innovation cycles, rising customer expectations, and ever more powerful information and communication technologies confront companies with demanding challenges. "Until the early 1990s, local optimization efforts dominated within an organizational structure oriented toward functional specialization, following the ideas on the division of labour of, for example, Smith, Taylor and Ford. Owing to the manifold problems of this approach (in particular the creation of interfaces, demotivated employees, a lack of customer orientation, and increased effort for controlling and coordinating the functional units), a paradigm shift away from functional and toward process orientation has been under way in economics and in business practice since the beginning of the 1990s." Given their complexity, these demanding problems can no longer be solved by simple, locally applicable measures. In times of high complexity and dynamics, strategic planning tasks become ever more important for long-term successful management. Far-reaching decisions must be examined in advance for their short- and long-term effects inside and outside the company. Time-delayed feedback loops are particularly important here: a measure that improves results in the short term may have an extremely negative effect on results in the long term. System Dynamics, a method for studying complex, dynamic systems, makes it possible to derive decision rules that are effective in the long term from an analysis of a system's structure and the behaviour it causes. Companies are understood here as open, highly interconnected socio-technical systems.
System Dynamics, originally called "Industrial Dynamics", was developed in the 1950s at the Massachusetts Institute of Technology (MIT). The method takes as its basis the complexity, non-linearity, and feedback structures inherent in social and physical systems. It is now taught at a growing number of universities, and companies and governments use System Dynamics to simulate management and policy decisions. The method makes it possible to see through complex systems, which is an increasing challenge for decision-makers. The System Dynamics Society strives to make systems thinking accessible to a broad base of users; the method can help people understand current problems and the long-term consequences of present actions. The intention of this thesis is to present two business application areas of System Dynamics modelling, each with a concrete case study. To this end, Chapter 2 first presents the foundations of systems theory, focusing on Forrester's perspective. Building on this, Chapter 3 presents the method in detail: after tracing the historical development of System Dynamics, it sets out the application areas, the foundations and pillars of modelling, and the modelling process. The fourth chapter examines the first application area in which System Dynamics modelling is employed, the Balanced Scorecard, a popular concept for performance measurement in companies. After the concept is introduced, its limitations are identified, which can be overcome by linking it with the System Dynamics approach.
Subsequently, the possibilities of System Dynamics modelling for reducing the weaknesses of the Balanced Scorecard are explained, before the combination of System Dynamics modelling with the Balanced Scorecard concept is presented in a concrete case study; a final assessment of this application concludes the chapter. The next chapter examines the application of System Dynamics modelling in supply chain management. First, the foundations of supply chain management and the problems associated with it are explained. The "Beer Game" is used to illustrate the typical coordination problem of multi-stage supply chains, the bullwhip effect. The application of System Dynamics modelling in supply chain management is then discussed, before its implementation is demonstrated in a concrete case study. The chapter closes with an assessment of the application of System Dynamics in supply chain management. A concluding chapter rounds off the thesis.
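To give a flavour of the method, a minimal stock-and-flow simulation can be sketched as follows. This is an illustrative toy model, not a model from the thesis: a single stock ("inventory") with a corrective inflow that closes the gap to a desired level over a delay, the feedback-with-delay structure underlying effects such as the bullwhip.

```python
# Toy System Dynamics model (illustrative only): one stock, one flow,
# Euler integration. All parameter values are arbitrary assumptions.
def simulate(steps=40, dt=1.0, desired=100.0, delay=4.0):
    inventory, history = 50.0, []
    for _ in range(steps):
        order_rate = (desired - inventory) / delay  # delayed corrective flow
        inventory += order_rate * dt                # the stock integrates the flow
        history.append(inventory)
    return history

traj = simulate()  # inventory rises from 50 toward the desired level of 100
```

Even this one-stock model exhibits the characteristic gradual, delayed adjustment; adding a second delayed stage is enough to produce the oscillations familiar from the Beer Game.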
The E-KRHyper system is a model generator and theorem prover for first-order logic with equality. It implements the new E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus. E-KRHyper extends our previous KRHyper system, which has been used in a number of applications in the field of knowledge representation. In contrast to most first-order theorem provers, it supports features important for such applications, for example queries with predicate extensions as answers, handling of large sets of uniformly structured input facts, arithmetic evaluation, and stratified negation as failure. It is our goal to extend the range of application possibilities of KRHyper by adding equality reasoning.
The present study deals with the synthesis of N-phenacylpyridinium salts and their use as photoinitiators for epoxy resins. The use and suitability of phenacyl salts as photoinitiators for epoxy resins has already been described in previous studies, but the individual impact of the specific components on the rate constants of the epoxy reaction had not been investigated in detail. Starting from the structure of the N-phenacylpyridinium salt, the substances described in the present study were varied by exchanging the counter-ion and the substituents. Investigating the impact of the specific substituents on the reaction of the epoxy groups revealed a dependence on three main factors. First, whether a phenyl or a methyl group is used as substituent affects the course of the photolysis. Furthermore, concerning the dependence on the pyridine derivative and the counter-ion, it was found that pyridine derivatives with electron-withdrawing groups, and counter-ions that can form strong acids, accelerate the epoxy reaction. Conversely, pyridine derivatives with electron-donating groups, and counter-ions that form weaker acids, decrease the rate constants.
The determined rate constants, together with tests of the discussed substances in an adhesive formulation, demonstrate the suitability of selected substances as photoinitiators for the polymerization of epoxy resins.
Nanoparticles are sensitive and robust systems; they are particularly reactive due to their large surface area and have properties that the bulk material does not. At the same time, the production of nanoparticles is challenging, because even under identical parameters and conditions the resulting particles can vary slightly from run to run. To avoid this, this work aims to develop a continuous synthesis of nanoceria in the microjet reactor, with the goal of obtaining monodisperse nanoparticles that can be used in biosensors.
This work focuses on two precipitation syntheses, with cerium carbonate and cerium hydroxide as intermediates, as well as a microemulsion synthesis for the production of nanoceria. The cerium oxide nanoparticles are compared using different characterisation and application methods: the synthesised nanoparticles are characterised with respect to their size, stability, chemical composition, and catalytic capabilities by electron microscopy, X-ray diffraction, Raman spectroscopy, and photoelectron spectroscopy.
The biosensor systems used to evaluate the nanoceria are designed to detect histamine and glucose, or the hydrogen peroxide that results from their oxidation. Hydrogen peroxide and glucose are detected by an electrochemical sensor, and histamine by a colorimetric sensor system.
Sustainable Leadership
(2023)
Topic:
This research project deals with the topic of Sustainable Leadership. This area of sustainability-oriented research has gained considerably in importance in recent years owing to the ever stronger effects of climate change. In this context, companies are increasingly coming into focus: they must now find ways and methods to make their working practices and processes more sustainable and more environmentally friendly. The research question examined in this thesis is: "How is a Sustainable Leadership approach designed in companies?" Building on this overarching research question, subordinate research questions consider in particular the characteristics, competencies, and behaviours of a sustainability-oriented leadership style.
Methodology:
The research methodology of this thesis is a qualitative content analysis following Mayring (2019). Using semi-structured interviews, leaders from large companies were questioned. On the basis of this data, inductive categories were derived and then qualitatively analysed and interpreted.
Results:
The results of the empirical research were summarised into five main categories with two subcategories each. The theoretical findings of the research field were supplemented with, and interpreted against, the practical findings from the interviews. Furthermore, the results were modelled in a conceptual approach from a holistic corporate perspective. Finally, several recommendations for action were formulated from the practical experience, and the characteristics, competencies, behaviours, and effects of a sustainability-oriented leadership style were discussed.
In view of the rising number of legal regulations, businesses should be given the opportunity to use software for their compliance management by means of compliance patterns. These patterns represent substantive and structural parts of processes, so companies can increase their efficiency and react quickly to new regulations in order to avoid violations that can lead to monetary losses or legal consequences. The literature contains many approaches dealing with compliance patterns, but there is currently no list of the compliance patterns that companies need to address (Delfmann and Hübers, 2015). This bachelor thesis classifies 80 research contributions with regard to their approaches to compliance patterns, based on a systematic literature review. As a result, the author developed a graphical classification framework that provides an overview of the connections between the different compliance approaches. Furthermore, an appendix covering 32 compliance patterns from the analyzed papers was compiled, containing real-world patterns together with the classification from the previous sections.
The status of Business Process Management (BPM) recommender systems is not quite clear, as research shows. Recommender systems became widely known during the technological evolution of the past decade, and several BPM recommender systems have appeared since. However, little research has been conducted in this field: it is not well known how broad the range of technologies used is, or how they are used. This master's thesis therefore surveys existing BPM recommender systems. The recommendations come in different shapes: they can be position-based, where an element is to be placed before or after another element, or where a missing link is autocompleted; alternatively, recommendations can be textual, filling in the labels of elements. The literature review for BPM recommender systems followed a literature review framework that suggests five consecutive stages: first, defining a scope for the research; second, conceptualizing the topic by choosing key terms for the literature search; third, the search itself; fourth, choosing analysis features over which the literature is synthesized and compared; and finally, defining the research agenda that describes the reason for the literature review. Applying this methodology, the thesis surveyed 18 BPM recommender systems. The survey found that there are not many different technologies for implementing the recommenders. It also found that the majority of recommenders suggest nodes yet to come in the model, which is called forward recommending, and that textual recommendations for BPM labels are rarely used. Finally, 18 recommenders are fewer than expected for a developing field, so the survey found a shortage in the number of BPM recommender systems. The results indicate shortages in several aspects of the field; on this basis, the thesis recommends future work building on these results.
Different techniques (weight loss, electrochemical, and spray corrosion measurements) have been used to evaluate four sarcosine derivatives as corrosion inhibitors, together with one commercial compound as a synergist. The base metal was low-carbon steel CR4, tested under different conditions; as working media, mainly neutral water and 0.1 M NaCl were applied. The protective film formed on the steel surface by direct adsorption of the tested substances during immersion. A greatly improved corrosion protection, correlating directly with the molecular weight and carbon chain length of the tested compounds, was observed. The protection of steel CR4 against corrosion in 0.1 M NaCl increased with increasing concentration of the selected sarcosine compounds. The best inhibitor across all tested concentrations and all evaluation systems was Oleoylsarcosine (O), with efficiencies of up to 97 % in potentiodynamic polarization (PP), 83 % in electrochemical impedance spectroscopy (EIS), and 85 % in weight loss (WL) at 100 mmol/L, the highest concentration tested here. The second-best inhibitor was Myristoylsarcosine (M), with efficiencies of up to 82 % in PP, 69 % in EIS, and 75 % in WL at the highest concentration. The inhibitor with the shortest hydrocarbon chain in this series, Lauroylsarcosine (L), showed the lowest corrosion inhibition compared to O and M: its efficiencies were slightly above 50 % at 75 and 100 mmol/L and below 50 % at 25 and 50 mmol/L in all evaluation systems used. Furthermore, the overall efficiency improved with longer dip-coating times during immersion of steel CR4, as shown at 50 mmol/L for all derivatives; this survey indicated 10 min as the best time with respect to cost and protection efficiency. The commercial inhibitor Oleyl-Imidazole (OI) significantly improved the effectiveness of Cocoylsarcosine (C), which contains the natural mixture of carbon chain lengths from coconut oil (C8 - C18), and enhanced protection when used in combination (C+OI, 1:1 molar ratio).
In this system the efficiency increased from 47 % to 91 % in PP, from 40 % to 84 % in EIS, and from 45 % to 82 % in WL at the highest concentration. Spray corrosion tests were used to evaluate all sarcosine substances on steel CR4 in a more realistic system. The best inhibitor after a 24 h test was O, followed by the combination C+OI and by M, with efficiencies of up to 99 %, 80 %, and 79 %, respectively. The results indicate good stability of the protective film formed by the present inhibitors even after 24 h. All evaluation systems used in this investigation were in good agreement and yielded the same inhibitor ranking. Furthermore, the adsorption of the tested compounds is assumed to follow a Langmuir-type isotherm. Response surface methodology (RSM), an optimization method based on a Box-Behnken design (BBD), was used to find the optimum efficiency of inhibitor O for protecting steel CR4 against corrosion in salt water. Four independent variables were used: inhibitor concentration (A), dip-coating time (B), temperature (C), and NaCl concentration (D), each at three levels: lower (-1), mid (0), and upper (+1). According to the results, temperature has the greatest individual effect on the protection process, followed by the inhibitor concentration itself. An optimum efficiency of 99 % is calculated for the following parameter and level combination: upper level (+1) for inhibitor concentration, dip-coating time, and NaCl concentration, and lower level (-1) for temperature.
Previous research concerned with early science education revealed that guided play can support young children’s knowledge acquisition. However, the questions whether guided play maintains other important prerequisites such as children’s science self-concept and how guided play should be implemented remain unanswered. The present dissertation encompasses three research articles that investigated 5- to 6-year-old children’s science knowledge, science theories, and science self-concept in the stability domain and their relation to interindividual prerequisites. Moreover, the articles examined whether children’s science knowledge, science theories, and science self-concept can be supported by different play forms, i.e., guided play with material and verbal scaffolds, guided play with material scaffolds, and free play. The general introduction of the present dissertation first highlights children’s cognitive development, their science self-concept, and interindividual prerequisites, i.e., fluid and crystallised intelligence, mental rotation ability, and interest in block play. These prerequisites are applied to possible ways of supporting children during play. The first article focused on the measurement of 5-to-6-year-old children’s stability knowledge and its relation to interindividual prerequisites. Results suggested that children’s stability knowledge could be measured reliably and validly, and was related to their fluid and crystallised intelligence. The second article was concerned with the development of children’s intuitive stability theories over three points of measurement and the effects of guided and free play, children’s prior theories as well as their intelligence on these intuitive theories. Results implied that guided play with material and verbal scaffolds supported children’s stability theories more than the other two play forms, i.e., guided play with material scaffolds and free play. 
Moreover, consistency of children’s prior theories, their fluid and crystallised intelligence were related to children’s theory adaptation after the intervention. The third article focused on the effect of the playful interventions on children’s stability knowledge and science self-concept over three points of measurement. Furthermore, the reciprocal effects between knowledge acquisition and science self-concept were investigated. Results implied that guided play supported knowledge acquisition and maintained children’s science self-concept. Free play did not support children’s stability knowledge and decreased children’s science self-concept. No evidence for reciprocal effects between children’s stability knowledge and their science self-concept was found. Last, in a general discussion, the findings of the three articles are combined and reflected amidst children’s cognitive development. Summarising, the present dissertation shows that children’s science knowledge, science theories, and science self-concept can be supported through guided play that considers children’s cognitive development.
The annotation of digital media is not a new area of research; on the contrary, it has been widely investigated, and there are many innovative ideas for designing the annotation process. The most extensive segment of related work concerns semi-automatic annotation. One characteristic is common to the related work: none of it puts the user in focus. To build an interface that supports and satisfies the user, a user evaluation has to be done first. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction, and intuitiveness. After collecting many ideas and discussing them with a team of experts, we selected only a few of them. Different combinations of these selected variables form the interfaces investigated in our usability study. The results of the usability study suggest that autocompletion and suggestion features support the user. Furthermore, coloring tags to group them into categories does not disturb the user and even tends to be supportive; the same tendency emerges for an interface consisting of two user interface elements. An example is also given of how definitions of "intuitive" differ. This thesis leads to the conclusion that, for reasons of user satisfaction and support, it is legitimate to depart from classical annotation interface features, and that further usability studies on annotation interfaces should be conducted.
Large and unknown data sets can be discovered easily and systematically by using faceted search. When implementing applications for smartphones, it must be considered that, unlike desktop applications, only smaller screens are available and the possibilities for interaction between user and smartphone are limited. These limitations can negatively influence the usability of an application. With FaThumb and MobileFacets, two mobile applications exist that implement and use faceted search, although only MobileFacets is designed for current smartphones with touchscreens. FaThumb, however, provides a novel facet navigation, which is newly realized for present smartphones in MFacets within this work.
Moreover, this work conducts a summative evaluation comparing both applications, MFacets and MobileFacets, with regard to usability, and presents the results.
Increasingly, problematic smartphone use behavior (PSU) and excessive consumption are reported. In this study, an experiment was developed to investigate, in repeated measurements, the influence of screen coloration (using the grayscale setting) on smartphone usage time. We also investigated how individuals' perceived suffering correlates with smartphone usage time and PSU, and whether differences exist by smartphone usage type (social, process, habitual). 240 subjects completed a questionnaire about smartphone usage time, PSU, perceived suffering, and smartphone usage types. Afterward, their smartphones were switched to the grayscale setting for at least 24 h, after which 92 of these participants completed a second questionnaire. Analyses showed that the grayscale setting decreases usage time and that there is a positive correlation between PSU, smartphone usage duration, and perceived suffering. The types of use (process and habitual) influence perceived suffering. Thus, individuals are aware of their PSU and suffer from it, and using the grayscale setting is effective in reducing smartphone usage time.
In this thesis, one method from each of the two areas of image registration is implemented and described. A direct and a feature-based method are compared and examined with respect to their limits. Both implemented methods work well and register different image series with subpixel accuracy. For the direct method, the choice of the transformation model is decisive; embedding the method in a Gaussian pyramid structure has also proven important. Since the feature-based method is composed of several components, each individual step can be replaced by a different technique, e.g., feature detection by Tomasi-Kanade, SIFT, or Moravec. In the direct method, the accuracy of the results can be influenced by the chosen threshold and by the number of pyramid levels. In the feature-based method, in turn, varying numbers of features with varying thresholds can be used. It is shown that both methods lead to good results if the translation and the rotation are assumed to be small. For larger changes, however, the direct method becomes rather inaccurate, while the feature-based method still achieves good results; it reaches its limit only when either the image content changes strongly or the rotation exceeds an angle of 20°. Both methods thus work with subpixel accuracy but can lead to inaccuracies under different conditions. If the respective problems of the two methods are taken into account and, ideally, eliminated during acquisition or before registration, very good results can be achieved.
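The coarse-to-fine search underlying the Gaussian-pyramid idea can be illustrated with a minimal NumPy sketch that estimates a pure integer translation by exhaustive search at each level (an illustration only; the thesis's actual transformation models, thresholds, and subpixel refinement are not reproduced here):

```python
import numpy as np

def downsample(img):
    """One pyramid level, approximated by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def mse_shift(a, b, dy, dx):
    """Mean squared difference of the overlap when comparing a[y+dy, x+dx] with b[y, x]."""
    h, w = a.shape
    ay = slice(max(0, dy), min(h, h + dy))
    ax = slice(max(0, dx), min(w, w + dx))
    by = slice(max(0, -dy), min(h, h - dy))
    bx = slice(max(0, -dx), min(w, w - dx))
    d = a[ay, ax] - b[by, bx]
    return float(np.mean(d * d))

def register_translation(reference, moving, levels=3, radius=2):
    """Coarse-to-fine estimation of the integer shift aligning `moving` with `reference`."""
    pyramid = [(reference, moving)]
    for _ in range(levels - 1):
        r, m = pyramid[-1]
        pyramid.append((downsample(r), downsample(m)))
    dy = dx = 0
    for r, m in reversed(pyramid):       # coarsest level first
        dy, dx = 2 * dy, 2 * dx          # propagate the estimate to the finer level
        _, dy, dx = min(
            (mse_shift(r, m, dy + i, dx + j), dy + i, dx + j)
            for i in range(-radius, radius + 1)
            for j in range(-radius, radius + 1)
        )
    return dy, dx

# Two overlapping crops of the same image, offset by (2, 4) pixels.
rng = np.random.default_rng(0)
base = rng.standard_normal((80, 80))
reference, moving = base[8:72, 8:72], base[6:70, 4:68]
shift = register_translation(reference, moving, levels=2, radius=2)
```

Searching only a small radius per level keeps the cost low while the pyramid lets large displacements be recovered, which mirrors why the pyramid structure proved important for the direct method.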
This Master Thesis is an exploratory research to determine whether it is feasible to construct a subjectivity lexicon using Wikipedia. The key hypothesis is that all quotes in Wikipedia are subjective and all regular text is objective. The degree of subjectivity of a word, also known as its ''Quote Score'', is determined based on the ratio of the word's frequency in quotations to its frequency outside quotations. The proportion of words in the English Wikipedia which are within quotations is found to be much smaller than that of words which are not in quotes, resulting in a right-skewed distribution and a low mean value of Quote Scores.
The methodology used to generate the subjectivity lexicon from the text corpus of the English Wikipedia is designed in such a way that it can be scaled and reused to produce similar subjectivity lexica for other languages. This is achieved by abstaining from domain- and language-specific methods, apart from using only readily available English dictionary packages to detect and exclude stopwords and non-English words in the Wikipedia text corpus.
The subjectivity lexicon generated from the English Wikipedia is compared against other lexica, namely MPQA and SentiWordNet. It is found that words which are strongly subjective tend to have high Quote Scores in the subjectivity lexicon generated from the English Wikipedia. There is a large observable difference between the distribution of Quote Scores for words classified as strongly subjective and the distribution for words classified as weakly subjective or objective. However, weakly subjective and objective words cannot be differentiated clearly based on Quote Score. In addition, a questionnaire is commissioned as an exploratory approach to investigate whether a subjectivity lexicon generated from Wikipedia could be used to extend the word coverage of existing lexica.
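The Quote Score described above is a simple frequency ratio. A minimal sketch, using hypothetical toy token lists rather than the thesis's actual Wikipedia pipeline, could look like this (the additive smoothing is an assumption to keep the ratio defined for words unseen on one side):

```python
from collections import Counter

def quote_scores(words_in_quotes, words_outside_quotes, smoothing=1.0):
    """Degree of subjectivity per word: ratio of its relative frequency
    inside quotations to its relative frequency outside them.
    Additive smoothing avoids division by zero for unseen words."""
    inside = Counter(words_in_quotes)
    outside = Counter(words_outside_quotes)
    n_in = sum(inside.values())
    n_out = sum(outside.values())
    vocab = set(inside) | set(outside)
    scores = {}
    for w in vocab:
        f_in = (inside[w] + smoothing) / (n_in + smoothing * len(vocab))
        f_out = (outside[w] + smoothing) / (n_out + smoothing * len(vocab))
        scores[w] = f_in / f_out
    return scores

# Toy corpus: "awful" occurs mostly inside quotes, "city" mostly outside,
# so "awful" should receive the higher (more subjective) Quote Score.
scores = quote_scores(
    ["awful", "awful", "great", "city"],
    ["city", "city", "city", "population", "great"],
)
```

A score above 1 indicates a word over-represented in quotations (hypothesized subjective), below 1 an over-representation in regular text (hypothesized objective).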
Stylized image triangulation
(2019)
Stylized image triangulation is a popular tool of image processing. Results can be found on magazine covers or bought as pieces of art. Common use cases are filters in mobile apps or programs dedicated to automated triangulation. This thesis is based upon a paper that achieves new results by formulating adaptive dynamic triangulation as an optimization problem. With this approach, new results concerning visual and technical quality are accomplished. One aim of this thesis is to make this approach accessible to as many users as possible. To reach users, a mobile app called Mesh is designed and implemented. A client-host system is presented which relieves the app of the resource-intensive computation of the result; transferring the approach to a CPU-based solution is therefore part of the thesis. Also, a webserver is implemented that handles the communication between app and algorithm. “Mesh” enables the user to send an arbitrary image to the server, whose result can then be downloaded.
Part of the research deals with optimizing the method. As the main step, the gradient descent method, which minimizes an approximation error, is examined with three different approaches re-defining the movement of a point: the limitation of the directions of movement in a meaningful manner, diagonal directions, and a dynamic repositioning of points are analyzed. Results show no improvement of visual quality when using diagonal instead of horizontal and vertical steps. By disallowing a point to take its last position, the limitation of step opportunities results in a loss of visual quality but reaches an intended global error earlier. The dynamic repositioning rests upon a vector-based solution that weights the directions and applies a factor to each of them. This results in a longer computation time but also in a higher visual quality.
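The effect of restricting a point's directions of movement can be illustrated with a toy greedy descent (a hedged sketch, not the thesis's triangulation code; the quadratic error below is a stand-in for the approximation error of the mesh):

```python
def greedy_descent(start, error, moves, max_steps=1000):
    """Move a point step by step to whichever allowed neighbor lowers the
    error most; stop when no allowed move improves it.  `moves` is the set
    of allowed directions, e.g. 4-connected (horizontal/vertical only) or
    8-connected (plus diagonals)."""
    x, y = start
    for _ in range(max_steps):
        candidates = [(x + dx, y + dy) for dx, dy in moves]
        best = min(candidates, key=error)
        if error(best) >= error((x, y)):
            break                      # local minimum under this move set
        x, y = best
    return (x, y), error((x, y))

AXIS_MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
DIAGONAL_MOVES = AXIS_MOVES + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# Stand-in error: squared distance to an optimum at (7, -3).
err = lambda p: (p[0] - 7) ** 2 + (p[1] + 3) ** 2
result_axis = greedy_descent((0, 0), err, AXIS_MOVES)
result_diag = greedy_descent((0, 0), err, DIAGONAL_MOVES)
```

On such a smooth error both move sets reach the same minimum, the diagonal set just takes fewer steps, which is consistent with the finding that diagonal steps do not improve visual quality by themselves.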
Inspired by the work of Josh Bryan, another part of the research aims at imitating an artist's style. With the use of pseudo-random events combined with a geometry shader, a more natural look shall be achieved. This method illustrates a way of adding minor details to a rendering. To imitate an artist's work, a more complex and more precise triangulation is needed. As the last aspect, a render style is presented. The style uses a center for its effect, moving the triangles of a triangulation apart. The arbitrary choice of placing the center enables the render style to be used in animations.
The distributed setting of RDF stores in the cloud poses many challenges. One such challenge is how the data placement on the compute nodes can be optimized to improve the query performance. To address this challenge, several evaluations in the literature have investigated the effects of existing data placement strategies on the query performance. A common drawback in these evaluations is that it is unclear whether the observed behaviors were caused by the data placement strategies (if different RDF stores were evaluated as a whole) or reflect the behavior in distributed RDF stores (if cloud processing frameworks like Hadoop MapReduce are used for the evaluation). To overcome these limitations, this thesis develops a novel benchmarking methodology for data placement strategies that uses a data-placement-strategy-independent distributed RDF store to analyze the effect of the data placement strategies on query performance.
With this evaluation methodology, the frequently used data placement strategies have been evaluated. This evaluation challenged the commonly held belief that data placement strategies that emphasize local computation, such as minimal edge-cut cover, lead to faster query executions. The results indicate that queries with a high workload may be executed faster on hash-based data placement strategies than on, e.g., minimal edge-cut covers. The analysis of the additional measurements indicates that vertical parallelization (i.e., a well-distributed workload) may be more important than horizontal containment (i.e., minimal data transport) for efficient query processing.
Moreover, to find a data placement strategy with a high vertical parallelization, the thesis tests the hypothesis that collocating small connected triple sets on the same compute node, while balancing the number of triples stored on the different compute nodes, leads to a high vertical parallelization. Specifically, the thesis proposes two such data placement strategies. The first strategy, called overpartitioned minimal edge-cut cover, was found in the literature; the second strategy is the newly developed molecule hash cover. The evaluation revealed a balanced query workload and a high horizontal containment, which led to a high vertical parallelization. As a result, these strategies showed a better query performance than the frequently used data placement strategies.
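A hash-based data placement of the kind evaluated here can be sketched in a few lines (an illustrative Python toy with invented example triples, not the stores benchmarked in the thesis):

```python
import hashlib

def place_triple(subject, predicate, obj, num_nodes):
    """Hash-based placement: assign an RDF triple to a compute node by
    hashing its subject, so all triples sharing a subject are collocated.
    md5 gives a digest that is stable across processes (unlike Python's
    built-in hash(), which is salted per interpreter run)."""
    digest = hashlib.md5(subject.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % num_nodes

# Hypothetical example triples, distributed over 4 compute nodes.
triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", '"34"'),
    ("ex:bob", "ex:knows", "ex:carol"),
]
placement = {t: place_triple(*t, num_nodes=4) for t in triples}
```

Because the hash ignores graph structure, such a placement tends to spread a query's work evenly over the nodes (high vertical parallelization) at the cost of more data transport than an edge-cut-based placement.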
Student misbehavior and its treatment is a major challenge for teachers and a threat to their well-being. Indeed, teachers are obliged to punish student misbehavior on a regular basis. Additionally, teachers’ punishment decisions are among the most frequently reported situations when it comes to students’ experiences of injustice in school. By implication, it is crucial to understand teachers’ treatment of student misbehavior vis-à-vis students’ perceptions. One key dimension of punishment behavior reflects its underlying motivation and goals. People generally intend to achieve three goals when punishing misbehavior, namely, retribution (i.e., evening out the harm caused), special prevention (i.e., preventing recidivism of the offender), and general prevention (i.e., preventing imitation of others). Importantly, people’s support of these punishment goals is subject to hierarchy and power, implying that teachers’ and students’ punishment goal preferences differ. In this dissertation, I present three research projects that shed first light on teachers’ punishment and its goals along with the students’ perception of classroom intervention strategies pursuing these goals. More specifically, I first examined students’ (i.e., children’s) general support of each of the three punishment goals sketched above. Furthermore, I applied an attributional approach to understand and study the goals teachers intend to achieve when punishing student misbehavior. Finally, I investigated teachers’ and students’ support of the punishment goals regarding the same student misbehavior to directly compare their views on these goals and reactions pursuing them. In sum, the findings show that students generally prefer retribution and special prevention to general prevention, whereas teachers prefer general prevention and special prevention to retribution. 
This ultimately translates into a "mismatch" of teachers and students in their preferences for specific punishment goals, and the findings suggest that this may indeed enhance students’ perception of injustice. Overall, the results of the present research program may be valuable for the development of classroom intervention strategies that may reduce rather than enhance conflicts in student-teacher-interactions.
Phosphates are increasingly used as so-called chemical binders for high-performance ceramics, especially in applications with intense thermal and corrosive stress. However, the sum of the reaction sequences during the binding mechanism following thermally induced curing, and thus the mode of action of phosphate binders, has in principle not been conclusively investigated in the literature. In this work, fundamental binding mechanisms were characterized on the basis of an extensive structural-analytical test program (solid-state NMR, X-ray diffraction analysis, SEM-EDX) applied to an exemplary phosphate-bonded Al₂O₃-MgAl₂O₄ high-temperature ceramic composition, including various inorganic phosphates. Mechanical and physicochemical property investigations (STA, dilatometry, DMA, cold bending strength) additionally revealed the influence of the phosphates used on the property development of the refractory ceramics with respect to setting behavior, bending strength, and thermal expansion, which was correlated with the structural changes. It was shown that, as a function of temperature (20 °C ≤ T ≤ 1500 °C), binding mechanisms involving phosphates are fundamentally composed of two parallel reaction sequences, whereby the phosphate phases developing within the ceramic body were evaluated quantitatively and qualitatively with respect to their binding effect. On the one hand, the formation of a strength-increasing binding network of aluminum phosphates, mostly of amorphous structure, was identified and characterized. This bond-promoting, three-dimensional aluminum phosphate network builds up homogeneously and continuously, guided by temperature, via multiple cross-linking reactions during the initialization and cross-linking phases.
On the other hand, these reaction sequences are complemented by parallel structural transformations of non-actively binding phosphate species such as magnesium, calcium, or zirconium phosphates, which merely represent thermal transformation reactions of the starting phosphates. Increasingly at T > 800 °C, the phosphate binding network undergoes solid-state reactions with MgAl₂O₄, forming and agglomerating magnesium orthophosphate sinter structures. The formation of these low-melting high-temperature phases leads to a partial breakdown of the binding network.
The application of artificial intelligence to digital games has become more and more successful in recent years. A drawback is that such systems need a lot of computing power to achieve good results; the more complex the game, the more computing power is needed. In this thesis, a strategy-learning system is implemented which is based on crowd-learned heuristics. The heuristics are given in a wiki. The research is done according to the Design Science Research Methodology. The implemented system is applied to the game Dominion. To this end, an ontology for Dominion is designed. A mapping language is defined and implemented in the system, which allows the mapping of information in the wiki to an ontology. Furthermore, metrics to rate the found strategies are defined. Using the system, users can enter a mapping for the information transfer and apply it. They can also select cards from Dominion, for which the system determines and rates strategies. Finally, the system is evaluated by Dominion players, who rate both the strategies found by the system and the defined metrics.
FinTech is deemed an underexplored phenomenon in both academic and practical environments. Between (1) “Sustainable FinTech” – the application of information technology as innovation in established financial services providers' business operations – and (2) “Disruptive FinTech” – the provision of financial products and services by non-incumbents, which in most cases are information technology entrepreneurs – the former receives more attention. In order to contribute to the Disruptive FinTech category, this thesis strives to examine an Entrepreneurial Strategy framework applied to technology players taking part in the Vietnamese financial market.
One of the greatest goals in computer graphics is the aesthetic representation of objects. In addition to conventional methods, another field focuses on non-photorealistic rendering. So-called example-based rendering is an area where users can transfer their art style to a pre-computed 3D rendering using a hand-painted template. Some algorithms already provide impressive results, but most of these procedures count as offline methods and are not able to produce results in real time. For this reason, this work presents a method that satisfies this condition. In addition, the influence of the run-time reduction on the results is investigated. Requirements are defined against which the method and its results are examined. Other methods in this field are referenced and compared with regard to their results.
Thousands of chemicals from daily use are discharged from civilization into the water cycle via different pathways. Ingredients of personal care products, detergents, pharmaceuticals, pesticides, and industrial chemicals thus find their way into aquatic ecosystems and may cause adverse impacts on the ecology. Pharmaceuticals, for instance, represent a central group of anthropogenic chemicals because of their designed potency to interfere with physiological functions in organisms. Ecotoxicological effects from pharmaceutical burden have been verified in the past. Therapeutic groups with pronounced endocrine-disrupting potentials, such as steroid hormones, gain increasing focus in environmental research, as it was reported that they cause endocrine disruption in aquatic organisms even when exposed at environmentally relevant concentrations. This thesis covers the comprehensive investigation of the occurrence of corticosteroids and progestogens in wastewater treatment plant (WWTP) effluents and surface waters as well as the elucidation of the fate and biodegradability of these steroid families during activated sludge treatment. For the first goal of the thesis, a robust and highly sensitive analytical method based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed in order to simultaneously determine the occurrence of around 60 mineralocorticoids, glucocorticoids, and progestogens in the aquatic environment. A special focus was set on the compound selection due to the diversity of marketed synthetic steroids. Several analytical challenges were addressed through individual approaches regarding sensitivity enhancement and compound stability. These results may be important for further research in the environmental analysis of steroid hormones.
Reliable and low quantification limits are the prerequisite for determining corticosteroids and progestogens at relevant concentrations, given their low consumption volumes and simultaneously low effect-based trigger values. Achieved quantification limits for all target analytes ranged between 0.02 ng/L and 0.5 ng/L in surface water and between 0.05 ng/L and 5 ng/L in WWTP effluents. This sensitivity enabled the detection of three mineralocorticoids, 23 glucocorticoids, and 10 progestogens within the sampling campaign across Germany. Many of them were detected for the first time in the environment, particularly in Germany and the EU. To the best of our knowledge, this in-depth steroid screening provided a good overview of the burden from single steroids and allowed the predominant steroids of each steroid type to be identified for the first time. The frequent detection of highly potent synthetic steroids (e.g. triamcinolone acetonide, clobetasol propionate, betamethasone valerate, dienogest, cyproterone acetate) highlighted insufficient removal during conventional wastewater treatment and indicated the need for regulation to control their emission, since the steroid concentrations were found to be above the reported effect-based trigger values for biota. Overall, the study revealed reliable environmental data on poorly or even not yet analyzed steroids. The results complement the existing knowledge in this field but also provide new information which can be used particularly for compound prioritization in ecotoxicological research and environmental analysis. Based on the data obtained from the monitoring campaign, incubation experiments were conducted to enable the comparison of the biodegradability and transformation processes in activated sludge treatment for structurally related steroids under aerobic and standardized experimental conditions. The compounds were carefully selected to cover the manifold structural moieties of commonly used glucocorticoids, including non-halogenated and halogenated steroids, their mono- and diesters, and several acetonide-type steroids. This approach allowed for a structure-based interpretation of the results. The obtained biodegradation rate constants suggested large variations in biodegradability (half-lives ranged from < 0.5 h to > 14 d). An increasing stability was identified in the order from non-halogenated steroids (e.g. hydrocortisone), over 9α-halogenated steroids (e.g. betamethasone), to C17-monoesters (e.g. betamethasone 17-valerate, clobetasol propionate), and finally to acetonides (e.g. triamcinolone acetonide), suggesting a strong relationship between biodegradability and glucocorticoid structure.
Some explanations for this behavior were obtained by identifying the transformation products (TPs) and elucidating individual transformation pathways. For the first time, the results linked the likelihood of transformation reactions to the chemical steroid structure. Among the identified TPs, the carboxylates (e.g. TPs of fluticasone propionate and triamcinolone acetonide) showed persistence in the subsequent incubation experiments. The newly identified TPs were furthermore frequently detected in the effluents of full-scale wastewater treatment plants. These findings emphasized that i) the lab-scale degradation experiments are transferable to the real world and that ii) insufficient removal may cause adverse effects in the aquatic environment due to the ability of the precursor steroids and TPs to interact with the endocrine system in biota. For the last goal, the conceptual study for glucocorticoids was applied to progestogens.
Here, two sub-types of the steroid family frequently used for hormonal contraception were selected (17α-hydroxyprogesterone and 19-nortestosterone type). The progestogens showed a fast and complete degradation within six hours, emphasizing their pronounced biodegradability. However, cyproterone acetate and dienogest were found to be more recalcitrant in activated sludge treatment. This was consistent with their ubiquitous occurrence during the previous monitoring campaign. The elucidation of TPs again revealed some crucial information regarding the observed behavior and furthermore highlighted the formation of hazardous TPs. It was shown that 19-nortestosterone-type steroids are able to undergo aromatization at ring A in contact with activated sludge, leading to the formation of estrogen-like TPs with a phenolic moiety at ring A. In the case of norethisterone, the formation of 17α-ethinylestradiol was confirmed, a well-known potent synthetic estrogen with elevated ecotoxicological potency. Thus, the results indicated for the very first time a previously unknown source of estrogenic compounds, particularly of 17α-ethinylestradiol. In conclusion, depending on their chemical structure, some steroids were found to be very stable in activated sludge treatment, others degrade well, and yet others degrade but predominantly to active TPs. Fluorinated acetonide steroids such as triamcinolone acetonide and fluocinolone acetonide are poorly biodegradable, which is reflected in the high concentrations detected ubiquitously in WWTP effluents. Endogenous steroids and their most closely related synthetic ones, such as hydrocortisone, prednisolone, or 17α-hydroxyprogesterone, are readily biodegradable. Regardless of their high influent concentrations, they are almost completely removed in conventional WWTPs.
Steroids between these extremes were found to form elevated quantities of TPs which are partially still active, which is particularly the case for betamethasone, fluticasone propionate, cyproterone acetate, and dienogest. The thesis illustrates the need for an extensive evaluation of the environmental risks and shows that corticosteroids and progestogens merit more attention in environmental regulation and research than is currently the case.
This thesis presents a system for creating and displaying stereoscopic video panoramas. In addition to the theoretical foundations, the structure and functioning of this system are explained.
For this purpose, special cameras are used that can record panoramas
and are synchronized for playback. Subsequently, a renderer is implemented which can display the panoramas stereoscopically using a virtual reality headset. To this end, separate recordings are made for the two eyes and played back separately. Finally, the resulting video panorama is compared with a panorama from an existing system.
A network like the Internet is a set of subnets connected to each other by routers. A router is a computer containing multiple network devices, allowing it to be connected to multiple subnets; it is thus able to forward packets from one subnet to another. A network can be represented as a graph with its routers as vertices and subnets as edges. This graph is called the topology of the network. A packet sent to a host outside the sender's own subnet is usually sent first to the so-called default router. This router (like any router) contains a table (the so-called forwarding table) with an entry for every subnet; for each subnet, the table names the router through which that subnet can best be reached. The packet is thus forwarded from router to router until it reaches the destination subnet, and along the way every router looks up the best next router in its forwarding table. A routing protocol takes care of the automatic exchange of information between the routers to build the forwarding tables and keep them up to date. If the forwarding tables of all routers are up to date, the network is called convergent. The time needed to build or update the routing tables is called the convergence time. The RIP routing protocol is a well-known and well-explored distance-vector protocol, but there are only few examinations of its convergence properties (e.g., the time needed to converge or the traffic volume produced by the routing messages). This work examines the relationship between the topology properties of a network and the convergence properties of the RIP routing protocol. To this end, over 5000 single measurements were performed and statistically analyzed. Mathematical formulas have been derived from the results that approximate the convergence properties of a network from its topology properties.
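The distance-vector exchange performed by RIP can be sketched as a synchronous Bellman-Ford iteration (a simplified Python model that counts the rounds of table exchange until nothing changes, i.e., until convergence; real RIP exchanges full tables periodically and asynchronously, with hop-count limits and timers omitted here):

```python
def converge(links):
    """Synchronous distance-vector routing on an undirected graph.
    `links` is a list of (router_a, router_b) edges, each with cost 1.
    Every router keeps a distance table to every destination and
    repeatedly merges its neighbors' tables; returns the final tables
    and the number of rounds until no table changed anymore."""
    routers = sorted({r for link in links for r in link})
    neighbors = {r: set() for r in routers}
    for a, b in links:
        neighbors[a].add(b)
        neighbors[b].add(a)
    INF = float("inf")
    dist = {r: {d: (0 if d == r else INF) for d in routers} for r in routers}
    rounds = 0
    changed = True
    while changed:
        changed = False
        rounds += 1
        new = {r: dict(dist[r]) for r in routers}
        for r in routers:
            for n in neighbors[r]:           # merge each neighbor's table
                for dest, d in dist[n].items():
                    if d + 1 < new[r][dest]:
                        new[r][dest] = d + 1
                        changed = True
        dist = new
    return dist, rounds

# A chain of four routers: the convergence time grows with the diameter.
tables, rounds = converge([("r1", "r2"), ("r2", "r3"), ("r3", "r4")])
```

In this model, information propagates one hop per round, so the number of rounds is governed by the network diameter, which is exactly the kind of topology-to-convergence relationship the measurements investigate.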
Statistical eco(-toxico)logy
(2017)
Freshwaters are of immense importance for human well-being.
Nevertheless, they are currently facing unprecedented levels of threat from habitat loss and degradation, overexploitation, invasive species and
pollution.
To prevent risks to aquatic ecosystems, chemical substances, like agricultural pesticides, have to pass environmental risk assessment (ERA) before entering the market.
Concurrently, large-scale environmental monitoring is used for surveillance of biological and chemical conditions in freshwaters.
This thesis examines statistical methods currently used in ERA.
Moreover, it presents a national-scale compilation of chemical monitoring data, an analysis of drivers and dynamics of chemical pollution in streams, and a large-scale risk assessment obtained by combining these data with results from ERA.
Additionally, software tools have been developed to integrate different datasets used in ERA.
The thesis starts with a brief introduction to ERA and environmental monitoring and gives an overview of the objectives of the thesis.
Chapter 2 addresses experimental setups and their statistical analyses using simulations.
The results show that current designs exhibit unacceptably low statistical power, that statistical methods chosen to fit the type of data provide higher power and that statistical practices in ERA need to be revised.
In chapter 3 we compiled all available pesticide monitoring data from Germany.
Hereby, we focused on small streams, similar to those considered in ERA and used threshold concentrations derived during ERA for a large-scale assessment of threats to freshwaters from pesticides.
This compilation resulted in the most comprehensive dataset on pesticide exposure currently available for Germany.
Using state-of-the-art statistical techniques that explicitly take the limits of quantification into account, we demonstrate that 25% of small streams are at threat from pesticides.
In particular neonicotinoid pesticides are responsible for these threats.
These are associated with agricultural intensity and can be detected even at low levels of agricultural use.
Moreover, our results indicated that current monitoring underestimates pesticide risks, because of a sampling decoupled from precipitation events.
Additionally, we provide a first large-scale study of annual pesticide exposure dynamics.
Chapters 4 and 5 describe software solutions to simplify and accelerate the integration of data from ERA, environmental monitoring and ecotoxicology that is indispensable for the development of landscape-level risk assessment.
Overall, this thesis contributes to the emerging discipline of statistical ecotoxicology and shows that pesticides pose a large-scale threat to small streams.
Environmental monitoring can provide a post-authorisation feedback to ERA.
However, to protect freshwater ecosystems ERA and environmental monitoring need to be further refined and we provide software solutions to utilise existing data for this purpose.
Although children gain early experience with aspects of the stability of structures when building with blocks, such as force distribution, steadiness, and balance and counterbalance, and although various studies highlight the cross-curricular learning potential of building blocks, children's understanding of the stability of block arrangements has hardly been researched so far. Therefore, it was investigated which conceptions six- to seven-year-old children show regarding the stability of block arrangements and which support measures stimulate a conceptual change. Studies 1 and 2 show that children at school entry have initial conceptions of the stability of block arrangements. Various analyses identified difficulty-determining features and solution-relevant dimensions. Children of about five years of age tend to orient themselves toward the geometric center and thus include the distance dimension, whereas just over half of the nine-year-olds already take the center of mass into account. Consequently, children's "building experience" up to the age of nine is not sufficient to fully develop the ability to judge block arrangements with regard to their stability. Therefore, Study 3 investigated to what extent a conceptual change regarding the stability of block arrangements can be stimulated by various support measures within a short learning unit. Various material and verbal support measures in the form of photos and targeted verbal instructions were systematically compared. The results demonstrate that even a short learning unit can stimulate a gain in knowledge in seven-year-olds, especially when they are supported by photos and targeted verbal instructions.
These results are a first step toward the development of science lessons on the stability of block arrangements.
The following report provides an overview of existing solutions for interaction in augmented reality. Based on three fundamental perspectives, different interaction concepts and implementations are presented, both from the technical and from the conceptual side. In addition to questions of visualization, different types of user interfaces are presented. The largest part is devoted to the three typical interaction tasks of selection and manipulation, navigation, and system control, and to the interaction techniques associated with them. The contents of this report are restricted to the use of interaction elements in augmented reality environments, in contrast to research on interaction techniques in virtual reality environments (fully immersive or desktop-oriented). Although many interaction techniques in AR were and are modeled on those from VR, new techniques and concepts have developed specifically in the field of AR. Consequently, VR techniques are considered only if they have been applied in AR applications or if their application appears sensible.
These proceedings contain 6 papers presented at the 1st Interdisciplinary Conference on Gamification and Entrepreneurship (StartPlay) 2022. The conference was held at the University of Koblenz-Landau in Koblenz, Germany, on August 05-06, 2022.
Game-Balance Simulation as a Tool for the Evaluation of Systematically Designed Gamification Strategies
Authors: David Kessing, Manuel Löwer

A Canvas Framework for Gameful Design Concepts
Authors: Max Höllen, Thomas Voit

Gamified Sustainable Entrepreneurship Education – A digital Educational Escape Room for economy classes in German High Schools
Authors: Jürgen Frentz, Marie Tuchscherer, Claudia Wiepcke

Playing Positive Psychology: The Development of a Positive-Psychological Board Game for Team Building
Authors: Leonie Kloep, Anna-Lena Helten, Corinna Peifer

Gamification Design for Goal Activation and Goal Striving in Digital Marketing and Innovation Management
Authors: Jenny V. Bittner, Christian Wellmann

Gamification of Assembly Routines: Planned User Study Evaluating a Level System with Customized Feedback Elements
Authors: Jessica Ulmer, Sebastian Braun, Jörg Wollert
E-government projects in particular have a complex stakeholder structure: governments, businesses, non-profits, and private stakeholders in different forms and roles are involved in or affected by such projects. Consequently, the success of ICT projects depends critically on integrating the different stakeholder groups into the design processes of e-government solutions. Stakeholder participation is therefore sought in this context, and initiatives such as open government and good governance drive this process forward. Although it is important to analyze the objectives, expectations, and power characteristics of project participants and stakeholders, the methods for determining stakeholder groups through a so-called stakeholder analysis are insufficiently developed.
The aim of this bachelor thesis is to take up stakeholder theory, stakeholder participation, and stakeholder analysis, transfer them to the administrative level, and examine their applicability. This is done by explaining and systematizing suitable procedures for classifying and mapping stakeholder groups, based on the well-known stakeholder theories of Freeman, Mitchell, and Rowley. In order to derive recommendations for future e-government projects, the application of these stakeholder theories is reviewed in two e-government projects with stakeholder involvement. The thesis also works out how stakeholder groups can be actively involved in the development processes of concrete e-government projects using Web 2.0. The role of Web 2.0 applications as an additional channel of communication and tool for participation is presented and evaluated.
The result is a guide that supports the successful participation of stakeholders in projects by analyzing the process of stakeholder identification, grouping, and prioritization, and by showing instructions for and benefits of using Web 2.0.
How were language and violence connected under National Socialism? Starting from the constitutive philosophy of language of the Canadian philosopher Charles Taylor, this dissertation examines the destructive power of language in the case of National Socialism, which intervened in language in an unprecedented way and thereby facilitated violence that remains without parallel to this day.
This work is an extension of the SpoGA server. It is based on the idea of giving small groups secure, time-limited access to a WLAN. This idea was extended by the possibility of planning entire events: whole groups can now be registered for events on a WLAN. In cooperation with the two other theses on this topic, a working prototype was developed. The SpoGA and E-IMS servers each operate independently, but together they form a highly functional tool for organizing, planning, and running an event with internet access for its guests.
The "Spontaneous Guest Access (SpoGA) & Extended Invitation Management System (E-IMS)", introduced over the course of three student research projects in total, is intended for the corporate environment. Using the system, guests of a company can be given access to a secured wireless company network, including internet access, with little effort. The system is also meant to simplify the planning and execution of events, conferences in particular. SpoGA allows a host to automatically activate the network devices of authorized guests for a given time period. Temporary access can thus be set up conveniently for specific devices without compromising network security. E-IMS aims to reduce the administrative overhead of organizing and running events as far as possible. Through its connection to SpoGA, the network access of conference participants can also be activated automatically. Aspects of participant and resource management are covered as well: automated dispatch of invitations, self-service booking of the equipment each participant needs, such as data projectors, and an overview of the incurred and running costs of the event. The subject of this work is the implementation of the second part of E-IMS, comprising event management and the billing of consumed resources. To give users mobile access to event data, the corresponding application was prototypically implemented for mobile phones.
In times when a notebook is as commonplace as a pocket calculator and is used as a work device or for communication and research on the internet, it is an enormous advantage for guests to get quick and uncomplicated access to the existing network infrastructure of a host company. On the one hand, this saves the administrators work, and for a smaller company without its own IT department it enables time-limited guest access without calling on third-party services. On the other hand, the costs of the otherwise necessary manual steps are saved, and the administrators can attend to their actual tasks. All of this works independently of working hours and vacation times, free of tedious formalities and without lead time, to name just a few of the advantages over manual activation. Another important point is not to compromise the security of the IT network infrastructure in the process. Spontaneous access should be time-limited (e.g., for the duration of an event) and tied to a person. Exactly this functionality is provided by the SpoGA system developed in this project.
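The abstracts above describe time-limited, device-bound guest access. The core bookkeeping behind such a scheme can be sketched as follows; all names are hypothetical and not taken from the actual SpoGA implementation:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the SpoGA idea: grant a guest device network
# access for a bounded time window, keyed by its MAC address.
class GuestAccess:
    def __init__(self):
        self._grants = {}  # MAC address -> expiry time

    def grant(self, mac, duration_hours, now=None):
        """Register a guest device for a limited time window."""
        now = now or datetime.now()
        self._grants[mac] = now + timedelta(hours=duration_hours)

    def is_allowed(self, mac, now=None):
        """Check whether a device's access window is still open."""
        now = now or datetime.now()
        expiry = self._grants.get(mac)
        return expiry is not None and now <= expiry

access = GuestAccess()
t0 = datetime(2024, 1, 1, 9, 0)
access.grant("aa:bb:cc:dd:ee:ff", duration_hours=8, now=t0)
print(access.is_allowed("aa:bb:cc:dd:ee:ff", now=t0 + timedelta(hours=4)))
print(access.is_allowed("aa:bb:cc:dd:ee:ff", now=t0 + timedelta(hours=9)))
```

In a real deployment, the expiry check would of course drive a firewall or access-point rule rather than a boolean in memory.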
Spektroskopie zweiatomiger Moleküle bei Einstrahlung ultrakurzer Laserpulse und ihre Anwendung (Spectroscopy of diatomic molecules under irradiation with ultrashort laser pulses and its application)
(2020)
Even with moderate pulse energies and average powers, ultrashort pulse lasers achieve very high peak powers, whose effect on matter differs fundamentally from that of other light sources. The high electric field strength not only enhances optically nonlinear effects such as second harmonic generation, but is also responsible for "cold" ablation, which leads to colder plasmas. An investigation of these two circumstances, with a view to simplifying pulse duration measurement and improving molecule formation in cooling plasmas, is the topic of this work. In this context, it is shown that with suitable process parameters, especially when the medium is purposefully defocused, ultrashort pulse lasers improve the spectroscopy of several emitting molecules such as aluminum oxide. Their detection therefore becomes possible even without the time-resolving spectrometers required in the literature. In addition, ultrashort pulses enable the spatially resolved crystallization of zinc oxide on zinc surfaces prepared by basic means. The resulting wurtzite crystals usually align their c-axis approximately perpendicular to the underlying surface and can be used to generate scattered second harmonics. Fiber-based femtosecond lasers with pulse energies in the microjoule range, pulse durations of a few hundred femtoseconds, and very low maintenance requirements have proven to be a powerful instrument for these purposes. For measuring the pulse duration, the high pulse energy also permits frequency doublers with much lower conversion efficiencies. Despite its nonuniform crystal axes, aluminum nitride, which generates a scattered second harmonic, has proven particularly suitable for optical autocorrelation. Compared to the commonly used monocrystalline beta-barium borate, the sintered aluminum nitride ceramic plates facilitate alignment, simplify material handling, and reduce the expense by two to three orders of magnitude.
The method developed in this work is therefore also suitable for confirmatory measurements of the pulse duration during the production of such laser systems, especially when the pulse energies involved are too high for beta-barium borate.
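For context, the intensity autocorrelation underlying such pulse-duration measurements follows the standard textbook relations (generic notation, not specific to this work):

```latex
A(\tau) \;\propto\; \int_{-\infty}^{\infty} I(t)\, I(t-\tau)\, \mathrm{d}t ,
\qquad
\tau_{p} \;=\; \frac{\tau_{\mathrm{AC}}}{\sqrt{2}} \quad \text{(Gaussian pulse shape)},
```

where $\tau_{\mathrm{AC}}$ is the FWHM of the autocorrelation trace; the deconvolution factor $\sqrt{2}$ holds for Gaussian pulses (it is about 1.54 for sech² pulses).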
The Bulletin Esskulturen emerged from the joint project "Esskulturen. Objekte, Praktiken, Semantiken" ("Food Cultures. Objects, Practices, Semantics"), funded by the Federal Ministry of Education and Research from September 2018 to August 2021 within the funding line "Sprache der Objekte" ("Language of Objects"). In each issue, an object from the Stiftung Bürgerliche Wohnkultur, Sammlung Alex Poignard (Landesmuseum Koblenz), serves as the starting point for an interdisciplinary exploration of various sociocultural questions around the topic of food.
Speise-Räume. Atmosphäre und Ambiente (Dining Spaces. Atmosphere and Ambience),
Bulletin Esskulturen, Volume 3, 2021, Folder VII, Fascicles 37-42
Contents of this issue:
Jörg Hahn, "Mein Lieblingsessplatz" ("My Favorite Place to Eat"). A teaching unit in a sixth-grade class at the Wilhelm-Remy-Gymnasium, Bendorf/Rhein
Mila Brill, Arcadia. Atmosphere and materiality in gastronomic in-between spaces
Maria Mothes, Ambiguous Commensality. The shared table as an ambiguous space in Mohsin Hamid's The Reluctant Fundamentalist
Manja Wilkens, Dining rooms in the nineteenth century
Angela Kaupp, Refectory: the dining room of a religious community
Michaela Bauks, Where do we enjoy our meals? A short history of the dining room
Imprint
Part-of-speech tagging is the process of assigning words with similar grammatical properties to a part of speech (PoS). For the English language, PoS-tagging algorithms generally reach very high accuracy. This thesis tests against these accuracies as a qualitative measure of the classification capabilities of a recently developed neural network model, the graph convolutional network (GCN). The novelty proposed in this thesis is to translate a corpus into a graph that serves as direct input to the GCN. The experiments serve as a proof of concept with room for improvement.
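The corpus-to-graph idea can be sketched in a few lines of NumPy: words become nodes, co-occurrence becomes edges, and one GCN layer propagates node features over the normalized adjacency. This is an illustrative toy, not the thesis's actual pipeline, and uses the standard GCN propagation rule of Kipf and Welling:

```python
import numpy as np

# Toy corpus: each word becomes a node; edges link words that co-occur
# in a sentence (a simplified stand-in for the corpus-to-graph step).
sentences = [["the", "cat", "sleeps"], ["the", "dog", "barks"]]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
n = len(vocab)

A = np.zeros((n, n))
for s in sentences:
    for u in s:
        for v in s:
            if u != v:
                A[idx[u], idx[v]] = 1.0

# GCN propagation rule: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
A_hat = A + np.eye(n)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(0)
H = np.eye(n)                       # one-hot node features
W = rng.standard_normal((n, 4))     # untrained weights, 4 hidden units
H1 = np.maximum(A_norm @ H @ W, 0)  # one GCN layer with ReLU

print(H1.shape)
```

For actual PoS tagging, a final layer would map each node's hidden vector to tag logits and the weights would be trained against a tagged corpus.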
Wild bees are essential for the pollination of wild and cultivated plants. However, within the last decades, the increasing intensification of modern agriculture has led to a reduction, fragmentation, and degradation of the habitats wild bees need. The resulting loss of pollinators and their pollination poses an immense challenge to global food production. To support wild bees, the availability of flowering resources is essential. However, the flowering period of each resource is temporally limited and has different effects on pollinators and their pollination, depending on the time of flowering.
Therefore, to efficiently promote and manage wild bee pollinators in agricultural landscapes, we identified species-specific key floral resources of three selected wild bee species and their spatial and temporal availability (CHAPTERS 2, 3 & 4). We examined which habitat types predominantly provide these resources (CHAPTERS 3 & 4). We also investigated whether floral resource maps, based on the use of these key resources and their spatial and temporal availability, explain the abundance and development of the selected wild bees (CHAPTERS 3 & 4) and pollination (CHAPTER 5) better than habitat maps that only indirectly account for the availability of floral resources.
For each of the species studied, we were able to identify different key pollen sources, predominantly woody plants in the early season (April/May) and increasingly herbaceous plants in the later season (June/July; CHAPTERS 2, 3 & 4). The open woody semi-natural habitats of our agricultural landscapes provided about 75% of the floral resources for the buff-tailed bumblebees, 60% for the red mason bees, and 55% for the horned mason bees studied, although they accounted for only 3% of the area (CHAPTERS 3 & 4). In addition, fruit orchards provided about 35% of the floral resources for the horned mason bees on 4% of the landscape area (CHAPTER 3). We showed that both mason bee species benefited from the resource availability in the surrounding landscapes (CHAPTER 3). Yet this was not the case for the bumblebees (CHAPTER 4). Instead, the weight gain of their colonies, the number of developed queen cells and their colony survival were higher with increasing proximity to forests. The proximity to forests also had a positive effect on the mason bees studied (CHAPTER 3). In addition, the red mason bees benefited from herbaceous semi-natural habitats. The proportion of built-up areas had a negative effect on the horned mason bees, and the proportion of arable land on the red mason bees. The habitat maps explained horned mason bee abundances equally well as the floral resource maps, but red mason bee abundances were distinctly better explained by key floral resources. The pollination of field bean increased with higher proportions of early floral resources, whereas synchronous floral resources showed no measurable reduction in their pollination (CHAPTER 5). Habitat maps also explained field bean pollination better than floral resource maps. Here, pollination increased with increasing proportions of built-up areas in the landscapes and decreased with increasing proportions of arable land.
Our results highlight the importance of the spatio-temporal availability of certain key species as resource plants of wild bees in agricultural landscapes. They show that habitat maps are ahead of, or at least equal to, spatio-temporally resolved floral resource maps in predicting wild bee development and pollination. Nevertheless, floral resource maps allow us to draw more accurate conclusions between key floral resources and the organisms studied. The proximity to forest edges had a positive effect on each of the three wild bee species studied. However, besides pure food availability, other factors seem to co-determine the occurrence of wild bees in agricultural landscapes.
The research described in this thesis was designed to yield information on the impact of particle-bound pesticides on organisms living at the interface between sediment and water column in temporarily open/closed estuaries (TOCEs). It was hypothesized that natural variables such as salinity and temperature and anthropogenic stressors such as particle-bound pesticides contribute to the variability of the system. A multiple-lines-of-evidence approach is necessary due to the variability in sediment type, contaminant distribution, and spatial and temporal variability within the ecosystem, particularly within TOCEs. The first aim of this thesis was to identify which particle-bound pesticides are important to the contamination of the Lourens River estuary (Western Cape, South Africa), taking into account their environmental concentrations and physico-chemical and toxicological properties (exposure assessment). The second aim was to identify spatial and temporal variations in particle-bound pesticide contamination, natural environmental variables, and benthic community structure (effect assessment). The third aim was to test the hypothesis: "Does adaptation to fluctuating salinities lead to enhanced survival of the harpacticoid copepod Mesochra parva when exposed to a combination of particle-associated chlorpyrifos exposure and hypo-osmotic stress during a 96 h sediment toxicity test?" The last aim was to identify the driving environmental variables (including natural and anthropogenic stressors) in a "natural" (Rooiels River) compared to a "disturbed" (Lourens River) estuary and to identify whether and how these variables change the benthic community structure in both estuaries. The data produced in this research thus provide important information for understanding the impact of pesticides and their interaction with natural variables in a temporarily open estuary.
To summarize, this study indicated, through the multiple-lines-of-evidence approach, that the pesticides endosulfan and chlorpyrifos posed a risk to benthic organisms in a temporarily open estuary, particularly during the spring season. Furthermore, an important link between pesticide exposure/toxicity and salinity was identified, which has important implications for the management of temporarily open estuaries.
This study was conducted in Nyungwe National Park (NNP), a biodiversity-hotspot mountain rainforest of high conservation importance in Central Africa, about whose insect communities little is known, including butterflies, which are good indicators of climate change and forest ecosystem health. The study aimed to provide baseline data on butterfly species diversity and distribution in NNP, for future use in monitoring climate-change-driven shifts and the effects of forest fragmentation on the biodiversity of Nyungwe. Butterflies were collected seasonally using fruit-baited traps and hand nets along elevational transects spanning from 1700 m up to 2950 m of altitude. Two hundred forty-two species, including 28 endemics of the Albertine Rift and 18 potential local climate-change indicators, were documented. Species richness and abundance declined with increasing elevation, and seasonal occurrence was higher during the dry season. This was the first study of the spatial and temporal distribution of butterflies in NNP; further studies could add more species and allow a deeper understanding of the ecology of Nyungwe's butterflies.
We consider variational discretization of three different optimal control problems.
The first is a parabolic optimal control problem governed by space-time measure controls. This problem has a nice sparsity structure, which motivates our aim to achieve maximal sparsity on the discrete level. Due to the measures on the right-hand side of the partial differential equation, we consider a very weak solution theory for the state equation and need an embedding into the continuous functions for the pairings to make sense. Furthermore, we employ Fenchel duality to formulate the predual problem and give results on the solution theory of both the predual and the primal problem. Later on, the duality is also helpful for the derivation of algorithms, since the predual problem can be differentiated twice, so that we can apply a semismooth Newton method. We then retrieve the optimal control by duality relations.
For the state discretization we use a Petrov-Galerkin method employing piecewise constant states and piecewise linear, continuous test functions in time. For the space discretization we choose piecewise linear, continuous functions. As a result, the controls are composed of Dirac measures in space-time, centered at points of the discrete space-time grid. We prove that the optimal discrete states and controls converge strongly in L^q and weakly-* in the space of measures M, respectively, to their smooth counterparts, where q ∈ (1, min{2, 1+2/d}] and d is the spatial dimension. The variational discrete version of the state equation with the above choice of spaces yields a Crank-Nicolson time stepping scheme with half a Rannacher smoothing step.
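For orientation, the Crank-Nicolson scheme mentioned above has, for a heat equation $u_t = \Delta u$ with time step $\Delta t$, the standard textbook form (generic notation, not the paper's exact scheme):

```latex
\frac{u^{n+1} - u^{n}}{\Delta t} \;=\; \frac{1}{2}\bigl(\Delta u^{n+1} + \Delta u^{n}\bigr),
\qquad n = 0, 1, \dots, N-1,
```

where a Rannacher startup replaces the initial step(s) by implicit Euler steps, $\bigl(u^{1} - u^{0}\bigr)/\Delta t = \Delta u^{1}$, to damp oscillations caused by irregular initial data.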
Furthermore, we compare our approach to a full discretization of the corresponding control problem, precisely a discontinuous Galerkin method for the state discretization, where the discrete controls are piecewise constant in time and Dirac measures in space. Numerical experiments highlight the sparsity features of our discrete approach and verify the convergence results.
The second problem we analyze is a parabolic optimal control problem governed by bounded initial measure controls. Here, the cost functional consists of a tracking term corresponding to the observation of the state at final time. Instead of a regularization term for the control in the cost functional, we consider a bound on the measure norm of the initial control. As in the first problem we observe a sparsity structure, but here the control resides only in space at initial time, so we focus on the space discretization to achieve maximal sparsity of the control. Again, due to the initial measure in the partial differential equation, we rely on a very weak solution theory of the state equation.
We employ a dG(0) approximation of the state equation, i.e., we choose functions that are piecewise linear and continuous in space and piecewise constant in time for our ansatz and test space. Then, the variational discretization of the problem together with the optimality conditions induces maximal discrete sparsity of the initial control, i.e., Dirac measures in space. We present numerical experiments to illustrate our approach and investigate the sparsity structure.
As the third problem we choose an elliptic optimal control problem governed by functions of bounded variation (BV) in one space dimension. The cost functional consists of a tracking term for the state and a BV-seminorm in terms of the derivative of the control. We derive a sparsity structure for the derivative of the BV control. Additionally, we utilize the mixed formulation for the state equation.
A variational discretization approach with piecewise constant discretization of the state and piecewise linear and continuous discretization of the adjoint state yields that the derivative of the control is a sum of Dirac measures. Consequently the control is a piecewise constant function. Under a structural assumption we even get that the number of jumps of the control is finite. We prove error estimates for the variational discretization approach in combination with the mixed formulation of the state equation and confirm our findings in numerical experiments that display the convergence rate.
In summary we confirm the use of variational discretization for optimal control problems with measures that inherit a sparsity. We are able to preserve the sparsity on the discrete level without discretizing the control variable.
SPARQL can be employed to query RDF documents at the level of RDF triples. OWL-DL ontologies are a subset of RDF and are created using specific OWL-DL expressions. Querying such ontologies using only RDF triples can be complicated and a preventable source of error, depending on the query.
The SPARQL-DL Abstract Syntax (SPARQLAS) solves this problem by using the OWL Functional-Style Syntax, or a syntax similar to the Manchester Syntax, for formulating queries. SPARQLAS is a proper subset of SPARQL and uses only the constructs essential for obtaining the desired results on OWL-DL ontologies, keeping the writing effort as low as possible.
Because queries become shorter and are written in a syntax the user is familiar with, complex and nested queries on OWL-DL ontologies are easier to realize. The Eclipse plugin EMFText is utilized for generating the specific SPARQLAS syntax. For further processing of SPARQLAS, an ATL transformation to SPARQL is included as well. This transformation avoids developing a program to process SPARQLAS queries directly and supports embedding SPARQLAS into existing development environments.
Computers have fundamentally changed the methods used by social scientists over the past decades. It is no exaggeration to state that the wide use and growing user-friendliness of computers and statistical analysis systems helped empirical social research become mainstream as a subdiscipline. This made a new subdiscipline necessary, one mainly concerned with adapting and applying computer science methods for social research: social science informatics. This book originated from lecture courses given by the authors from the mid-1980s on, developed for computer science students with a minor in social science. Unlike many other introductions to univariate and multivariate data analysis, this book addresses advanced scholars and students who apply "classical" statistical methods, want an overview of the mathematical foundations of the methods they apply, and want to avoid the pitfalls of cookbook-style introductions when interpreting their results. The electronic document is a slightly revised version of the printed version of 1994, which has been out of stock for many years.