Real-time operating systems for mixed-criticality systems
must support different types of software, such as
real-time applications and general purpose applications,
and, at the same time, must provide strong spatial and
temporal isolation between independent software components.
Therefore, state-of-the-art real-time operating systems
focus mainly on predictability and bounded worst-case behavior.
However, general purpose operating systems such as Linux
often feature more efficient---but less deterministic---mechanisms
that significantly improve the average execution time.
This thesis addresses the combination of these two conflicting
requirements and presents thread synchronization mechanisms
with efficient average-case behavior that do not sacrifice
predictability or worst-case behavior.
This thesis explores and evaluates the design space of fast paths
in the implementation of typical blocking synchronization
mechanisms, such as mutexes, condition variables, counting
semaphores, barriers, or message queues. The key technique here
is to avoid unnecessary system calls, as system calls have high
costs compared to other processor operations available in user
space, such as low-level atomic synchronization primitives.
In particular, the thesis explores futexes, the state-of-the-art
design for blocking synchronization mechanisms in Linux
that handles the uncontended case of thread synchronization
by using atomic operations in user space and calls into the
kernel only to suspend and wake up threads. The thesis also
proposes non-preemptive busy-waiting monitors that use an
efficient priority ceiling mechanism to prevent the lock-holder
preemption problem without using system calls, together with
corresponding low-level kernel primitives to construct
efficient wait and notify operations.
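To illustrate the futex fast path just described, here is a minimal sketch of the classic three-state futex mutex in C; it shows the general Linux technique, not code from the thesis.

```c
/* A minimal three-state futex mutex in the style described above: state
 * 0 = unlocked, 1 = locked, 2 = locked with possible waiters. This is an
 * illustration of the general Linux technique, not code from the thesis. */
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

typedef struct { atomic_int state; } futex_mutex;

static long sys_futex(atomic_int *addr, int op, int val) {
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

void futex_mutex_lock(futex_mutex *m) {
    int c = 0;
    /* Fast path: uncontended acquire with one atomic CAS, no system call. */
    if (atomic_compare_exchange_strong(&m->state, &c, 1))
        return;
    /* Slow path: advertise contention and sleep in the kernel until woken. */
    if (c != 2)
        c = atomic_exchange(&m->state, 2);
    while (c != 0) {
        sys_futex(&m->state, FUTEX_WAIT, 2);  /* blocks only if state is still 2 */
        c = atomic_exchange(&m->state, 2);
    }
}

void futex_mutex_unlock(futex_mutex *m) {
    /* Fast path: with no waiters, one atomic exchange releases the lock. */
    if (atomic_exchange(&m->state, 0) == 2)
        sys_futex(&m->state, FUTEX_WAKE, 1);  /* wake one sleeping thread */
}
```

Note how both lock and unlock complete with a single atomic operation in the uncontended case; the kernel is entered only to suspend and wake threads.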
The evaluation shows that the presented approaches
achieve average-case performance comparable
to state-of-the-art approaches in Linux.
At the same time, a worst-case timing analysis shows
that the approaches incur only constant or bounded temporal
overhead at the operating system kernel level.
Exploiting such fast paths is worthwhile
when designing systems that must not only fulfill
real-time requirements, but also serve best-effort workloads.
Leaf litter breakdown is a fundamental process in aquatic ecosystems, being mainly mediated by decomposer-detritivore systems that are composed of microbial decomposers and leaf-shredding, detritivorous invertebrates. The ecological integrity of these systems can, however, be disturbed, amongst others, by chemical stressors. Fungicides might pose a particular risk as they can have negative effects on the involved microbial decomposers but may also affect shredders via both waterborne toxicity and their diet; the latter by toxic effects due to dietary exposure as a result of fungicides’ accumulation on leaf material and by negatively affecting fungal leaf decomposers, on which shredders’ nutrition heavily relies. The primary aim of this thesis was therefore to provide an in-depth assessment of the ecotoxicological implications of fungicides in a model decomposer-detritivore system using a tiered experimental approach to investigate (1) waterborne toxicity in a model shredder, i.e., Gammarus fossarum, (2) structural and functional implications in leaf-associated microbial communities, and (3) the relative importance of waterborne and diet-related effects for the model shredder.
In addition, this thesis tackled knowledge gaps related to potential differences in the ecotoxicological impact of inorganic fungicides (which are also authorized for organic farming in large parts of the world) and organic fungicides, the mixture toxicity of these substances, the field relevance of their effects, and the appropriateness of current environmental risk assessment (ERA).
In the course of this thesis, major differences in the effects of inorganic and organic fungicides on the model decomposer-detritivore system were uncovered; e.g., the palatability of leaves for G. fossarum was increased by inorganic fungicides but deteriorated by organic substances. Furthermore, non-additive action of fungicides was observed, rendering mixture effects of these substances hard to predict. While the relative importance of the waterborne and diet-related effect pathways for the model shredder seems to depend on the fungicide group and the exposure concentration, it was demonstrated that, owing to additive action, neither pathway may be ignored. Finally, it was shown that effects can be expected at field-relevant fungicide levels and that current ERA may provide insufficient protection for decomposer-detritivore systems. To safeguard aquatic ecosystem functioning, this thesis thus recommends including leaf-associated microbial communities and long-term feeding studies using detritus feeders in ERA testing schemes, and identifies several knowledge gaps that must be filled to develop further reasonable refinements of fungicide ERA.
Based on dual process models of information processing, the present research addressed how explicit disgust sensitivity is re-adapted to implicit disgust sensitivity via self-perception of automatic behavioral cues. Contrary to preceding studies (Hofmann, Gschwendner, & Schmitt, 2009), which concluded that there was a "blind spot" for self- but not for observer perception of automatic behavioral cues, the present research found a re-adaptation process for both self-perceivers and observers. In Study 1 (N = 75), the predictive validity of an indirect disgust sensitivity measure was tested with a double-dissociation strategy. Study 2 (N = 117) reinvestigated the hypothesis that self-perception of automatic behavioral cues, predicted by an indirect disgust sensitivity measure, leads to a re-adaptation of explicit disgust sensitivity measures. Departing from the approach of Hofmann et al. (2009), the self-perception procedure was modified by (a) feeding back the behavior several times while a small number of cues had to be rated for each feedback condition, (b) using disgust sensitivity as a domain with clearly unequivocal cues of automatic behavior (facial expression, body movements) and describing these cues unambiguously, and (c) using a specific explicit disgust sensitivity measure in addition to a general explicit disgust sensitivity measure. In Study 3 (N = 130), the findings of Study 2 were replicated, and display rules and need for closure were additionally investigated as moderators of predictive validity and cue utilization. The moderator effects suggest that both displaying a disgusted facial expression and self-perception of one's own disgusted facial expression are subject to a self-serving bias, indicating that facial expression may not be an automatic behavior. Practical implications and implications for future research are discussed.
Replication of a multi-agent simulation environment to check integrity and consistency
(2012)
In this master's thesis, I first present a simulation used to study the behavior of agents that try to survive in a generated world and can choose between several courses of action to do so. I then briefly review the theoretical foundations on which this work is based. The main part of my thesis is my replication of a simulation that Andreas König implemented in Java in 2000 [Kö2000]. I present his work in strongly condensed form and then describe my own development.
In the final part of the thesis, I compare the results of my simulation with those of Andreas König and discuss the tools used (Java and NetLogo). I conclude with a summary of the project, reporting what could be implemented, what did not work, and why.
Standards are widely used in the computer science and IT industry. Different organizations like the International Organization for Standardization (ISO) are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement. They allow the packaging of evidence needed to prove the fulfillment of criteria related to a public tender. The work first examines the requirements that are common to both specifications. This introduces a new view on the requirements at a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of a package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which serves as the basis when a concrete test system is created, is modified.
Finally, the author comes to a positive conclusion. Common requirements have been identified and captured. The involved systems have been adapted to allow the capturing of further types of requirements that were not supported before. Moreover, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
Planning routes for trucks with a trailer is a complex procedure. In order to simplify this process, a route is segmented into elementary components which represent basic motions of the considered vehicle. These elementary components are called maneuvers and are composed of two parts. First, paths are constructed for certain reference points. Second, the vehicle is enclosed by a corridor during the execution of a maneuver. The paths of the vehicle have to take driveability into consideration: they must respect the kinematic constraints of the vehicle. The maneuver corridor can be used as a basis to guarantee collision-free motion planning, since no part of the vehicle leaves the corridor during the maneuver. There are different types of maneuvers; currently, the bending maneuver, the cusp maneuver, and the straight-line maneuver can be distinguished. In addition, a maneuver can be created with two different construction methods, the conventional and the iterative method.
In this thesis, a data structure to construct a maneuver is designed and implemented. The data structure is integrated into an already existing tool. The user can interact with the software to adjust various parameters of a maneuver. Afterwards the maneuver is generated based on these parameters. This also includes a visualization within the software, which can plot the parts of a maneuver. The visualization can be exported to an image file.
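A possible C layout of such a maneuver record is sketched below; all type names and fields are illustrative assumptions, not the thesis's actual data structure.

```c
/* A possible layout of a maneuver record; all names and fields here are
 * illustrative assumptions, not the actual data structure of the thesis. */
#include <stddef.h>

typedef enum { MANEUVER_BENDING, MANEUVER_CUSP, MANEUVER_STRAIGHT } maneuver_type;
typedef enum { CONSTRUCT_CONVENTIONAL, CONSTRUCT_ITERATIVE } construction_method;

typedef struct { double x, y; } point2d;

typedef struct {
    maneuver_type       type;
    construction_method method;
    point2d  *reference_path;  /* driveable path of a reference point */
    size_t    path_len;
    point2d  *corridor;        /* polygon the vehicle never leaves */
    size_t    corridor_len;
} maneuver;
```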
Web programming is a huge field of different technologies and concepts. Each technology implements a web-application requirement like content generation or client-server communication. Different technologies within one application are organized by concepts, for example architectural patterns. The thesis describes an approach for creating a taxonomy of these web-programming components using the free encyclopaedia Wikipedia. Our 101companies project uses implementations to identify and classify the different technology sets and concepts behind a web-application framework. These classifications can be used to create taxonomies and ontologies within the project. The thesis also describes how we prioritize useful web-application frameworks with the help of Wikipedia. Finally, the created implementations concerning web programming are documented.
Wikipedia is the biggest free online encyclopaedia and can be expanded by anyone. A social network exists among the users who create content on a specific Wikipedia language edition. In this social network, users are categorised into different roles: normal users, administrators, and functional bots. Within the network, a user can post reviews, suggestions, or simple messages to the "talk page" of another user. Each language edition in the Wikipedia domain has this type of social network.
In this thesis, characteristics of the three different roles are analysed in order to learn how they function in one Wikipedia language network and to apply them to another Wikipedia network to identify bots. Timestamps of created posts are analysed, and noticeable characteristics concerning continuous messages, message rates, and irregular user behaviour are discovered. Through this process we show that the roles differ with respect to these characteristics.
The present thesis deals with the realization of a stepper motor driver on an 8-bit microcontroller from Atmel. The focus is on the development of a current control scheme that allows microstepping in addition to the basic modes of operation such as full- and half-step. For this purpose, a PI controller is derived using physical and control-engineering principles and implemented on the microcontroller. In this context, essential knowledge for the practical implementation is discussed. In addition, the development of the hardware is documented, which is of great significance for the current measurement.
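As an illustration of such a controller, the following C sketch shows a discrete PI update step as it might run inside the current-control loop; the gains, limits, and anti-windup scheme are illustrative assumptions, not values from the thesis.

```c
/* A discrete PI update step as it might run inside the current-control loop;
 * gains, limits, and the simple anti-windup scheme are illustrative
 * assumptions, not values from the thesis. A real 8-bit implementation
 * would likely use fixed-point instead of float arithmetic. */
typedef struct {
    float kp, ki;           /* proportional and integral gain */
    float integral;         /* accumulated error */
    float out_min, out_max; /* actuator limits, e.g. PWM duty cycle range */
} pi_controller;

float pi_update(pi_controller *c, float setpoint, float measured, float dt) {
    float error = setpoint - measured;        /* e.g. coil current error */
    c->integral += error * dt;
    float out = c->kp * error + c->ki * c->integral;
    /* Clamp the output and undo the last integration step (anti-windup). */
    if (out > c->out_max) { out = c->out_max; c->integral -= error * dt; }
    if (out < c->out_min) { out = c->out_min; c->integral -= error * dt; }
    return out;
}
```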
Placing questions before or after the material constitutes different reading situations. To adapt to these situations, readers may apply appropriate reading strategies. Reading strategies induced by question location have been intensively explored in the context of text comprehension. (1) However, there is still not enough knowledge about whether text plays the same role as pictures when readers apply different reading strategies. To answer this research question, three reading strategies are experimentally manipulated by displaying a question before or after the blended text and picture materials: (a) unguided processing of text and pictures without a question; (b) information gathering to answer a question after prior experience with text and pictures; (c) comprehending text and pictures to solve a question with prior knowledge of the question. (2) Besides, it is arguable whether readers prefer text or pictures when the given questions differ in difficulty. (3) Furthermore, it is still uncertain whether students from the higher school tier (Gymnasium) place more emphasis on text or on pictures than students from the lower school tier (Realschule). (4) Finally, it is rarely examined whether higher graders are better able than lower graders to apply reading strategies in text processing and picture processing.
Two experiments were undertaken to investigate the use of text and pictures from the perspectives of task orientation, question difficulty, school tier, and grade. For a 2x2(x2x2x2) mixed design adopting the eye-tracking method, participants were recruited from grade 5 (N = 72) and grade 8 (N = 72). In Experiment 1, thirty-six 5th graders were recruited from the higher tier (Gymnasium) and thirty-six 5th graders from the lower tier (Realschule). In Experiment 2, thirty-six 8th graders were recruited from the higher tier and thirty-six from the lower tier. They were asked to comprehend materials combining text and pictures and to answer the questions. A Tobii XL60 eye tracker recorded their eye movements and their answers to the questions. Eye-tracking indicators such as accumulated fixation duration, time to first fixation, and transitions between different Areas of Interest were analyzed and reported. The results reveal that students process text differently from pictures when they follow different reading strategies. (1) Consistent with Hypothesis 1, students mainly use text to construct their mental model in unguided spontaneous processing of text and pictures. They seem to rely mainly on the pictures as external representations when trying to answer questions after prior experience with the material. They emphasize both text and pictures when questions are presented before the material. (2) Inconsistent with Hypothesis 2, students attend more to both text and pictures as question difficulty increases; however, the increase in focus on pictures is larger than on text when the presented question is difficult. (3) Contrary to Hypothesis 3, the current study finds that higher tier students do not differ from lower tier students in text processing; instead, students from the higher tier attend more to pictures than students from the lower tier. (4) Contrary to Hypothesis 4, 8th graders outperform 5th graders mainly in text processing; only a subtle difference is found between 5th and 8th graders in picture processing.
To sum up, text processing differs from picture processing when different reading strategies are applied. In line with the Integrative Model of Text and Picture Comprehension by Schnotz (2014), text is likely to play the major part in guiding the processing of meaning or general reading, whereas pictures serve as external representations for information retrieval or selective reading. When the question is difficult, pictures are emphasized because of their advantages in visualizing the internal structure of information. Compared to lower tier students (poorer problem solvers), higher tier students (good problem solvers) are more capable of comprehending pictures rather than text. Eighth graders are more efficient than 5th graders in text processing rather than picture processing. This also suggests that, in designing school curricula, more attention should be paid to students' competence in picture comprehension and text-picture integration.
With the appearance of modern virtual reality (VR) headsets on the consumer market, there has been the biggest boom in the history of VR technology. Naturally, this was accompanied by an increasing focus on the problems of current VR hardware. Especially the control in VR has always been a complex topic.
One possible solution is the Leap Motion, a hand-tracking device initially developed for desktop use; since its last major software update it can be attached to standard VR headsets. This device allows very precise tracking of the user's hands and fingers and their replication in the virtual world.
The aim of this work is to design virtual user interfaces that can be operated with the Leap Motion to provide a natural method of interaction between the user and the VR environment. After that, subject tests are performed to evaluate their performance and compare them to traditional VR controllers.
The present thesis gives an overview of the general conditions for programming graphics cards. For this purpose, the most important Application Programming Interfaces (APIs) available on the market are presented and compared. Subsequently, two standard algorithms from the field of data processing, prefix sum and radix sort, are presented and examined with regard to their implementation using parallel programming on the GPU. Both algorithms were implemented using the OpenGL API and OpenGL compute shaders. Finally, the execution times of the two algorithms were compared.
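To clarify the roles of the two algorithms, the following sequential C reference sketch shows an exclusive prefix sum and one radix-sort pass that uses such a scan to turn per-bucket counts into scatter offsets; the thesis's actual implementations parallelize these steps with OpenGL compute shaders, which this sketch does not attempt.

```c
/* CPU reference of the two algorithms: an exclusive prefix sum, and one
 * radix-sort pass that uses such a scan to compute bucket offsets.
 * A sequential sketch for illustration only. */
#include <stddef.h>

void exclusive_prefix_sum(const unsigned *in, unsigned *out, size_t n) {
    unsigned running = 0;
    for (size_t i = 0; i < n; ++i) {
        out[i] = running;      /* each element gets the sum of all before it */
        running += in[i];
    }
}

/* One 8-bit digit pass of LSD radix sort: histogram, scan, scatter. */
void radix_pass(const unsigned *in, unsigned *out, size_t n, unsigned shift) {
    size_t count[256] = {0};
    for (size_t i = 0; i < n; ++i)
        count[(in[i] >> shift) & 0xFFu]++;
    size_t offset[256];            /* exclusive prefix sum of the histogram */
    size_t running = 0;
    for (int b = 0; b < 256; ++b) {
        offset[b] = running;
        running += count[b];
    }
    for (size_t i = 0; i < n; ++i) /* stable scatter into the output buffer */
        out[offset[(in[i] >> shift) & 0xFFu]++] = in[i];
}
```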
To meet the growing demands in the automotive industry, car manufacturers constantly reduce their depth of production and shift value-adding processes to suppliers. This requires companies to work together more closely and promotes the creation of complex logistics networks. To meet the requirements for information exchange, a consortium of automobile manufacturers launched the project RFID-based Automotive Network (RAN) in 2009. The initiative aims at creating a standardized architecture for efficient material flow management along the entire supply chain. The core component of this architecture is the Informationbroker, an information unit that automatically communicates data captured via Auto-ID technology to supply chain participants. In cooperation with IBS AG, a software company and consortium partner in the project, the thesis focuses on the exchange of goods-related data.
At first, theoretical foundations are presented by describing the characteristics of a supply chain and explaining standardization efforts and related processes. The chapter on the supply chain focuses on trends in the automotive industry to create a link to the project. The topic of standardization provides in-depth information on electronic data exchange standards in order to create a transition to the Informationbroker concept. In the analytical part, reference projects with similar problems are presented and set in relation to RAN. Based on project documents, system requirements are defined and models are created to illustrate the problem. Rich Pictures are used to describe the current and target states.
Based on these models, the flow of goods-related data between two companies is depicted and the role of the Informationbroker in the information exchange is clarified. The thesis aims at establishing an understanding of the challenges of the project and of how the proposed concepts of the initiative can lead to an optimization of an automotive supply chain.
The estimation of various social objects is necessary in different fields of social life, science, education, etc. Such estimation is usually used for forecasting, for evaluating different properties, and for other goals in complex man-machine systems. At present this estimation is possible by means of computer and mathematical simulation methods, which is connected with significant difficulties, such as:
- the time-distributed process of receiving information about the object;
- determination of a corresponding mathematical device and structure identification of the mathematical model;
- approximation of the mathematical model to real data, generalization and parametric identification of the mathematical model;
- identification of the structure of the links of the real social object.
The solution of these problems is impossible without a special intellectual information system which combines different processes and allows predicting the behaviour of such an object. However, most existing information systems address only one special problem. From this point of view, the development of a more general technology for designing such systems is very important. The technology of developing an intellectual information system for estimating and forecasting the professional ability of respondents in the sphere of education is a concrete example of such a technology. Job orientation is necessary and topical in present economic conditions. It helps to solve the problem of the expediency of investments in a certain sphere of education. Scientifically validated combined diagnostic methods of job orientation are necessary to carry out professional selection in higher education establishments. The requirements of a modern society are growing, and earlier developed techniques can no longer meet them sufficiently: they lack the ability to account for all necessary professional and personal characteristics. Therefore, it is necessary to use a system of various tests, and new methods of job orientation for entrants need to be developed. An information model of the job orientation process is necessary for this purpose. It would therefore be desirable to have an information system capable of giving recommendations concerning the choice of a trade on the basis of complex personal characteristics of entrants.
This thesis examines the influence of forest and forestry roads on runoff generation and soil erosion rates within a forested catchment in the Laacher See nature reserve. To this end, existing erosion and accumulation forms were mapped in the field, and erosion experiments were carried out using a small-scale rainfall simulator. Finally, the erosion potential was modelled on the basis of the simulation results.
The analysis of existing erosion and accumulation forms in the field pointed to soil erosion rates from road surfaces of between 27.3 and 93.5 t ha⁻¹ a⁻¹, which is of the same order of magnitude as erosion rates under intensive arable land use.
The simulation runs showed that persistent forest roads exhibit markedly altered infiltration behaviour. On natural forest soils, an average of 96% of the precipitation infiltrated. On forest roads, this share dropped to an average of 14% to 7%. The results on skid trails were particularly striking: a considerable influence of soil compaction by vehicle traffic could be demonstrated there, with the share of infiltrated precipitation falling to 31% in the wheel tracks while 76% still infiltrated between the tracks.
During the simulation runs, maximum sediment amounts of 446 g m⁻² were eroded, corresponding to a mean soil erosion rate of 4.96 g m⁻² min⁻¹. These high erosion rates were measured on persistent roads with little surfacing. Skid trails showed the lowest erosion values: at most 37 g m⁻² were eroded, equivalent to an erosion rate of 0.41 g m⁻² min⁻¹. On average, the eroded sediment amounts were 167 to 319 g m⁻² for roads and 17 g m⁻² for skid trails. Comparative measurements on forest sites, which showed a mean soil loss of about 5 g m⁻², confirmed an increased erodibility for every type of road construction.
The models were calibrated with the erosion rates measured in the field. For the study area, the ABAG / DIN 19708 results showed a mean annual soil erosion risk of 2.4 to 5.8 t ha⁻¹ a⁻¹ for persistent roads and 0.5 t ha⁻¹ a⁻¹ for skid trails. Compared with the mean of largely undisturbed forest areas in the study area of 0.1 t ha⁻¹ a⁻¹, this again indicates an increased erosion potential. The physically based modelling of the rainfall experiments with WEPP estimated the runoff behaviour satisfactorily, with deviations of at most -5% for persistent roads. In contrast, the runoff modelling on skid trails and the modelling of soil erosion during the rainfall experiments in general were still error-prone, which can be attributed to the comparatively shallow input data for a physically based model.
It was demonstrated that forest roads have a considerable influence on the water balance and on soil erosion. Precipitation retention is reduced and soil erosion processes are intensified. Poorly surfaced roads showed strongly increased soil loss, which can lead to ecological damage and can also impair trafficability. These consequences illustrate the relevance of studying runoff and soil erosion processes on forest and forestry roads. This thesis represents the first study to investigate runoff and soil erosion processes for forest road networks in Central Europe.
Innovation can help a forward-looking company rise very quickly; moreover, innovative products and services bring a company to a stage where it can win new customer segments and stay ahead of the competition. For their innovation process, companies can distinguish between open and closed innovation. Here, we focus on open innovation and how companies share their innovation processes for the benefit of the company. They use information and innovation systems to define their innovation process as well as to track innovative ideas and their stage of development. There are always pros and cons when it comes to open innovation processes in an organization. We look at examples from the business world to illustrate how good or bad an open innovation process can be for a company. In this bachelor thesis, we point out the essential criteria for an open innovation process and present companies that have used open innovation processes; for some companies this went successfully, for others it did not.
In this work, several network protocols are observed with the Wireshark protocol analyser, and working with them is described. Wireshark is an offshoot of "Ethereal", one of the most popular protocol analyzers: it captures network traffic, analyses it, and presents it clearly. VNUML is used to simulate the network. Since VNUML only runs under Linux, Linux is run in a virtual machine so that the work can also be done under Windows.
While the 1960s and 1970s still knew permanent education (Council of Europe), recurrent education (OECD) and lifelong education (UNESCO), over the past 20 years, lifelong learning has become the single emblem for reforms in (pre-) primary, higher and adult education systems and international debates on education. Both highly industrialized and less industrialized countries embrace the concept as a response to the most diverse economic, social and demographic challenges - in many cases motivated by international organizations (IOs).
Yet, literature on the nature of this influence, the diffusion of the concept among IOs and their understanding of it is scant and usually focuses on a small set of actors. Based on longitudinal data and a large set of education documents, the work identifies rapid diffusion of the concept across a heterogeneous, expansive and dynamic international field of 88 IOs in the period 1990-2013, which is difficult to explain with functionalist accounts.
Based on the premises of world polity theory, this paper argues that what diffuses resembles less the bundle of systemic reforms usually associated with the concept in the literature and more a surprisingly detailed model of a new actor: the lifelong learner.
Background: Somatoform symptoms are a prevalent and disabling condition in primary care, causing high medical care utilization. Objective: To compare the short- and long-term effects of cognitive-behavioral outpatient group therapy with a relaxation group and a waiting control group, with respect to physical symptoms, anxiety, depression, functional health, symptom-specific cognitions, and illness behavior. Methods: 135 subjects were treated and assessed in a randomized control group design. The manualized interventions comprised eight sessions. Results: The cognitive-behavioral group treatment led to lower levels of somatoform symptoms (SOMS-7) and enhanced mental health (SF-12). There were no differential effects between cognitive-behavioral therapy and relaxation treatment on any of the analysed variables. Conclusions: This brief cognitive-behavioral group therapy has beneficial effects on ambulatory patients with somatoform symptoms. To enhance effect sizes and facilitate differential effects, future studies should consider applying an increased therapy dosage.
This report examines the use of wireless sensor networks for temperature measurement in flowing waters. It shows to what extent such networks can serve as a link between remote sensing and stationary sensors. The requirements for sensor networks in water-monitoring applications are determined, and a prototype implementation of nodes for such a sensor network is presented. As a result of this work, the accuracy of temperature measurements with these sensor nodes is reported in comparison to a temperature logger used as a reference system. The measurements show that comparatively good accuracy can be achieved at low cost. By further developing the prototype presented here, a promising and inexpensive new measuring instrument becomes available for temperature monitoring in water bodies. It can measure water temperatures in deeper regions than remote sensing allows, while achieving a higher spatial resolution than stationary measuring stations. In addition, the literature review and the formulated criteria serve to narrow down the scope of application for future work.
Foliicolous lichens are one of the most abundant epiphytes in tropical rainforests and one of the few groups of organisms that characterize these forests. Tropical rainforests are increasingly affected by anthropogenic disturbance resulting in forest destruction and degradation. However, not much is known about the effects of anthropogenic disturbance on the diversity of foliicolous lichens. Understanding such effects is crucial for the development of appropriate conservation measures for these organisms. In this study, foliicolous lichen diversity was investigated in three tropical rainforests in East Africa. Godere Forest in Southwest Ethiopia is a transitional rainforest with a mixture of Afromontane and Guineo-Congolian species. The forest is secondary and has been affected by shifting cultivation, semi-forest coffee management and commercial coffee plantation. Budongo Forest in West Uganda is a Guineo-Congolian rainforest consisting of primary and secondary forests. Kakamega Forest in western Kenya is a transitional rainforest with a mixture of Guineo-Congolian and Afromontane species. The forest is a mosaic of near-primary forest, secondary forests of different seral stages, grasslands, plantations, and natural glades.
The purpose of this thesis is to explore the sentiment distributions of Wikipedia concepts.
We analyse the sentiment of the entire English Wikipedia corpus, which includes 5,669,867 articles and 1,906,375 talks, by using a lexicon-based method with four different lexicons.
Also, we explore the sentiment distributions from a time perspective using the sentiment scores obtained from our selected corpus. The results obtained have been compared not only between articles and talks but also among four lexicons: OL, MPQA, LIWC, and ANEW.
Our findings show that among the four lexicons, MPQA has the highest and ANEW the lowest sensitivity to emotional expressions. Wikipedia articles show more sentiment than talks according to OL, MPQA, and LIWC, whereas Wikipedia talks show more sentiment than articles according to ANEW. Moreover, sentiment exhibits a trend over time, and each lexicon has its own bias with respect to texts describing different things.
Moreover, our research provides three interactive widgets for visualising sentiment distributions for Wikipedia concepts regarding the time and geolocation attributes of concepts.
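As an illustration of the lexicon-based method used above, the following toy C routine sums the polarity of every token found in a small word list; the entries are stand-ins for lexicons such as OL or MPQA, and the tokenization is deliberately naive.

```c
/* A toy lexicon-based scorer: sum the polarity of every token found in a
 * small word list. The entries are illustrative stand-ins for lexicons such
 * as OL or MPQA; real lexicons contain thousands of entries. */
#include <string.h>
#include <ctype.h>

typedef struct { const char *word; int polarity; } lexicon_entry;

static const lexicon_entry lexicon[] = {
    {"good", +1}, {"great", +1}, {"bad", -1}, {"terrible", -1},
};

int sentiment_score(const char *text) {
    char buf[1024];
    strncpy(buf, text, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (char *p = buf; *p; ++p)                 /* case-fold the input */
        *p = (char)tolower((unsigned char)*p);
    int score = 0;
    const char *sep = " .,;:!?\n\t";
    for (char *tok = strtok(buf, sep); tok; tok = strtok(NULL, sep))
        for (size_t i = 0; i < sizeof lexicon / sizeof lexicon[0]; ++i)
            if (strcmp(tok, lexicon[i].word) == 0)
                score += lexicon[i].polarity;
    return score;   /* > 0 positive, < 0 negative, 0 neutral */
}
```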
Analysis of TV-based interaction for senior citizens: implementation and evaluation in the health sector
(2007)
Climate change is an existential threat to human survival, the social organization of society, and the stability of ecosystems. It is thereby profoundly frightening. In the face of threat, people often want to protect themselves instead of engaging in mitigating behaviors. When psychological resources are insufficient to cope, people often respond with different forms of denial. In this dissertation, I contribute original knowledge to the understanding of the multifaceted phenomenon of climate denial from a psychological perspective.
There are four major gaps in the literature on climate denial: First, the spectrum of climate denial as a self-protective response to the climate crisis has not received attention within psychology. Second, basic psychological need satisfaction, a fundamental indicator of human functioning and the ability to cope with threat, has not been investigated as a predictor of climate denial. Third, relations of the spectrum of climate denial to climate-relevant emotions, specifically climate anxiety, have not been examined empirically. Fourth, it has not been investigated how the spectrum of climate denial relates to established predictors of climate denial, namely right-wing ideological convictions and male gender. To address those gaps, I investigate what the spectrum of climate denial looks like in the German context and how it relates to basic psychological need satisfaction and frustration, pro-environmental behavior, climate anxiety, ideological conviction, and gender.
Five manuscripts reveal that climate denial exists on a spectrum in the German context, ranging from the distortion of facts (interpretive climate denial, specifically denial of personal and global outcome severity) to the denial of the implications of climate change (implicatory climate denial, specifically avoidance, denial of guilt, and rationalization of one's own involvement). Across analyses, low basic psychological need satisfaction predicted the spectrum of climate denial, which was negatively related to pro-environmental behavior. Climate denial was generally negatively related to climate anxiety, except for a positive association of avoidance and climate anxiety. Right-wing ideological conviction was the strongest predictor of climate denial across the spectrum. However, low need satisfaction and male gender were additional weaker predictors of implicatory climate denial.
These findings suggest that the spectrum of climate denial serves many psychological functions. Climate denial is possibly both a self-protective strategy to downregulate emotions and a means to protect oneself from loss of privilege. In short, it represents a barrier to climate action that may only be resolved once people have sufficient psychological resources to face the threat of climate change and cope with their underlying self-protective, emotional responses.
Remote rendering services offer the possibility to stream high-quality images to less powerful devices. Due to the transmission of data, the interactivity of applications suffers from delay. A method to reduce the delay of camera manipulation on the client is 3D warping; this method, however, causes artifacts. In this thesis, different approaches to remote rendering setups are presented, the artifacts and improvements of the warping method are described, and methods to reduce the artifacts are implemented and analyzed.
In recent years, e-government has concentrated on the administrative aspects of administrative modernization. As a next step, e-discourses will gain importance as an instrument of citizen-friendliness and a means of e-democracy/e-participation. With growing acceptance, such e-discourses will quickly reach a complexity that participants can no longer master, and many impressions that can be gained from face-to-face discussions will be lacking. This thesis therefore aims at the conception and prototypical implementation of an instrument (discourse meter) by which participants, and in particular the moderators of an e-discourse, can get an overview of the e-discourse at any time and thereby attain discourse awareness. Discourse awareness of the present informs about current activity in the e-discourse, and discourse awareness of the past about past activity, through which trends become visible. The focus of discourse awareness lies in a quantitative view of the activity in the e-discourse. From the model of the e-discourse developed in this thesis, the questions of discourse awareness result, whose concretization is the basis for the implementation of the discourse meter. The discourse sensors attached to the model of the e-discourse record the actions of the e-discourse, yielding discourse events, which the discourse meter represents in various forms of visualization. The discourse meter concept offers the moderators of the e-discourse discourse awareness of the present as monitoring and discourse awareness of the past as queries (quantitative analysis).
One research focus of the AG Rechnernetze is the Routing Information Protocol. The Routing Information Protocol with Metric-based Topology Investigation (RMTI, formerly RIP-MTI) is the group's own compatible extension of this routing protocol. To test this protocol and compare it with its predecessor, the virtualization software VNUML is used. In these virtualized networks, routers running the RMTI protocol are observed by means of the Zebra/Quagga routing software suite, and the behavior of the protocol is analyzed and evaluated in a wide range of simulation scenarios. To control and log such test runs centrally, the application RIP-XT (XTPeer) was created and has been continuously extended in subsequent diploma theses. It interfaces with the Zebra/Quagga routers and can control them; in addition, it collects and analyzes the routers' routing information. A user controls these processes via a GUI. To provide a visual overview of a network topology, the GUI also contains a topology view that represents the entire network through symbols, which also allow interaction with the simulation. The goal of this diploma thesis was to rework the existing topology view to adapt it to new requirements, and to embed further functional extensions into the RIP-XT GUI.
This thesis addresses the automated identification and localization of a time-varying number of objects in a stream of sensor data. The problem is challenging due to its combinatorial nature: if the number of objects is unknown, the number of possible object trajectories grows exponentially with the number of observations. Random finite sets are a relatively new theory that has been developed to derive principled and efficient approximations. It is based around set-valued random variables that contain an unknown number of elements which appear in arbitrary order and are themselves random. While extensively studied in theory, random finite sets have not yet become a leading paradigm in practical computer vision and robotics applications. This thesis explores random finite sets in visual tracking applications. The first method developed in this thesis combines set-valued recursive filtering with global optimization. The problem is approached in a min-cost flow network formulation, which has become a standard inference framework for multiple object tracking due to its efficiency and optimality. A main limitation of this formulation is its restriction to unary and pairwise cost terms, which makes the integration of higher-order motion models challenging. The method developed in this thesis approaches this limitation by application of a Probability Hypothesis Density filter. The Probability Hypothesis Density filter was the first practically implemented state estimator based on random finite sets. It circumvents the combinatorial nature of data association itself by propagating an object density measure that can be computed efficiently, without maintaining explicit trajectory hypotheses. In this work, the filter recursion is used to augment measurements with an additional hidden kinematic state to be used for the construction of more informed flow network cost terms, e.g., based on linear motion models. The method is evaluated on public benchmarks where a considerable improvement is achieved compared to network flow formulations that are based on static features alone, such as distance between detections and appearance similarity. A second part of this thesis focuses on the related task of detecting and tracking a single robot operator in crowded environments. Different from the conventional multiple object tracking scenario, the tracked individual can leave the scene and later reappear after a longer period of absence. Therefore, a re-identification component is required that picks up the track on reentrance. Based on random finite sets, the Bernoulli filter is an optimal Bayes filter that provides a natural representation for this type of problem. In this work, it is shown how the Bernoulli filter can be combined with a Probability Hypothesis Density filter to track operator and non-operators simultaneously. The method is evaluated on a publicly available multiple object tracking dataset as well as on custom sequences that are specific to the targeted application. Experiments show reliable tracking in crowded scenes and robust re-identification after long-term occlusion. Finally, a third part of this thesis focuses on appearance modeling as an essential aspect of any method that is applied to visual object tracking scenarios. To this end, a feature representation that is robust to pose variations and changing lighting conditions is learned offline, before the actual tracking application.
This thesis proposes a joint classification and metric learning objective where a deep convolutional neural network is trained to identify the individuals in the training set. At test time, the final classification layer can be stripped from the network and appearance similarity can be queried using cosine distance in representation space. This framework represents an alternative to direct metric learning objectives that have required sophisticated pair or triplet sampling strategies in the past. The method is evaluated on two large scale person re-identification datasets where competitive results are achieved overall. In particular, the proposed method better generalizes to the test set compared to a network trained with the well-established triplet loss.
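To illustrate the query step described above, the following C sketch computes the cosine similarity between two embedding vectors (cosine distance is one minus this value); the inputs and dimensionality are placeholders for the learned representation.

```c
/* Appearance similarity as the cosine of the angle between two embedding
 * vectors in representation space; cosine distance is 1 minus this value.
 * Inputs and dimensionality are placeholders, not values from the thesis. */
#include <math.h>

float cosine_similarity(const float *a, const float *b, int dim) {
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (int i = 0; i < dim; ++i) {
        dot += a[i] * b[i];   /* inner product */
        na  += a[i] * a[i];   /* squared norms for normalization */
        nb  += b[i] * b[i];
    }
    return dot / (sqrtf(na) * sqrtf(nb) + 1e-12f); /* 1.0 = same direction */
}
```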
There has been little research on out-of-school places of learning and their effectiveness in the context of ESD education measures. With the help of a multi-stage analysis, this study identifies out-of-school places of learning with reference to the ESD education concept in the Rhineland-Palatinate study area. To this end, qualitative literature analyses were first used to generate ESD criteria, which were operationalised as a methodological instrument in the form of an ESD checklist for out-of-school places of learning. The data obtained in this way provide the basis for the creation of a geographically oriented learning location database with ESD reference. A cartographic visualisation of the data reveals a spatial distribution pattern: there are districts and cities that are well supplied with ESD learning locations, but also veritable ESD learning location deserts where there is a need to catch up. Furthermore, there is an accumulation of ESD learning sites in areas close to forests.
A guideline-based explorative interview with two ESD experts provides additional insights into the question of how ESD has been implemented in the federal state of Rhineland-Palatinate, the extent to which there is a need for optimisation, and which continuing measures are being taken for ESD outside schools within the framework of Agenda 2030.
In addition, a quantitative questionnaire study was carried out with 1358 pupils at 30 out-of-school places of learning after participation in an educational measure, in which environmental awareness, attitudes towards environmental behaviour, and local learning were also considered. By including non-ESD learning locations, a comparative study on the effectiveness of ESD learning locations became possible. The statistical data evaluation leads to a variety of interesting results. Counter-intuitively, for instance, the type of learning location (ESD or non-ESD learning location) is not a significant predictor of the environmental awareness and environmental behaviour of the surveyed students, whereas the communication structures within educational measures at extracurricular learning locations, their multimediality and action orientation, and the duration of educational measures have a significant influence.
Keywords: extracurricular learning locations, education for sustainable development (ESD), ESD criteria, learning location landscape Rhineland-Palatinate, ESD learning locations, environmental awareness, environmental behaviour.
MP3 Player for Nintendo DS
(2007)
The goal of this work is to develop an MP3 player that allows user interaction comparable to common computer programs for music playback. Via a graphical interface, the user should be able to load MP3 files, play them, and organize them into playlists. Furthermore, it should be possible to store and edit metadata such as title, artist, genre, year of release, and much more as additional tags. The software should also read this information when playing a piece of music and display it clearly to the user; this is where most players fail because of their small displays. In addition, the MP3 player should offer rudimentary functions for real-time manipulation of music playback. The Nintendo DS game console serves as the hardware for playing the music files: thanks to its two displays, it offers sufficient display space for a graphical user interface, and one of the two displays doubles as a touchscreen that can be used for input.
Research in model-based object recognition and localization has a promising future; building recognition in particular offers a wide range of applications. Determining the position and orientation of the observer relative to a building is a central component of building recognition.
The core of this work is the development of a system for model-based pose estimation that operates independently of the application domain. Model-based pose estimation from photographs of buildings is chosen as the application domain. In preparation for pose estimation on building images, model-based recognition of dominoes and poker cards is realized. An application-independent control strategy interprets application-specific models in order to both localize them in the image and determine the pose with their help. Explicitly represented model knowledge is used so that image features can be assigned to model components. These correspondences make it possible to recover the camera pose from a single monocular image. The method is independent of the application case and can also handle models of other rigid objects, provided they conform to the defined model representation. Determining the pose of a model from a single image, which may contain noise and occlusions, requires a systematic comparison of the model with the image data. Quantitative and qualitative evaluations confirm the accuracy of the estimated building poses.
This work also presents a semi-automatic method for generating a building model. The building model used, which contains both semantic and geometric knowledge, satisfies the tasks of object recognition and pose estimation while still following existing standards, and is a prerequisite for the pose estimation method. The guiding principle of the model's representation is that it remains interpretable by humans. A semi-automatic approach was chosen because a fully automatic implementation can hardly achieve the required precision. The developed method achieves the precision necessary for pose estimation while reducing user interaction to a minimum. A qualitative evaluation confirms the precision achieved in generating the building model.
The automatic detection of the position and orientation of subsea cables and pipelines in camera images enables underwater vehicles to perform autonomous inspections. However, plants like algae growing on top of and near cables and pipelines complicate their visual detection: determining the position via border detection followed by line extraction often fails. Probabilistic approaches are superior to deterministic ones here: by modeling probabilities, it is possible to reason about the state of the system even if the number of extracted features is small. This work introduces a new tracking system for cable/pipeline following in image sequences that is based on particle filters. Extensive experiments on realistic underwater videos show the robustness and performance of this approach and demonstrate its advantages over previous work.
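As an illustration of the particle-filter principle behind such a tracker, the following C sketch shows one predict-weight-resample cycle; the two-parameter line state, the noise magnitudes, and the likelihood callback are illustrative assumptions, not the measurement model from the thesis.

```c
/* One predict-weight-resample cycle of a particle filter for line tracking.
 * A minimal sketch: state, noise levels, and likelihood are illustrative. */
#include <stdlib.h>
#include <math.h>

#define N 500  /* number of particles */

typedef struct { float offset, angle; } state_t;  /* cable hypothesis in image */

static state_t particles[N];
static float weights[N];

/* Gaussian noise via the Box-Muller transform. */
static float gauss_noise(float sigma) {
    float u1 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
    float u2 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
    return sigma * sqrtf(-2.0f * logf(u1)) * cosf(6.2831853f * u2);
}

void filter_step(float (*likelihood)(state_t)) {
    /* Predict: diffuse each hypothesis with a random-walk motion model. */
    for (int i = 0; i < N; ++i) {
        particles[i].offset += gauss_noise(2.0f);
        particles[i].angle  += gauss_noise(0.02f);
    }
    /* Weight: how well does each hypothesis explain the current image? */
    float sum = 0.0f;
    for (int i = 0; i < N; ++i) {
        weights[i] = likelihood(particles[i]);
        sum += weights[i];
    }
    /* Resample: draw new particles with probability proportional to weight. */
    state_t next[N];
    for (int i = 0; i < N; ++i) {
        float r = sum * (float)rand() / (float)RAND_MAX;
        int j = 0;
        while (j < N - 1 && (r -= weights[j]) > 0.0f)
            ++j;
        next[i] = particles[j];
    }
    for (int i = 0; i < N; ++i)
        particles[i] = next[i];
}
```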
Autonomous exhaustive exploration of unknown indoor environments with the mobile robot "Robbie"
(2007)
After disasters such as earthquakes, rescue robots help to find survivors in destroyed buildings. The robot is to autonomously accomplish the task of searching the environment as completely and efficiently as possible while building a map that helps rescue workers orient themselves when recovering victims. This requires an exploration strategy: a strategy for navigating known terrain and for exploring unknown terrain. In this work, a frontier-based approach to the exploration problem was selected and implemented for the mobile robot "Robbie" of the Active Vision working group. Frontiers to unknown terrain are extracted from the map built by the robot and then approached. Navigation to a waypoint found in this way is based on the so-called path transform.
The Internet of Things is still one of the most relevant topics in the field of economics and research powered by the increasing demand of innovative services. Cost reductions in manufacturing of IoT hardware and the development of completely new communication ways has led to the point of bil-lions of devices connected to the internet. But in order to rule this new IoT landscape a standardized solution to conquer these challenges must be developed, the IoT Architecture.
This thesis examines the structure, purpose and requirements of IoT Architecture Models in the global IoT landscape and proposes an overview across the selected ones. For that purpose, a struc-tured literature analysis on this topic is conducted within this thesis, including an analysis on three existing research approaches trying to frame this topic and a tool supported evaluation of IoT Archi-tecture literature with over 200 accessed documents.
Furthermore, 30 different IoT architecture models are coded with the help of the specialised coding tool ATLAS.ti 8. In a final step, these architecture models are categorized and compared with each other, showing that the environment of IoT and its architectures becomes ever more complex the further the research goes.
How does one present complicated technical subject matter simply and comprehensibly, so that ordinary users without deeper technical background knowledge can use it quickly, without a long familiarization period and lengthy explanations? This study thesis addresses exactly this question: making the benefits of and the work with semantic (search) queries easier for non-computer scientists, if not enabling them in the first place, and presenting Networked Graphs, a new development and SPARQL extension by Simon Schenk within the Staab working group at the University of Koblenz.
This thesis presents a system for creating and displaying stereoscopic video panoramas. In addition to the theoretical foundations, the design and operation of this system are explained.
Special cameras are used that can record panoramas and are synchronized for playback. A renderer is then implemented that can display the panoramas stereoscopically using a virtual reality headset. For this purpose, separate recordings are made for the two eyes and played back separately. Finally, the resulting video panorama is compared with a panorama from an existing system.
The race to develop the best technologies for autonomous driving is in full swing worldwide.
Despite great efforts, however, autonomous navigation in structured and, above all, unstructured environments remains unsolved.
A decisive building block in this problem area is environment perception and analysis by means of suitable sensors and corresponding sensor data processing.
In particular, imaging methods covering the part of the spectrum visible to humans are widely used both in practice and in research.
However, this exploits only a fraction of the electromagnetic spectrum, and consequently a large part of the information available for environment perception is ignored.
To make better use of the available spectrum, other research fields have employed so-called spectral sensors for decades; these analyse the electromagnetic spectrum much more finely and over a larger range than classical colour cameras. Due to technical limitations, however, such systems could only capture static scenes. Recent developments in sensor technology, namely the so-called snapshot mosaic filter technique, now make the spectral sampling of dynamic scenes possible.
This dissertation investigates the use and suitability of the snapshot mosaic technique for environment perception and scene analysis in autonomous navigation in structured and unstructured environments. It examines whether the recorded spectral data offer an advantage over classical RGB or grey-value data with respect to semantic scene analysis and classification.
First, a suitable preprocessing pipeline is developed that computes spectral values from the raw sensor data. The construction of novel data sets containing spectral data is then described. These data sets serve as the basis for evaluating various classifiers from classical machine learning.
Building on this, methods and architectures from deep learning are presented. Selected architectures are used to investigate whether they can also be trained with spectral data. Furthermore, the use of deep learning methods for data compression is addressed. In a next step, the compressed data are used to train network architectures that were previously only compatible with RGB data. Finally, it is analysed whether the high-dimensional spectral data offer advantages over RGB data in scene analysis.
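A minimal sketch of the kind of preprocessing such a sensor requires, assuming a hypothetical 4x4 snapshot mosaic (16 spectral bands) repeating periodically over the sensor; the actual filter layout and calibration used in the dissertation are not specified here.

```python
import numpy as np

# Rearrange a raw snapshot-mosaic frame into a spectral cube.
# Assumption: a 4x4 mosaic pattern repeats over the sensor, so each
# 4x4 tile holds one sample of each of the 16 spectral bands.
def mosaic_to_cube(raw, pattern=4):
    h, w = raw.shape
    h, w = h - h % pattern, w - w % pattern  # crop to full tiles
    raw = raw[:h, :w]
    cube = np.empty((pattern * pattern, h // pattern, w // pattern), raw.dtype)
    for i in range(pattern):
        for j in range(pattern):
            # Band (i, j) of the mosaic becomes one plane of the cube.
            cube[i * pattern + j] = raw[i::pattern, j::pattern]
    return cube  # shape: (bands, height / pattern, width / pattern)
```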
Social media platforms such as Twitter or Reddit allow users almost unrestricted access to publish their opinions on recent events or discuss trending topics. While the majority of users approach these platforms innocently, some groups have set their minds on spreading misinformation and influencing or manipulating public opinion. These groups disguise themselves as native users from various countries to spread frequently manufactured articles and strongly polarizing political opinions, and may become providers of hate speech or extreme political positions. This thesis aims to implement an AutoML pipeline for identifying second-language speakers from English social media texts. We investigate style differences of texts across topics and across the platforms Reddit and Twitter, and analyse linguistic features. We employ feature-based models on two datasets: a Reddit dataset, which mostly contains English conversations of European users, and a newly created Twitter dataset collected from English tweets on selected trending topics in different countries. The pipeline classifies the language family, native language and origin (native or non-native English speaker) of a given textual input. We evaluate the resulting classifications by comparing the prediction accuracy, precision and F1 scores of our classification pipeline to traditional machine learning processes. Lastly, we compare the results from each dataset and find differences in language use across topics and platforms. We obtained high prediction accuracy for all categories on the Twitter dataset and observed high variance in features such as average text length, especially for Balto-Slavic countries.
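A minimal sketch of a feature-based native-language classifier of the kind such a pipeline might contain, using scikit-learn character n-grams as a stand-in; the actual features, AutoML search, and datasets of the thesis are not reproduced here, and the toy texts and labels below are invented for illustration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: texts labelled with the author's language family.
texts = ["I am agree with this opinion", "This is a great point, thanks"]
labels = ["Balto-Slavic", "Native"]

clf = Pipeline([
    # Character n-grams are a common stylometric feature for such tasks.
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["I am agree with you"]))
```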
Will Eisner's graphic novels testify to a deep identification with Judaism as an ethnicity, religion, and culture, and reflect Judaism in all its facets. It is particularly noteworthy that the development of his complete works runs parallel to the history of the emancipation of the Jewish population in New York City. The volume clusters the Jewish aspects of Eisner's work into categories such as factors of collective memory, the experience of and fight against antisemitism, and religious particularities.
As Enterprise 2.0 (E2.0) initiatives are gradually moving out of the early experimentation phase it is time to focus greater attention on examining the structures, processes and operations surrounding E2.0 projects. In this paper we present the findings of an empirical study to investigate and understand the reasons for initiating E2.0 projects and the benefits being derived from them. Our study comprises seven in-depth case studies of E2.0 implementations. We develop a classification and means of visualising the scope of E2.0 initiatives and use these methods to analyse and compare projects.
Our findings indicate a wide range of motivations and combinations of technologies in use, and show a strong emphasis on the content management functionality of E2.0 technologies.
Remote Working Study 2022
(2022)
The Remote Working Study 2022 focuses on the transition to working from home (WFH) triggered by the stay-at-home directives of 2020. These directives required employees to work from their private premises wherever possible to reduce the transmission of the coronavirus. The study, conducted by the Center for Enterprise Information Research (CEIR) at the University of Koblenz from December 2021 to January 2022, explores this transition to remote working.
The objective of the survey is to collect baseline information about organisations’ remote work experiences during and immediately following the COVID-19 lockdowns. The survey was completed by the key persons responsible for the implementation and/or management of the digital workplace in 19 German and Swiss organisations.
The data presented in this report was collected from member organisations of the IndustryConnect initiative. IndustryConnect is a university-industry research programme that is coordinated by researchers from the University of Koblenz. It focuses on research in the areas of the digital workplace and enterprise collaboration technologies, and facilitates the generation of new research insights and the exchange of experiences among user companies.
The Internet of Things (IoT) is a concept in which connected physical objects are integrated into the virtual world to become active partakers of businesses and everyday processes (Uckelmann, Harrison and Michahelles, 2011; Shrouf, Ordieres and Miragliotta, 2014). It is expected to have a major impact on businesses (National Intelligence Council, 2008), but small and medium enterprises' business models are threatened if they do not adopt the new concept (Sommer, 2015). Thus, this thesis aims to showcase a sample implementation of connected devices in a small enterprise, demonstrating its added benefits for the business.
Design Science Research (DSR) is used to develop a prototype based on a use case provided by a carpentry business. The prototype comprises a hardware sensor and a web application which the wood shop can use to improve its processes. The thesis documents the iterative process of developing the prototype from the ground up to usable hardware and software.
This contribution provides an example of how IoT can be used and implemented at a small business.
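A minimal sketch of the sensor-to-web-application pattern such a prototype follows, with a hypothetical endpoint, machine name, and payload; the actual hardware, API, and data model of the thesis are not reproduced here.

```python
import time
import requests  # assumes the `requests` package is installed

# Hypothetical endpoint of the shop-floor web application.
ENDPOINT = "http://example.local/api/machine-usage"

def read_sensor():
    # Placeholder for the real hardware read-out (e.g. a current sensor
    # on a machine): returns True while the machine is running.
    return True

while True:
    payload = {
        "machine": "table-saw-1",      # invented machine identifier
        "running": read_sensor(),
        "timestamp": time.time(),
    }
    requests.post(ENDPOINT, json=payload, timeout=5)
    time.sleep(60)  # one reading per minute
```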
Compared to conventional computer graphics (perspective projection), ray tracing offers decisive advantages that stem mainly from the comparatively high physical correctness of the method. Its weakness, however, lies in the immense computational cost.
A ray tracer is comparatively compute-intensive because at least one ray must be shot for every pixel, and each ray must be intersected with all objects in the scene. In addition, new rays arise whenever rays are reflected at objects (recursion). To reduce this computational cost and at the same time produce a better image, an adaptive sampler is to support the ray tracer. During rendering, the adaptive sampler observes the progressive refinement of the image and excludes from further computation those pixels for which shooting additional rays is no longer worthwhile.
Unlike a purely progressive ray tracer, the adaptive sampler stops computing as the image converges. It is thus meant to produce a better image faster and thereby increase performance. Overall, the adaptive sampler is expected to pay off for certain scenes, among them scenes with purely diffusely lit image regions and scenes whose regions differ in computational cost. A normal ray tracer cannot judge how useful its shots are; it can only shoot more rays in the hope of effectively improving the image.
There are, however, many scenarios in which a linearly increasing number of shots per pixel does not yield a uniform improvement across the image: some regions already look good while others are still very noisy. Regions that have already converged, where further shots no longer make a noticeable difference, should be excluded so that the computing power can be used where it is still needed.
It is important that pixels which have not yet converged sufficiently are not excluded from the computation prematurely. The adaptive sampler should keep working until every pixel permanently shows no further change, meaning that the probability of a significant colour change of a pixel from shooting one more ray (or, with multiple light sources in RenderGin, several rays per pixel) is small enough. No probability is computed internally; instead, the ray tracer is given a kind of memory: it stores the changes in the illuminated image and their history in dedicated memory images. The "memory" of the old image (the state of the image in the last iteration over the pixels) represents the short-term memory and is exact. The long-term memory is represented by three further images: the first holds the number of rays shot per pixel; the second is a boolean image indicating for each pixel whether it should still be included in the computation; the third records how often each pixel's colour change stayed below the required maximum distance of a pixel to itself (before and after shooting another ray).
With these three images, additional quantitative information can be taken into account alongside the qualitative information from comparing the new and the old image.
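A minimal sketch of this bookkeeping, assuming greyscale pixel values and hypothetical thresholds; counting small changes consecutively is one plausible reading of the description, and the actual criteria and RenderGin integration are not reproduced here.

```python
import numpy as np

# Per-pixel bookkeeping for an adaptive sampler, following the description
# above: an exact copy of the previous image plus three long-term memory
# images. Both thresholds are assumptions for illustration.
EPS = 1e-3        # required maximum per-pixel colour distance
STABLE_ITERS = 8  # small-change iterations before a pixel is frozen

def update_memory(new_img, old_img, ray_count, active, stable_count):
    diff = np.abs(new_img - old_img)
    ray_count[active] += 1                     # 1st image: rays shot per pixel
    stable_count[(diff < EPS) & active] += 1   # 3rd image: small-change counter
    stable_count[(diff >= EPS) & active] = 0   # reset on a significant change
    active &= stable_count < STABLE_ITERS      # 2nd image: freeze converged pixels
    old_img[...] = new_img                     # short-term memory update
    return ray_count, active, stable_count
```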
This thesis examines whether the desired effects occur and whether a performance gain is possible when the approach is integrated into the existing structure of RenderGin. The adaptive sampler was implemented as a plug-in for the RenderGin software by Numenus GmbH. RenderGin is a real-time capable, progressive ray tracer distinguished by its performance; image generation runs entirely on the CPU, and the graphics card is only needed to display the generated image.
The plug-in was implemented in Microsoft Visual Studio 2010 using the RenderGin SDK of Numenus GmbH.
The present study focuses on geometric tasks and on the demand levels formulated in the national education standards for primary-school mathematics, which have been binding in Germany since 2004. These demand levels describe the cognitive demands placed on pupils when working on tasks, distinguishing between "reproducing", "making connections", and "generalising and reflecting" (KMK, 2005a, p. 13).
Among other things, the three demand levels are intended to give teachers the opportunity to develop a demand-oriented task culture. Furthermore, they are meant to encourage the integration of tasks from all three demand levels into lessons and to counteract one-sidedly oriented teaching.
Since the demand levels have not yet been empirically validated and are not clearly distinguished from task difficulty in the publications of the Standing Conference of the Ministers of Education (KMK, 2005a, p. 13; KMK, 2005b, p. 17; KMK, 2004b, p. 13), the present study first examined whether geometric tasks can be unambiguously assigned to the three demand levels.
Second, it investigated how the cognitive demands contained in geometric tasks relate to the empirical difficulty of the tasks, the mathematical proficiency of the pupils, gender, and the demands of the tasks set in class.
Against the background of the calculus and skill orientation often attributed to German mathematics teaching (Baumert et al., 2001, p. 296; Granzer & Walther, 2008, p. 9) and the associated strengths of German pupils in routine tasks and weaknesses in tasks with higher cognitive demands (Grassmann et al., 2014, p. 11; Reiss & Hammer, 2013, p. 82; Schütte, 2008, p. 41), the study also analysed how the written geometric textbook and classroom tasks collected for this study are distributed across the three demand levels.
By examining geometric tasks, the quantitative share of geometry in fourth-grade textbooks and lessons could be determined on a sample basis, thereby updating and supplementing the state of research on the status of geometry teaching (Maier, 1999; Backe-Neuwald, 2000; Roick, Gölitz & Hasselhorn, 2004).
This work examined how the existing model for the simulation of cables and hoses can be advanced. The main influences on the shape simulation were investigated, and the factors of constraints and side conditions were analyzed. To validate the accuracy, the simulation has to be compared to the behavior of real specimens. To obtain a very precise digitalization of the shape, a laser scanner was used that converts the point cloud into a .vrml file, which can be imported into the simulation environment.

The assumption was that the simulation method itself has the highest impact on the simulated shape, which is why the capabilities of the most sophisticated methods were analyzed. Contrary to expectation, the main criterion for the success of a simulation approach proved not to be accuracy: process integration and usability turned out to matter more for efficient use. Other factors such as pricing, functionality, and real-time capability were assessed as well. The analyzed methods are based on solving the equations of elasticity with different kinds of discretization: finite elements and a spring-impulse system. Since the finite-element system takes several minutes to compute the shape and the spring-impulse system reacts to user manipulation with a delay, the competitiveness of these two approaches is low. The other methods differ more in real-time performance, data interfaces, and functionality than in accuracy.

For the accuracy of a system, the consideration of other factors proved to be very important. One of the main factors identified was the accurate assignment of material properties. Until the start of this work, only the finite-element approach dealt with this factor, but without documentation or validation; in the publications on the other methods, the material properties are estimated so as to obtain a plausible simulated shape. Therefore, the specific material values of real specimens were measured and assigned to the simulation. The comparison with the real shape proved that accuracy is very high with the measured properties.

Since these measurements are very costly and time-consuming, a faster and cheaper way to obtain these values was investigated. It was assumed that, with knowledge of the cross-section, it should be possible to compute the specimen behavior. Since the braid distribution varies from specimen to specimen, a more general way to obtain the required values had to be found. The program "composer" was developed, into which only the number of the different braids and the taping are entered; it computes the stiffness, the density, and the final diameter of the bundle with very high precision. With the measured values and the fit to the real shape, it was proven that the simulation approach reflects the precise behavior of cables and hoses.

Since the stiffness of the single braids is laborious to measure, a measurement setup was created in which the stiffness has a large impact on the shape; with known density, the stiffness of the specimen can be reconstructed precisely. Thus a fast and economical way of obtaining the stiffness of a cable was devised. The Poisson's ratio of cables and bundles cannot be measured with a tensile test, since the inner structure is very complex. For hoses, the variation of the inner diameter was measured during the tensile test as well.
The resulting values were reasonable, but their accuracy could not be proven. For cables and hoses, obtaining the Poisson's ratio via computation of the cross-section was attempted, but the influence of individual variations and the crosstalk of the braids is very high. Therefore, a setup was constructed in which the torsion stiffness can be measured. For cables and bundles, the individual cross-sections and taping lead to varying results; for hoses, repeatable and plausible values for the Poisson's ratio were obtained. The low influence of the Poisson's ratio in the range between 0 and 0.5 was demonstrated, so, following the advice of [Old06] and our own experience, the Poisson's ratio for cables and bundles was set to 0.25.

Based on what can be measured and on the capabilities of the developed program composer, a procedure to obtain material properties for bundles was designed:
1. Measure the braid density via pycnometer or via mass, length, and diameter.
2. Empirically reconstruct the stiffness with the designed setup.
3. Compose the bundle with the program composer.
4. Add a factor for the taping and transfer the values to the simulation.

The model of the cable simulation was improved as follows: the main influences in the simulation of cables and hoses are the simulation method, the material properties, and the geometric constraints. To obtain higher accuracy, an investigation of the correct material properties is indispensable. The scientific determination of material properties for the simulation of cables, bundles, and hoses was performed for the first time, and the influence of geometric constraints was analyzed and documented. The next steps are the analysis of pre-deformation and further investigations into determining the Poisson's ratio with a more precise torsion test. All analyses were made with the simulation approach fleXengine; a comparison with other simulation methods would be of high interest.
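A minimal sketch of the kind of computation a bundle composer performs, using a simple rule-of-mixtures over the braids as a stand-in; the actual model of the composer program, including its taping factor, is not described in detail, so every formula and constant below is an assumption for illustration only.

```python
import math

# Toy bundle composer: aggregate braid properties into bundle properties.
# Rule-of-mixtures formulas and the packing constant are assumptions.
def compose_bundle(braids, taping_factor=1.1):
    """braids: list of (count, diameter_mm, density_g_cm3, stiffness_N_mm2)."""
    area = sum(n * math.pi * (d / 2) ** 2 for n, d, _, _ in braids)
    # Area-weighted averages over all braids in the bundle.
    density = sum(n * math.pi * (d / 2) ** 2 * rho
                  for n, d, rho, _ in braids) / area
    stiffness = sum(n * math.pi * (d / 2) ** 2 * e
                    for n, d, _, e in braids) / area
    # Packed circles do not fill the cross-section; assume ~90% packing
    # density and a taping factor enlarging the outer diameter.
    diameter = 2 * math.sqrt(area / (0.9 * math.pi)) * taping_factor
    return diameter, density, stiffness

print(compose_bundle([(12, 1.5, 1.4, 2000.0), (4, 2.5, 1.1, 800.0)]))
```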
Agricultural land use may lead to brief pulse exposures of pesticides in edge-of-field streams, potentially resulting in adverse effects on aquatic macrophytes, invertebrates and ecosystem functions. Higher-tier risk assessment is mainly based on pond mesocosms, which are not designed to mimic stream-typical conditions, and relatively little is known about exposure and effect assessment using stream mesocosms.
Thus, the present thesis evaluates the applicability of stream mesocosms to mimic stream-typical pulse exposures, to assess the resulting effects on flora and fauna, and to evaluate aquatic-terrestrial food web coupling. The first objective was to mimic stream-typical pulse exposure scenarios of different durations (≤ 1 to ≥ 24 hours). These exposure scenarios, established using a fluorescence tracer, were the methodological basis for the effect assessment of an herbicide and an insecticide. In order to evaluate the applicability of stream mesocosms for regulatory purposes, the second objective was to assess effects on two aquatic macrophytes following a 24-h pulse exposure to the herbicide iofensulfuron-sodium (1, 3, 10 and 30 µg/L; n = 3). Growth inhibition of up to 66 and 45% was observed for the total shoot length of Myriophyllum spicatum and Elodea canadensis, respectively. Recovery of this endpoint could be demonstrated within 42 days for both macrophytes. The third objective was to assess effects on structural and functional endpoints following a 6-h pulse exposure to the pyrethroid ether etofenprox (0.05, 0.5 and 5 µg/L; n = 4). The most sensitive structural (abundance of Cloeon simile) and functional (feeding rate of Asellus aquaticus) endpoints revealed significant effects at 0.05 µg/L etofenprox. This concentration is below field-measured etofenprox concentrations and thus suggests that pulse exposures adversely affect invertebrate populations and ecosystem functions in streams. Such pollution of streams may also reduce the emergence of aquatic insects and potentially lead to an insect-mediated transfer of pollutants to adjacent food webs. Test systems capable of assessing aquatic-terrestrial effects are not yet integrated in mesocosm approaches but might be of interest for substances with bioaccumulation potential. Here, the fourth part provides an aquatic-terrestrial model ecosystem capable of assessing cross-ecosystem effects. Information on the riparian food web, such as the contribution of aquatic (up to 71%) and terrestrial (up to 29%) insect prey to the diet of the riparian spider Tetragnatha extensa, was assessed via stable isotope ratios (δ¹³C and δ¹⁵N). The present thesis thus provides the methodological basis to assess aquatic-terrestrial pollutant transfer and effects on the riparian food web.
Overall, the results of this thesis indicate that stream mesocosms can be used to mimic stream-typical pulse exposures of pesticides, to assess the resulting effects on macrophytes and invertebrates within prospective environmental risk assessment (ERA), and to evaluate changes in riparian food webs.
Since software influences nearly every aspect of everyday life, the security of software systems is more important than ever. Evaluating the security of a software system still poses a significant challenge in practice, mostly due to the lack of metrics that map the security properties of source code onto numeric values. It is a common assumption that the occurrence of security vulnerabilities and the quality of the software design are directly correlated, but there is currently no clear evidence to support this. Proof of such a correlation could help to optimize the measurement of program security, making it possible to apply quality measurements to evaluate it. For this purpose, this work evaluates fifty open-source Android applications using three security metrics and seven quality metrics, and considers the correlations between the metrics. The quality metrics range from simple code metrics to high-level metrics such as object-oriented anti-patterns, which together provide a comprehensive picture of quality. Two visibility metrics, along with a metric that computes the minimal permission request for mobile applications, were selected to capture security. Using the evaluation projects, a clear correlation between most quality metrics was found. By contrast, no significant correlations were found for the security metrics. This work discusses the correlations and their causes as well as further recommendations based on the findings.
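A minimal sketch of how such a metric correlation analysis can be run, assuming the per-project metric values have already been collected into a table; the column names and values below are invented, and the thesis' actual metrics and statistical setup are not reproduced here.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-project metric table (one row per evaluated app).
df = pd.DataFrame({
    "loc":             [12_000, 54_000, 8_300, 23_000],
    "anti_patterns":   [14, 61, 7, 30],
    "min_permissions": [5, 11, 4, 8],
})

# Pairwise Spearman rank correlations between metrics, with p-values.
for a in df.columns:
    for b in df.columns:
        if a < b:
            rho, p = spearmanr(df[a], df[b])
            print(f"{a} vs {b}: rho={rho:.2f}, p={p:.3f}")
```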
The work presented in this thesis investigated interactions of selected biophysical processes that affect zooplankton ecology at small scales. To this end, the extent of changes in swimming behaviour and the fluid disturbances produced by swimming Daphnia in response to changing physical environments were quantified. The first research question addressed in this context characterized and quantified the size and energetics of hydrodynamic trails produced by Daphnia swimming in non-stratified still waters as a function of the organisms' size and their swimming patterns.
The results revealed that neither the size nor the swimming pattern of Daphnia affects the width of the induced trails or the dissipation rates. Nevertheless, as the size and swimming velocity of the organisms increased, trail volume increased in proportion to the cube of the Reynolds number, and the biggest trail volume was about 500 times the body volume of the largest daphnids. The larger spatial extent of fluid perturbation and the prolonged decay period caused by bigger trail volumes would play a significant role in zooplankton ecology, e.g. by increasing the risk of predation.
The study also found that increased trail volume brought about significantly enhanced total dissipated power at higher Reynolds numbers; the magnitudes of the total dissipated power varied in the range of (1.3–10) × 10⁻⁹ W.
Furthermore, this study provided strong evidence that the swimming speed of Daphnia and the total dissipated power in Daphnia trails exceed those of some other selected zooplankton species.
Recognizing turbulence as an intrinsic environmental perturbation in aquatic habitats, this thesis also examined the response of Daphnia to a range of turbulent flows corresponding to turbulence levels that zooplankton typically encounter in their habitats. The results indicated that, within the range of turbulent intensities to which Daphnia are likely to be exposed in their natural habitats, increasing turbulence compelled the organisms to enhance their swimming activity and swimming speed. However, as the turbulence increased to extremely high values (10⁻⁴ m² s⁻³), Daphnia began to withdraw from active swimming. The findings also demonstrated that the threshold level of turbulence at which the animals start to retreat from largely active swimming is about 10⁻⁶ m² s⁻³. The study further illustrated that in the intermediate range of turbulence, 10⁻⁷–10⁻⁶ m² s⁻³, the kinetic energy dissipation rate in the vicinity of the organisms is consistently one order of magnitude higher than that of the background turbulent flow.
Swarming, a conspicuous behavioural trait observed in many zooplankton species, is considered to play a significant role in the freshwater ecology of their habitats, from food exploitation and mate encounter to predator avoidance via the hydrodynamic flow structures the animals produce. This thesis therefore also investigated the implications of Daphnia swarms at varied abundance and swarm densities for their swimming kinematics and the induced flow field.
The results showed that Daphnia aggregated in swarms with swarm densities of (1.1–2.3) × 10³ L⁻¹, which exceeded the abundance densities (1.7–6.7 L⁻¹) by two orders of magnitude. The estimated swarm volume decreased from 52 cm³ to 6.5 cm³, and the mean neighbour distance dropped from 9.9 to 6.4 body lengths. The findings also showed that the mean swimming trajectories were primarily horizontal concentric circles around the light source. Mean flow speeds were found to be one order of magnitude lower than the corresponding swimming speeds of Daphnia. Furthermore, this study provided evidence that the flow fields produced by swarming Daphnia differed considerably between unidirectional vortex swarming at low abundances and bidirectional swimming at high abundances.