000 Computer science, information science, general works
Artificial intelligence (AI) has spread widely throughout society in recent years and is increasingly being used in companies. Its use is mostly associated with large international corporations that hold leading positions in their markets. To strengthen Germany as an AI location, the German federal government therefore decided in 2018 to fund the use of AI in German companies with 5 billion euros. As of September 2021, however, only 346 million of this had been drawn down, which raises the question of how small and medium-sized enterprises (SMEs) in Germany in particular deal with the topic of AI.
The aim of this thesis is therefore to examine the perspective of SMEs on acceptance criteria, obstacles, and potentials of the use of AI. To answer the research questions, a qualitative interview study was conducted with five SMEs and subsequently evaluated using Mayring's approach to qualitative content analysis.
The thesis shows that companies have great interest in the topic of AI and that its use continues to spread. SMEs see much potential in the use of the new technology, but still see several obstacles that need to be overcome.
Remote Working Study 2022
(2022)
The Remote Working Study 2022 is focused on the transition to work from home (WFH) triggered by the stay-at-home directives of 2020. These directives required employees to work in their private premises wherever possible to reduce the transmission of the coronavirus. The study, conducted by the Center for Enterprise Information Research (CEIR) at the University of Koblenz from December 2021 to January 2022, explores this transition to remote working.
The objective of the survey is to collect baseline information about organisations’ remote work experiences during and immediately following the COVID-19 lockdowns. The survey was completed by the key persons responsible for the implementation and/or management of the digital workplace in 19 German and Swiss organisations.
The data presented in this report was collected from member organisations of the IndustryConnect initiative. IndustryConnect is a university-industry research programme that is coordinated by researchers from the University of Koblenz. It focuses on research in the areas of the digital workplace and enterprise collaboration technologies, and facilitates the generation of new research insights and the exchange of experiences among user companies.
Efficient cochlear implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. Registered and fused images help doctors by providing a more informative image that takes advantage of the different modalities. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a big challenge for the automated registration of the different image modalities. To obtain an automatic measurement of the cochlea's length and volume, a segmentation method for cochlea medical images is needed. The goal of this dissertation is to introduce new practical and automatic algorithms for multi-modal 3D image registration, fusion, segmentation and analysis of the human cochlea. Two novel methods for automatic cochlea image registration (ACIR) and automatic cochlea analysis (ACA) are introduced. The proposed methods crop the input images to the cochlea part and then align the cropped images to obtain the optimal transformation. After that, this transformation is used to align the original images. ACIR and ACA use Mattes mutual information as the similarity metric and the adaptive stochastic gradient descent (ASGD) or the stochastic limited-memory Broyden–Fletcher–Goldfarb–Shanno (s-LBFGS) optimizer to estimate the parameters of a 3D rigid transform. A second, non-rigid registration stage estimates the B-spline coefficients that are used in an atlas-model-based segmentation to extract the cochlea scalae and the relative measurements of the input image. The atlas image, which carries a segmentation, is aligned to the input image to obtain the non-rigid transformation. After that, the segmentation of the atlas image, together with its point models, is transformed to the input image. The detailed transformed segmentation provides the scala volume sizes. Using the transformed point models, the A-value, the central scala lengths, and the lateral and organ-of-Corti scala tympani lengths are computed. The methods have been tested on clinical 3D images of 67 patients in total: 41 from Germany and 26 from Egypt. The patients are of different ages and genders. The number of images used in the experiments is 217; these are multi-modal 3D clinical images from CT, CBCT, and MRI scanners. The proposed methods are compared to related state-of-the-art medical image registration optimizers, e.g. fast adaptive stochastic gradient descent (FASGD) and efficient preconditioned stochastic gradient descent (EPSGD). The comparison uses the root mean square error (RMSE) between the ground-truth landmarks and the resulting landmarks. The landmarks were located manually by two experts and represent the round window and the top of the cochlea. After obtaining the transformation using ACIR, the landmarks of the moving image are transformed using the resulting transformation, and the RMSE between the transformed landmarks and the fixed-image landmarks is computed. I also used the active length of the cochlear implant electrodes to estimate the error arising from image artifacts, and found an error ranging from 0.5 mm to 1.12 mm. The average RMSE of the ACIR method was 0.36 mm with a standard deviation (SD) of 0.17 mm. The average total time required to register an image pair using ACIR was 4.62 seconds with an SD of 1.19 seconds. All experiments were repeated 3 times for justification.
Comparing the RMSE of ACIR2017 and ACIR2020 using a paired t-test shows no significant difference (p-value = 0.17). The total average RMSE of the ACA method was 0.61 mm with an SD of 0.22 mm. The average total time required to analyse an image was 5.21 seconds with an SD of 0.93 seconds. The statistical tests show that there is no difference between the results of the automatic A-value method and the manual A-value method (p-value = 0.42). There is also no difference between the length measurements of the left and right ear sides (p-value > 0.16). Comparing the results from the German and the Egyptian datasets shows no difference when using the manual or automatic A-value methods (p-value > 0.20). However, there is a significant difference between the German and the Egyptian results when using the ACA2020 method (p-value < 0.001). The average time to obtain the segmentation and all measurements was 5.21 seconds per image. The cochlea scala tympani volume ranged from 38.98 mm³ to 57.67 mm³. The combined scala media and scala vestibuli volume ranged from 34.98 mm³ to 49.3 mm³. The overall volume of the cochlea should thus range from 73.96 mm³ to 106.97 mm³. The lateral wall length of the scala tympani ranged from 42.93 mm to 47.19 mm. The organ-of-Corti length of the scala tympani ranged from 31.11 mm to 34.08 mm. Using the A-value method, the lateral length of the scala tympani ranged from 36.69 mm to 45.91 mm and the organ-of-Corti length from 29.12 mm to 39.05 mm. The lengths from the ACA2020 method can be visualised and have well-defined endpoints. The ACA2020 method works on different modalities and different images regardless of the noise level or the resolution. The A-value method, on the other hand, works neither on MRI nor on noisy images. Hence, the ACA2020 method may provide more reliable and accurate measurements than the A-value method. The source code and the datasets are made publicly available to facilitate reproduction and validation of my results.
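To make the rigid registration stage concrete, the following is a minimal sketch using SimpleITK, a Python wrapping of the ITK toolkit family the dissertation names. It pairs Mattes mutual information with a 3D rigid (Euler) transform and a landmark RMSE check; plain gradient descent stands in for the ASGD and s-LBFGS optimizers, which SimpleITK does not expose under those names, and all file names and landmark coordinates are hypothetical.

```python
# Minimal sketch of the rigid registration stage: Mattes mutual information
# over a 3D rigid (Euler) transform. Gradient descent is a stand-in for the
# ASGD/s-LBFGS optimizers named in the dissertation; paths are hypothetical.
import SimpleITK as sitk
import numpy as np

fixed = sitk.ReadImage("cochlea_ct.nii.gz", sitk.sitkFloat32)    # hypothetical path
moving = sitk.ReadImage("cochlea_mri.nii.gz", sitk.sitkFloat32)  # hypothetical path

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform()),
    inPlace=False)

tx = reg.Execute(fixed, moving)  # estimated 3D rigid transform

def landmark_rmse(fixed_pts, moving_pts, transform):
    """RMSE between mapped fixed landmarks and their moving-image counterparts.

    ITK-convention transforms map fixed-space physical points into moving space.
    """
    diffs = [np.subtract(transform.TransformPoint(f), m)
             for f, m in zip(fixed_pts, moving_pts)]
    return float(np.sqrt(np.mean(np.sum(np.square(diffs), axis=1))))

# Hypothetical round-window landmarks in physical coordinates (mm):
print(landmark_rmse([(1.0, 2.0, 3.0)], [(1.1, 2.0, 2.9)], tx))
```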
Enterprise collaboration platforms are increasingly gaining importance in organisations. Integrating groupware functionality and enterprise social software (ESS), they have been substantially transforming everyday work in organisations. While traditional collaboration systems have been studied in Computer Supported Cooperative Work (CSCW) for many years, the large-scale, infrastructural and heterogeneous nature of enterprise collaboration platforms remains uncharted. Enterprise collaboration platforms are embedded into organisations' digital workplace and come with a high degree of complexity, ambiguity, and generativity. When introduced, they are empty shells with no pre-determined purposes of use. They afford interpretive flexibility, and thus are shaping and being shaped by and in their social context. Outcomes and benefits emerge and evolve over time in an open-ended process and as the digital platform is designed through use. In order to make the most of the platform and the associated continuous digital transformation, organisations have to develop the necessary competencies and capabilities.
Extant literature on enterprise collaboration platforms has proliferated and provides valuable insights on diverse topics, such as implementation strategies, adoption hurdles, or collaboration use cases; however, it tends to disregard the platforms' evolvability and the related multiple time frames and settings. Thus, this research aims to identify, investigate, and theorise the ways that enterprise collaboration platforms are changing over time and space and the ways that organisations build digital transformation capabilities. To address this research aim, two different case study types are conducted: i) an in-depth longitudinal qualitative case study, in which case narratives and visualisations capturing hard-to-summarise complexities in the evolution of an enterprise collaboration platform are developed, and ii) multiple-case studies to capture, investigate, and compare cross-case elements that contribute to the shaping of enterprise collaboration platforms in different medium-sized and large organisations from a range of industries. Empirical data is captured and investigated through a multi-method research design (incl. focus groups, surveys, in-depth interviews, literature reviews, qualitative content analysis, descriptive statistics) with shifting units of analysis. The findings reveal unique change routes with unanticipated outcomes and transformations, context-specific change strategies to deal with multiple challenges (e.g. GDPR, works council, developments in the technological field, competing systems, integration of blue-collar workers), co-existing platform uses, and various interacting actors from the immediate setting and broader context. The interpretation draws on information infrastructure (II) as a theoretical lens and related sociotechnical concepts and perspectives (incl. inscriptions, social worlds, biography of artefacts). Iteratively, a conceptual model of the building of digital transformation capabilities is developed, integrating the insights gained from the study of enterprise collaboration platform change and the monitoring change tools developed along the way (e.g. the MoBeC framework). It assists researchers and practitioners in understanding the building of digital transformation capabilities from a theoretical and practical viewpoint and helps organisations implement the depicted knowledge in their unique digital transformation processes.
The industry standard Decision Model and Notation (DMN) has enabled a new way to formalize business rules since 2015. Here, rules are modeled in so-called decision tables, which are defined by input columns and output columns. Furthermore, decisions are arranged in a graph-like structure (the DRD level), which creates dependencies between them. With a given input, the decisions can then be requested by appropriate systems, whereby activated rules produce output for future use. However, modeling mistakes produce erroneous models, and these mistakes can occur in the decision tables as well as at the DRD level. Following the Design Science Research Methodology, this thesis introduces the implementation of a verification prototype for the detection and resolution of these errors during the modeling phase. The basics presented first provide the theoretical foundation needed for the development of the tool. This thesis further presents the architecture of the tool and the implemented verification capabilities. Finally, the created prototype is evaluated.
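One classic decision-table error such a verifier targets is rule overlap. As a hedged illustration (not the thesis's actual implementation), the following Python sketch models input entries as half-open numeric intervals and flags pairs of rules whose entries all intersect; real DMN/FEEL input entries are richer than plain intervals.

```python
# Simplified overlap detection for a DMN-style decision table.
# Each rule is a list of (lo, hi) half-open intervals, one per input column.
from itertools import combinations

def intervals_overlap(a, b):
    # [lo, hi) intervals overlap if neither ends before the other starts.
    return a[0] < b[1] and b[0] < a[1]

def find_overlapping_rules(rules):
    """Return pairs of rule indices whose input entries all intersect."""
    return [(i, j)
            for (i, r1), (j, r2) in combinations(enumerate(rules), 2)
            if all(intervals_overlap(c1, c2) for c1, c2 in zip(r1, r2))]

# Hypothetical table with inputs "age" and "amount":
table = [
    [(18, 30), (0, 1000)],    # rule 0
    [(25, 65), (500, 2000)],  # rule 1 -- overlaps rule 0 on both inputs
    [(30, 65), (0, 500)],     # rule 2
]
print(find_overlapping_rules(table))  # [(0, 1)]
```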
This paper describes the robots TIAGo and Lisa used by team homer@UniKoblenz of the University of Koblenz-Landau, Germany, for the participation at RoboCup@Home 2019 in Sydney, Australia. We ended up first at RoboCup@Home 2019 in the Open Platform League and won the competition in our league now three times in a row (four times in total), which makes our team the most successful in RoboCup@Home. We demonstrated approaches for learning from demonstration, touch-enforcing manipulation and autonomous semantic exploration in the finals. A special focus is put on novel system components and the open-source contributions of our team. We have released packages for object recognition, a robot face including speech synthesis, mapping and navigation, a speech recognition interface, gesture recognition and imitation learning. The packages are available (and new packages will be released) on http://homer.uni-koblenz.de.
Business Process Querying (BPQ) is a discipline in the field of Business Process Management which helps experts to understand existing process models and accelerates the development of new ones. Its queries can fetch and merge these models, answer questions regarding the underlying process, and conduct compliance checking in return. Many languages have been deployed in this discipline, but two language types are dominant: logic-based languages use temporal logic to verify models as finite state machines, whereas graph-based languages use pattern matching to retrieve subgraphs of model graphs directly. This thesis aims to map the features of both language types onto features of the other in order to identify strengths and weaknesses. As an example, the features of Computation Tree Logic (CTL) and the Diagramed Model Query Language (DMQL) are mapped to one another. CTL explores the valid state space and thus is better suited for behavioral querying. Lacking certain structural features and counting mechanisms, it is not appropriate for querying structural properties. In contrast, DMQL issues structural queries, and its patterns can reconstruct any CTL formula. However, they do not always achieve exactly the same semantics: patterns treat conditional flow as sequential flow by ignoring its conditions. As a result, retrieved mappings are invalid process execution sequences, i.e. false positives, in certain scenarios. DMQL can be used for behavioral querying if these scenarios are absent or acceptable. In conclusion, both language types have strengths and are specialized for different BPQ use cases, but in certain scenarios graph-based languages can be applied to both. Integrating the evaluation of conditions would remove the need for logic-based languages in BPQ completely.
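To make the contrast concrete, here is a toy sketch of the logic-based style: checking the CTL formula EF p ("a state satisfying p is reachable") on a small labelled transition system via breadth-first search. The states, transitions, and labels are invented for illustration; a graph-based language like DMQL would instead match a structural pattern against the model graph.

```python
# Toy model checker for the CTL formula EF p on a finite state machine.
from collections import deque

def check_EF(transitions, labels, start, p):
    """Return True if some path from `start` reaches a state labelled `p`.

    transitions: dict state -> list of successor states
    labels:      dict state -> set of atomic propositions
    """
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if p in labels.get(s, set()):
            return True
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Hypothetical process model: order -> check -> (approve | reject)
transitions = {"order": ["check"], "check": ["approve", "reject"]}
labels = {"approve": {"approved"}}
print(check_EF(transitions, labels, "order", "approved"))  # True
```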
The Internet of Things (IoT) is a fast-growing technological concept which aims to integrate various physical and virtual objects into a global network to enable interaction and communication between those objects (Atzori, Iera and Morabito, 2010). The application possibilities are manifold and may transform society and economy similarly to the way the internet has (Chase, 2013). Furthermore, the Internet of Things plays a central role in the realisation of visionary future concepts, for example, the Smart City or Smart Healthcare. In addition, the use of this technology promises opportunities for the enhancement of various sustainability aspects, and thus for the transformation to a smarter, more efficient and more conscious use of natural resources (Maksimovic, 2017). The action principle of sustainability is gaining increasing attention in societal and academic discourse, prompted by the partly harmful consumption and production patterns of the last century (Mcwilliams et al., 2016). Relating to sustainability, the advancing application of IoT technology also poses risks; following the precautionary principle, these risks should be considered early (Harremoës et al., 2001). Risks of IoT for sustainability include the massive amounts of energy and raw materials required for the manufacturing and operation of IoT objects, as well as the disposal of those objects (Birkel et al., 2019). The exact relations between IoT and sustainability remain insufficiently explored and do not constitute a central element within the discussion of this technology (Behrendt, 2019). Therefore, this thesis aims to develop a comprehensive overview of the relations between IoT and sustainability.
To achieve this aim, this thesis utilises the methodology of Grounded Theory in combination with a comprehensive literature review. The analysed literature primarily consists of research contributions in the field of Information Technology (IT). Based on this literature, aspects, solution approaches, effects and challenges in the context of IoT and sustainability were elaborated. The analysis revealed two central perspectives. IoT for Sustainability (IoT4Sus) describes the use of IoT-generated information to enhance sustainability aspects. In contrast, Sustainability for IoT (Sus4IoT) focuses on sustainability aspects of the applied technology and highlights methods to reduce the negative impacts associated with the manufacturing and operation of IoT. The elaborated aspects and relations are illustrated in the comprehensive CCIS Framework. This framework represents a tool for capturing relevant aspects and relations in this context and thus supports awareness of the link between IoT and sustainability. Furthermore, the framework suggests an action principle to optimise the performance of IoT systems regarding sustainability.
The central contribution of this thesis is the provision of the CCIS Framework and the information it contains regarding the aspects and relations of IoT and sustainability.
Tracking is an integral part of many modern applications, especially in areas like autonomous systems and Augmented Reality. There is a wide array of approaches for performing tracking; one that has only recently become a subject of research is the use of neural networks. In the scope of this master thesis, an application is developed which uses such a neural network for the tracking process. This also requires the creation of training data as well as the creation and training of a neural network. Subsequently, the use of neural networks for tracking is analyzed and evaluated in several respects: the quality of the tracking for different degrees of freedom, the impact of the neural network on the application's performance, the amount of required training data, the influence of the network architecture, and the importance of providing depth data as part of the network's input. This should provide an insight into how relevant this approach could be for adoption in future products.
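As a rough illustration of the kind of model such an application might train (the abstract does not specify the architecture, so everything below is an assumption), here is a minimal PyTorch sketch of a small CNN regressing a 6-DoF pose from a 4-channel RGB-D frame, with depth stacked as a fourth input channel.

```python
# Minimal sketch: a CNN that regresses a 6-DoF pose (translation + rotation)
# from an RGB-D frame. Layer sizes and the 4-channel encoding are assumptions.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self, in_channels=4):  # RGB + depth as a 4th channel
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 6)  # tx, ty, tz + 3 rotation parameters

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PoseNet()
rgbd = torch.randn(1, 4, 128, 128)                       # one synthetic RGB-D frame
pose = model(rgbd)                                       # predicted 6-DoF pose
loss = nn.functional.mse_loss(pose, torch.zeros(1, 6))   # training would minimise this
```

Dropping the depth channel (in_channels=3) gives a direct way to test the thesis's question of how much the depth input contributes to tracking quality.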