Institut für Wirtschafts- und Verwaltungsinformatik
Document Type
- Part of Periodical (34)
- Bachelor Thesis (27)
- Diploma Thesis (27)
- Master's Thesis (17)
- Study Thesis (12)
- Doctoral Thesis (8)
- Book (1)
Keywords
- Logistik (4)
- computer clusters (4)
- Datenaustausch (3)
- Datenschutz (3)
- E-Partizipation (3)
- Evaluation (3)
- Instant Messaging (3)
- Internet of Things (3)
- artificial neural networks (3)
- parallel algorithms (3)
Retrospektive Analyse der Ausbreitung und dynamische Erkennung von Web-Tracking durch Sandboxing
(2018)
Current quantitative analyses of web tracking do not provide a comprehensive overview of its origins, spread and evolution. By evaluating archived websites, this thesis enables a retrospective reconstruction of the history of web tracking between the years 2000 and 2015. For this purpose, a suitable tool was designed, implemented, evaluated and used to analyse 10,000 websites. While in 2005 an average of 1.17 third-party resources were embedded per website, this figure rose to 6.61 over the following ten years. Network diagrams visualise the trend towards a monopolised network structure in which a single company can already monitor 80% of Internet usage.
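As an illustration of the underlying metric (not the tool built in the thesis), the following sketch counts the distinct third-party hosts embedded in a single archived HTML page; the page and tracker hostnames are made up.

```python
# Illustrative sketch: count distinct third-party hosts referenced by an
# archived HTML page, the basic figure behind the reported rise from 1.17
# to 6.61 embedded third-party resources per page.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyCounter(HTMLParser):
    """Collects hosts of embedded resources that differ from the page's host."""
    RESOURCE_ATTRS = {"script": "src", "img": "src", "iframe": "src", "link": "href"}

    def __init__(self, page_url):
        super().__init__()
        self.first_party = urlparse(page_url).hostname
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        wanted = self.RESOURCE_ATTRS.get(tag)
        if not wanted:
            return
        for name, value in attrs:
            if name == wanted and value and value.startswith("http"):
                host = urlparse(value).hostname
                if host and host != self.first_party:
                    self.third_party_hosts.add(host)

page_html = """<html><body>
<script src="https://tracker.example-analytics.com/t.js"></script>
<img src="https://cdn.example-images.net/pixel.gif">
<img src="https://www.example.org/logo.png">
</body></html>"""

counter = ThirdPartyCounter("https://www.example.org/index.html")
counter.feed(page_html)
print(len(counter.third_party_hosts), sorted(counter.third_party_hosts))
# -> 2 ['cdn.example-images.net', 'tracker.example-analytics.com']
```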
Despite numerous attempts to counter this development with technical measures, only a few self-protection and system-protection measures prove effective, and these often come at the cost of a loss of website functionality or a restriction of the browser's usability. The study presented here demonstrates that legal regulations do not provide sufficient protection either. On the websites of educational institutions, deficiencies in the fulfilment of data protection obligations are identified. These manifest themselves in missing, incorrect or incomplete privacy policies, whose provision is part of a service provider's information obligations.
Considering only classic trackers is not sufficient, as a further study shows. By openly providing functional website components, a tracking company can increase its coverage from 38% to 61%. This situation is documented by measurements of websites from the healthcare sector and assessed from a technical as well as a legal perspective.
Existing systemic tools for detecting web tracking rely on the browser's interfaces for their measurements. This thesis presents DisTrack, a framework for web-tracking analysis that follows a sandbox-based measurement methodology. This approach is successfully used in dynamic malware analysis and specialises in detecting side effects on the surrounding system. The resulting behavioural analysis, which operates independently of the browser's interfaces, enables a holistic examination of the browser. In this way, systemic weaknesses in the browser that can be exploited by storage-based web-tracking techniques can be revealed.
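The following minimal sketch only illustrates the general sandbox idea of detecting side effects on the surrounding system; it is not the DisTrack implementation, and the profile path is a placeholder.

```python
# Minimal sketch of behaviour-based tracking detection: snapshot a browser
# profile directory before and after a page visit and report files that were
# created or modified (candidate storage locations for persistent identifiers).
import hashlib
import os

def snapshot(directory):
    """Map every file under `directory` to a SHA-256 digest of its content."""
    state = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as handle:
                state[path] = hashlib.sha256(handle.read()).hexdigest()
    return state

def side_effects(before, after):
    """Return files that appeared or changed between two snapshots."""
    created = sorted(set(after) - set(before))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return created, modified

# Usage (paths are placeholders for a real sandboxed browser profile):
# before = snapshot("/tmp/sandbox/profile")
# ... drive the browser to the page under test ...
# after = snapshot("/tmp/sandbox/profile")
# print(side_effects(before, after))
```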
The Internet of Things (IoT) is a concept in which connected physical objects are integrated into the virtual world to become active participants in business and everyday processes (Uckelmann, Harrison and Michahelles, 2011; Shrouf, Ordieres and Miragliotta, 2014). It is expected to have a major impact on businesses (Council, Nic and Intelligence, 2008), but the business models of small and medium enterprises are threatened if they do not adopt the new concept (Sommer, 2015). Thus, this thesis aims to showcase a sample implementation of connected devices in a small enterprise, demonstrating its added benefits for the business.
Design Science Research (DSR) is used to develop a prototype based on a use case provided by a carpentry business. The prototype comprises a hardware sensor and a web application which can be used by the wood shop to improve its processes. The thesis documents the iterative process of developing a prototype from the ground up to usable hardware and software.
This contribution provides an example of how IoT can be used and implemented at a small business.
This thesis connects the winemaker's goal of making wine well and profitably with an innovative technological application of the Internet of Things. The winemaker's work can thereby be supported and enriched, enabling optimisations in managing and planning the business that were until recently unthinkable, including close monitoring of the state of different areas of the vineyard, down to the individual grapevine. The thesis shows by example how to measure, transmit, store and make data available, demonstrated with live temperature, air-humidity and soil-humidity values from the vineyard. A modular architecture was designed for the presented system, which allows the use of current sensors as well as similar low-voltage sensors that will be developed in the future.
By using IoT devices in the vineyard, the winemaker gains a new level of precision in forecasting, starting from live data from the vineyard. More importantly, the winemaker can take immediate action when unforeseen severe weather conditions occur. The immediate use of current data is enabled by a cloud infrastructure; for this system, an open service infrastructure is employed. In contrast to other published commercial approaches, the described solution is based on open source.
As a stand-alone part of this work, a physical prototype for measuring relevant parameters in the vineyard was designed and developed from scratch until it fulfilled the set of specifications. The features and requirements for a functioning, autonomously transmitting data-collection device were defined and described, and their fulfilment by the prototype device was demonstrated. Through a literature review, supported by orienting live interviews with winemakers, theory and practical application were aligned.
For the development of the prototype, the general principles of electronic device development were followed, in particular the rules of Design Science Research and the principles of Quality Function Deployment. Characteristic of the prototype, principles such as the reuse of proven designs and the material cost of the device's building blocks were also taken into consideration (e.g. housing, Arduino, PCB). Parts reduction, reduced complexity, and simplified assembly, testing and field service were integrated into the development process through the modular design of the functional vineyard device components, with partial reference to the innovative electrical cabinet construction system Modular-3.
The software architecture follows a three-layer design including the TTN infrastructure. The front end is realised as a rich web client using a WordPress plugin; WordPress was chosen because of its wide adoption across the Internet, enabling fast and easy user familiarisation. Relevant quality aspects were tested and discussed with respect to exemplary functionality, extensibility, fulfilment of requirements, and the usability and durability of the device and the software.
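The abstract does not specify the prototype's payload format; under that assumption, the sketch below merely illustrates how sensor readings are typically packed into a few bytes for a LoRaWAN/TTN uplink and decoded again on the server side (shown in Python rather than the device firmware).

```python
# Illustrative payload encoding for a constrained uplink: pack the three
# vineyard readings into 6 bytes on the device and decode them in the backend.
import struct

def encode_payload(temperature_c, air_humidity, soil_humidity):
    """Pack readings as temperature*100 (int16) and humidities*100 (uint16)."""
    return struct.pack(">hHH",
                       round(temperature_c * 100),
                       round(air_humidity * 100),
                       round(soil_humidity * 100))

def decode_payload(payload):
    """Inverse of encode_payload, e.g. in a payload decoder or web backend."""
    t, air, soil = struct.unpack(">hHH", payload)
    return {"temperature_c": t / 100, "air_humidity": air / 100, "soil_humidity": soil / 100}

payload = encode_payload(21.37, 64.2, 38.5)
print(len(payload), payload.hex())   # 6 bytes on the radio link
print(decode_payload(payload))       # {'temperature_c': 21.37, ...}
```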
The prototype was characterised and successfully tested in the laboratory and in field deployment under different conditions, in order to measure and analyse whether the selected electronic design and layout fulfil all requirements.
The presented solution may serve as a basis for future development and application in this specific showcase and with similar technologies. An outlook on future work and applications concludes this thesis.
Entwicklung eines Social Collaboration Analytics Dashboard-Prototyps für Beiträge von UniConnect
(2018)
Over the past decade, the use of so-called Enterprise Collaboration Systems (ECS) in companies has been growing. By introducing such a collaboration system, which belongs to the class of social software, companies hope to improve the communication and cooperation of their employees. Through the integration of functions known from social media, large amounts of data are generated. A considerable share of this is textual data created, for example, with functions such as blogs, forums, status updates or wikis. These unstructured data offer great potential for analysis using text mining methods. However, research shows that such implementations are currently not common, and this thesis addresses that gap. The goal is the creation of a dashboard prototype that, in the context of Social Collaboration Analytics (SCA), deals with the analysis of textual data. The analysis goal is the identification of popular topics that platform users take up in the posts they create, within communities or across communities. The ECS UniConnect, which is based on IBM Connections and operated by the University Competence Center for Collaboration Technologies (UCT) at the University of Koblenz-Landau, was selected as the data source. The correct functioning of the dashboard rests on several Java classes whose implementations are based on different text mining methods. The analysis results are conveyed in the dashboard through various chart types, word clouds and tables.
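As a rough illustration of the kind of text mining behind such a dashboard (the prototype itself uses Java classes), the following sketch computes naive term frequencies per community and across communities from a few made-up posts, the sort of counts a word cloud is built from.

```python
# Naive term-frequency extraction over ECS posts, per community and overall.
import re
from collections import Counter, defaultdict

STOP_WORDS = {"the", "a", "an", "and", "or", "is", "are", "for", "of", "to", "in", "we"}

def tokens(text):
    return [w for w in re.findall(r"[a-zäöüß]+", text.lower()) if w not in STOP_WORDS]

# Hypothetical posts in the shape (community, text) exported from the ECS.
posts = [
    ("research", "We are preparing the project kickoff and the project plan"),
    ("research", "The kickoff agenda is ready for review"),
    ("teaching", "New lecture slides for the collaboration systems course"),
]

per_community = defaultdict(Counter)
overall = Counter()
for community, text in posts:
    counts = Counter(tokens(text))
    per_community[community].update(counts)
    overall.update(counts)

print(overall.most_common(3))                    # cross-community top terms
print(per_community["research"].most_common(3))  # top terms within one community
```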
Smart Building Solutions - Generischer Ansatz für die Identifikation von Raumsteuerungsfunktionen
(2018)
40 percent of current housing and real estate companies plan to integrate intelligent control systems into their properties during new construction and modernization. At the same time, Internet companies are pushing their devices into homes and apartments, promising intelligent services for their users. The term "Smart Home" is used for both types of new technologies. The first group of systems has its origins in the field of "Building Automation", the second group developed from the concept of the "Internet of Things".
In order to discover what the differences are and what common foundations exist, both the areas of Building Automation and Internet of Things are analyzed and compared.
The central contribution of this thesis is the realization that both domains are based on similar concepts and an integration is possible, without compromising the integrity of the systems themselves. In addition, the work provides an approach to designing Building Automation Systems with the integration of the Internet of Things.
During the last couple of years, the extension of the Internet into the real world, also referred to as the Internet of Things (IoT), has been positively affected by ongoing digitalization (Mattern and Floerkemeier, 2010; Evans, 2013). One of the most active IoT domains is the personal health ecosystem (Steele and Clarke, 2013). Against this background, this thesis proposes a gamification framework which is supported and enabled by IoT to bring personal health and IoT together in the context of health insurances. By examining gamification approaches and identifying the role of IoT in them, a conceptual model of a gamification approach was created which indicates where and how IoT is applicable to it. Hence, IoT acts as an enabler and, furthermore, as an enhancer of gamified activities. In particular, the necessity of wearable devices was highlighted. A stakeholder analysis shed light on the respective benefits and led to the finding that IoT enables two paradigm shifts, for both the insurance and its customers. Taking the results of the examination and the stakeholder analysis as input, these insights were used to develop an IoT-supported gamification framework. The framework comprises a multi-level structure which is meant to guide the process of creating an approach but also to analyze already existing approaches. Additionally, the developed framework was instantiated based on the application Pokémon Go to identify occurring issues and to explain why it failed to retain its customers in the long term. The thesis provides a foundation for further context-related research.
Social Business Documents: An Investigation of their Nature, Structure and Long-term Management
(2018)
Business documents contain valuable information. In order to comply with legal requirements, to serve as organisational knowledge and to prevent risks, they need to be managed. However, changes in the technology with which documents are produced have introduced new kinds of documents and new ways of interacting with documents. In particular, the web 2.0 led to the development of Enterprise Collaboration Systems (ECS), which enable employees to use wiki, blog or forum applications for conducting their business. Part of the content produced in ECS can be called Social Business Documents (SBD). Compared to traditional digital documents, SBD are different in their nature and structure as they are, for example, less well-structured and do not follow a strict lifecycle. These characteristics bring along new management challenges. However, the current research literature lacks investigations of the characteristics of SBD, their peculiarities and their management.
This dissertation uses document theory and documentary practice as theoretical lenses to investigate the new challenges of the long-term management of SBD in ECS. Using an interpretative, exploratory, mixed-methods approach, the study comprises two major research parts. First, the nature and structure of Social Business Documents are addressed by analysing them within four different systems, using four different modelling techniques each. The findings are used to develop general SBD information models, outlining the basic underlying components, structure, functions and included metadata, as well as a broad range of SBD characteristics. The second phase comprises a focus group, a case study including in-depth interviews, and a questionnaire, all conducted with industry representatives. The focus group identified that the kind of SBD used for specific content and the actual place of storage differ between organisations, and that there are currently almost no management practices for SBD at hand. The case study provided deep insights into general document management activities and investigated requirements, challenges and actions for managing SBD. Finally, the questionnaire consolidated and deepened the previous findings. It provides insights into the value of SBD, their current management practices as well as management challenges and needs. Although all participating organisations store information worth managing in SBD, most are not addressing them with management activities and many challenges remain.
Together, the investigations contribute to practice and theory. The progress in practice is summarised in a framework addressing the long-term management of Social Business Documents. The framework identifies and outlines the requirements and challenges of, and the actions for, SBD management, and it indicates the dependencies between the different aspects. Furthermore, the findings enable progress in the theory of documentary practice by discussing the extension of document types to include SBD. Existing problems are outlined along the definitions of records, and the new characteristics of documents that emerge through Social Business Documents are taken into account.
Companies try to utilise Knowledge Management (KM) to gain more efficiency and effectiveness in business. The major problem is that most of these KM projects are not or only rarely based on sound analyses or established theories about KM, and often there is a big gap between the expectations and the real outcome of such KM initiatives. The research question to be answered is therefore: What challenges arise in KM projects, which KM requirements can be derived from them, and which recommendations support the goal of meeting the requirements for KM? As a theoretical foundation, a set of KM frameworks is examined. Subsequently, KM challenges from the literature are analysed and best practices from case studies are used to provide recommendations for action on these challenges. The main outcome of this thesis is a best-practice guideline, which allows Chief Knowledge Officers (CKOs) and KM project managers to examine the challenges mentioned in this thesis closely and to find a suitable method to master these challenges in an optimal way. This guideline shows that KM can be positively and negatively influenced in a variety of ways, that mastering Knowledge Management in a company is a big and far-reaching venture, and that technology, specifically Information Technology (IT), is only one part of the big picture.
This research presents a method for the application-based linking of requirements and Enterprise Collaboration Software components. Based on the established IRESS model, a practical mapping scheme is developed that connects use cases, via collaboration scenarios, collaborative features and software components, with ECS. Company requirements can thus be modelled in the form of use cases and collaboration scenarios and then linked to concrete ECS via the mapping scheme. In addition, a method for identifying the collaborative features contained in software components is presented and applied to an example.
Subsequently, a concept for a web application is designed that performs the presented mapping automatically and thus, once requirements have been entered in the form of use cases or collaboration scenarios, outputs the ECS that support exactly these requirements.
The Internet of Things (IoT) has recently developed from the far-away vision of ubiquitous computing into very tangible endeavors in politics and the economy, implemented in expensive preparedness programs. Experts predict considerable changes in business models that organizations need to address in order to respond to competition. Although there is a need to develop strategies for the upcoming transformations, the organizational change literature has not yet turned to the specific changes related to this new technology. This work aims at investigating IoT-related organizational change by identifying and classifying different change types. It therefore combines the methodological approach of grounded theory with a discussion and classification of the identified changes, informed by a structured review of the organizational change literature. This includes a meta-analysis of case studies using a qualitative, exploratory coding approach to identify categories of organizational change related to the introduction of IoT. Furthermore, a comparison of the identified categories with former technology-related change is provided, using the examples of Electronic Business (e-business), Enterprise Resource Planning (ERP) systems, and Customer Relationship Management (CRM) systems. As a main result, this work develops a comprehensive model of IoT-related business change. The model presents two main themes of change, indicating that personal smart things will transform businesses by means of more personal devices that suggest and schedule actions of their users and try to avoid hazards. At the same time, the availability of information in organizations will further increase to a state where information is available ubiquitously. This will ultimately enable access to real-time information about objects and persons at any time and from any place. As a secondary result, this work gives an overview of concepts of technology-related organizational change in the academic literature.
This thesis explores the possibilities of probabilistic process modelling for Computer Supported Cooperative Work (CSCW) systems in order to predict the behaviour of the users of the CSCW system. Toward this objective, the applicability, advantages, limitations and challenges of probabilistic modelling are examined in the context of CSCW systems. Finally, as the primary goal, seven models are created and examined to show the feasibility of probabilistic process discovery and of predicting users' behaviour in CSCW systems.
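A minimal sketch of what probabilistic process discovery and prediction can look like, using made-up event traces rather than the seven models of the thesis: first-order transition probabilities are estimated from the traces and used to predict the most likely next user action.

```python
# Estimate a first-order Markov model from user traces and predict next actions.
from collections import Counter, defaultdict

traces = [
    ["login", "open_wiki", "edit_page", "logout"],
    ["login", "open_forum", "reply", "logout"],
    ["login", "open_wiki", "read_page", "logout"],
]

transitions = defaultdict(Counter)
for trace in traces:
    for current, following in zip(trace, trace[1:]):
        transitions[current][following] += 1

def next_action_distribution(action):
    """Return P(next | action) estimated from the traces."""
    counts = transitions[action]
    total = sum(counts.values())
    return {nxt: c / total for nxt, c in counts.items()}

print(next_action_distribution("login"))
# -> {'open_wiki': 0.667, 'open_forum': 0.333} (approximately)
dist = next_action_distribution("open_wiki")
print(max(dist, key=dist.get))  # most likely next action after opening the wiki
```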
The extensive literature in the data visualization field indicates that the process of creating efficient data visualizations requires the designer to have a large set of skills from different fields (such as computer science, user experience, and business expertise). However, there is a lack of guidance about the visualization process itself. This thesis aims to investigate the different processes for creating data visualizations and to develop an integrated framework that guides the process and enables the user to create more useful and usable data visualizations. First, existing frameworks in the literature are identified, analyzed and compared. During this analysis, eight views of the visualization process are developed; these views represent the set of activities which should be carried out in the visualization process. A preliminary integrated framework is then developed based on an analysis of these findings. This new integrated framework is tested in the field of Social Collaboration Analytics on an example from the UniConnect platform. Lastly, the integrated framework is refined and improved based on the test results, with the help of diagrams, visualizations and textual descriptions. The results show that the visualization process is not of a waterfall type; rather, it is an iterative methodology with distinct phases of work, demonstrating how to address the eight views with different levels of stakeholder involvement. The findings form the basis for a visualization process which can be used in future work to develop a fully functional methodology.
Identifikation und Analyse von Konfigurationen zwischenbetrieblicher Integration in der Möbelbranche
(2017)
In the German furniture industry, there have for several years been industry-wide efforts to support inter-organisational cooperation through organisational and technical solutions (e.g. the standardisation of exchange formats). In some industry segments (above all kitchen and upholstered furniture), integration infrastructures have become established that are accepted and used by many industry participants. Despite this development, surprising phenomena have recently been observed in these segments: deficiencies in process integration between industry actors, differences in the progress of standardisation between the segments, and the emergence of different infrastructures for inter-organisational integration. A closer look at these phenomena reveals a highly complex interplay of the actors in the furniture industry on three different levels (institutional, organisational and technical). Understanding the underlying relationships requires a holistic consideration of the relevant factors, for which only few convincing concepts exist so far. The configuration analysis of Lyytinen and Damsgaard views business networks as configurations of inter-organisational integration that correspond to a stable pattern of cooperation forms; it is particularly suited to studying business networks at the industry level. In this thesis, configuration analysis is operationalised for the first time, using a research design developed and adapted specifically for this purpose. To this end, data on configurations of inter-organisational integration in two segments of the German furniture industry (kitchen and upholstered furniture) were collected in 21 interviews with 19 organisations and analysed, and explanations for the three phenomena mentioned above were developed. The results show that complex constellations of industry characteristics underlie the existing configurations. In total, four configuration types (intermediary, industry, dyad and triad) were identified, which occur in 17 concrete configurations (intermediary 4 times, industry 5 times, dyad 6 times, triad 2 times). The causes of the observed phenomena are manifold. In particular, certain industry-specific characteristics (including product properties), the competition between classes of actors and, finally, the economic dominance of retailers over manufacturers are decisive. The progress of standardisation in the industry segments and the underlying infrastructures depend primarily on the business benefit attributed to them, whereas process integration requires a centrally steering authority for successful IOIS diffusion.
In the last few years, the Internet of Things has gained increased attention from researchers as well as companies due to its innovation potential. The rising interest in the Internet of Things has also affected logistics, which currently suffers from the effects of globalization and ever-increasing competitive pressure. Thus, there are efforts to discover how logistics can profit from the use of IoT concepts, ideas and technologies to overcome its challenges. This research study focuses on the identification of these efforts and the corresponding research for logistics processes. For that purpose, the current literature on this topic was explored. The final outcome of this paper is a structured overview of the identified IoT use cases, their corresponding technologies and devices, and their affected stakeholders. Whether the expectations regarding IoT implementation in logistics processes are met, how companies can profit from these use cases, and which problems potentially arise when using IoT devices and technologies in logistics is answered at the end of this paper.
With global and distributed project teams becoming increasingly common, Collaborative Project Management is becoming the prevalent paradigm for work in most organisations. Software has for many years been one of the most used tools for supporting Project Management, and with the focus on Collaborative Project Management and the emergence of Enterprise Collaboration Systems (ECS), Collaborative Project Management Software (CPMS) is gaining increased attention. This thesis examines the capabilities of CPMS for the long-term management of information, which includes not only the management of files within these systems but the management of all types of digital business documents, particularly social business documents. Previous research shows that social content in collaboration software is often poorly managed, which poses challenges to meeting performance and conformance objectives in a business. Based on a literature review, requirements for the long-term management of information in CPMS are defined, and seven CPMS tools are analysed regarding the content they contain and the functionalities they offer for the long-term management of this content. The study shows that CPMS by and large are not able to meet the long-term information management needs of an organisation on their own, and that only the tools geared towards enterprise customers have sufficient capabilities to support the implementation of an Enterprise Information Management strategy.
The Internet of Things (IoT) is a network of addressable, physical objects that contain embedded sensing, communication and actuating technologies to sense and interact with their environment (Geschickter 2015). Like every novel paradigm, the IoT sparks interest throughout all domains, both in theory and in practice, resulting in the development of systems that push technology to its limits. These limits become apparent when an increasing number of Things has to be managed across various contexts. A plethora of IoT architecture proposals have been developed and prototype products, such as IoT platforms, have been introduced. However, each of these architectures and products applies its very own interpretation of an IoT architecture and its individual components, so that the IoT is currently more an Intranet of Things than an Internet of Things (Zorzi et al. 2010). Thus, this thesis aims to develop a common understanding of the elements forming an IoT architecture and to provide high-level specifications in the form of a Holistic IoT Architecture Framework.
Design Science Research (DSR) is used in this thesis to develop the architecture framework based on the pertinent literature. The development of the Holistic IoT Architecture Framework includes the identification of two new IoT Architecture Perspectives that became apparent during the analysis of the IoT architecture proposals identified in the extant literature. While applying these novel perspectives, the need for a new component for the architecture framework, which was merely implicitly mentioned in the literature, became obvious as well. The components of various IoT architecture proposals as well as the novel component, the Thing Management System, were combined, consolidated and related to each other to develop the Holistic IoT Architecture Framework. Subsequently, it was shown that the specifications of the architecture framework are suitable to guide the implementation of a prototype.
This contribution provides a common understanding of the basic building blocks, actors and relations of an IoT architecture.
This thesis combines two topics that are becoming increasingly present and relevant in our society and economy. The first topic is sustainability, which in this thesis is divided into the three pillars of ecology, economy and social matters. The first pillar, ecology, is mainly concerned with combating environmental problems and preserving nature in the long term. The economic pillar is about using resources sustainably in order to ensure long-term economic returns. The last pillar aims to promote social sustainability by securing social cohesion and enabling access to work while at the same time ensuring good working conditions. All three pillars are therefore also relevant for companies and should be considered continuously in order to implement what is known as corporate sustainability. To support this implementation, the second topic is brought in: the Internet of Things. Like sustainability, the Internet of Things is gaining in importance and offers many advantages for supporting sustainable companies. More and more devices are being turned into intelligent devices in order to integrate them into an information network, where the collected data can be meaningfully analysed and used, making many areas more efficient and many activities easier.
Building on these two topics, the thesis presents IoT technologies that support companies in the areas of ecology, economy and social matters. For the technologies presented, examples are then given and, where available, the respective providers.
Since the use of IoT technologies is not only beneficial but also brings challenges, these are pointed out at the end. Companies, society and politics must address these challenges in order to enable efficient use.
The purpose of this research is to examine various existing cloud-based Internet of Things (IoT) development platforms and to evaluate one platform (IBM Watson IoT) in detail using a use case scenario. The IoT is an emerging technology with the vision of interconnecting the virtual world (e.g. clouds, social networks) and the physical world (e.g. devices, cars, fridges, people, animals) through Internet technology. For example, the IoT concept of smart cities, which has the objective of improving the efficiency and development of business, social and cultural services in the city, can be realised by using sensors, actuators, clouds and mobile devices (IEEE, 2015). A sensor (e.g. a temperature sensor) in a building (physical world) can send real-time data to the IoT cloud platform (virtual world), where it can be monitored, stored, analysed, or used to trigger an action (e.g. turn on the cooling system in the building if the temperature exceeds a threshold). Although the IoT creates vast opportunities in different areas (e.g. transportation, healthcare, the manufacturing industry), it also brings challenges such as standardisation, interoperability, scalability, security and privacy. In this research report, IoT concepts and related key issues are discussed.
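The following sketch only illustrates the decision logic of the smart-building example above, with made-up readings and an assumed threshold; in a real deployment the readings would be published to the cloud IoT platform (e.g. via MQTT) and the rule would run there, so this is not the IBM Watson IoT API.

```python
# Simplified, local illustration: a temperature reading triggers an action
# once an (assumed) threshold is exceeded.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

COOLING_THRESHOLD_C = 26.0   # assumed threshold, not from the thesis

def handle(reading, threshold=COOLING_THRESHOLD_C):
    """Return the action the platform should trigger for one reading."""
    if reading.temperature_c > threshold:
        return f"turn_on_cooling(zone_of={reading.sensor_id})"
    return "no_action"

for r in [Reading("floor1-temp", 24.5), Reading("floor1-temp", 27.2)]:
    print(r.temperature_c, "->", handle(r))
# 24.5 -> no_action
# 27.2 -> turn_on_cooling(zone_of=floor1-temp)
```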
The focus of this research is to compare various cloud-based IoT platforms in order to understand the business and technical features they offer. The cloud-based IoT platforms from IBM, Google, Microsoft, PTC and Amazon have been studied.
To design the research, the Design Science Research (DSR) methodology has been followed, and to model the real-time IoT system the IOT-A modelling approach has been used.
The comparison of the different cloud-based IoT development platforms shows that all of the studied platforms provide basic IoT functionality such as connecting IoT devices to the cloud-based platform, collecting data from the IoT devices, data storage and data analytics. However, IBM's IoT platform appears to have an edge over the other platforms studied in this research because of its integrated run-time environment, which also makes it more developer-friendly. Therefore, IBM Watson IoT for Bluemix was selected for further examination of its capabilities. The IBM Watson IoT for Bluemix offerings include analytics, risk management, connect and information management. A use case was implemented to assess the capabilities that the IBM Watson IoT platform offers, and digital artifacts (i.e. applications) were produced to evaluate IBM's IoT solution. The results show that IBM offers a very scalable, developer- and deployment-friendly IoT platform. Its cognitive, contextual and predictive analytics provide promising functionality that can be used to gain insights from the IoT data transmitted by sensors and other IoT devices.
This report concludes a research practicum conducted by students of the Master's programmes Information Management and Business Informatics under the supervision of research associate Daniela Simić-Draws and Prof. Dr. Rüdiger Grimm. An essential basis for this work was a procedure model for the security analysis of business processes that D. Simić-Draws is developing as part of her dissertation and to whose ongoing improvement this student research practicum was able to contribute valuable insights. The security-critical processes "municipal election" and "cash withdrawal at an ATM" were chosen as application examples because Prof. Grimm's working group has gained experience with both applications in previous scientific work. For the "municipal election" example in particular, the Ordnungsamt Koblenz, which is responsible for municipal elections, kindly offered its cooperation, with the active support of its head Dirk Urmersbach, so that this business process could be examined under realistic conditions.
This work examines the influence of the voting scenario on the secrecy and public nature of an election. A voting scenario is determined by its voting form and the voting technology used: the voting form distinguishes between voting at a polling station and remote voting, the voting technology between paper-based and electronic voting. With paper-based voting at a polling station, postal voting (paper-based remote voting) and Internet voting (electronic remote voting), three prominent voting scenarios and their influence on secrecy, privacy and publicity are examined.
This thesis provides an overview of the current topics and the influence of mobile components on Enterprise Content Management (ECM). Through a literature review, the core topics of enterprise mobility and ECM were identified and projected onto the context of using mobile apps within an ECM environment. An analysis of three ECM systems and their mobile software led to an understanding of the functionalities and capabilities mobile systems provide in the ECM environment. These findings lead to a better understanding of the usage of mobile Enterprise Content Management and of its preparation. The thesis focuses on the most important topics that need to be considered for the usage and adoption of mobile apps in ECM.
We are entering the 26th year since the World Wide Web (WWW) became reality. Since the birth of the WWW in 1990, the Internet and with it websites have changed the way businesses compete, shifting products, services and even entire markets.
Consequently, gathering and analysing visitor traffic on websites can provide crucial information for understanding customer behavior and numerous other aspects.
Web Analytics (WA) tools offer a wide range of functionality, which calls for complex decision-making in information management. Website operators implement Web Analytics tools such as Google Analytics to analyse their website with the purpose of identifying web usage in order to optimise website design and management. The gathered data leads to emergent knowledge, which provides new marketing opportunities and can be used to improve business processes and to understand customer behavior in order to increase profit. Moreover, Web Analytics plays a significant role in measuring performance and has therefore become an important component in web-based environments for making business decisions.
However, many small and medium-sized enterprises try to keep up with the web business competition but do not have equivalent resources in manpower and knowledge to keep pace; some therefore forgo Web Analytics entirely.
This research project aims to develop a Web Analytics framework to assist small and medium-sized enterprises in making better use of Web Analytics. By identifying the business requirements of SMEs and connecting them to the functionality of Google Analytics, a Web Analytics framework with accompanying guidelines is developed, which guides SMEs on how to proceed in using Google Analytics to achieve actionable outcomes.
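To make the kind of metric such a framework maps business questions to more concrete, the following sketch computes pageviews, sessions and bounce rate from a tiny, made-up hit log; Google Analytics reports these figures out of the box, so this is purely illustrative.

```python
# Basic web-analytics figures from a made-up hit log of (session_id, page) pairs.
from collections import defaultdict

hits = [
    ("s1", "/home"), ("s1", "/products"), ("s1", "/contact"),
    ("s2", "/home"),                      # single-page session -> bounce
    ("s3", "/products"), ("s3", "/home"),
]

pages_per_session = defaultdict(list)
for session_id, page in hits:
    pages_per_session[session_id].append(page)

pageviews = len(hits)
sessions = len(pages_per_session)
bounces = sum(1 for pages in pages_per_session.values() if len(pages) == 1)
bounce_rate = bounces / sessions

print(f"pageviews={pageviews}, sessions={sessions}, bounce rate={bounce_rate:.0%}")
# pageviews=6, sessions=3, bounce rate=33%
```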
Massenprozessmanagement
(2015)
This dissertation answers the research question of which fundamentally suitable approaches and which necessary information technologies are to be considered for the management of business processes in large volumes (Mass Business Process Management, MBPM) in service companies. It could be shown that the execution of mass processes requires a special approach that uses methods from the manufacturing industry. The research aim, to develop an MBPM approach for service companies, was accomplished using the Design Science Research approach and is explained in this dissertation in consecutive steps. For the development of the MBPM approach, a longitudinal in-depth case study was conducted with a business process outsourcing provider to gain insights from its approach. Outsourcing providers have to produce their services in a very efficient and effective way, otherwise they will not be able to offer their products at favorable conditions. It was shown that the factory-oriented approach of the outsourcing service provider was suitable, over the observation period of ten years, for executing mass processes of the highest quality at constantly decreasing prices and with fewer and fewer people.
The assumed need for research concerning MBPM was verified on the basis of an extensive literature review based on the journal rating VHB-Jourqual and other literature sources. As many approaches for the introduction of BPM were found, a selection of BPM approaches was analyzed to gain further insights for the development of the MBPM approach. Based on the analysis and comparison of the different BPM approaches as well as the comparison with the approach of the process outsourcing provider, it was found that BPM and MBPM differ in many respects. MBPM has a strong operational focus and needs intensive IT support. The operational focus mainly shows in the operational control of processes and people as well as in the corresponding high demands on process transparency. This process transparency is achieved through detailed monitoring, fine-grained process measurements and timely reporting. Information technology is needed, for example, to conduct process monitoring in a timely manner, but also to give internal as well as external stakeholders the desired overview of the current workload and of the invoicing of services.
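As an illustration of the fine-grained process measurement mentioned above (not taken from the case study), the following sketch computes per-case cycle times and their average from a small, made-up event log.

```python
# Per-case cycle times from an event log with start and end timestamps,
# the kind of figure timely MBPM reporting is built on.
from datetime import datetime

event_log = [
    {"case": "C-1001", "start": "2015-03-02 09:00", "end": "2015-03-02 10:30"},
    {"case": "C-1002", "start": "2015-03-02 09:10", "end": "2015-03-02 09:35"},
    {"case": "C-1003", "start": "2015-03-02 09:20", "end": "2015-03-02 11:05"},
]

def minutes(entry):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(entry["end"], fmt) - datetime.strptime(entry["start"], fmt)
    return delta.total_seconds() / 60

cycle_times = {e["case"]: minutes(e) for e in event_log}
average = sum(cycle_times.values()) / len(cycle_times)
print(cycle_times)                        # per-case processing time in minutes
print(f"average cycle time: {average:.1f} min")
```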
In contrast to the approach of the process outsourcing provider, it could also be shown that change management can positively influence the implementation, the continuous operation and the constant change associated with MBPM.
This research examines information audit methodologies and information capturing methods for enterprise social software, which are an elementary part of the audit process. Information auditing lacks a standardized definition and methodology because the scope of the audit process is diverse and depends on the organization undertaking the audit. The benefits of information auditing, and the potential challenges of Enterprise 2.0 that an audit can overcome, are comprehensive and provide a major incentive for managers to conduct an audit. Information asset registers, as a starting point for information auditing, do not specifically focus on social software assets. Therefore, this research project combines asset registers from different areas to create a new register suitable for the requirements of Enterprise 2.0. The necessary adaptations caused by the new character of the assets are minor. However, the case study applying the asset register for the first time reveals several problematic areas for information auditors completing the register. Rounding off the thesis, a template is developed for setting up new workspaces on enterprise social software systems with appropriate metadata, taking into account the meaningful metadata discovered in the asset register.
This thesis conducts a text and network analysis of criminological files, with a specific focus on the field of money laundering. The analysis identified the most important concepts present in the text, which were classified into eleven different classes. The relationships between those concepts were analysed using ego networks, key entity identification and clustering. Some of the statements made about money laundering could be validated by the findings of this analysis and their interpretation. Specific concepts such as banks and organizations as well as foreign subsidiaries were identified. Aggregating these concepts with the statements in chapter 1.4.3 on the circular process of money laundering, it can be stated that different organizations and individuals present in the criminological files were placing money in the legal financial market through different banks, organizations and investments. Finally, this thesis attempts to assess the benefits of the tools used for this kind of research process. An estimate of ORA's and AutoMap's applicability for this kind of research is given at the end.
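A small sketch of the network-analysis step with made-up entities and relations; the thesis used AutoMap and ORA, and networkx stands in here to show ego networks and key-entity identification by degree centrality.

```python
# Ego networks and key entities over a tiny, fictitious concept network.
import networkx as nx

edges = [
    ("Person X", "Bank A"), ("Person X", "Company B"),
    ("Company B", "Foreign Subsidiary C"), ("Bank A", "Investment Fund D"),
    ("Person Y", "Bank A"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Key entity identification via degree centrality.
centrality = nx.degree_centrality(G)
print(max(centrality, key=centrality.get))     # -> 'Bank A'

# Ego network of one concept: its direct relationships in the files.
ego = nx.ego_graph(G, "Bank A", radius=1)
print(sorted(ego.nodes()))
```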
In this work a framework is developed that is used to create an evaluation scheme for the evaluation of text processing tools. The evaluation scheme is developed using a model-dependent software evaluation approach and the focus of the model-dependent part is the text-processing process which is derived from the Conceptual Analysis Process developed in the GLODERS project. As input data a German court document is used containing two incidents of extortion racketeering which happened in 2011 and 2012. The evaluation of six different tools shows that one tool offers great results for the given dataset when it is compared to manual results. It is able to identify and visualize relations between concepts without any additional manual work. Other tools also offer good results with minor drawbacks. The biggest drawback for some tools is the unavailability of models for the German language. They can perform automated tasks only on English documents. Nonetheless some tools can be enhanced by self-written code which allows users with development experience to apply additional methods.
The aim of this paper is to identify and understand the risks and issues companies are experiencing from the business use of social media and to develop a framework for describing and categorising those social media risks. The goal is to contribute to the evolving theorisation of social media risk and to provide a foundation for the further development of social media risk management strategies and processes. The study findings identify thirty risk types organised into five categories (technical, human, content, compliance and reputational). A risk-chain is used to illustrate the complex interrelated, multi-stakeholder nature of these risks and directions for future work are identified.
This work presents a comparison of different medical drug dispensers. A drug dispenser is a device that makes it possible to take a smaller quantity of medication from a larger supply. To perform this comparison, 15 requirements for the dispensers were identified. The requirements "organization", "remind" and "ergonomics" are assigned to ease of taking. "Compliance", "adaptability", "selectivity", "persistence", "functionality", "correctness" and "specificity/sensitivity" belong to compliance, i.e. adherence to therapy; this category therefore comprises the most requirements. The category storage collects "hygiene", "pharmaceutical forms" and "robustness". Finally, the requirements "clarity" and "data protection" were assigned to other requirements. Subsequently, different dispenser concepts were introduced and analyzed with regard to the fulfillment of the requirements. The following concepts were analyzed: pillbox, one-week dispenser, blister, tubular bag, MEMS, OtCM, electronic dispenser and smartphone application. Based on this analysis, the dispensers could be compared with each other; it turned out that all concepts show deficits. Hence, the author developed a concept of his own which fulfils all requirements except two at a level of good to very good, making it the most capable of the concepts.
German politicians have identified a need for greater citizen involvement in decision-making than in the past, as confirmed by a recent German parliamentarians study ("DEUPAS"). As in other forms of social interaction, the Internet provides significant potential to serve as the digital interface between citizens and decision-makers: in the recent past, dedicated electronic participation ("e-participation") platforms (e.g. dedicated websites) have been provided by politicians and governments in an attempt to gather citizens' feedback and comments on a particular issue or subject. Some of these have been successful, but a large proportion of them are grossly under-used; often only small numbers of citizens use them. Over the same time period, society's enthusiasm for social networks has increased, and their use is now commonplace. Many citizens use social networks such as Facebook and Twitter for all kinds of purposes, and in some cases to discuss political issues.
Social networks are therefore obviously attractive to politicians: from local government to federal agencies, politicians have integrated social media into their daily work. However, there is a significant challenge regarding the usefulness of social networks. The problem is the continuous increase in digital information: social networks contain vast amounts of information, and it is impossible for a human to manually filter the relevant information from the irrelevant (so-called "information overload"). Even using the search tools provided by social networks, it is still a huge task for a human to determine meanings and themes from the multitude of search results. New technologies and concepts have been proposed to provide summaries of masses of information through lexical analysis of social media messages, and they therefore promise an easy and quick overview of the information.
This thesis examines the relevance of the results of these analyses for use in everyday political life, with the emphasis on the social networks Facebook and Twitter as data sources. Here we make use of the WeGov Toolbox and its analysis components that were developed during the EU project WeGov. The assessment has been performed in consultation with actual policy-makers from different levels of German government: policy-makers from the German Federal Parliament, the State Parliament North Rhine-Westphalia, the State Chancellery of the Saarland and the cities of Cologne and Kempten all took part in the study. Our method was to execute the analyses on data collected from Facebook and Twitter, and to present the results to the policy-makers, who would then evaluate them using a mixture of qualitative methods.
The responses of the participants have provided us with some useful conclusions:
1) None of the participants believe that e-participation is possible in this way. But participants confirm that "citizen-friendliness" can be supported by this approach.
2) The most likely users of the summarisation tools are those who have experience with social networks but are not "power users". The reason is that "power users" already know the relevant information provided by the analysis tools, whereas without any experience with social networks it is hard to interpret the analysis results correctly.
3) The evaluation has considered geographical aspects and related these to, for example, a politician's constituency as a local area within social networks. Comparing rural to urban areas, it is shown that the amount of relevant political information in rural areas is low: while the proportion of publicly available information in urban areas is relatively high, the proportion in rural areas is much lower.
The findings that result from the engagement with policy-makers are systematically surveyed and validated within this thesis.
The diploma thesis "Entwicklung eines Telemedizinregister-Anforderungskatalog" deals with the creation of a catalogue of requirements for the development of a register applicable in the telemedicine domain to support billing processes. In the German healthcare system, these processes are carried out between telemedicine service providers and payers in the context of integrated care in order to settle the financing of telemedical treatments. The telemedicine register serves as a data-holding store that receives copies of treatment data from telemedicine service providers and logs their processing in the register. The participating payers are granted access to this register in order to verify the validity of the therapy data submitted to them for analysis by telemedicine service providers. The thesis describes the theoretical foundations of data protection and telemedicine, from which requirement lists and a target model of a telemedicine register are derived. This model consists of data models and process descriptions and is validated using a practical example of a telemedical treatment. The integration of various standards that can be used in data exchange processes is a further part of the design of the telemedicine register, for which possible fields of application for extending its functionality are described.
This diploma thesis describes the concept and implementation of a software router for policy-based Internet regulation. It is based on the ontology InFO described by Kasten and Scherp, which is intended for a system-independent description of regulation mechanisms. Additionally, InFO enables transparent regulation by linking background information to the regulation mechanisms. The InFO extension RFCO extends the ontology with router-specific entities. A software router is developed to implement RFCO at the IP level. The regulation is designed to be transparent by letting the router inform affected users about the regulation measures. The router implementation is tested exemplarily in a virtual network environment.
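The following is a deliberately simplified illustration of the router's transparent behaviour, not the actual implementation (which models regulation policies with the InFO/RFCO ontologies): a destination address is checked against a rule and the affected user is informed about the regulation instead of being blocked silently. The addresses and the legal basis shown are fictitious.

```python
# Toy policy check: either forward traffic or explain the applied regulation.
from dataclasses import dataclass

@dataclass
class RegulationRule:
    blocked_ip: str
    legal_basis: str      # background information linked to the rule
    authority: str

RULES = [RegulationRule("203.0.113.42", "Court order 12-O-345 (example)", "Example authority")]

def route(destination_ip):
    for rule in RULES:
        if destination_ip == rule.blocked_ip:
            return (f"Access to {destination_ip} is regulated. "
                    f"Basis: {rule.legal_basis}; ordered by: {rule.authority}")
    return f"forwarding packet to {destination_ip}"

print(route("198.51.100.7"))
print(route("203.0.113.42"))
```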
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and it allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
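A much-simplified sketch of the signing idea with invented example triples: the graph is canonicalised by sorting its triples, hashed, and signed; iterative signing is then shown by signing a graph that already contains the first signature. HMAC stands in here for the asymmetric signature scheme and the canonicalisation method of the formal framework.

```python
# Canonicalise -> hash -> sign, then sign again over a graph that embeds
# the first signature (iterative signing).
import hashlib
import hmac

def canonical_form(triples):
    """Deterministic serialisation: one sorted line per (s, p, o) triple."""
    return "\n".join(sorted(f"{s} {p} {o} ." for s, p, o in triples)).encode()

def sign(triples, key):
    digest = hashlib.sha256(canonical_form(triples)).hexdigest()
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

graph_a = {("ex:Alice", "ex:knows", "ex:Bob")}
sig_a = sign(graph_a, b"key-of-party-A")

# Party B adds data plus A's signature statement and signs the combined graph.
graph_b = graph_a | {
    ("ex:Bob", "ex:worksFor", "ex:ACME"),
    ("ex:graphA", "sig:hasSignature", f'"{sig_a}"'),
}
sig_b = sign(graph_b, b"key-of-party-B")
print(sig_a, sig_b, sep="\n")
```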
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results, but invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers, we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights, but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse und Lotus Quickr
(2012)
Der vorliegende Arbeitsbericht "Integration von CRM-Systemen mit Kollaborations-Systemen am Beispiel von DocHouse/ BRM und IBM Lotus Quickr" ist Ergebnis einer studentischen Projektarbeit. Ziel des Projekts war es Integrationsszenarien zwischen einem CRM-System und einem Kollaborati-onssystem zu erarbeiten und eine prototypische Schnittstelle mit entsprechender Funktion zwischen den Systemen DocHouse/ BRM und IBM Lotus Quickr zu implementieren.
Ein besonderer Dank geht in diesem Zusammenhang an Herr Wolfgang Brugger (Geschäftsführer der DocHouse GmbH), der die Idee einer solchen Entwicklung hatte und die FG BAS mit deren Durchführung betraute. Die Erstellung des Konzepts und des Prototyps wurde vom Winter 2010 bis Sommer 2011 von den Studenten Björn Lilge, Ludwig Paulsen, Marco Wolf, Markus Aldenhövel, Martin Surrey und Mike Reuthers im Rahmen ihres Projektpraktikums durchgeführt. Das Projektteam wurde bei der Konzeption und Implementierung inhaltlich und organisatorisch von Dipl.-Wirt.-Inform. Roland Diehl betreut.
Die vorliegende Fallstudie entstand als Untersuchungsobjekt zu einer Bachelorarbeit und wurde nach der eXperience Fallstudienmethodik erstellt. Ziel der Bachelorarbeit war die Identifizierung von Nutzenaspekten in diesem konkreten Fall. Im Anwenderunternehmen DOCHOUSE wurde hier eine Schnittstelle zwischen dem internen CRM-System und einem kollaborativen System für den externen Zugriff eingeführt.
Special thanks go to Mr. Wolfgang Brugger (managing director of DOCHOUSE GmbH), who motivated the creation of the case study and entrusted the FG BAS with carrying it out. The case study was conducted in winter 2011 by the student Martin Surrey together with Roland Diehl, research associate of the research group.
Since the beginning of the World Wide Web, the creation and distribution of digital goods (digital assets) has changed decisively. Today, creating, editing, distributing and consuming them no longer requires special physical equipment. As a result, the speed at which media are generated and transported has increased enormously. The possibilities for cooperation have changed as well, or in some places were made possible in the first place.
Although the use of the Internet made it possible to detach digital goods from their physical carrier media, the provisions of copyright law still apply. Especially for legally less experienced users, this leads to uncertainty about how a particular digital good may be used. On the other hand, many users transfer the familiar practice of sharing media to the digital environment. Copyright infringements that previously took place on a small scale in private settings now happen globally and visibly to everyone. Since this form of sharing threatens the primary business model of the rights exploiters, attempts are made to restrict the use of digital goods or to prevent it for unauthorized users. This was and is done, among other things, with methods of digital rights management (DRM).
These methods are sometimes controversial among users or are even openly rejected, since they can make the use of digital goods more difficult compared to their physical counterparts. In addition, many of these methods proved to be insecure, and the encryption schemes used were broken. With a "usage rights management" (URM) approach, the core principle of DRM is retained, but its practical implementation moves in a different direction. The user regains full control over the digital goods (without the restrictive measures of classical DRM implementations), but also full responsibility. The user is supported by software that provides information about the legal options and, at the user's request, also imposes technical restrictions on use, similar to the enforcement in classical DRM systems.
URM uses the open rights expression language ODRL. This study thesis is part of the URM project of the IT-Risk-Management research group, which in turn is part of the SOAVIWA project. The goal of the study thesis is to develop a Java class that maps licenses written in ODRL to Java objects. Further components to be developed are intended to manage these objects and to allow modifying existing and creating new objects. All components are to become part of the already partially implemented Toolkit for URM (TURM).
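The TURM classes themselves are not shown in the abstract. As a rough illustration of what mapping an ODRL license onto a Java object could look like, the following sketch parses a stripped-down, ODRL-like XML fragment into a plain Java object; the element names and fields are hypothetical simplifications, not the actual ODRL schema or the TURM API.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/** Hypothetical simplification of a license object as a toolkit like TURM might hold it. */
public class OdrlLicense {
    String assetUri;                                   // the licensed digital good
    List<String> permissions = new ArrayList<>();      // granted usages, e.g. "display"

    /** Parses a stripped-down ODRL-like XML fragment; element names are illustrative. */
    static OdrlLicense fromXml(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        OdrlLicense license = new OdrlLicense();
        NodeList assets = doc.getElementsByTagName("asset");
        if (assets.getLength() > 0) {
            license.assetUri = ((Element) assets.item(0)).getAttribute("uri");
        }
        NodeList perms = doc.getElementsByTagName("permission");
        for (int i = 0; i < perms.getLength(); i++) {
            license.permissions.add(perms.item(i).getTextContent().trim());
        }
        return license;
    }

    public static void main(String[] args) throws Exception {
        OdrlLicense l = fromXml(
            "<license><asset uri='urn:example:ebook:42'/>"
            + "<permission>display</permission><permission>print</permission></license>");
        System.out.println(l.assetUri + " -> " + l.permissions);
    }
}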
Cloud computing is a topic that has gained momentum in recent years. Current studies show that an increasing number of companies are evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on the operation of ERP systems are discussed. From the literature we identify areas for future research and propose a research agenda.
This paper describes results of the simulation of social objects, namely the dependence of schoolchildren's professional abilities on their personal characteristics. The simulation tool is artificial neural network (ANN) technology. Results of a comparison of the time required for training the ANN and for calculating the weight coefficients with serial and parallel algorithms, respectively, are presented.
An estimation of the number of multiplication and addition operations for training artificial neural networks by means of sequential and parallel algorithms on a computer cluster is carried out. An evaluation of the efficiency of these algorithms is developed. The multilayer perceptron, the Volterra network and the cascade-correlation network are used as artificial neural network structures. Different methods of non-linear programming, such as gradient and non-gradient methods, are used for the calculation of the weight coefficients.
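The detailed operation counts for the three network types and the individual training methods are given in the paper itself. As a simple point of reference, the following Java sketch counts multiplications and additions for a single forward pass of a fully connected multilayer perceptron under the usual weighted-sum assumption; the cost of evaluating the activation functions is ignored.

/**
 * Rough operation count for one forward pass of a fully connected multilayer
 * perceptron (weighted sums with bias). This is a back-of-the-envelope sketch,
 * not the paper's detailed analysis of the training algorithms.
 */
public class MlpOpCount {

    /** layerSizes = {inputs, hidden1, ..., outputs} */
    static long[] forwardPassOps(int[] layerSizes) {
        long mul = 0, add = 0;
        for (int l = 1; l < layerSizes.length; l++) {
            int nIn = layerSizes[l - 1];
            int nOut = layerSizes[l];
            mul += (long) nIn * nOut;   // one multiplication per weight
            add += (long) nIn * nOut;   // (nIn - 1) sums plus 1 bias addition per neuron
        }
        return new long[] { mul, add };
    }

    public static void main(String[] args) {
        long[] ops = forwardPassOps(new int[] { 10, 20, 5 });
        // 10*20 + 20*5 = 300 multiplications and the same number of additions
        System.out.println("multiplications: " + ops[0] + ", additions: " + ops[1]);
    }
}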
This thesis deals with the potential benefits of GeoPortal.rlp. To this end, two empirical studies are conducted. The first study is a user survey concerning the portal. The second study consists of expert interviews to identify potential cooperation partners. Recommendations for action are derived from the results of both studies.
This paper describes a parallel algorithm for selecting the activation functions of an artificial neural network. To check the efficiency of this algorithm, a count of multiplicative and additive operations is used.
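The paper's concrete algorithm is not reproduced in the abstract. The following Java sketch only illustrates the general pattern of evaluating several candidate activation functions in parallel and comparing their error on a fixed data set; the candidate functions, the thread-pool approach and the squared-error criterion are illustrative assumptions.

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.DoubleUnaryOperator;

/**
 * Illustrative pattern only: evaluate candidate activation functions in
 * parallel and compare their squared error on a toy data set.
 */
public class ActivationSelectionSketch {

    static double error(DoubleUnaryOperator f, double[][] samples) {
        double err = 0;
        for (double[] s : samples) {                 // s[0] = input, s[1] = target
            double diff = f.applyAsDouble(s[0]) - s[1];
            err += diff * diff;
        }
        return err;
    }

    public static void main(String[] args) throws Exception {
        double[][] samples = { { -2, 0.1 }, { 0, 0.5 }, { 2, 0.9 } };
        Map<String, DoubleUnaryOperator> candidates = Map.of(
            "sigmoid", x -> 1.0 / (1.0 + Math.exp(-x)),
            "tanh",    Math::tanh,
            "relu",    x -> Math.max(0, x));

        ExecutorService pool = Executors.newFixedThreadPool(candidates.size());
        Map<String, Future<Double>> results = new ConcurrentHashMap<>();
        for (var e : candidates.entrySet()) {
            // each candidate is evaluated on its own worker thread
            results.put(e.getKey(), pool.submit(() -> error(e.getValue(), samples)));
        }
        for (var e : results.entrySet()) {
            System.out.println(e.getKey() + ": squared error " + e.getValue().get());
        }
        pool.shutdown();
    }
}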
This working report identifies previously unrecognized threats to ballot secrecy in the concept proposed in [BKG11] for authenticating voters in electronic elections by means of the new German identity card. Furthermore, the introduction of an intermediate anonymization layer is proposed as a solution for countering precisely these threats.
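As a purely generic illustration of what an intermediate anonymization layer does (explicitly not the concrete concept from [BKG11]), the following Java sketch checks a voter's eligibility with an identity provider and then forwards only an unlinkable token together with the ballot, so that no single party sees identity and vote together.

import java.util.UUID;

/** Generic sketch of an intermediate anonymization layer; all names are illustrative. */
public class AnonymizationLayerSketch {

    interface IdentityProvider { boolean isEligible(String eidToken); }
    interface BallotBox { void cast(String anonymousToken, String ballot); }

    private final IdentityProvider idp;
    private final BallotBox box;

    AnonymizationLayerSketch(IdentityProvider idp, BallotBox box) {
        this.idp = idp;
        this.box = box;
    }

    /** Accepts an eID authentication token plus a ballot, forwards the ballot anonymously. */
    boolean castAnonymously(String eidToken, String ballot) {
        if (!idp.isEligible(eidToken)) {
            return false;                                        // reject ineligible voters
        }
        String anonymousToken = UUID.randomUUID().toString();    // unlinkable to the eID token
        box.cast(anonymousToken, ballot);                        // the identity is never passed on
        return true;
    }

    public static void main(String[] args) {
        AnonymizationLayerSketch layer = new AnonymizationLayerSketch(
            token -> token.startsWith("valid"),                  // stub identity provider
            (anonToken, ballot) -> System.out.println("ballot stored under " + anonToken));
        System.out.println("accepted: " + layer.castAnonymously("valid-eid-token", "candidate A"));
    }
}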
The paper is devoted to solving the problem of assessing the quality of a medical electronic service. A variety of quality dimensions and factors, as well as methods and models applied in different fields of activity for assessing service quality, is reviewed. The basic aspects, requirements and peculiarities of implementing medical electronic services are investigated. The results of the analysis, together with the set of information models developed for this paper to describe the process of assessing the quality of the electronic service "Booking an appointment with a physician", allowed us to describe the methodology and to state the problem of assessing the quality of this service.
The estimation of various social objects is necessary in different fields of social life, science, education, etc. Such estimation is usually used for forecasting, for evaluating different properties and for other purposes in complex man-machine systems. At present this estimation is possible by means of computer and mathematical simulation methods, which involves significant difficulties, such as:
- the time-distributed process of receiving information about the object;
- the choice of a suitable mathematical apparatus and the structural identification of the mathematical model;
- the approximation of the mathematical model to real data, as well as its generalization and parametric identification;
- the identification of the structure of the links of the real social object.
The solution of these problems is impossible without a special intelligent information system which combines the different processes and allows predicting the behaviour of such an object. However, most existing information systems address only one special problem. From this point of view, the development of a more general technology for designing such systems is very important. The development of an intelligent information system for estimating and forecasting the professional ability of respondents in the field of education is a concrete example of such a technology. Job orientation is necessary and topical under present economic conditions: it helps to solve the problem of the expediency of investments in a certain field of education. Scientifically validated, combined diagnostic methods of job orientation are necessary to carry out professional selection in higher education establishments. The requirements of modern society are growing, and earlier developed techniques are unable to meet them sufficiently, since they cannot take all necessary professional and personal characteristics into account. Therefore, it is necessary to use a system of various tests, and new methods of job orientation for entrants have to be developed, together with an information model of the job orientation process. It would thus be desirable to have an information system capable of giving recommendations concerning the choice of a profession on the basis of the complex personal characteristics of entrants.
This diploma thesis describes the development of a mobile application as a means of e-participation, using the uniform public administration service number 115 ("D115") as an example. D115 is a project of the Federal Ministry of the Interior (BMI) in which citizens can obtain information about public administration services under a single telephone number. Within this diploma thesis, a client-server approach is developed that processes such requests and reports to the administration via mobile devices, combining aspects of e-participation, ubiquitous computing and location-based services. Supported by a geographic information system, users should be able to inform themselves about administrative matters and actively participate at any time and in any place, be it to report a fallen tree, a closed road or vandalism of a park bench.
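The actual D115 interface is not described here. The following Java sketch merely shows the kind of location-tagged report such a mobile client might submit to the server side; the endpoint URL and the JSON field names are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch of a location-tagged citizen report; endpoint and field names are hypothetical. */
public class CitizenReportClient {

    public static void main(String[] args) throws Exception {
        String json = """
            {
              "category": "fallen_tree",
              "description": "Fallen tree blocking the cycle path",
              "latitude": 50.3569,
              "longitude": 7.5890
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.invalid/d115/reports"))   // placeholder endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();

        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("server answered with status " + response.statusCode());
        } catch (java.io.IOException e) {
            // expected when run as-is: the placeholder endpoint does not resolve
            System.out.println("no real endpoint configured: " + e.getMessage());
        }
    }
}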
Multi-agent systems are a mature approach to model complex software systems by means of Agent-Oriented Software Engineering (AOSE). However, their application is not widely accepted in mainstream software engineering. In parallel, the interdisciplinary field of Agent-based Social Simulation (ABSS) is finding increasing recognition beyond the purely academic realm and is starting to draw attention from mainstream agent researchers. This work analyzes factors that could improve the uptake of AOSE as well as characteristics that separate the two fields AOSE and ABSS, in order to understand the gap between them. Based on the efficiency-oriented micro-agent concept of the Otago Agent Platform (OPAL), we have constructed a modern, self-contained micro-agent platform called µ². The design takes technological trends into account and integrates representative technologies, such as the functionally-inspired JVM language Clojure (with its Transactional Memory), asynchronous message passing frameworks and the mobile application platform Android. The mobile version of the platform demonstrates an innovative approach that allows direct interaction between Android application components and micro-agents by mapping their respective internal communication mechanisms. This empowers micro-agents to exploit virtually any capability of mobile devices for intelligent agent-based applications or robotics, or to simply act as a distributed middleware. Additionally, relevant platform components for the support of social simulations are identified and partially implemented. To show the usability of the platform for simulation purposes, an interaction-centric scenario representing group-formation processes in a multi-cultural context is provided. The scenario is based on Hofstede's concept of 'Cultural Dimensions'. It not only confirms the applicability of the platform for simulations but also reveals interesting patterns for culturally augmented in-group and out-group agents. This explorative research demonstrates the potential of micro-agents as a powerful general system-modelling mechanism that bridges mobile and desktop systems. The results stimulate future work on the micro-agent concept itself, the suggested platform and the deeper exploration of mechanisms for seamless interaction of micro-agents with mobile environments. Last but not least, the further elaboration of the simulation model, as well as its use to augment intelligent agents with cultural aspects, offers promising perspectives for future research.
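The µ² API itself is not shown in the abstract. The following Java sketch only illustrates the general idea of asynchronous message passing between lightweight agents, each draining its own mailbox on a separate thread; it uses plain Java concurrency primitives rather than Clojure's transactional memory and is not the OPAL or µ² API.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Generic illustration of mailbox-based asynchronous message passing between agents. */
public class MicroAgentSketch {

    static class MicroAgent implements Runnable {
        final String name;
        final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

        MicroAgent(String name) { this.name = name; }

        void tell(String message) { mailbox.offer(message); }    // asynchronous send

        @Override public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();                 // blocks until a message arrives
                    if ("stop".equals(msg)) return;
                    System.out.println(name + " handles: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MicroAgent worker = new MicroAgent("worker");
        Thread t = new Thread(worker);
        t.start();

        worker.tell("sense environment");
        worker.tell("report position");
        worker.tell("stop");
        t.join();
    }
}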