Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor thesis describes the conception and implementation of translation software for the Android platform. The specific feature of the software is independent text recognition based on the camera view, an approach that aims to enhance and accelerate the process of translation in certain situations. After an introduction to text recognition, the underlying technologies, the operating system Android, and useful related applications are described. The concept of the software is then developed and its implementation examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
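The recognize-then-translate pipeline described above can be sketched in a few lines. This is a minimal illustration only: the OCR step is stubbed out (a real app would run an OCR engine such as Tesseract on the camera frame), and the lexicon is a hypothetical toy dictionary, not part of the thesis.

```python
# Minimal sketch of the recognize-then-translate pipeline.
# The recognizer is a stub; the lexicon is an invented toy dictionary.

TOY_LEXICON = {"ausgang": "exit", "eingang": "entrance", "achtung": "attention"}

def recognize_text(frame):
    """Stub OCR step: pretend the camera frame already yields a string."""
    return frame  # a real implementation would run OCR on the image here

def translate(text, lexicon=TOY_LEXICON):
    """Look up each recognized word; keep unknown words unchanged."""
    return " ".join(lexicon.get(w, w) for w in text.lower().split())

def camera_translate(frame):
    """Full pipeline: camera frame -> recognized text -> translation."""
    return translate(recognize_text(frame))

result = camera_translate("Achtung Ausgang")
```

In the real application the interesting engineering lies in the OCR step and in triggering it from the live camera preview; the translation step itself reduces to a lookup as above.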
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition can strictly not be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. Conversely, t-tokens can be moved by the firing of places. This permits flows of t-tokens that describe sequences of non-events. Their benefit to simulation is the possibility to model (and observe) causes and effects of non-events, e.g. when something has broken down.
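The dualized firing rule described above can be sketched directly: t-tokens sit on transitions, and it is the *places* that fire, moving t-tokens from their input transitions to their output transitions. The net below is an invented two-transition example, not one from the paper.

```python
# Sketch of the dualized firing rule: places fire and move t-tokens
# between transitions. Names (p, t1, t2) are illustrative only.

def enabled(place, arcs_in, marking):
    """A place is enabled if every input transition carries a t-token."""
    return all(marking.get(t, 0) > 0 for t in arcs_in[place])

def fire(place, arcs_in, arcs_out, marking):
    """Fire a place: consume one t-token per input transition,
    produce one t-token per output transition."""
    assert enabled(place, arcs_in, marking)
    m = dict(marking)
    for t in arcs_in[place]:
        m[t] -= 1
    for t in arcs_out[place]:
        m[t] = m.get(t, 0) + 1
    return m

# Tiny dual net: transition t1 --(place p)--> transition t2.
arcs_in = {"p": ["t1"]}
arcs_out = {"p": ["t2"]}
marking = {"t1": 1}  # a t-token on t1: the non-event "t1 did not fire"
marking = fire("p", arcs_in, arcs_out, marking)
```

The flow of the t-token from t1 to t2 is exactly the "sequence of non-events" the paragraph describes: the consequence of t1 not firing propagates to t2.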
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, taken from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second, taken from [2], is an example of operational risk and shows how the evidence flow in PPNs gives hints on how to reduce high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
The paper provides a specific introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), the probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that there is no algorithm apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is a semantic one: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
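The probability/likelihood duality mentioned above can be illustrated with plain Bayesian reasoning: probabilities propagate forward from cause to effect, while likelihoods propagate backward from observed evidence to the cause. The numbers in this diagnosis-style example are invented for illustration; they are not taken from the paper.

```python
# Forward flow (probabilities): from the prior of a disease to the
# probability of a symptom. Backward flow (likelihoods): from an
# observed symptom to the posterior of the disease. Values are made up.

def forward(prior_d, p_s_given_d, p_s_given_not_d):
    """P(symptom): cause-to-effect propagation via total probability."""
    return prior_d * p_s_given_d + (1 - prior_d) * p_s_given_not_d

def backward(prior_d, p_s_given_d, p_s_given_not_d):
    """P(disease | symptom): effect-to-cause propagation via Bayes' rule."""
    p_s = forward(prior_d, p_s_given_d, p_s_given_not_d)
    return prior_d * p_s_given_d / p_s

posterior = backward(prior_d=0.01, p_s_given_d=0.9, p_s_given_not_d=0.1)
```

In a PPN both directions appear as ordinary token flows: the forward flow in the dependency net and the backward (likelihood) flow in its dual.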
Standards are widely used in the computer science and IT industry. Different organizations, such as the International Organization for Standardization (ISO), are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may also apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement; they allow the packaging of evidences that are needed to prove the fulfillment of criteria related to a public tender. The work first examines the requirements that are common to both specifications, introducing a new view on the requirements at a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of one package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which serves as the basis when a concrete test system is created, is modified.
Finally, the author comes to a positive conclusion. Common requirements have been identified and captured. The involved systems have been adapted, allowing the capturing of further types of requirements that were not supported before. Furthermore, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
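One kind of test rule discussed above can be sketched concretely: a check that every file referenced from an XML manifest is actually present in the zip package. The element and attribute names below are illustrative and not taken from the Common Cartridge or Virtual Company Dossier specifications.

```python
# Sketch of a package-level test rule: every href referenced in the
# manifest must exist as an entry of the zip package. Manifest layout
# here is invented for illustration.
import io
import zipfile
import xml.etree.ElementTree as ET

def missing_references(package_bytes, manifest_name="manifest.xml"):
    """Return referenced files (href attributes) absent from the package."""
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as z:
        names = set(z.namelist())
        root = ET.fromstring(z.read(manifest_name))
    refs = {el.get("href") for el in root.iter() if el.get("href")}
    return sorted(refs - names)

# Build a small in-memory package: b.html is referenced but missing.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("manifest.xml",
               '<manifest><file href="a.html"/><file href="b.html"/></manifest>')
    z.writestr("a.html", "<html></html>")

problems = missing_references(buf.getvalue())
```

A generated test system would bundle many such rules, each derived from constraints captured in the domain profile.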
With the reaccreditation of the degree programs in the department of computer science at the University of Koblenz-Landau, new trendsetting degree programs will be offered. For the further planning and design of the individual degree programs, the opinion of the students is a considerable indicator, yet information about the new degree programs is not available during the accreditation process. Students have an interest in knowing about the new degree programs and the new examination regulations, so their participation in the decision process is desirable. The concept of e-participation is an opportunity to satisfy this need: it offers the possibility to discuss topics of the accreditation and to bring one's own ideas and opinions into the decision process. This bachelor thesis describes an e-participation project at the University of Koblenz-Landau concerning the accreditation of the degree programs of the faculty of computer science. The project is carried out using the reference framework of Scherer and Wimmer (2011). Furthermore, the accreditation process is modeled to gain a better understanding of the whole process and to identify where an e-participation project can be integrated. The results of the project are evaluated through an online survey about the e-participation platform. Based on the survey results and the experience gained during the project, recommendations for further e-participation projects are given. Moreover, the reference framework of Scherer and Wimmer (2011) is analyzed critically.
Parallel manipulators using the Stewart mechanism allow tasks to be carried out precisely within a limited workspace. Their six degrees of freedom provide highly flexible positioning, and the robust construction yields a very good weight-to-payload ratio.
This bachelor thesis deals with the development of a flexible software solution for controlling a Stewart platform, including a model of the platform that serves for testing. First, the mathematical foundations of inverse kinematics are worked out, building on a previously defined motion model. This is followed by the development of a generic architecture for transmitting and evaluating control commands from a PC. The implementation is written in C and split into modules, each covering one area of position control or hardware communication. A graphical user interface is also presented through which the position of the platform can be changed manually. Automatic control is described in a subsequent application example in which the platform is fed with acceleration values from a roller-coaster simulation.
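The inverse-kinematics step described above has a compact closed form: given the desired pose of the platform, each actuator length is simply the distance between its base anchor and the rotated, translated platform anchor. The sketch below (in Python rather than the thesis's C, with an invented hexagonal anchor layout) shows the idea.

```python
# Inverse kinematics of a Stewart platform, simplified to a yaw-only
# rotation: leg length_i = |b_i - (R * p_i + t)|. Geometry is illustrative.
import math

def rot_z(v, angle):
    """Rotate a 3D point about the z axis."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def leg_lengths(base, platform, translation, yaw):
    """One actuator length per leg for the given platform pose."""
    tx, ty, tz = translation
    lengths = []
    for b, p in zip(base, platform):
        px, py, pz = rot_z(p, yaw)
        dx, dy, dz = b[0] - (px + tx), b[1] - (py + ty), b[2] - (pz + tz)
        lengths.append(math.sqrt(dx * dx + dy * dy + dz * dz))
    return lengths

# Simplified layout: the same hexagon of anchors on base and platform.
base = [(math.cos(i * math.pi / 3), math.sin(i * math.pi / 3), 0.0)
        for i in range(6)]
platform = base
lengths = leg_lengths(base, platform, translation=(0, 0, 0.5), yaw=0.0)
```

For a pure vertical offset of 0.5 with no rotation, all six legs come out at length 0.5, which is a convenient sanity check for any implementation. The full thesis version uses a general rotation (roll, pitch, yaw) in place of `rot_z`.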
Despite its growing relevance, measuring the productivity of services is still a poorly explored field of research. The main reasons lie in the particular characteristics of services, namely their intangibility and the integration of the customer. A typical service in the B2B software industry is the adaptation of systems to the needs of the customer, known as customizing. Exploiting the customizing potential of standard software and products, and involving the customer more closely in innovation processes, is hampered by the fact that the productivity of this service appears to be insufficiently measurable and therefore hard to evaluate.
This contribution describes the development of a model for measuring the productivity of services, based on several preliminary studies conducted within the CustomB2B project at the University of Koblenz-Landau.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results; instead, we invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management on the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. Its basis is a set of requirements, to be defined, for an AOP implementation in the PHP environment. The thesis analyzes whether, and how, related languages and paradigms support CCCs. Beyond that, it discusses the possibility of implementing AOP in PHP without a PHP extension, and it qualitatively examines the existing approaches to AOP in PHP. The aim of this work is to present an AOP solution for PHP of its own that does not share the weaknesses of existing solutions.
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open-source and one proprietary system), chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors actually face the industry's concerns: both tools provide functionality for the archiving of data arising from online collaboration as well as business records management capabilities, but the integration of those topics is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost, as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, so the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
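The key property of seeded procedural generation, as used above, is that the same seed always reproduces the same galaxy, so nothing has to be stored on disk. A minimal sketch (attribute names and value ranges are invented, not taken from the thesis):

```python
# Seeded procedural generation: a deterministic RNG turns one integer
# seed into a reproducible galaxy. Attribute ranges are illustrative.
import random

def generate_stars(seed, n=100, size=1000.0):
    """Deterministically generate n star records in a square game space."""
    rng = random.Random(seed)
    stars = []
    for _ in range(n):
        stars.append({
            "x": rng.uniform(0, size),
            "y": rng.uniform(0, size),
            "planets": rng.randint(0, 8),      # planet count per star
            "has_life": rng.random() < 0.01,   # rare intelligent life
        })
    return stars

galaxy_a = generate_stars(seed=42)
galaxy_b = generate_stars(seed=42)  # same seed -> identical galaxy
```

In a real game the seed per star system would typically be derived from its coordinates, so distant regions of the galaxy can be generated lazily as the player explores them.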
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it can be provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl from TimBL's FOAF profile (11 million triples); the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
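The entropy and mutual-information computation underlying the analysis above can be sketched over a toy joint distribution of (type, property) observations. The sample counts below are invented for illustration; the paper computed these measures over the LOD crawls.

```python
# Entropy H and mutual information I(X;Y) = H(X) + H(Y) - H(X,Y),
# estimated from observed (type, property) pairs. Sample data is toy data.
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

def mutual_information(pairs):
    """I(X;Y) over observed (x, y) pairs."""
    joint = Counter(pairs)
    x_marg = Counter(x for x, _ in pairs)
    y_marg = Counter(y for _, y in pairs)
    return entropy(x_marg) + entropy(y_marg) - entropy(joint)

# Toy sample: RDF type vs. one property observed on the resource.
sample = ([("Person", "foaf:name")] * 40
          + [("Person", "foaf:knows")] * 10
          + [("Document", "dc:title")] * 50)
mi = mutual_information(sample)
```

In this toy sample each property fully determines the type, so the mutual information equals the full entropy of the type distribution (1 bit); in the real LOD data the overlap is partial, which is exactly what the roughly 75% figure quantifies.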
Quadrocopters are helicopters with four rotors arranged in one plane. Small unmanned models, which often generate a thrust of only a few newtons, are popular in the toy and model-making sectors, but are also used by the military and police as drones for reconnaissance and surveillance tasks. This diploma thesis covers the theoretical foundations of controlling a quadrocopter and, building on them, develops a low-cost control board for a model quadrocopter.
The theoretical part investigates the dynamics of a free-flying quadrocopter: the equations of motion are derived and compared with the results presented in "Design and control of quadrotors with application to autonomous flying" ([Bou07]). It further describes how various sensors suitable for determining the current spatial orientation work, and discusses methods for estimating the orientation from their measurements. In addition, the skew field of quaternions is introduced, in which three-dimensional rotations can be represented compactly and concatenated efficiently.
The thesis then describes the development of a simple control board that enables both autonomous hovering and remote control. The board was developed and tested on an X-Ufo quadrocopter made by Silverlit, which is therefore also presented. The components used and their interaction are discussed; the WiiMotionPlus, used as a low-cost gyro sensor module, deserves particular mention. Several aspects of the control software are explained: the evaluation of the sensor data, state estimation using the explicit complementary filter of Mahony et al. ([MHP08]), the implementation of the orientation controller, and the generation of the control signals for the motors. The control software as well as the circuit diagram and board layout of the control board are included on a CD accompanying this thesis; the circuit diagram and board layout are also printed in the appendix.
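The quaternion representation of rotations mentioned above can be sketched briefly: rotations compose by quaternion multiplication, and a vector is rotated via q · v · q*. The sketch below uses the Hamilton convention in plain Python; the thesis firmware would of course implement this in fixed-point C on the microcontroller.

```python
# Quaternion rotation sketch (Hamilton convention, w first).
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about `axis`."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = qmul(qmul(q, (0.0, *v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

q = from_axis_angle((0, 0, 1), math.pi / 2)  # 90 degrees about z
v = rotate(q, (1.0, 0.0, 0.0))               # x axis -> y axis
```

Composing two rotations is a single `qmul`, which is why attitude filters such as the complementary filter keep the estimated orientation as a quaternion rather than as Euler angles.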
In this master thesis, some helpful new features are added to the Spanning Tree Simulator, which was created by Andreas Sebastian Janke in his bachelor thesis [Jan10b] in 2010. The simulator can visualize networks defined in configuration files, each of which is an XML representation of a network consisting of switches and hosts. After loading such a file into the program, the Spanning Tree Algorithm IEEE 802.1D can be run. In contrast to the previous version, only the switches are implemented as threads. When the algorithm finishes, a spanning tree has been built, which means that messages can no longer run into loops. This is important because loops can cause a total breakdown of communication in a network if the running routing protocols cannot handle them.
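What the simulated IEEE 802.1D algorithm converges to can be sketched compactly: the switch with the lowest bridge ID becomes the root, and every other switch keeps only its shortest path towards the root, so redundant links are blocked and loops disappear. The sketch below computes that resulting tree directly; it is not the distributed BPDU message exchange the simulator animates.

```python
# The tree IEEE 802.1D converges to (simplified): root = lowest bridge ID,
# kept links = a shortest-path (BFS) tree towards the root.
from collections import deque

def spanning_tree(switches, links):
    """Return the set of links kept active after (simplified) STP."""
    root = min(switches)                    # lowest bridge ID wins
    neighbors = {s: [] for s in switches}
    for a, b in links:
        neighbors[a].append(b)
        neighbors[b].append(a)
    kept, seen, queue = set(), {root}, deque([root])
    while queue:                            # BFS = shortest paths to root
        s = queue.popleft()
        for n in sorted(neighbors[s]):      # lower ID breaks ties
            if n not in seen:
                seen.add(n)
                kept.add(frozenset((s, n)))
                queue.append(n)
    return kept

# Triangle topology: one of the three links must end up blocked.
tree = spanning_tree([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
```

In the triangle, switch 1 becomes root and the 2-3 link is blocked; exactly this blocking is what prevents broadcast frames from circulating forever.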
Magnetic resonance (MR) tomography is an imaging method used to expose the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique that enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo, and allows predictions about the structure and integrity of those connections. In clinical routine this modality finds application in the planning phase of neurosurgical operations, such as tumor resections. This is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. At the current resolution of diffusion-weighted data, single white matter axons generally cannot be resolved; the captured signal rather describes whole fiber bundles. Besides this, different complex fiber configurations often occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method that uses local information to reconstruct global connectivities, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods and their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme conducted on diffusion tensor metrics.
The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along these potential tracking directions and is able to reproduce splitting fiber tracts.
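One representative diffusion tensor metric of the kind the voxel-based classification above is conducted on is fractional anisotropy (FA), computed from the tensor's eigenvalues: FA is 0 for isotropic diffusion and approaches 1 for strongly directional, fiber-like diffusion. The eigenvalues below are illustrative, not from the thesis data.

```python
# Fractional anisotropy from the three eigenvalues of a diffusion tensor:
# FA = sqrt(3/2) * |lambda - mean| / |lambda|.
import math

def fractional_anisotropy(l1, l2, l3):
    mean = (l1 + l2 + l3) / 3
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den else 0.0

fa_isotropic = fractional_anisotropy(1.0, 1.0, 1.0)  # free-water-like voxel
fa_fiber = fractional_anisotropy(1.7, 0.2, 0.2)      # single-fiber-like voxel
```

A threshold on metrics like FA is what lets a voxel-wise classifier separate candidate fiber voxels from isotropic tissue before the ODF maxima and neighborhood information are brought in.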
Activity recognition on smartphones is possible using only their internal sensors, without any external sensors. First, previous works and their techniques are reviewed, and from them an own implementation for activity recognition is derived. Most previous works use only the accelerometer for the activity recognition task; for this reason, this bachelor thesis analyzes the benefit of further sensors, such as the magnetic field sensor, linear acceleration, or the gyroscope. Activity recognition is performed by classification algorithms; Decision Trees, Naive Bayes and Support Vector Machines are used. Sensor data of subjects is collected and saved using a purpose-built application; this data serves as training data for the classification algorithms.
The result is a model that represents the structure of the data. To validate the model, a test dataset different from the training dataset is used. The results confirm previous works indicating that activity recognition is possible using only the accelerometer. Orientation, gyroscope and linear acceleration cannot be used for all activity recognition problems. Apart from that, the Decision Tree appears to be the best classification algorithm when the model has no training data from the current user.
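The intuition behind accelerometer-only recognition can be shown with a deliberately simple stand-in for the classifiers above: the variance of the acceleration magnitude over a window already separates a still phone from a walking user. The threshold and the data below are synthetic; the thesis trained Decision Trees, Naive Bayes and SVMs on real recordings.

```python
# Toy accelerometer-only classifier: variance of |a| over a sample window.
# Threshold and sample data are invented for illustration.
import statistics

def magnitude(sample):
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def classify_window(window, threshold=0.5):
    """Label a window of accelerometer samples as 'walking' or 'still'."""
    mags = [magnitude(s) for s in window]
    return "walking" if statistics.variance(mags) > threshold else "still"

still = [(0.0, 0.0, 9.8)] * 10  # phone lying on a table: only gravity
walking = [(0.0, 0.0, 9.8 + (2.0 if i % 2 else -2.0))  # oscillating load
           for i in range(10)]

label_still = classify_window(still)
label_walking = classify_window(walking)
```

A learned classifier does essentially this over many such window features (mean, variance, frequency content) at once, which is why even the accelerometer alone carries enough signal for the basic activities.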
The objective of this contribution is to conceptually analyze the potential of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting the respective action fields of entrepreneurial learning. In addition, we reveal that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Cloud computing is currently a major trend in the IT industry, and ERP systems have become indispensable in today's enterprises. An analysis of selected literature shows that cloud computing as an operating model for ERP systems merits particular investigation, since various challenges in the interplay of these technologies still need to be resolved. Building on this, case studies of the cloud ERP solutions of three industry partners are compiled, and in a next step the theoretical literature is compared with the practical findings.
Guided by the research questions, the goal of this thesis is to uncover differentiated benefits of cloud ERP solutions and to explain how far the theory matches the practical experience of experts. The case studies make clear that the three cloud ERP providers differentiate themselves in the market above all through the scope of their software and the company sizes of their target groups. In addition, the analysis and conclusion of the thesis show that, beyond the benefits identified in the literature, further benefits can be generated by combining cloud ERP solutions, and that a specific target group for cloud ERP solutions already exists today. It remains to be seen how the cloud ERP market will develop and which further functionality can be moved to the cloud, so that on-demand ERP systems can become serious competition for on-premise solutions.