Document Type
- Bachelor Thesis
- Doctoral Thesis
Keywords
- 3D
- Trafficability (Befahrbarkeit)
- Image processing (Bildverarbeitung)
- Dimension 3
- Terrain (Gelände)
- Obstacle (Hindernis)
- Classification (Klassifikation)
- Laser
- Robot (Roboter)
- Road condition (Straßenzustand)
This thesis addresses the problem of terrain classification in unstructured outdoor environments. Terrain classification includes the detection of obstacles and passable areas as well as the analysis of ground surfaces. A 3D laser range finder is used as the primary sensor for perceiving the robot's surroundings. First, a grid structure is introduced for data reduction. The chosen data representation allows for multi-sensor integration, e.g., cameras for color and texture information or additional laser range finders for improved data density. Subsequently, features are computed for each terrain cell within the grid. Classification is performed with a Markov random field to provide context sensitivity and to compensate for sensor noise and varying data density within the grid. A Gibbs sampler is used for optimization and is parallelized on the CPU and GPU in order to achieve real-time performance. Dynamic obstacles are detected and tracked using different state-of-the-art approaches. The resulting information about where other traffic participants are moving, and where they are going to move, is used to perform inference in regions where the terrain surface is partially or completely invisible to the sensors. The algorithms are tested and validated on different autonomous robot platforms, and the evaluation is carried out with human-annotated ground truth maps comprising millions of measurements. The terrain classification approach of this thesis proved reliable in all real-time scenarios and domains and yielded new insights. Furthermore, combined with a path planning algorithm, it enables full autonomy for all kinds of wheeled outdoor robots in natural outdoor environments.
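To make the pipeline described above more concrete, the following is a minimal sketch in Python: raw 3D points are reduced to a single height-range feature per grid cell, and the cells are then labelled as ground or obstacle by a Gibbs sampler over a simple 4-connected Potts-style Markov random field. All function names, the single feature, the energy terms, and the parameter values are illustrative assumptions; the thesis itself uses richer per-cell features, multi-sensor input, and a CPU/GPU-parallelized sampler.

import numpy as np

rng = np.random.default_rng(0)

def build_grid(points, cell_size=0.5, grid_dim=20):
    """Reduce raw 3D points to one feature per cell: the height range (max z - min z)."""
    z_min = np.full((grid_dim, grid_dim), np.inf)
    z_max = np.full((grid_dim, grid_dim), -np.inf)
    for x, y, z in points:
        i, j = int(x // cell_size), int(y // cell_size)
        if 0 <= i < grid_dim and 0 <= j < grid_dim:
            z_min[i, j] = min(z_min[i, j], z)
            z_max[i, j] = max(z_max[i, j], z)
    return np.where(z_max >= z_min, z_max - z_min, 0.0)

def unary_energy(z_range, label, threshold=0.3):
    """Low energy when the label agrees with the height-range evidence of a single cell."""
    looks_like_obstacle = z_range > threshold
    return 0.0 if looks_like_obstacle == (label == 1) else 1.0

def gibbs_classify(z_range, n_sweeps=50, beta=1.5):
    """Sample labels cell by cell; beta rewards agreement with the 4-neighbourhood (Potts prior)."""
    h, w = z_range.shape
    labels = (z_range > 0.3).astype(int)   # crude initialisation: 0 = ground, 1 = obstacle
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                energies = []
                for lab in (0, 1):
                    e = unary_energy(z_range[i, j], lab)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (labels[ni, nj] != lab)
                    energies.append(e)
                p = np.exp(-np.array(energies))
                labels[i, j] = rng.choice(2, p=p / p.sum())
    return labels

# Hypothetical usage: flat ground with one raised patch between 4 m and 6 m in x and y.
pts = [(x, y, 0.0) for x in np.arange(0, 10, 0.25) for y in np.arange(0, 10, 0.25)]
pts += [(x, y, 0.9) for x in np.arange(4, 6, 0.25) for y in np.arange(4, 6, 0.25)]
print("obstacle cells:", int(gibbs_classify(build_grid(pts)).sum()))

The Gibbs sampler visits each cell in turn and resamples its label from the conditional distribution given its neighbours, which is what makes the classification context-sensitive and robust to isolated noisy cells.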
The present work starts with an introduction to methods for three-dimensional curve skeletonization. Different historic and recent skeletonization approaches are analysed in detail. A state-of-the-art skeletonization algorithm is then introduced; it serves as the basis for the author's own approach, which is presented subsequently. After the description and definition of a new method that improves upon the state-of-the-art algorithm, experiments are conducted to obtain assessable results. Next, a ground truth is described that was created manually by human annotators. The human similarity judgements are compared with the results of the automatic, computer-based similarity measures provided by the author's approach. For this comparison, standard evaluation criteria from the field of information retrieval are used.
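The standard evaluation criteria from information retrieval mentioned above include, for example, precision and recall. Below is a minimal sketch in Python of how such a comparison between an automatic similarity ranking and human judgements could look; the shape identifiers, the rankings, and the cut-off are invented for illustration and are not taken from the thesis.

def precision_recall_f1(retrieved, relevant):
    """Standard IR measures for one query; both arguments are sets of shape identifiers."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical example: the top-ranked results of the automatic, skeleton-based
# similarity measure for one query shape versus the shapes that human annotators
# judged to be similar.
automatic_top_k = {"chair_02", "chair_05", "table_01", "chair_09"}
human_similar = {"chair_02", "chair_05", "chair_07", "chair_09", "chair_11"}

p, r, f = precision_recall_f1(automatic_top_k, human_similar)
print(f"precision={p:.2f}  recall={r:.2f}  F1={f:.2f}")

Averaging such per-query scores over all query shapes gives a single figure of merit for how closely the automatic similarity measure matches the human ground truth.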