Human action recognition from video has received growing attention in computer vision and has made significant progress in recent years. The task of action recognition is to decide which human actions appear in a video. Distinguishing human actions is difficult because of the high complexity of human behavior as well as appearance variation, motion pattern variation, occlusions, and similar factors. Human action recognition on video captured from cameras underpins many applications, including video surveillance, health monitoring, human-computer interaction, and robotics. Action recognition based on RGB-D data has drawn increasing attention in recent years. RGB-D data combine color (Red, Green, and Blue (RGB)) channels with depth data that record the distance from the sensor to every pixel of the observed object (object point).

The main problem addressed in this thesis is how to automate the classification of specific human activities/actions from RGB-D data. The classification process exploits the spatial and temporal structure of actions. The goal of this work is therefore to develop algorithms that distinguish these activities by recognizing low-level and high-level activities of interest and separating them from one another. These algorithms are developed by introducing new features and methods based on RGB-D data to enhance the detection and recognition of human activities. In this thesis, the most popular state-of-the-art techniques are reviewed, presented, and evaluated. From the literature review, these techniques are categorized into hand-crafted-feature approaches and deep-learning-based approaches. The proposed action recognition framework builds on both categories, which this work extends by embedding novel methods for human action recognition. These methods are based on features extracted from RGB-D data and are evaluated using machine learning techniques.
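As a concrete illustration (not taken from the thesis), the following minimal sketch shows what an RGB-D frame looks like in practice: a color image paired with a per-pixel depth map. OpenCV is assumed; the file names and the 16-bit millimeter depth encoding are illustrative assumptions, since each RGB-D dataset defines its own storage format.

    # A minimal sketch of reading one RGB-D frame pair, assuming OpenCV; the
    # file names and the millimeter depth encoding are hypothetical.
    import cv2
    import numpy as np

    rgb = cv2.imread("frame_0001_rgb.png", cv2.IMREAD_COLOR)              # H x W x 3 color image
    depth_raw = cv2.imread("frame_0001_depth.png", cv2.IMREAD_UNCHANGED)  # H x W uint16 depth map

    # Convert raw depth to meters, assuming the sensor stores millimeters.
    depth_m = depth_raw.astype(np.float32) / 1000.0

    # Each pixel now carries color plus its distance from the sensor.
    print(rgb.shape, depth_m.shape, float(depth_m[240, 320]))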
The work presented in this thesis improves human action recognition in two distinct parts. The first part focuses on improving current successful hand-crafted approaches. It contributes to two significant areas of the state of the art: applying existing feature detectors, and classifying human actions in the 3D spatio-temporal domain by testing new combinations of different feature representations. The contributions of this part are tested with machine learning techniques, both unsupervised and supervised, to evaluate their suitability for the task of human action recognition. K-means clustering represents the unsupervised learning technique, while supervised learning is represented by Support Vector Machine, Random Forest, K-Nearest Neighbor, Naive Bayes, and Artificial Neural Network classifiers; a sketch of this evaluation protocol appears at the end of this section. The second part studies current deep-learning-based approaches and how to use them with RGB-D data for the human action recognition task.

As the first step of each contribution, an input video is analyzed as a sequence of frames. Pre-processing steps, such as filtering and smoothing, are then applied to the video frames to remove noise from each frame, as sketched below.
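A minimal sketch of this per-frame pre-processing step, assuming OpenCV; the video path and the filter size are illustrative choices, not taken from the thesis.

    # A minimal sketch of the pre-processing step, assuming OpenCV; the video
    # path and filter size are hypothetical.
    import cv2

    cap = cv2.VideoCapture("action_clip.avi")  # hypothetical input video
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Gaussian smoothing suppresses high-frequency sensor noise; a median
        # filter is a common alternative for the speckle noise of depth maps.
        frames.append(cv2.GaussianBlur(frame, (5, 5), 0))
    cap.release()
    print(f"pre-processed {len(frames)} frames")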
Afterward, different motion detection and feature representation methods are used to extract the features present in each frame. The extracted features are represented by local features, global features, and feature combinations, alongside deep learning methods such as Convolutional Neural Networks. The feature combination achieves excellent accuracy and outperforms the other methods on the same RGB-D datasets. All results of the proposed methods in this thesis are evaluated on publicly available datasets and show that using spatio-temporal features can improve recognition accuracy. Competitive experimental results are achieved overall. In particular, the proposed methods generalize better to the test sets than state-of-the-art methods on the same RGB-D datasets.
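The evaluation protocol referenced above, with k-means as the unsupervised technique and the five named supervised classifiers, can be illustrated by the following minimal sketch. Scikit-learn is assumed, and the thesis's actual feature vectors and datasets are replaced here by synthetic data.

    # A minimal sketch of the classifier suite, assuming scikit-learn; random
    # vectors stand in for the features extracted from the RGB-D videos.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # 200 clips, 64-dim feature vectors
    y = rng.integers(0, 5, size=200)  # 5 hypothetical action classes

    # Unsupervised baseline: cluster the feature space into one group per class.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    print("k-means inertia:", round(kmeans.inertia_, 1))

    # Supervised classifiers, each scored with 5-fold cross-validation.
    classifiers = {
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(random_state=0),
        "K-Nearest Neighbor": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
        "ANN": MLPClassifier(max_iter=500, random_state=0),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")

Comparing cross-validation accuracy in this way is how the suitability of each classifier for a given feature representation can be judged side by side.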
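For the deep-learning-based part, one simple way to let a Convolutional Neural Network exploit RGB-D data is to fuse color and depth into a four-channel input. The following is a minimal sketch of that idea, assuming PyTorch; it is not the architecture proposed in the thesis, and the class sizes and layer choices are illustrative.

    # A minimal sketch of a CNN over fused RGB-D input, assuming PyTorch;
    # this is an illustrative architecture, not the thesis's model.
    import torch
    import torch.nn as nn

    class RGBDActionNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=3, padding=1),  # 4 channels: R, G, B, depth
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-dim descriptor
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.features(x).flatten(1)
            return self.classifier(h)

    # One batch of 8 RGB-D frames at 112 x 112 resolution.
    logits = RGBDActionNet()(torch.randn(8, 4, 112, 112))
    print(logits.shape)  # torch.Size([8, 10])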