Development of a Python application for recognizing gestures from a video stream of RGB and RGBD cameras
DOI: https://doi.org/10.32523/bulmathenu.2021/3.1
Keywords: depth camera, gesture recognition, convolutional neural network, RealSense, OpenCV, Python, VGG-16
Abstract
Gesture recognition systems have advanced considerably in recent years, driven by modern data capture devices (sensors) and new recognition algorithms. The article presents the results of a study on recognizing static and dynamic hand gestures in video streams from RGB and RGBD cameras, namely the Logitech HD Pro Webcam C920 webcam and the Intel RealSense D435 depth camera. The software is implemented in Python 3.6. Open-source Python libraries provide robust implementations of the image processing and segmentation algorithms. The feature extraction and gesture classification subsystem is based on the VGG-16 neural network architecture, implemented with the TensorFlow and Keras deep learning frameworks. The technical characteristics of the cameras are given, and the algorithm of the application is described. Results comparing the data capture devices under various experimental conditions (distance and illumination) are presented. The experiments show that the Intel RealSense D435 depth camera provides more accurate gesture recognition across these conditions.
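
The abstract names the two capture devices but does not include code. Below is a minimal sketch, not the authors' implementation, of how frames could be read from both devices in Python, assuming the standard opencv-python and pyrealsense2 packages; the device index, resolutions, and frame rate are illustrative assumptions.

    # Sketch only: grab one RGB frame from a webcam via OpenCV and one
    # color/depth pair from an Intel RealSense D435 via pyrealsense2.
    # Device index and stream settings are assumed, not taken from the paper.
    import cv2
    import numpy as np
    import pyrealsense2 as rs

    # RGB webcam (e.g. Logitech C920) through OpenCV
    cap = cv2.VideoCapture(0)        # assumed device index
    ok, rgb_frame = cap.read()       # BGR image as a NumPy array
    cap.release()

    # RGBD camera (Intel RealSense D435) through pyrealsense2
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())  # 8-bit BGR
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # 16-bit depth
    pipeline.stop()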
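
The abstract states that classification is based on the VGG-16 architecture implemented with TensorFlow and Keras. A plausible minimal sketch of such a classifier is shown below; the number of gesture classes, the input size, the added classification head, and the training settings are assumptions for illustration, not values reported in the paper.

    # Sketch only, not the authors' model: VGG-16 backbone with a small
    # classification head, built with TensorFlow/Keras.
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    NUM_CLASSES = 10                                  # assumed number of gestures
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))           # standard VGG-16 input size
    base.trainable = False                            # reuse pretrained features

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

Camera frames would be resized to the network input size and passed through the usual VGG-16 preprocessing before training or inference; the exact training procedure used in the study is described in the paper itself.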