Recent Trends in Hand Gesture Recognition
GCSR Volume 3
ISBN 978-618-81418-5-8 (print) – ISBN 978-618-81418-4-1 (e-book)
Chapter 1 - Android Based Portable Hand Sign Recognition System
Jagdish L. Raheja, A. Singhal, Sadab and Ankit Chaudhary
(pages 1-18) DOI: 10.15579/gcsr.vol3.ch1
These days mobile devices such as phones and tablets are common among people of all ages. They are network-connected and provide seamless communication through internet or cellular services. These devices can be a great help for people who cannot communicate easily, including in emergency situations. For a disabled person who cannot speak, or for a person who speaks a different language, such devices can serve as understanding, translating, and speaking aids. This chapter discusses a portable Android-based hand sign recognition system that can be used by disabled people; it presents part of an ongoing project. Computer vision techniques were used for image analysis, and PCA was applied after an image tokenizer for recognition. The method was also tested on webcam input to make the system more robust.
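The recognition step the abstract describes (PCA applied to tokenized hand images, followed by matching) can be sketched as a generic eigenspace recognizer. This is an illustrative NumPy sketch, not the authors' implementation; the function names, the nearest-neighbour matching rule, and the flattened-image input format are all assumptions:

```python
import numpy as np

def fit_pca(images, n_components):
    """Fit PCA on flattened images (one row per sample); return mean and basis."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data matrix yields the principal axes as rows of vt.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    """Project one flattened image into the PCA subspace."""
    return (np.asarray(x, dtype=float) - mean) @ basis.T

def recognize(query, gallery, labels, mean, basis):
    """Nearest-neighbour match in the PCA subspace (assumed matching rule)."""
    q = project(query, mean, basis)
    g = np.array([project(v, mean, basis) for v in gallery])
    return labels[int(np.argmin(np.linalg.norm(g - q, axis=1)))]
```

In such a pipeline the tokenizer would supply the segmented, normalized hand images that form `gallery` and `query`.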
Chapter 2 - Feature Extraction Technique for Static Hand Gesture Recognition
Haitham Badi, Sameem Abdul Kareem and Sabah Husien
(pages 19-41) DOI: 10.15579/gcsr.vol3.ch2
The goal of static hand gesture recognition is to classify hand gesture data, represented by a set of features, into a predefined finite number of gesture classes.
The main objective of this work is to explore the utility of two feature extraction methods, namely hand contour and complex moments, for the hand gesture recognition problem, identifying the primary advantages and disadvantages of each. An artificial neural network trained with the back-propagation learning algorithm is built for classification. The proposed system recognizes a set of six specific static hand gestures, namely: Open, Close, Cut, Paste, Maximize, and Minimize. Each hand gesture image passes through three stages: pre-processing, feature extraction, and classification. In the pre-processing stage, operations are applied to separate the hand gesture from its background and prepare the image for feature extraction. In the first method, the hand contour is used as a feature, which handles scaling and translation (in some cases). The complex moments algorithm, in contrast, describes the hand gesture while handling rotation in addition to scaling and translation. The back-propagation learning algorithm is employed in a multi-layer neural network classifier. The results show that the hand contour method achieves a recognition rate of 71.30%, while complex moments achieve a better recognition rate of 86.90%.
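The invariance properties the abstract attributes to complex moments can be illustrated with a minimal sketch. The standard complex moment is C_pq = Σ_{x,y} (x+iy)^p (x−iy)^q f(x,y); computing it about the centroid removes translation, and taking magnitudes removes rotation (a rotation by θ only multiplies C_pq by e^{i(p−q)θ}). This is a generic sketch of that definition in NumPy, not the chapter's code:

```python
import numpy as np

def complex_moments(silhouette, orders):
    """Rotation- and translation-invariant |C_pq| features of a binary
    hand silhouette. C_pq = sum_{x,y} (x+iy)^p (x-iy)^q f(x,y), computed
    about the centroid so translation cancels; magnitudes cancel rotation."""
    f = np.asarray(silhouette, dtype=float)
    ys, xs = np.nonzero(f)          # pixel coordinates of the hand region
    cx, cy = xs.mean(), ys.mean()   # centroid for translation invariance
    z = (xs - cx) + 1j * (ys - cy)  # complex coordinates about the centroid
    feats = [abs(np.sum(z**p * np.conj(z)**q)) for p, q in orders]
    return np.array(feats)
```

Feature vectors like these (for a handful of low orders) would then feed the multi-layer back-propagation classifier the abstract describes.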
Chapter 3 - Hand Gesture Recognition Based on Signals Cross-Correlation
Anna Lekova and Mo Adda
(pages 43-74) DOI: 10.15579/gcsr.vol3.ch3
Interactive gestures and body movements let us control and interact with mobile devices, screens and robots. Vision-based gesture recognition systems analyze the detected infrared and visible light after converting it into a measurable signal, e.g. voltage or current. Since infrared and visible light are electromagnetic waves (EMW) with wavelengths between 0.4 and 1.6 μm, we introduce the concept of a new kind of sensor that perceives EMW directly to see objects. We propose a novel framework for hand gesture featuring, profiling and recognition based on signal processing and cross-correlation of the detected signals, instead of the Euclidean-space analysis of image pixels performed by vision-based algorithms. Hand segmentation is accomplished on infrared radiation, while hand joints are categorized according to the intensity of visible light on hand edges. The meaning of a gesture is described by wave-based profiles representing the informative features of hand joints and their spatial relations over a period of time. A hand joint profile is a waveform of known shape obtained by superposition of feature waves. During hand segmentation, we use online fuzzy clustering to categorize the infrared radiation. During feature extraction, the clustering algorithm categorizes the grayscale light intensity on hand edges. During training, the hand joint profiles are stored in the database as sampled sequences corresponding to the superposition of sine waves with amplitudes and frequencies derived from the obtained clusters. During the recognition phase, the current hand gesture is matched to the hand joint profiles in the database by fast signal cross-correlation. Our first implementation of the proposed framework takes as input the raw data of the Microsoft Kinect infrared and RGB image sensors, which are wavelength-dependent and produce an electric current directly proportional to the changes in the focused flow of reflected light during hand gesturing.
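The matching step in the recognition phase, cross-correlating the current gesture waveform against stored hand joint profiles, can be sketched generically. This is an assumed 1-D normalized cross-correlation matcher in NumPy, meant only to illustrate the idea; the profile names and the superposition-of-sines construction are illustrative:

```python
import numpy as np

def normalized_xcorr(a, b):
    """Peak of the normalized cross-correlation between two 1-D signals,
    so the score is invariant to amplitude scaling and time shift."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max()

def match_gesture(query, profiles):
    """Return the name of the stored profile best correlated with the query."""
    scores = {name: normalized_xcorr(np.asarray(query, dtype=float),
                                     np.asarray(p, dtype=float))
              for name, p in profiles.items()}
    return max(scores, key=scores.get)
```

In the framework described above, each entry of `profiles` would be a sampled superposition of sine waves whose amplitudes and frequencies come from the fuzzy-clustering stage.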
Chapter 4 - Compositional and Hierarchical Semantic Frameworks for Hand Gesture Recognition
G. Simion, C. David, C. Caleanu and V. Gui
(pages 75-114) DOI: 10.15579/gcsr.vol3.ch4
This chapter presents some new approaches to hand gesture recognition from monocular and from 3D images. After introducing the main trends in the literature, the chapter addresses the hand gesture recognition problem in a compositional framework. The ability of compositional methods to capture and extract semantic information from selected salient parts of images is demonstrated for the hand gesture application. Performance on images with static hand gestures executed against uniform backgrounds rivals the state of the art, while leaving considerable room for further optimization and improvement. The second work reported in this chapter focuses on using multiple cues to overcome the difficult problems that arise when dynamic gestures are executed in front of heterogeneous backgrounds, with camouflage and sudden illumination changes. Ideas from robust estimation are integrated into the proposed approach. Finally, some preliminary results of hand gesture recognition on images produced by 3D time-of-flight image sensors, which are expected to become prevalent in the near future, are presented.