Abstract
Gesture recognition and hand pose tracking are widely applied techniques in human–computer interaction. Depth data obtained by depth cameras provide an informative description of the body, and in particular the hand pose, that can be used to build more accurate gesture recognition systems. Hand detection and feature extraction are challenging tasks in RGB images, but they can be resolved in simple ways with depth data. Depth data can also be combined with color information for more reliable recognition. A typical hand gesture recognition system identifies the hand and its position or direction, extracts suitable features, and applies a machine-learning method to detect the performed gesture. This paper presents a novel fusion of enhanced features for classifying static signs of sign language. It first explains how the hand can be separated from the scene using depth data. A combined feature extraction method is then introduced to extract appropriate features from the images. Finally, an artificial neural network classifier is trained on these fused features and used to critically compare the performance of the various descriptors.
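The pipeline summarized in the abstract (depth-based hand segmentation, fused feature extraction, ANN classification) can be illustrated with a minimal sketch. This is not the authors' implementation: the nearest-object segmentation rule, the particular fused descriptors (log-scaled Hu moments plus a depth histogram), the synthetic depth maps, and the scikit-learn MLPClassifier standing in for the trained ANN are all illustrative assumptions.

```python
# Minimal sketch of a depth-based, fused-feature gesture classifier.
# Assumes the hand is the nearest object to the depth camera and that
# 0 marks a missing depth reading; all thresholds are illustrative.
import numpy as np
import cv2
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def segment_hand(depth_mm, margin_mm=80):
    """Binary mask of pixels within margin_mm of the nearest valid reading."""
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()
    return (valid & (depth_mm < nearest + margin_mm)).astype(np.uint8)

def fused_features(depth_mm, mask, bins=16):
    """Fuse a shape descriptor (log-scaled Hu moments of the silhouette)
    with a depth-distribution descriptor (normalized histogram)."""
    hu = cv2.HuMoments(cv2.moments(mask)).ravel()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)   # usual log scaling
    hist, _ = np.histogram(depth_mm[mask.astype(bool)], bins=bins)
    return np.concatenate([hu, hist / max(hist.sum(), 1)])

def synthetic_depth(label, size=64, rng=None):
    """Toy depth map: a near 'hand' blob (circle or square) on a far wall."""
    rng = np.random.default_rng() if rng is None else rng
    depth = np.full((size, size), 2000.0, dtype=np.float32)  # wall at ~2 m
    cy, cx = rng.integers(20, size - 20, size=2)
    yy, xx = np.mgrid[:size, :size]
    if label == 0:
        blob = (xx - cx) ** 2 + (yy - cy) ** 2 < 12 ** 2        # round "sign"
    else:
        blob = (np.abs(xx - cx) < 10) & (np.abs(yy - cy) < 10)  # square "sign"
    depth[blob] = 600 + rng.normal(0, 5, blob.sum())            # hand at ~0.6 m
    return depth

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        d = synthetic_depth(label, rng=rng)
        X.append(fused_features(d, segment_hand(d)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```

On real data one would replace `synthetic_depth` with frames from a depth camera and the two toy classes with the static sign-language alphabet, but the segment-fuse-classify structure stays the same.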
Change history
02 December 2017
In the original publication, the first author name and his affiliation were incorrectly published. The correct author name and his affiliation are as follows:
Author information
Authors and Affiliations
Faculty of Computing, Universiti Teknologi Malaysia, 81310, Skudai, Johor, Malaysia
Saba Jadooki & Dzulkifli Mohamad
College of Computer and Information Sciences, Prince Sultan University, Riyadh, Kingdom of Saudi Arabia
Tanzila Saba
College of Computer and Information Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia
Abdulaziz S. Almazyad
College of Computer and Information Systems, Al-Yamamah University, Riyadh, Kingdom of Saudi Arabia
Abdulaziz S. Almazyad & Amjad Rehman
Corresponding author
Correspondence to Amjad Rehman.
Additional information
A correction to this article is available online at https://doi.org/10.1007/s00521-017-3294-z.
About this article
Cite this article
Jadooki, S., Mohamad, D., Saba, T. et al. Fused features mining for depth-based hand gesture recognition to classify blind human communication. Neural Comput & Applic 28, 3285–3294 (2017). https://doi.org/10.1007/s00521-016-2244-5