- P. Balaji ORCID: orcid.org/0000-0002-9785-4461,
- Debadutta Subudhi ORCID: orcid.org/0000-0001-9801-4969 &
- Manivannan Muniyandi
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13319)
Included in the following conference series: International Conference on Human-Computer Interaction (HCII)
Abstract
The upper limbs must function precisely to carry out everyday activities and occupational tasks, and loss of this functionality impairs task performance. Amputation of a limb can therefore greatly reduce quality of life and the ability to perform daily activities. Each activity follows a particular intent pattern, and grasping is one of the primary activities through which we interact with real or virtual worlds. Incorporating grasp intent into an advanced prosthetic hand requires grasp intent detection from multimodal sensor data, which can capture precise movements while reducing motion artifacts. We develop a classification algorithm that predicts the grasp intent for a specific object type through continuous feedback during the reach towards the object, using multimodal data from inertial measurement unit (IMU) sensors, electromyography (EMG) and cameras. A deep learning (DL) approach improves grasp intent accuracy by continuously predicting the intent class as the hand moves towards the object. The hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network achieves an accuracy of 92.3% on visual, IMU and EMG data, compared with 89% reported in the literature.
Supported by the Indian Institute of Technology Madras (IITM).
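As a rough illustration of the approach described in the abstract, the sketch below shows a minimal CNN-LSTM fusion model in PyTorch: a small CNN encodes each camera frame, the frame feature is concatenated with the per-timestep IMU and EMG samples, and an LSTM emits a grasp-intent prediction at every step of the reach. The layer sizes, the number of intent classes, and the sensor dimensions (6-axis IMU, 8-channel EMG) are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class GraspIntentNet(nn.Module):
    """Hypothetical CNN-LSTM sketch for continuous grasp-intent prediction."""

    def __init__(self, n_classes=5, imu_dim=6, emg_dim=8, feat_dim=128, hidden=256):
        super().__init__()
        # Small per-frame encoder (stand-in for any image backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # LSTM over concatenated visual + IMU + EMG features at each timestep.
        self.lstm = nn.LSTM(feat_dim + imu_dim + emg_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames, imu, emg):
        # frames: (B, T, 3, H, W), imu: (B, T, imu_dim), emg: (B, T, emg_dim)
        B, T = frames.shape[:2]
        vis = self.cnn(frames.flatten(0, 1)).view(B, T, -1)  # encode every frame
        fused, _ = self.lstm(torch.cat([vis, imu, emg], dim=-1))
        return self.head(fused)  # (B, T, n_classes): one intent prediction per timestep

# Example: intent logits for a 30-step reach towards an object.
model = GraspIntentNet()
logits = model(torch.randn(2, 30, 3, 64, 64), torch.randn(2, 30, 6), torch.randn(2, 30, 8))
print(logits.shape)  # torch.Size([2, 30, 5])
```

Predicting the class at every timestep, rather than once at the end of the reach, matches the continuous-feedback idea in the abstract: the prosthetic controller can refine its grasp choice as the hand approaches the object.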
Author information
Authors and Affiliations
Touch Lab, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, 600036, India
P. Balaji, Debadutta Subudhi & Manivannan Muniyandi
Corresponding author
Correspondence to P. Balaji.
Editor information
Editors and Affiliations
School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
Vincent G. Duffy
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Balaji, P., Subudhi, D., Muniyandi, M. (2022). Grasp Intent Detection Using Multi Sensorial Data. In: Duffy, V.G. (eds) Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Anthropometry, Human Behavior, and Communication. HCII 2022. Lecture Notes in Computer Science, vol 13319. Springer, Cham. https://doi.org/10.1007/978-3-031-05890-5_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05889-9
Online ISBN: 978-3-031-05890-5
eBook Packages: Computer Science, Computer Science (R0)