Emotion Recognition in Human–Robot Interaction Using the NAO Robot



Abstract
1. Introduction
2. Related Work
3. System Modules
3.1. General Functionality
1. The humanoid robot NAO greets the user and explains the game rules, including that the final outcome is affected by the answers but that the user can always change an answer if NAO notices negative emotions. NAO also explains how the emotion system works, asking the user to assume a facial expression of happiness, anger, fear, disgust, surprise, sadness, or neutrality each time the result of an answer is announced.
2. The robot waits 1 s for the user to assume their chosen expression and then captures one image with the camera located on its forehead.
3. The robot sends the image to the server, where the emotion recognition process starts, and waits for a result.
4. The server saves the image and passes it to a face attribute analyzer to extract the dominant emotion.
5. The result is transferred back to the robot, which acts accordingly, e.g., moving on to the next question if the user is happy or allowing a new answer if the user is sad. (A client-side sketch of steps 2–5 follows this list.)
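To make the loop concrete, the snippet below sketches the client side of steps 2–5. It is illustrative rather than the authors' implementation: it assumes the snapshot has already been saved to a local file (e.g., via NAOqi's photo capture), and the server address, port, and helper name are hypothetical. The `[END OF IMAGE]` terminator matches Algorithm 1 in Section 4.1.

```python
import pickle
import socket

SERVER_ADDR = ("192.168.1.10", 5050)   # hypothetical server IP and port
END_MARKER = b"[END OF IMAGE]"         # end-of-data string used by Algorithm 1

def send_image(path):
    """Pickle a captured image file and stream it to the emotion server."""
    with open(path, "rb") as f:
        payload = pickle.dumps(f.read())
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(SERVER_ADDR)
        s.sendall(payload + END_MARKER)   # the marker tells the server the image is complete
        return s.recv(64).decode()        # dominant emotion returned by the server

# e.g., emotion = send_image("/home/nao/snapshot.jpg")
```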
3.2. Face Attribute Analyzer
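The reference list points to the DeepFace library for Python for this component. As a minimal sketch, assuming a recent DeepFace release and restricting the analysis to emotion (the authors' exact call options are not reproduced here), the dominant emotion can be obtained as follows:

```python
from deepface import DeepFace

def dominant_emotion(image_path):
    """Run DeepFace's facial attribute analysis, restricted to emotion."""
    result = DeepFace.analyze(img_path=image_path, actions=["emotion"])
    # Depending on the DeepFace version, analyze returns a dict or a list of dicts.
    if isinstance(result, list):
        result = result[0]
    return result["dominant_emotion"]   # e.g., "happy", "sad", "neutral"
```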
3.3. NAO and Choregraphe
4. System Architecture
4.1. Image Capture and Sockets
1. Specify the IP address and port number;
2. Bind the socket to the specified IP and port;
3. Listen for new connections;
4. The server establishes a connection with the client and initializes the data transfer. The data are sent in chunks until the appended string that signifies the end of the data is met. Each data chunk is appended to a byte array, which, by the end of the data transfer, holds all of the image data. Algorithm 1 shows the data reception; a Python sketch of the same loop follows it.
Algorithm 1. Receive data from client.

Input: A socket connection
Output: The image data in bytes
Initialization: assign an empty byte array to the variable final_data

```
while true do
    try
        img_data ← receive packet(data)
    catch
        exit
    binary_array ← img_data
    if img_data ends with "[END OF IMAGE]" then
        binary_array ← replace "[END OF IMAGE]" with the empty string
        final_data ← final_data + binary_array
        exit
    else
        final_data ← final_data + binary_array
end
```
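For concreteness, the reception loop of Algorithm 1 can be written in Python roughly as follows. This is a minimal sketch, assuming a blocking TCP socket; the function name, chunk size, and bind address are illustrative, not taken from the paper.

```python
import socket

END_MARKER = b"[END OF IMAGE]"

def recv_image_bytes(conn: socket.socket, chunk_size: int = 4096) -> bytes:
    """Accumulate chunks from the client until the end-of-image marker arrives."""
    final_data = b""
    while True:
        try:
            img_data = conn.recv(chunk_size)
        except OSError:
            break                                        # receive failed; stop
        if not img_data:
            break                                        # client closed the connection
        if img_data.endswith(END_MARKER):
            final_data += img_data[: -len(END_MARKER)]   # strip the marker, keep the data
            break
        final_data += img_data
    return final_data

# Server setup corresponding to steps (1)-(4) above:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5050))    # steps 1-2: specify and bind IP/port
server.listen()                   # step 3: listen for new connections
conn, _ = server.accept()         # step 4: establish the connection
image_bytes = recv_image_bytes(conn)
```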
5. The Pickle module is used to reconstruct the image from the data stored in the array. The reconstructed image is saved on the server.
6. The emotion analyzer is then called with the image as input and returns the dominant emotion. Once the emotion is extracted, the server sends the result to the client and the connection is terminated. (A sketch of these two steps follows below.)
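Steps (5) and (6) close the loop on the server side. The following sketch is illustrative only: it reuses the hypothetical `recv_image_bytes` and `dominant_emotion` helpers from the earlier sketches, and the save path is an assumption.

```python
import pickle

def handle_client(conn):
    """Reconstruct the image, analyze it, and reply with the dominant emotion."""
    image = pickle.loads(recv_image_bytes(conn))   # step 5: rebuild the image bytes
    save_path = "received_frame.jpg"               # hypothetical location on the server
    with open(save_path, "wb") as f:
        f.write(image)
    emotion = dominant_emotion(save_path)          # step 6: face attribute analyzer
    conn.sendall(emotion.encode())                 # return the result to the robot
    conn.close()
```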
4.2. NAO and Choregraphe
5. Evaluation Results
6. Conclusions, Limitations, and Future Work
6.1. Limitations
6.2. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Troussas, C.; Espinosa, K.J.; Virvou, M. Affect Recognition through Facebook for Effective Group Profiling Towards Personalized Instruction. Inform. Educ. 2016, 15, 147–161.
2. Krouska, A.; Troussas, C.; Virvou, M. Deep Learning for Twitter Sentiment Analysis: The Effect of Pre-trained Word Embedding. In Machine Learning Paradigms. Learning and Analytics in Intelligent Systems; Tsihrintzis, G., Jain, L., Eds.; Springer: Cham, Switzerland, 2020; Volume 18.
3. Pei, E.; Zhao, Y.; Oveneke, M.C.; Jiang, D.; Sahli, H. A Bayesian Filtering Framework for Continuous Affect Recognition from Facial Images. IEEE Trans. Multimed. 2022, 1.
4. Troussas, C.; Krouska, A.; Virvou, M. Trends on Sentiment Analysis over Social Networks: Pre-processing Ramifications, Stand-Alone Classifiers and Ensemble Averaging. In Machine Learning Paradigms. Intelligent Systems Reference Library; Tsihrintzis, G., Sotiropoulos, D., Jain, L., Eds.; Springer: Cham, Switzerland, 2019; Volume 149.
5. Troussas, C.; Krouska, A.; Virvou, M. A Multicriteria Framework for Assessing Sentiment Analysis in Social and Digital Learning: Software Review. In Proceedings of the 2018 9th International Conference on Information, Intelligence, Systems and Applications (IISA), Zakynthos, Greece, 23–25 July 2018; pp. 1–7.
6. Caruelle, D.; Shams, P.; Gustafsson, A.; Lervik-Olsen, L. Affective Computing in Marketing: Practical Implications and Research Opportunities Afforded by Emotionally Intelligent Machines. Mark. Lett. 2022, 33, 163–169.
7. Andrej, L.; Panagiotis, B.; Magda, H.-A. Affective computing and medical informatics: State of the art in emotion-aware medical applications. Stud. Health Technol. Inform. 2008, 136, 517–522.
8. Stein, G.; Ledeczi, A. Enabling Collaborative Distance Robotics Education for Novice Programmers. In Proceedings of the 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), St. Louis, MO, USA, 10–13 October 2021; pp. 1–5.
9. Ververi, C.; Koufou, T.; Moutzouris, A.; Andreou, L.-V. Introducing Robotics to an English for Academic Purposes Curriculum in Higher Education: The Student Experience. In Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON), Porto, Portugal, 27–30 April 2020; pp. 20–21.
10. Tengler, K.; Kastner-Hauler, O.; Sabitzer, B. A Robotics-based Learning Environment Supporting Computational Thinking Skills—Design and Development. In Proceedings of the 2021 IEEE Frontiers in Education Conference (FIE), Lincoln, NE, USA, 13–16 October 2021; pp. 1–6.
11. Cavedini, P.; Bertagnolli, S.D.C.; Peres, A.; Oliva, R.S.; Locatelli, E.L.; Caetano, S.V.N. Educational Robotics and Physical Education: Body and movement in learning laterality in Early Childhood Education. In Proceedings of the 2021 International Symposium on Computers in Education (SIIE), Malaga, Spain, 23–24 September 2021; pp. 1–6.
12. NAO Robot. Available online: https://www.softbankrobotics.com/emea/en/nao (accessed on 1 April 2022).
13. SoftBank Robotics NAO Documentation: ALMood. Available online: https://doc.aldebaran.com/2-5/naoqi/core/almood.html (accessed on 1 April 2022).
14. DeepFace, Face Recognition and Facial Attribute Analysis Library for Python. Available online: https://github.com/serengil/deepface (accessed on 1 April 2022).
15. Python Pickle Module. Available online: https://github.com/python/cpython/blob/3.10/Lib/pickle.py (accessed on 1 April 2022).
16. Choregraphe Suite. Available online: https://doc.aldebaran.com/2-8/software/choregraphe/choregraphe_overview.html (accessed on 1 April 2022).
17. SoftBank Robotics NAO Documentation: QiChat. Available online: https://doc.aldebaran.com/2-1/naoqi/audio/dialog/dialog.html#dialog-concepts (accessed on 1 April 2022).
18. She, T.; Ren, F. Enhance the Language Ability of Humanoid Robot NAO through Deep Learning to Interact with Autistic Children. Electronics 2021, 10, 2393.
19. Ismail, S.B.; Shamsuddin, S.; Yussof, H.; Hashim, H.; Bahari, S.; Jaafar, A.; Zahari, I. Face detection technique of Humanoid Robot NAO for application in robotic assistive therapy. In Proceedings of the 2011 IEEE International Conference on Control System, Computing and Engineering, Penang, Malaysia, 25–27 November 2011; pp. 517–521.
20. Torta, E.; Werner, F.; Johnson, D.O.; Juola, J.F.; Cuijpers, R.H.; Bazzani, M.; Oberzaucher, J.; Lemberger, J.; Lewy, H.; Bregman, J. Evaluation of a Small Socially-Assistive Humanoid Robot in Intelligent Homes for the Care of the Elderly. J. Intell. Robot. Syst. 2014, 76, 57–71.
21. Scassellati, B.; Boccanfuso, L.; Huang, C.-M.; Mademtzi, M.; Qin, M.; Salomons, N.; Ventola, P.; Shic, F. Improving social skills in children with ASD using a long-term, in-home social robot. Sci. Robot. 2018, 3, eaat7544.
22. Ramis, S.; Buades, J.M.; Perales, F.J. Using a Social Robot to Evaluate Facial Expressions in the Wild. Sensors 2020, 20, 6716.
23. Lopez-Rincon, A. Emotion recognition using facial expressions in children using the NAO Robot. In Proceedings of the International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 27 February–1 March 2019; pp. 146–153.
24. Hu, M. Facial Emotional Recognition with Deep Learning on Pepper Robot. Bachelor's Thesis, Vaasan Ammattikorkeakoulu University of Applied Sciences, Vasa, Finland, 2019.
25. Onder, T.; Fatma, G.; Duygun, E.B.; Hatice, K. An Emotion Analysis Algorithm and Implementation to NAO Humanoid Robot. Eurasia Proc. Sci. Technol. Eng. Math. (EPSTEM) 2017, 1, 316–330.
26. Filippini, C.; Perpetuini, D.; Cardone, D.; Merla, A. Improving Human–Robot Interaction by Enhancing NAO Robot Awareness of Human Facial Expression. Sensors 2021, 21, 6438.
27. SoftBank Robotics NAO Documentation: NAOqi. Available online: https://doc.aldebaran.com/2-5/naoqi/core/index.html (accessed on 1 April 2022).
28. Castellano, G.; De Carolis, B.; Macchiarulo, N.; Rossano, V. Learning waste Recycling by playing with a Social Robot. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 3805–3810.
29. Pepper Robot. Available online: https://www.softbankrobotics.com/emea/en/pepper (accessed on 1 April 2022).
30. Vrochidou, E.; Najoua, A.; Lytridis, C.; Salonidis, M.; Ferelis, V.; Papakostas, G.A. Social Robot NAO as a Self-Regulating Didactic Mediator: A Case Study of Teaching/Learning Numeracy. In Proceedings of the 26th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 13–15 September 2018; pp. 1–5.
31. Ozkul, A.; Kose, H.; Yorganci, R.; Ince, G. Robostar: An interaction game with humanoid robots for learning sign language. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bali, Indonesia, 5–10 December 2014; pp. 522–527.
32. Ali, S.; Moroso, T.; Breazeal, C. Can Children Learn Creativity from a Social Robot? In Proceedings of the 2019 Conference on Creativity and Cognition (C&C '19), Association for Computing Machinery, New York, NY, USA, 23–26 June 2019; pp. 359–368.
33. Miskam, M.A.; Shamsuddin, S.; Samat, M.R.A.; Yussof, H.; Ainudin, H.A.; Omar, A.R. Humanoid robot NAO as a teaching tool of emotion recognition for children with autism using the Android app. In Proceedings of the 2014 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 10–12 November 2014; pp. 1–5.
34. Gordon, G.; Spaulding, S.; Westlund, J.K.; Lee, J.J.; Plummer, L.; Martinez, M.; Das, M.; Breazeal, C. Affective personalization of a social robot tutor for children's second language skills. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI'16), 12–17 February 2016; pp. 3951–3957.
35. Štuikys, V.; Burbaitė, R.; Damaševičius, R. Teaching of Computer Science Topics Using Meta-Programming-Based GLOs and LEGO Robots. Inform. Educ. 2013, 12, 125–142.
36. Damaševičius, R.; Narbutaitė, L.; Plauska, I.; Blažauskas, T. Advances in the Use of Educational Robots in Project-Based Teaching. TEM J. 2017, 6, 342–348.
37. Dederichs-Koch, A.; Zwiers, U. Project-based learning unit: Kinematics and object grasping in humanoid robotics. In Proceedings of the 2015 16th International Conference on Research and Education in Mechatronics (REM), Bochum, Germany, 18–20 November 2015; pp. 216–220.
38. Karaman, S.; Anders, A.; Boulet, M.; Connor, J.; Gregson, K.; Guerra, W.; Guldner, O.; Mohamoud, M.; Plancher, B.; Shin, R. Project-based, collaborative, algorithmic robotics for high school students: Programming self-driving race cars at MIT. In Proceedings of the 2017 IEEE Integrated STEM Education Conference (ISEC), Princeton, NJ, USA, 11 March 2017; pp. 195–203.
39. Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708.
40. SoftBank Robotics NAO Documentation: ALVideoDevice. Available online: http://doc.aldebaran.com/2-1/naoqi/vision/alvideodevice-api.html (accessed on 1 April 2022).
| | Did You Enjoy Interacting with the System? | | Did the System Help You Get Acquainted with Environmental Issues? | |
|---|---|---|---|---|
| | Group A | Group B | Group A | Group B |
| Mean | 4.8 | 3.28 | 4.4 | 3.12 |
| Variance | 0.167 | 0.71 | 0.5 | 0.527 |
| Observations | 25 | 25 | 25 | 25 |
| Pooled Variance | 0.438 | | 0.513 | |
| Hypothesized Mean Difference | 0 | | 0 | |
| df | 48 | | 48 | |
| t Stat | 8.117 | | 6.316 | |
| P(T ≤ t) two-tail | 1.47 × 10⁻¹⁰ | | 8.23 × 10⁻⁸ | |
| t Critical two-tail | 2.011 | | 2.011 | |
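The t statistics in the table can be reproduced from the summary rows alone. A minimal sketch, assuming SciPy and the pooled-variance (equal-variance) two-sample t-test that the "Pooled Variance" and df = 48 rows imply:

```python
from math import sqrt
from scipy.stats import ttest_ind_from_stats

# "Did you enjoy interacting with the system?": Group A vs. Group B
t, p = ttest_ind_from_stats(
    mean1=4.8, std1=sqrt(0.167), nobs1=25,
    mean2=3.28, std2=sqrt(0.71), nobs2=25,
    equal_var=True,   # pooled variance, df = 25 + 25 - 2 = 48
)
print(t, p)   # ~8.117 and ~1.47e-10, matching the table
```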
| # | Human Input | Emotion Analyzer Output |
|---|---|---|
| 1 | Happy | Happy |
| 2 | Angry | Angry |
| 3 | Neutral | Neutral |
| 4 | Happy | Happy |
| 5 | Fear | Fear |
| 6 | Surprise | Surprise |
| 7 | Neutral | Neutral |
| 8 | Happy | Happy |
| 9 | Fear | Disgust ¹ |
| 10 | Surprise | Fear ¹ |
| 11 | Neutral | Neutral |
| 12 | Surprise | Angry |
| 13 | Sad | Sad |
| 14 | Neutral | Neutral |
| 15 | Sad | Sad |
| 16 | Happy | Happy |
| 17 | Angry | Neutral |
| 18 | Disgust | Sad |
| 19 | Happy | Happy |
| 20 | Sad | Sad |
| 21 | Fear | Disgust ¹ |
| 22 | Fear | Fear |
| 23 | Surprise | Fear |
| 24 | Angry | Angry |
| 25 | Surprise | Sad |
| Actual \ Predicted | Happy | Angry | Neutral | Fear | Surprise | Sad | Disgust | Total |
|---|---|---|---|---|---|---|---|---|
| Happy | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
| Angry | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 3 |
| Neutral | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 4 |
| Fear | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 4 |
| Surprise | 0 | 1 | 0 | 1 | 2 | 1 | 0 | 5 |
| Sad | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 3 |
| Disgust | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| Total | 5 | 3 | 5 | 5 | 2 | 5 | 0 | N = 25 |
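Aggregate metrics can be read directly off this matrix. A small sketch, assuming NumPy and the matrix exactly as printed (the diagonal sums to 20 of N = 25, i.e., 80% accuracy):

```python
import numpy as np

# Rows: actual; columns: predicted (happy, angry, neutral, fear, surprise, sad, disgust)
cm = np.array([
    [5, 0, 0, 0, 0, 0, 0],
    [0, 2, 1, 0, 0, 0, 0],
    [0, 0, 4, 0, 0, 0, 0],
    [0, 0, 0, 4, 0, 0, 0],
    [0, 1, 0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0, 3, 0],
    [0, 0, 0, 0, 0, 1, 0],
])
accuracy = np.trace(cm) / cm.sum()      # 20 / 25 = 0.80
recall = np.diag(cm) / cm.sum(axis=1)   # per-class recall, e.g., surprise: 2/5
print(accuracy, recall)
```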
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Valagkouti, I.A.; Troussas, C.; Krouska, A.; Feidakis, M.; Sgouropoulou, C. Emotion Recognition in Human–Robot Interaction Using the NAO Robot. Computers 2022, 11, 72. https://doi.org/10.3390/computers11050072