Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13356)
Abstract
In-classroom observation often relies on established protocols and human observers, which requires substantial human effort. This study investigates how accurately a pre-trained action recognition model can label a teacher's behaviors in the classroom. We apply SlowFast, a state-of-the-art action recognition model, to video of a real junior-high-school mathematics class in Japan. In this pilot study, the pre-trained model achieved 92.7% accuracy on verbs describing the teacher's posture, 31.7% on verbs describing the teacher's interaction with objects, and 26.8% on verbs describing teacher-student interaction. Compared with the existing baseline (34.3%), these results indicate that the pre-trained model transfers reasonably well to classroom videos. Possible reasons for the low accuracy on the latter two categories are that (1) the pre-trained model could not adequately handle objects unique to the classroom, such as the whiteboard, and (2) the teacher wore a mask as an infection control measure, which made it difficult to recognize talking behavior. This study provides an initial automated approach to extracting a dataset of a teacher's in-classroom interactions from class videos. Provided that ethical considerations are addressed in its implementation, such deep learning technology has the potential to support a data-driven paradigm for teachers' in-action reflection.
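As a rough illustration of the pipeline described in the abstract, the sketch below loads a SlowFast model pre-trained for spatio-temporal action detection on AVA and scores a single person box over the 80 atomic-action labels, which span the same three verb groups (posture, person-object interaction, person-person interaction). The paper does not state which implementation was used; the PyTorchVideo torch.hub model name, the clip preprocessing, and the hard-coded teacher bounding box are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch, assuming the PyTorchVideo SlowFast detector pre-trained on AVA.
# Model choice, clip shape, and the teacher bounding box are illustrative
# assumptions, not the authors' actual configuration.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# SlowFast R50 pre-trained for spatio-temporal action detection on the AVA verbs.
model = torch.hub.load(
    "facebookresearch/pytorchvideo", "slowfast_r50_detection", pretrained=True
).eval().to(device)

# A 32-frame clip (C x T x H x W); in practice, frames decoded from the classroom
# recording, resized to 256x256 and mean/std normalized.
frames = torch.randn(3, 32, 256, 256)
fast_pathway = frames.unsqueeze(0)                               # 1 x C x 32 x H x W
slow_index = torch.linspace(0, frames.shape[1] - 1, frames.shape[1] // 4).long()
slow_pathway = frames.index_select(1, slow_index).unsqueeze(0)   # 1 x C x 8 x H x W
inputs = [slow_pathway.to(device), fast_pathway.to(device)]

# One person box for the teacher in (batch_idx, x1, y1, x2, y2) pixel coordinates,
# e.g. obtained from an off-the-shelf person detector.
boxes = torch.tensor([[0.0, 60.0, 20.0, 200.0, 250.0]], device=device)

with torch.no_grad():
    scores = model(inputs, boxes)    # per-box scores over the 80 AVA atomic actions

print(scores.shape)                  # torch.Size([1, 80])
```

In practice, the person box would come from a person detector run on the classroom frames, and per-category accuracies such as those reported above would be computed against manually annotated verb labels.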
References
Walkington, C., Michael, M.: Classroom observation and value-added models give complementary information about quality of mathematics teaching. In: Designing Teacher Evaluation Systems: New Guidance from the Measures of Effective Teaching Project, pp. 234–277. Jossey-Bass, San Francisco (2013)
Volpe, R.J., DiPerna, J.C., Hintze, J.M., Shapiro, E.S.: Observing students in classroom settings: a review of seven coding schemes. School Psych. Rev. 34, 454–474 (2005)
Feichtenhofer, C., Fan, H., Malik, J., He, K.: SlowFast networks for video recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6202–6211 (2019)
Gu, C., et al.: AVA: a video dataset of spatio-temporally localized atomic visual actions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6047–6056 (2018)
Zhu, Y., et al.: A comprehensive study of deep video action recognition. arXiv preprint arXiv:2012.06567 (2020)
Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732 (2014)
Donahue, J., et al.: Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634 (2015)
Li, X., Wang, M., Zeng, W., Lu, W.: A students’ action recognition database in smart classroom. In: 2019 14th International Conference on Computer Science & Education (ICCSE), pp. 523–527 (2019)
Sharma, V., Gupta, M., Kumar, A., Mishra, D.: EduNet: a new video dataset for understanding human activity in the classroom environment. Sensors 21 (2021)
Ahuja, K., et al.: EduSense: practical classroom sensing at scale. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3(3), 1–26 (2019)
Acknowledgement
This study was supported by JST JPMJAX20AA, JSPS 21J14514, SPIRITS 2020 of Kyoto University, JSPS 20K20131, JSPS 22H03902, JSPS 16H06304, NEDO JPNP18013, and NEDO JPNP20006.
Author information
Authors and Affiliations
Kyoto University, Yoshida-honcho, Kyoto, Japan
Hiroyuki Kuromiya, Rwitajit Majumdar & Hiroaki Ogata
Corresponding author
Correspondence to Hiroyuki Kuromiya.
Editor information
Editors and Affiliations
Ateneo de Manila University, Quezon City, Philippines
Maria Mercedes Rodrigo
Department of Computer Science, North Carolina State University, Raleigh, NC, USA
Noboru Matsuda
Durham University, Durham, UK
Alexandra I. Cristea
University of Leeds, Leeds, UK
Vania Dimitrova
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Kuromiya, H., Majumdar, R., Ogata, H. (2022). Detecting Teachers’ in-Classroom Interactions Using a Deep Learning Based Action Recognition Model. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium. AIED 2022. Lecture Notes in Computer Science, vol 13356. Springer, Cham. https://doi.org/10.1007/978-3-031-11647-6_74
Download citation
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-11646-9
Online ISBN: 978-3-031-11647-6
eBook Packages: Computer Science, Computer Science (R0)