Technical Field
The present invention relates to the field of computer technology, and in particular to a method and system for evaluating students' learning status in online courses based on facial recognition.
Background
At present, colleges and universities are gradually adopting online teaching in order to reduce teaching expenses, save manpower and material resources, and enrich teaching content. Students can learn knowledge and skills by watching video and audio online through electronic devices and the Internet.
In the process of implementing the present invention, the inventor of the present application found that the methods of the prior art have at least the following technical problems:
While online classes are convenient for teachers and students, practical constraints make it difficult to supervise the students attending them. Because students merely watch videos, without the teacher-student interaction of a physical classroom, some students are not really "listening" during online classes, and their attentiveness and comprehension cannot be fed back to the teacher in a timely manner as in traditional classroom teaching, so the quality of online instruction suffers considerably. Although prior-art methods can monitor learning status through cameras and similar equipment, a human must judge whether a student is paying attention, which is time-consuming, laborious, and inefficient.
It can thus be seen that the methods in the prior art suffer from the technical problem of low efficiency.
Summary of the Invention
In view of this, the present invention provides a method and system for evaluating students' learning status in online courses based on facial recognition, so as to solve, or at least partially solve, the technical problem of low efficiency in the prior-art methods.
To solve the above technical problem, a first aspect of the present invention provides a method for evaluating students' learning status in online courses based on facial recognition, comprising:
S1: acquiring a student's facial images, the student's answers to questions, and the student's information;
S2: comparing the student's answers with reference answers to obtain the student's question-answering result;
S3: normalizing the collected facial images into uniform picture information and inputting it into a trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class;
S4: performing face recognition on the collected facial images, extracting the face pictures and performing facial feature extraction to obtain the student's facial dimensions and eye-opening height, wherein the facial dimensions include the facial length and width; and obtaining the student's concentration according to a comparison of the ratio of the student's facial length to width with a preset standard facial aspect ratio, and a comparison of the student's eye-opening height with a preset standard eye-opening height;
S5: taking the student's question-answering result, comprehension state, and concentration as the evaluation result of the student's learning status in the online course.
In one embodiment, the method for constructing the trained micro-expression recognition convolutional neural network model in S3 comprises:
searching a micro-expression database for facial micro-expression pictures matching the characteristics of the pleased, understanding, and confused states, respectively, and, after compressing, stretching, and sharpening the pictures corresponding to each comprehension state, processing them into picture information of uniform size and format as training data, wherein the student's comprehension state during the online class is divided into three levels: pleased, understanding, and confused; the facial features corresponding to "pleased" include open eyes, the face directly facing the screen, and raised corners of the mouth; the facial features corresponding to "understanding" include the face directly facing the screen and relaxed eyebrows; and the facial features corresponding to "confused" include a furrowed brow, slightly narrowed eyes, and downturned corners of the mouth;
determining the structure of the micro-expression recognition convolutional neural network model, the structure comprising an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer, and an output layer;
training the micro-expression recognition convolutional neural network model with the training data according to preset model parameters to obtain the trained micro-expression recognition convolutional neural network model.
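As an illustration only, the layer sequence described above could be sketched in PyTorch as follows. The 48×48 grayscale input size, channel counts, and kernel sizes are assumptions, since the text does not specify the preset model parameters:

```python
import torch
import torch.nn as nn

class MicroExpressionCNN(nn.Module):
    """Input -> conv1 -> pool1 -> conv2 -> pool2 -> feature (flatten)
    -> fully connected -> classification (3 comprehension states)."""
    def __init__(self, num_states: int = 3):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # first convolutional layer
        self.pool1 = nn.MaxPool2d(2)                              # first pooling layer
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # second convolutional layer
        self.pool2 = nn.MaxPool2d(2)                              # second pooling layer
        self.flatten = nn.Flatten()                               # feature layer: one 1-D vector
        self.fc = nn.Linear(32 * 12 * 12, 64)                     # fully connected layer
        self.classifier = nn.Linear(64, num_states)               # classification layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool1(torch.relu(self.conv1(x)))
        x = self.pool2(torch.relu(self.conv2(x)))
        x = self.flatten(x)
        x = torch.relu(self.fc(x))
        return self.classifier(x)  # raw scores; softmax yields state probabilities

model = MicroExpressionCNN()
logits = model(torch.zeros(1, 1, 48, 48))  # one normalized grayscale frame
```

The output layer here is simply the classifier's score vector; matching it to a comprehension state is a matter of taking the highest-scoring class.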
In one embodiment, S3 specifically comprises:
S3.1: feeding the picture information corresponding to the collected facial images through the input layer into the first convolutional layer, which performs feature extraction;
S3.2: performing dimensionality-reduction compression on the image obtained in S3.1 through the first pooling layer;
S3.3: performing feature extraction on the compressed image through the second convolutional layer, and then performing dimensionality-reduction compression through the second pooling layer;
S3.4: compressing the image obtained in S3.3 into a one-dimensional vector through the feature layer and outputting it to the fully connected layer;
S3.5: outputting, through the fully connected layer composed of multiple forward-connected neurons, to the classification layer;
S3.6: matching the output of the fully connected layer with the corresponding comprehension state through the classification layer to obtain the comprehension state corresponding to the picture;
S3.7: outputting the comprehension state corresponding to the picture through the output layer.
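Steps S3.6 and S3.7 amount to selecting the comprehension state with the highest classification score. A minimal sketch, where the score values are purely illustrative:

```python
# The three comprehension states follow the division given in the embodiment.
STATES = ("pleased", "understanding", "confused")

def match_state(scores):
    """S3.6/S3.7: match the classification-layer output to a comprehension
    state by picking the index with the highest score."""
    best = max(range(len(STATES)), key=lambda i: scores[i])
    return STATES[best]

state = match_state([0.2, 1.4, -0.3])  # hypothetical classifier scores
```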
In one embodiment, after S3.7 the method further comprises: assigning a different score to each comprehension state.
In one embodiment, the comprehension state output by the output layer is the student's comprehension state at one moment, and the method further comprises:
obtaining the corresponding class-status score ui at each moment according to the assigned scores;
obtaining the comprehension score Uk of the student's online learning in each stage according to the class-status scores ui;
where N denotes the number of moments and K denotes the stage.
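The formulas for ui and Uk are not reproduced in the text above. Under the assumption that each moment's state is mapped to a score ui and that Uk is the mean of the N per-moment scores in stage k (both the per-state score values and the averaging are assumptions), the computation could look like:

```python
# Hypothetical per-state scores; the actual values assigned after S3.7
# are not specified in the text.
STATE_SCORES = {"pleased": 100, "understanding": 80, "confused": 60}

def stage_comprehension_score(states):
    """U_k = (1/N) * sum of u_i over the N sampled moments of stage k
    (assumed aggregation)."""
    scores = [STATE_SCORES[s] for s in states]
    return sum(scores) / len(scores)

u_k = stage_comprehension_score(["pleased", "understanding", "confused", "understanding"])
```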
In one embodiment, obtaining the student's concentration in S4 according to the comparison of the ratio of the student's facial length to width with the preset standard facial aspect ratio and the comparison of the student's eye-opening height with the preset standard eye-opening height comprises:
S4.1: determining whether the student's face is directly facing the screen at moment i according to the comparison of the ratio of the student's facial length to width with the preset standard facial aspect ratio, wherein if the face is not facing the screen the student is judged inattentive, and if it is facing the screen the next determination is performed; the facing-the-screen determination formula is as follows:
where Li and Wi are the length and width of the student's face at moment i, and Ls and Ws are the length and width of the student's standard face;
S4.2: judging the student's degree of eye opening according to the comparison of the student's eye-opening height with the preset standard eye-opening height to obtain the student's concentration at moment i, the determination formula being as follows:
where Hi is the student's eye-opening height at moment i, Hs is the student's standard eye-opening height, Li is the length of the student's face at moment i, and Ls is the length of the student's standard face; if the ratio is greater, the student is judged attentive at moment i, and if it is smaller, the student is judged inattentive at moment i;
according to the student's concentration at each moment i, continuously monitoring whether the student's state is inattentive over a preset duration; if the state is inattentive throughout, the student's state is judged to be inattentive.
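The two determinations of S4.1 and S4.2 can be sketched as follows. The tolerance on the facial aspect ratio and the exact form of both comparisons are assumptions, since the determination formulas themselves are not reproduced above; only the variable definitions and the greater/smaller decision rule are:

```python
def facing_screen(L_i, W_i, L_s, W_s, tol=0.1):
    """S4.1 (assumed form): the face is taken to be directly facing the
    screen when the observed aspect ratio L_i/W_i stays within a relative
    tolerance of the standard ratio L_s/W_s."""
    return abs(L_i / W_i - L_s / W_s) <= tol * (L_s / W_s)

def attentive_at_moment(L_i, W_i, H_i, L_s, W_s, H_s):
    """S4.2: compare the face-length-normalized eye opening H_i/L_i with
    the standard H_s/L_s; greater means attentive, smaller inattentive."""
    if not facing_screen(L_i, W_i, L_s, W_s):
        return False
    return H_i / L_i >= H_s / L_s

def inattentive_over_window(samples):
    """Final judgment: the student is inattentive only if every sampled
    moment within the preset duration was inattentive."""
    return all(not attentive for attentive in samples)
```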
In one embodiment, the method further comprises dividing each online class into different stages according to the times at which students answer questions.
In one embodiment, after S5 the method further comprises:
uploading the learning-status evaluation results to a server, feeding the obtained evaluation results back to the corresponding student terminals according to the student information, and summarizing the online learning status of all students and feeding it back to the corresponding teacher terminal.
Based on the same inventive concept, a second aspect of the present invention provides a system for evaluating students' learning status in online courses based on facial recognition, comprising:
an information acquisition module, configured to acquire students' facial images, students' answers to questions, and student information;
a question-answering evaluation module, configured to obtain a student's question-answering result according to the comparison of the student's answers with reference answers;
a comprehension recognition module, configured to normalize the collected facial images into uniform picture information and input it into the trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class;
a concentration recognition module, configured to perform face recognition on the collected facial images, extract the face pictures and perform facial feature extraction to obtain the student's facial dimensions and eye-opening height, wherein the facial dimensions include the facial length and width, and to obtain the student's concentration according to the comparison of the ratio of the student's facial length to width with the preset standard facial aspect ratio and the comparison of the student's eye-opening height with the preset standard eye-opening height;
an evaluation result module, configured to take the student's question-answering result, comprehension state, and concentration as the evaluation result of the student's learning status in the online course.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, the program, when executed, implementing the method of the first aspect.
The above one or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the method for evaluating students' learning status in online courses based on facial recognition provided by the present invention, after the student's facial images, answers to questions, and information are acquired, the student's question-answering result is obtained by comparing the answers with reference answers; the collected facial images are normalized into uniform picture information and input into a trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class; face recognition is performed on the collected facial images, the face pictures are extracted and facial features are extracted, and the student's concentration is obtained according to the comparison of the ratio of the student's facial length to width with a preset standard facial aspect ratio and the comparison of the student's eye-opening height with a preset standard eye-opening height; the question-answering result, comprehension state, and concentration are then taken together as the evaluation result of the student's learning status in the online course.
Compared with the manual judgment used in the prior art, the present invention recognizes the student's comprehension state during online classes by constructing a micro-expression recognition convolutional neural network model; by performing micro-expression recognition on students, subtle changes in their expressions and facial features can be captured and matched to comprehension states, yielding the student's real-time state during online learning. The student's concentration is obtained from the comparison of the ratio of facial length to width with the preset standard facial aspect ratio and the comparison of the eye-opening height with the preset standard eye-opening height: the facial aspect ratio and eye-opening height indicate whether the student's face is directly facing the screen and whether the eyes are open beyond a threshold. On the one hand, this improves both the efficiency and the accuracy of recognition; on the other hand, the present invention evaluates the student's learning status from three different dimensions, namely question answering, comprehension, and concentration, which improves the comprehensiveness of the evaluation.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for evaluating students' learning status in online courses based on facial recognition provided by the present invention;
Fig. 2 is a schematic diagram of identifying a student's comprehension state during an online class in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the micro-expression recognition model based on a convolutional neural network in the present invention;
Fig. 4 is a flowchart of evaluating a student's in-class concentration in an embodiment of the present invention;
Fig. 5 is a schematic diagram of judging that a student is inattentive during the time period t1-t2 in an embodiment of the present invention;
Fig. 6 is a structural block diagram of a system for evaluating students' learning status in online courses based on facial recognition provided in an embodiment of the present invention;
Fig. 7 is an implementation flowchart of the system for evaluating students' learning status in online courses based on facial recognition in an embodiment of the present invention;
Fig. 8 is a structural block diagram of a computer-readable storage medium in an embodiment of the present invention.
Detailed Description
The purpose of the present invention is to provide a method and system for evaluating students' learning status in online courses based on facial recognition, so as to solve, or at least partially solve, the technical problem of low efficiency in the prior-art methods.
To achieve the above purpose, the main concept of the present invention is as follows:
First, the student's facial images, answers to questions, and information are acquired; then the student's question-answering result is obtained by comparing the answers with reference answers; next, the collected facial images are normalized into uniform picture information and input into a trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class; face recognition is then performed on the collected facial images, the face pictures are extracted and facial features are extracted to obtain the student's facial dimensions and eye-opening height, and the student's concentration is obtained according to the comparison of the ratio of the student's facial length to width with a preset standard facial aspect ratio and the comparison of the student's eye-opening height with a preset standard eye-opening height; finally, the question-answering result, comprehension state, and concentration are taken as the evaluation result of the student's learning status in the online course.
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1
This embodiment provides a method for evaluating students' learning status in online courses based on facial recognition. Referring to Fig. 1, the method comprises:
S1: acquiring the student's facial images, the student's answers to questions, and the student's information.
Specifically, when a student begins an online class, the student turns on the computer's camera so that the student's facial information can be captured. The collected facial images, the student's answers to questions, and the student's information are uploaded to the server as input, from which the relevant modules obtain this input information.
The video stream is used to monitor the student's listening status. In a specific implementation, each online class may be divided into different stages according to the times at which students answer questions, for example four stages, and the student's video stream, answers, and information for each stage are uploaded to the server. Considering that a student's learning status does not change greatly within a short time, the student's video can be sampled at a low frequency (1 Hz), i.e., one frame per second, for evaluating the student's in-class state at that moment.
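The 1 Hz low-frequency sampling amounts to keeping one frame per second of decoded video. A minimal sketch, where the 25 fps source rate is an assumption:

```python
def sample_at_1hz(frames, fps=25):
    """Low-frequency sampling: keep one frame per second of video.
    `frames` is the decoded frame sequence, `fps` the assumed source rate."""
    return frames[::fps]

# 100 frames at an assumed 25 fps yield 4 sampled moments.
samples = sample_at_1hz(list(range(100)), fps=25)
```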
S2: comparing the student's answers with the reference answers to obtain the student's question-answering result.
Specifically, after the answers uploaded by the student are compared with the reference answers, a score can be given according to the comparison, yielding the student's question-answering score Qk on a 100-point scale for each stage, where k denotes the k-th stage.
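A minimal sketch of the stage score Qk, assuming each question is marked right or wrong against the reference answer and the stage is scored out of 100 (the marking rule is an assumption; the text only states that answers are compared with reference answers):

```python
def stage_answer_score(answers, reference):
    """Q_k: percentage (0-100) of a stage's questions whose answer matches
    the reference answer (assumed all-or-nothing marking)."""
    correct = sum(a == r for a, r in zip(answers, reference))
    return 100.0 * correct / len(reference)

q_k = stage_answer_score(["A", "C", "B", "D"], ["A", "B", "B", "D"])
```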
S3: normalizing the collected facial images into uniform picture information and inputting it into the trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class.
Specifically, face recognition is a biometric technology that identifies a person based on facial feature information, integrating artificial intelligence, machine recognition, machine learning, model theory, expert systems, video image processing, and other specialized technologies. A face recognition system mainly consists of four parts: image acquisition and detection, image preprocessing, image feature extraction, and matching and recognition.
As an extension of face recognition technology, micro-expression recognition has received extensive attention in recent years. Facial expressions are an intuitive reflection of human emotions and psychology. Unlike ordinary facial expressions, micro-expressions are special, subtle facial movements that can serve as an important basis for judging a person's subjective emotions. With the development of machine recognition and deep learning, the feasibility and reliability of micro-expression recognition have greatly improved.
Through extensive research and practice, the applicant of the present invention found that students' emotions generally do not fluctuate greatly while studying online, so recognizing ordinary emotional expressions such as happiness or sadness cannot reflect a student's learning status. Micro-expression recognition, by contrast, can capture subtle changes in a student's expression and facial features and match them to comprehension states, yielding the student's real-time comprehension state during online learning. A micro-expression recognition module is therefore proposed.
A convolutional neural network is one kind of deep learning method and has been widely applied in computer vision and image processing. Compared with other machine learning methods, convolutional neural networks can process large-scale data effectively, which suits an online learning platform that must handle a large amount of information for many students. With training pairs of inputs and corresponding expected outputs, a convolutional neural network takes raw images as input, trains automatically, and extracts features autonomously, yielding the corresponding recognition model, i.e., the micro-expression recognition convolutional neural network model. The process of recognizing comprehension states with this model is shown in Fig. 2.
S3 further reduces manual preprocessing time and is suitable for large-scale picture training, thereby improving recognition efficiency.
S4: performing face recognition on the collected facial images, extracting the face pictures and performing facial feature extraction to obtain the student's facial dimensions and eye-opening height, wherein the facial dimensions include the facial length and width; and obtaining the student's concentration according to the comparison of the ratio of the student's facial length to width with the preset standard facial aspect ratio and the comparison of the student's eye-opening height with the preset standard eye-opening height.
Specifically, S4 detects whether the student is concentrating during online learning and evaluates the student's concentration. The aspect ratio of the standard face and the preset standard eye-opening height can be obtained in advance: comparing the facial aspect ratios gives a preliminary judgment of whether the face is directly facing the screen, and the eye-opening heights are then compared to judge the degree of eye opening, from which the student's concentration can be evaluated.
S5: taking the student's question-answering result, comprehension state, and concentration as the evaluation result of the student's learning status in the online course.
Specifically, this step takes the aforementioned question-answering result, comprehension state, and concentration as the final evaluation result, so that the student's learning status is evaluated from different aspects or dimensions, improving the objectivity and accuracy of the evaluation.
在一种实施方式中,S3中训练好的微表情识别卷积神经网络模型的构建方法包括:In one embodiment, the construction method of the trained micro-expression recognition convolutional neural network model in S3 includes:
在微表情数据库中寻找分别符合愉悦、理解、困惑状态特征的人脸微表情图片,将理解程度状态对应的图片经过压缩、拉伸、锐化等过程过后,处理成统一尺寸和格式的图片信息作为训练数据,其中,将学生的网课听课理解程度状态划分三个等级:愉悦、理解、困惑,愉悦对应的面部特征包括眼睛张开、面部正对屏幕和嘴角上扬,理解对应的面部特征包括面部正对屏幕和眉毛舒展,困惑对应的面部特征包括眉头紧锁、眼睛微眯和嘴角向下;In the micro-expression database, look for facial micro-expression pictures that meet the characteristics of joy, understanding, and confusion, and process the pictures corresponding to the state of understanding into uniform size and format after compression, stretching, and sharpening. As the training data, the students’ online class comprehension status is divided into three levels: pleasure, understanding, and confusion. The facial features corresponding to pleasure include eyes open, the face facing the screen and the corners of the mouth raised, and the facial features corresponding to understanding include The face is facing the screen and the eyebrows are stretched. The facial features corresponding to confusion include frowning, slightly narrowed eyes, and downward corners of the mouth;
确定微表情识别卷积神经网络模型的结构,模型的结构包括输入层,第一卷积层、第一池化层、第二卷积层、第二池化层、特征层、全连接层、分类层以及输出层;Determine the structure of the convolutional neural network model for micro-expression recognition. The structure of the model includes an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a feature layer, a fully connected layer, Classification layer and output layer;
根据预设模型参数,采用训练数据对微表情识别卷积神经网络模型进行训练,得到训练好的微表情识别卷积神经网络模型。According to the preset model parameters, the training data is used to train the micro-expression recognition convolutional neural network model, and the trained micro-expression recognition convolutional neural network model is obtained.
Specifically, a student's in-class state, i.e., the degree to which the student understands the class, can be divided into three levels: pleased, understanding, and confused. When a student studies in an online class, the facial features corresponding to the pleased state are open eyes, the face directly facing the screen, raised mouth corners, and the like; those corresponding to the understanding state are the face directly facing the screen, relaxed eyebrows, and the like; those corresponding to the confused state are furrowed brows, slightly narrowed eyes, downturned mouth corners, and the like.
A convolutional neural network is used to build the micro-expression recognition model, whose structure is shown in Figure 3. The model mainly comprises an input layer, convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, a feature layer, a fully connected layer, a classification layer, and an output layer. The interaction of these layers enables the model to extract features from facial images and match them to the student's comprehension state in class, so that the student's comprehension state while listening can be predicted from the facial image captured at that time.
From the micro-expression database, facial micro-expression images matching the pleased, understanding, and confused states are selected and processed, through compression, stretching, sharpening, and similar operations, into image information of uniform size and format. After the image information enters convolutional layer 1, features are extracted; the result is passed to pooling layer 1 for dimensionality reduction and compression, and then to convolutional layer 2 and pooling layer 2, where these operations are repeated. The feature layer compresses the image into a one-dimensional vector, which is output to the fully connected layer, a classic neural network structure formed by multiple forward-connected neurons. The output is then matched in the classifier against the corresponding comprehension state. In this way the convolutional micro-expression recognition model is trained, so that it automatically learns and stores the intrinsic relationship between image features and the corresponding comprehension states.
After training of the convolutional neural network model is completed, the micro-expression recognition model is established. The student's video frames are then standardized into consistent image information and input into the trained micro-expression recognition convolutional neural network model, which outputs the comprehension state corresponding to each frame.
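The layer pipeline described above (input, two convolution/pooling pairs, a feature layer that flattens to a one-dimensional vector, a fully connected layer, and a three-class classifier) can be sketched as a shape trace. The patent does not fix any hyperparameters, so the input resolution (48×48 grayscale), 5×5 kernels, 2×2 pooling, and channel counts below are illustrative assumptions only:

```python
# Shape trace of the micro-expression CNN pipeline: input -> conv1 -> pool1
# -> conv2 -> pool2 -> feature (flatten) -> fully connected -> classifier.
# All hyperparameters (48x48 input, 5x5 kernels, 2x2 pooling, channel
# counts 16/32) are illustrative assumptions, not taken from the patent.

def conv2d_shape(h, w, kernel, stride=1, padding=0):
    """Output height/width of a convolution layer (no dilation)."""
    return ((h + 2 * padding - kernel) // stride + 1,
            (w + 2 * padding - kernel) // stride + 1)

def pool2d_shape(h, w, kernel=2, stride=2):
    """Output height/width of a pooling layer."""
    return (h - kernel) // stride + 1, (w - kernel) // stride + 1

def trace_shapes(h=48, w=48, c1=16, c2=32, classes=3):
    shapes = {"input": (1, h, w)}
    h, w = conv2d_shape(h, w, kernel=5)   # first convolutional layer
    shapes["conv1"] = (c1, h, w)
    h, w = pool2d_shape(h, w)             # first pooling layer
    shapes["pool1"] = (c1, h, w)
    h, w = conv2d_shape(h, w, kernel=5)   # second convolutional layer
    shapes["conv2"] = (c2, h, w)
    h, w = pool2d_shape(h, w)             # second pooling layer
    shapes["pool2"] = (c2, h, w)
    shapes["feature"] = (c2 * h * w,)     # feature layer: flatten to 1-D vector
    shapes["output"] = (classes,)         # pleased / understanding / confused
    return shapes
```

With these assumed sizes, a 48×48 input shrinks to 44×44 after conv1, 22×22 after pool1, 18×18 after conv2, and 9×9 after pool2, so the feature layer emits a vector of length 32·9·9 = 2592 before the fully connected and classification layers.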
In one embodiment, S3 specifically includes:
S3.1: the image information corresponding to the collected facial image enters the first convolutional layer through the input layer, and feature extraction is performed by the first convolutional layer;
S3.2: the image obtained in S3.1 is subjected to dimensionality reduction and compression by the first pooling layer;
S3.3: features are extracted from the reduced and compressed image by the second convolutional layer, followed by further dimensionality reduction and compression by the second pooling layer;
S3.4: the image obtained in S3.3 is compressed into a one-dimensional vector by the feature layer and output to the fully connected layer;
S3.5: the fully connected layer, formed by multiple forward-connected neurons, outputs to the classification layer;
S3.6: the classification layer matches the output of the fully connected layer against the corresponding comprehension states to obtain the comprehension state corresponding to the image;
S3.7: the output layer outputs the comprehension state corresponding to the image.
Specifically, S3.1 to S3.7 describe the processing flow of the micro-expression recognition convolutional neural network model, which ultimately yields the comprehension state.
In one embodiment, after S3.7 the method further includes: assigning different scores to the different comprehension states.
Specifically, the student's in-class state, i.e., the degree of understanding of the class, is divided into three levels: pleased, understanding, and confused; for example, the comprehension scores corresponding to the three levels are 100, 80, and 40 points, respectively.
In one embodiment, the comprehension state output by the output layer is the student's comprehension state at a single moment, and the method further includes:
obtaining the corresponding class-state score ui at each moment according to the assigned scores;
obtaining the comprehension score Uk of the student's online learning in each stage from the class-state scores ui, i.e., Uk = (1/N)·Σ ui, summing over the N moments i of stage k;
where N denotes the number of moments in a stage and K denotes the number of stages.
Specifically, the comprehension state at each moment can be obtained by the foregoing method, and averaging the corresponding scores then yields the comprehension state of the stage.
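The scoring and stage-averaging step can be sketched as follows, using the example scores of 100, 80, and 40 from above; the English state labels are assumptions for illustration:

```python
# Map each per-moment comprehension state to a score u_i and average the
# scores over the N moments of a stage to obtain the stage score U_k.
# The example scores (100/80/40) follow the text above; the label strings
# are illustrative assumptions.

STATE_SCORES = {"pleased": 100, "understanding": 80, "confused": 40}

def stage_score(states):
    """U_k = (1/N) * sum(u_i) over the N moments of the stage."""
    scores = [STATE_SCORES[s] for s in states]
    return sum(scores) / len(scores)
```

For instance, a stage whose three monitored moments are pleased, understanding, and confused averages to (100 + 80 + 40)/3 ≈ 73.3 points.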
In one embodiment, in S4, obtaining the student's concentration according to the comparison of the ratio of the student's face length to face width with the preset standard facial aspect ratio and the comparison of the student's eye-opening height with the preset standard eye-opening height includes:
S4.1: judging whether the student's face is directly facing the screen at moment i according to the comparison of the ratio of the student's face length to face width with the preset standard facial aspect ratio; if the face is not facing the screen the student is judged inattentive, and if it is, the next judgment is performed; the facing-the-screen criterion is
0.9 < (Li/Wi) / (Ls/Ws) < 1.1,
where Li and Wi are the length and width of the student's face at moment i, and Ls and Ws are the length and width of the student's standard face;
S4.2: judging the student's eye opening according to the comparison of the student's eye-opening height with the preset standard eye-opening height, to obtain the student's concentration at moment i; the criterion is
Hi · (Ls/Li) / Hs > 50%,
where Hi is the student's eye-opening height at moment i, Hs is the student's standard eye-opening height, Li is the length of the student's face at moment i, and Ls is the length of the student's standard face; if the left-hand side is greater than 50%, the student is judged attentive at moment i, and if smaller, inattentive;
according to the student's concentration at each moment i, whether the student's state is inattentive is monitored continuously over a preset duration, and if it is inattentive throughout, the student's state is judged inattentive.
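Steps S4.1 and S4.2 can be sketched as a single per-moment judgment function. This is a minimal sketch: the (0.9, 1.1) aspect-ratio interval and the 50% eye-opening threshold come from the detailed description, while the function name and default parameters are assumptions:

```python
# Per-moment concentration judgment (steps S4.1 and S4.2):
#   S4.1  the face is taken as facing the screen when the ratio of the
#         current aspect ratio (Li/Wi) to the standard one (Ls/Ws) lies
#         in the interval (0.9, 1.1);
#   S4.2  the eye-opening height Hi is rescaled by Ls/Li and compared,
#         relative to the standard opening Hs, against a 50% threshold.

def is_focused(Li, Wi, Hi, Ls, Ws, Hs,
               ratio_low=0.9, ratio_high=1.1, eye_threshold=0.5):
    """Return True if the student is judged attentive at moment i."""
    aspect_ratio = (Li / Wi) / (Ls / Ws)
    if not (ratio_low < aspect_ratio < ratio_high):   # S4.1: head turned away
        return False
    eye_opening = Hi * (Ls / Li) / Hs                 # S4.2: rescaled opening
    return eye_opening > eye_threshold
```

For example, with a standard face of 20×15 and standard eye opening 3, a frame measuring 18×13.5 with eye opening 2.8 (the student has simply moved closer) is judged attentive, while a turned head (20×10) or nearly closed eyes (opening 0.5) is judged inattentive.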
Specifically, the implementation process of detecting whether a student is concentrating during online learning and of evaluating the student's concentration is shown in Figure 4.
When studying in an online class, a student needs to watch the computer screen. In view of the particularity of online learning, the present invention evaluates whether a student is concentrating by two criteria: whether the face is directly facing the screen, and whether the eye opening exceeds a threshold, for example 50%.
After logging in, the student is required to provide a standard facial image, i.e., one captured with the student facing the computer screen and with eyes open; the collected standard facial image is uploaded to the server for storage.
Face recognition is performed on the student's standard facial image, the face picture is extracted, and facial features are obtained, yielding the standard face size (length Ls × width Ws) and the student's eye-opening height Hs. The student's real-time facial images during online learning are then monitored and subjected to the same face recognition and feature extraction, yielding the student's face size at moment i (length Li × width Wi) and the eye-opening height Hi at that moment.
The face size at moment i (Li × Wi) and the eye-opening height Hi, together with the standard face size (Ls × Ws) and the standard eye-opening height Hs, are input into the concentration recognition model to judge the student's concentration state at that moment.
First, whether the student's face is directly facing the screen at moment i is judged; if not, the student is judged inattentive, and if so, the next judgment is performed. The facing-the-screen criterion is given by formula (2):
0.9 < (Li/Wi) / (Ls/Ws) < 1.1    (2)
In formula (2), Li and Wi are the length and width of the student's face at moment i, and Ls and Ws are the length and width of the student's standard face.
When the student turns or lowers the head, the length and width of the face captured on video change. However, the student may also move toward or away from the screen during class, which likewise changes the captured length and width; the facial aspect ratio is therefore used as the reference, since when a student facing the screen moves back and forth, the captured face scales up or down proportionally while its aspect ratio is unchanged. Hence, when the aspect ratio of the student's face at moment i differs substantially from the standard state (the admissible interval of the ratio being set to (0.9, 1.1), to allow for the width of the screen, occasional head turns, and so on), the student is judged not to be facing the screen and therefore inattentive.
Even when facing the screen, a student may be sleeping or dazed and thus not concentrating on the class. Therefore, after the face is judged to be facing the screen, the student's eye opening is further judged, as shown in formula (3):
Hi · (Ls/Li) / Hs > 50%    (3)
In the formula, Hi is the student's eye-opening height at moment i, Hs is the student's standard eye-opening height, Li is the length of the student's face at moment i, and Ls is the length of the student's standard face.
Because the distance between the student and the screen at moment i may differ from that in the standard image, the captured face sizes may differ. When the student's face is directly facing the screen, the face size at moment i is in equal proportion to the standard face size, so by similar triangles the scaling ratio Ls/Li is obtained. The eye-opening height Hi at moment i is multiplied by this scaling ratio and compared with the standard eye-opening height Hs to obtain the student's relative eye opening at moment i; if it is greater than 50%, the student is judged attentive at moment i, and if smaller, inattentive.
The above method judges whether the student is attentive at each moment. Since even a concentrating student will blink, briefly lower the head, and make other small movements, attentiveness should not be judged for every single second but considered as a continuous process. When the student is monitored as inattentive for 10 consecutive seconds, the first inattentive moment of those 10 s is marked as the moment t1 of entering the inattentive state; this state lasts until the student is monitored as attentive for 10 consecutive seconds, the first attentive moment of those 10 s being marked as the moment t2 of leaving the inattentive state. The student's inattentive period is then t1 to t2, and the remaining time is regarded as attentive, as shown in Figure 5.
The student's inattentive periods are obtained by the above method, the duration of the i-th period being Ti; the student's total inattentive time is then
T = T1 + T2 + … + Tm,
where T is the total inattentive time and m is the number of inattentive periods.
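The continuous-monitoring rule above (enter the inattentive state once 10 consecutive seconds are inattentive, with t1 the first such second; leave it once 10 consecutive seconds are attentive, with t2 the first such second; total the period durations) can be sketched as a small state machine over a per-second attention sequence. Function and variable names are assumptions:

```python
# Detect inattentive periods [t1, t2) from a per-second attention sequence.
# Enter the inattentive state once `window` consecutive seconds are
# inattentive (t1 = first second of that run) and leave it once `window`
# consecutive seconds are attentive (t2 = first second of that run).

def inattention_periods(focus, window=10):
    """focus: list of booleans, True = attentive at that second.
    Returns (periods, total) with periods as a list of (t1, t2) pairs."""
    periods, in_bad = [], False
    bad_run = good_run = 0
    bad_start = good_start = t1 = 0
    for i, attentive in enumerate(focus):
        if attentive:
            if good_run == 0:
                good_start = i
            good_run, bad_run = good_run + 1, 0
        else:
            if bad_run == 0:
                bad_start = i
            bad_run, good_run = bad_run + 1, 0
        if not in_bad and bad_run >= window:
            in_bad, t1 = True, bad_start          # entered inattentive state
        elif in_bad and good_run >= window:
            in_bad = False
            periods.append((t1, good_start))      # left inattentive state
    if in_bad:                                    # still inattentive at the end
        periods.append((t1, len(focus)))
    return periods, sum(t2 - t1 for t1, t2 in periods)
```

Note the hysteresis: a brief attentive blip (e.g. 3 s of blinking) inside a long inattentive stretch does not split the period, because leaving the state also requires 10 consecutive attentive seconds.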
In one embodiment, the method further includes dividing each online class into different stages according to the times at which students answer questions.
In one embodiment, after S5 the method further includes:
uploading the learning-state evaluation results to the server, feeding the obtained evaluation results back to the corresponding student terminals according to the student information, and summarizing the online learning states of all students and feeding the summary back to the corresponding teacher terminal.
Specifically, after the question-answering score, concentration score, and total inattentive time of each stage are obtained for a student, they are uploaded to the academic affairs server for storage; they reflect the student's online learning state and serve as a basis for the final grading of the student's online course. According to the student information labels, the per-stage evaluation results are transmitted to the corresponding students as feedback. After each online class ends, the learning states of all students are summarized and fed back to the teacher, serving as a basis for judging the teaching quality of the online class and as a reference for teaching improvement.
Embodiment 2
Based on the same inventive concept, this embodiment provides a facial-recognition-based system for evaluating students' online class learning states. Referring to Figure 6, the system includes:
an information acquisition module 201, configured to acquire students' facial images, students' question-answering situations, and student information;
a question-answering evaluation module 202, configured to obtain a student's question-answering results by comparing the student's answers with the reference answers;
a comprehension recognition module 203, configured to standardize the collected facial images into consistent image information and input them into the trained micro-expression recognition convolutional neural network model to obtain the student's comprehension state while attending the online class;
a concentration recognition module 204, configured to perform face recognition on the collected facial images, extract face pictures, and perform facial feature extraction to obtain the student's face size (including face length and width) and eye-opening height, and to obtain the student's concentration according to the comparison of the ratio of the student's face length to face width with the preset standard facial aspect ratio and the comparison of the student's eye-opening height with the preset standard eye-opening height;
an evaluation result module 205, configured to take the student's question-answering results, comprehension state in the online class, and concentration as the evaluation result of the student's online learning state.
The overall implementation flow of the system provided by this embodiment is shown in Figure 7.
The advantages and beneficial technical effects of the present invention are as follows:
1. A convolutional-neural-network-based method and module for recognizing students' comprehension from their micro-expressions are proposed, which improve both the efficiency and the accuracy of comprehension recognition.
2. A method and module for real-time concentration recognition based on each individual student's facial features are provided, which improve both the efficiency and the accuracy of concentration recognition.
3. An evaluation and feedback system for students' online learning states is constructed, which improves the overall evaluation effect.
Since the system described in Embodiment 2 of the present invention is the system used to implement the facial-recognition-based method for evaluating students' online class learning states of Embodiment 1, a person skilled in the art can understand the specific structure and variations of the system on the basis of the method described in Embodiment 1, so they are not described in detail here. All systems used by the method of Embodiment 1 of the present invention fall within the intended protection scope of the present invention.
Embodiment 3
Based on the same inventive concept, this embodiment provides a computer-readable storage medium on which a computer program is stored; when the program is executed, the method described in Embodiment 1 is implemented.
Since the computer-readable storage medium described in Embodiment 3 of the present invention is the one used to implement the facial-recognition-based method for evaluating students' online class learning states of Embodiment 1, a person skilled in the art can understand its specific structure and variations on the basis of the method described in Embodiment 1, so they are not described in detail here. All computer-readable storage media used by the method of Embodiment 1 of the present invention fall within the intended protection scope of the present invention.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once the basic inventive concept is known. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010043578.0A (publication CN111242049B) | 2020-01-15 | 2020-01-15 | Face recognition-based student online class learning state evaluation method and system |
| Publication Number | Publication Date |
|---|---|
| CN111242049A | 2020-06-05 |
| CN111242049B | 2023-08-04 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010043578.0AActiveCN111242049B (en) | 2020-01-15 | 2020-01-15 | Face recognition-based student online class learning state evaluation method and system |
| Country | Link |
|---|---|
| CN (1) | CN111242049B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111797324A (en)* | 2020-08-07 | 2020-10-20 | 广州驰兴通用技术研究有限公司 | Distance education method and system for intelligent education |
| CN116018789B (en)* | 2020-09-14 | 2024-08-09 | 华为技术有限公司 | Method, system and medium for context-based assessment of student attention in online learning |
| CN112215973A (en)* | 2020-09-21 | 2021-01-12 | 彭程 | Data display method, multimedia platform and electronic equipment |
| CN112735213A (en)* | 2020-12-31 | 2021-04-30 | 奇点六艺教育科技股份有限公司 | Intelligent teaching method, system, terminal and storage medium |
| CN112818754A (en)* | 2021-01-11 | 2021-05-18 | 广州番禺职业技术学院 | Learning concentration degree judgment method and device based on micro-expressions |
| CN112907408A (en)* | 2021-03-01 | 2021-06-04 | 北京安博创赢教育科技有限责任公司 | Method, device, medium and electronic equipment for evaluating learning effect of students |
| CN113222791A (en)* | 2021-04-28 | 2021-08-06 | 泰州学院 | Inorganic chemical course teaching tutoring management method based on big data and artificial intelligence |
| CN113239841B (en)* | 2021-05-24 | 2023-03-24 | 桂林理工大学博文管理学院 | Classroom concentration state detection method based on face recognition and related instrument |
| CN113536893A (en)* | 2021-05-26 | 2021-10-22 | 深圳点猫科技有限公司 | Online teaching learning concentration degree identification method, device, system and medium |
| CN113657146B (en)* | 2021-06-30 | 2024-02-06 | 北京惠朗时代科技有限公司 | Student non-concentration learning low-consumption recognition method and device based on single image |
| CN114373206B (en)* | 2021-12-27 | 2024-11-26 | 中国民航大学 | An experimental process evaluation method and device based on AI |
| CN116543609A (en)* | 2022-03-22 | 2023-08-04 | 上海工程技术大学 | A learning method, storage medium and system |
| CN114493952A (en)* | 2022-04-18 | 2022-05-13 | 北京梦蓝杉科技有限公司 | Education software data processing system and method based on big data |
| CN115631074B (en)* | 2022-12-06 | 2023-06-09 | 南京熊大巨幕智能科技有限公司 | Informationized network science and education method, system and equipment |
| CN116469148B (en)* | 2023-03-09 | 2025-02-18 | 山东省大健康精准医疗产业技术研究院 | Probability prediction system and prediction method based on facial structure recognition |
| CN116341983A (en)* | 2023-03-31 | 2023-06-27 | 西安音乐学院 | Method, system, electronic equipment and medium for evaluation and early warning of concentration |
| CN116996722B (en)* | 2023-06-29 | 2024-06-04 | 广州慧思软件科技有限公司 | Virtual synchronous classroom teaching system in 5G network environment and working method thereof |
| CN117909587A (en)* | 2024-01-19 | 2024-04-19 | 广州铭德教育投资有限公司 | Personalized recommendation method and system for after-school exercises for students based on AI |
| CN118780953B (en)* | 2024-09-05 | 2024-12-17 | 禾辰纵横信息技术有限公司 | Online education supervision method and system based on artificial intelligence |
| CN120183024B (en)* | 2025-05-22 | 2025-08-01 | 杭州电子科技大学 | An Emotional Focus Method for AI Online Education |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107292271A (en)* | 2017-06-23 | 2017-10-24 | 北京易真学思教育科技有限公司 | Learning-memory behavior method, device and electronic equipment |
| KR101960815B1 (en)* | 2017-11-28 | 2019-03-21 | 유엔젤주식회사 | Learning Support System And Method Using Augmented Reality And Virtual reality |
| KR20190043513A (en)* | 2019-04-18 | 2019-04-26 | 주식회사 아이티스테이션 | System For Estimating Lecture Attention Level, Checking Course Attendance, Lecture Evaluation And Lecture Feedback |
| CN109815795A (en)* | 2018-12-14 | 2019-05-28 | 深圳壹账通智能科技有限公司 | Classroom student's state analysis method and device based on face monitoring |
| CN110334600A (en)* | 2019-06-03 | 2019-10-15 | 武汉工程大学 | A multi-feature fusion method for driver's abnormal expression recognition |
| CN110674701A (en)* | 2019-09-02 | 2020-01-10 | 东南大学 | A fast detection method of driver fatigue state based on deep learning |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106878677B (en)* | 2017-01-23 | 2020-01-07 | 西安电子科技大学 | Multi-sensor-based assessment system and method for students' classroom mastery |
| CN108021893A (en)* | 2017-12-07 | 2018-05-11 | 浙江工商大学 | It is a kind of to be used to judging that student to attend class the algorithm of focus |
| CN108710829A (en)* | 2018-04-19 | 2018-10-26 | 北京红云智胜科技有限公司 | A method of the expression classification based on deep learning and the detection of micro- expression |
| US20190362138A1 (en)* | 2018-05-24 | 2019-11-28 | Gary Shkedy | System for Adaptive Teaching Using Biometrics |
| CN108875606A (en)* | 2018-06-01 | 2018-11-23 | 重庆大学 | A kind of classroom teaching appraisal method and system based on Expression Recognition |
| CN109657529A (en)* | 2018-07-26 | 2019-04-19 | 台州学院 | Classroom teaching effect evaluation system based on human facial expression recognition |
| CN110334626B (en)* | 2019-06-26 | 2022-03-04 | 北京科技大学 | Online learning system based on emotional state |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107292271A (en)* | 2017-06-23 | 2017-10-24 | 北京易真学思教育科技有限公司 | Learning-memory behavior method, device and electronic equipment |
| KR101960815B1 (en)* | 2017-11-28 | 2019-03-21 | 유엔젤주식회사 | Learning Support System And Method Using Augmented Reality And Virtual reality |
| CN109815795A (en)* | 2018-12-14 | 2019-05-28 | 深圳壹账通智能科技有限公司 | Classroom student's state analysis method and device based on face monitoring |
| KR20190043513A (en)* | 2019-04-18 | 2019-04-26 | 주식회사 아이티스테이션 | System For Estimating Lecture Attention Level, Checking Course Attendance, Lecture Evaluation And Lecture Feedback |
| CN110334600A (en)* | 2019-06-03 | 2019-10-15 | 武汉工程大学 | A multi-feature fusion method for driver's abnormal expression recognition |
| CN110674701A (en)* | 2019-09-02 | 2020-01-10 | 东南大学 | A fast detection method of driver fatigue state based on deep learning |
| Title |
|---|
| Two-level attention with two-stage multi-task learning for facial emotion recognition; Wang Xiaohua et al.; Elsevier; 217-225* |
| Driver fatigue state recognition based on facial expression features; Ma Tianyi, Cheng Bo; Journal of Automotive Safety and Energy (03); 38-42* |
| Emotion recognition of learning images in smart learning environments and its application; Xu Zhenguo; China Doctoral Dissertations Full-text Database, Social Sciences II; H127-21* |
| Publication number | Publication date |
|---|---|
| CN111242049A (en) | 2020-06-05 |
| Publication | Publication Date | Title |
|---|---|---|
| CN111242049B (en) | | Face recognition-based student online class learning state evaluation method and system |
| CN110991381B (en) | | Real-time classroom student state analysis and instruction reminder system and method based on intelligent behavior and voice recognition |
| CN108399376B (en) | | Method and system for intelligent analysis of students' interest in classroom learning |
| CN110334626B (en) | | Online learning system based on emotional state |
| CN112183238B (en) | | Remote education attention detection method and system |
| CN109522815A (en) | | Attention assessment method, device, and electronic equipment |
| CN107316261A (en) | | Teaching quality evaluation system based on face analysis |
| CN108304793A (en) | | Online learning analysis system and method |
| CN114973126B (en) | | Real-time visual analysis method for student engagement in online courses |
| US12400428B2 (en) | | Automatic classification method and system of teaching videos based on different presentation forms |
| CN108764047A (en) | | Group emotion-oriented behavior analysis method and device, electronic equipment, medium, and product |
| CN112883867A (en) | | Student online learning evaluation method and system based on image emotion analysis |
| CN115797829A (en) | | Online classroom learning state analysis method |
| CN111523445B (en) | | Examination behavior detection method based on an improved OpenPose model and facial micro-expressions |
| CN112926412A (en) | | Adaptive classroom teaching monitoring method and system |
| CN113239794B (en) | | Automatic learning state recognition method for online learning |
| CN112418068B (en) | | Online training effect evaluation method, device, and equipment based on emotion recognition |
| CN119026999A (en) | | Classroom information evaluation management system based on deep learning |
| CN115829234A (en) | | Automatic supervision system based on classroom detection and working method thereof |
| CN111666829A (en) | | Multi-scene, multi-subject identity, behavior, and emotion recognition and analysis method and intelligent supervision system |
| CN111402096A (en) | | Online teaching quality management method, system, equipment, and medium |
| CN114187640A (en) | | Learning situation observation method, system, equipment, and medium based on online classroom |
| CN116563929A (en) | | Academic emotion recognition method based on fusion of human body features |
| CN117392741A (en) | | Intelligent examination room behavior analysis and detection system based on image recognition and voice recognition |
| CN111178263B (en) | | Real-time expression analysis method and device |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |