CN114821824A - A living body detection method, device, equipment and medium - Google Patents

A living body detection method, device, equipment and medium

Info

Publication number
CN114821824A
CN114821824A (application CN202210508920.9A)
Authority
CN
China
Prior art keywords
information
user
emotion
video
collection device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210508920.9A
Other languages
Chinese (zh)
Inventor
李宇明
丁菁汀
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210508920.9A
Publication of CN114821824A
Legal status: Pending


Abstract

Translated from Chinese

The embodiments of this specification disclose a living body detection method, apparatus, device, and medium. The solution includes: acquiring first information collected by a first information collection device, where the first information is information related to the playback of an emotion-guidance video and contains a first biometric feature of a user; acquiring second information collected by a second information collection device, where the second information is information related to the playback of the emotion-guidance video and contains a second biometric feature of the user; determining a measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm; judging whether the measured emotion type is consistent with a set emotion type corresponding to the emotion-guidance video; and if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, determining that the user is a living object.

Figure 202210508920

Description

A living body detection method, device, equipment and medium

Technical Field

The present application relates to the field of computer technology, and in particular to a living body detection method, apparatus, device, and medium.

Background Art

With the development of computer technology, face recognition has been widely applied in recent years. Most current face recognition technologies operate on collected face images; an attacker may impersonate an identity using photos, screen displays, masks, and similar means, so there is a risk of "liveness attacks".

Therefore, how to achieve effective living body detection is a technical problem that urgently needs to be solved.

SUMMARY OF THE INVENTION

The embodiments of this specification provide a living body detection method, apparatus, device, and medium to realize effective living body detection.

To solve the above technical problems, the embodiments of this specification are implemented as follows:

A living body detection method provided by the embodiments of this specification includes:

acquiring first information collected by a first information collection device, where the first information is information related to the playback of an emotion-guidance video and contains a first biometric feature of a user;

acquiring second information collected by a second information collection device, where the second information is information related to the playback of the emotion-guidance video and contains a second biometric feature of the user;

determining a measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm;

judging whether the measured emotion type is consistent with a set emotion type corresponding to the emotion-guidance video; and

if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, determining that the user is a living object.

A living body detection apparatus provided by the embodiments of this specification includes:

a first acquisition module, configured to acquire first information collected by a first information collection device, where the first information is information related to the playback of an emotion-guidance video and contains a first biometric feature of a user;

a second acquisition module, configured to acquire second information collected by a second information collection device, where the second information is information related to the playback of the emotion-guidance video and contains a second biometric feature of the user;

a first determination module, configured to determine a measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm;

a judgment module, configured to judge whether the measured emotion type is consistent with a set emotion type corresponding to the emotion-guidance video; and

a second determination module, configured to determine that the user is a living object if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video.

A living body detection device provided by the embodiments of this specification includes:

at least one processor; and

a memory communicatively connected to the at least one processor, wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:

acquire first information collected by a first information collection device, where the first information is information related to the playback of an emotion-guidance video and contains a first biometric feature of a user;

acquire second information collected by a second information collection device, where the second information is information related to the playback of the emotion-guidance video and contains a second biometric feature of the user;

determine a measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm;

judge whether the measured emotion type is consistent with a set emotion type corresponding to the emotion-guidance video; and

if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, determine that the user is a living object.

An embodiment of this specification further provides a computer-readable medium storing computer-readable instructions that can be executed by a processor to implement a living body detection method.

An embodiment of this specification can achieve the following beneficial effects:

The first information is collected by the first information collection device and the second information by the second information collection device; the user's emotion type is then determined based on the first biometric feature contained in the first information and the second biometric feature contained in the second information, and whether the user is a living object is judged by checking whether this emotion type is consistent with the emotion type set for the emotion-guidance video. The living body detection method in the embodiments of this specification can thus realize effective living body detection.

Furthermore, since the living body detection method in the embodiments judges the user by emotion type, it also provides good protection against attacks such as high-definition screen attacks and high-precision head model attacks.

Description of the Drawings

To explain the technical solutions in the embodiments of this specification or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is an architecture diagram of a living body detection system in an embodiment of this specification;

FIG. 2 is a schematic flowchart of a living body detection method provided by an embodiment of this specification;

FIG. 3 is a swim-lane diagram of a living body detection method provided in an embodiment of this specification;

FIG. 4 is a schematic structural diagram of a living body detection apparatus provided by an embodiment of this specification;

FIG. 5 is a schematic structural diagram of a living body detection device provided by an embodiment of this specification.

Detailed Description

To make the objectives, technical solutions, and advantages of one or more embodiments of this specification clearer, the technical solutions of one or more embodiments of this specification will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of this specification, not all of them. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of one or more embodiments of this specification.

The technical solutions provided by the embodiments of this specification are described in detail below with reference to the accompanying drawings.

At present, the greatest security risk to face recognition systems is the "liveness attack", in which an attacker uses photos, screen displays, masks, and similar means to impersonate an identity. Most current liveness detection systems use a single frame image as the input for liveness judgment. For example, silent liveness detection uses a single image; it can intercept some simple liveness attacks such as mobile phone screens, low-resolution screens, and printed photos, but it has no interception effect against high-definition screens or high-precision head models. Another example is liveness detection based on simple interactive actions such as blinking or shaking the head; a recorded high-definition video can easily bypass such simple actions, so these methods offer very limited protection against high-definition video attacks.

To overcome the defects of the prior art, this solution provides the following embodiments:

FIG. 1 is an architecture diagram of a living body detection system in an embodiment of this specification. As shown in FIG. 1, the architecture includes a terminal device 10 and a server 20. The terminal device 10 can communicate with the server 20 and can be used to collect multiple kinds of biometric information of the user related to the emotion-guidance video and send them to the server 20. The terminal device 10 may be a mobile phone, a tablet computer, a computer, or the like; it may also be a self-service terminal such as an automatic teller machine or a service inquiry machine. The server 20 can obtain the user's measured emotion type from the multiple kinds of biometric information and, after comparing it with the emotion type set for the emotion-guidance video, determine whether the user corresponding to the terminal device 10 is a living object.

Next, a living body detection method provided by the embodiments of this specification is described in detail with reference to the accompanying drawings:

FIG. 2 is a schematic flowchart of a living body detection method provided by an embodiment of this specification. From a program perspective, the execution body of the process may be a program running on an application server, or an application client.

As shown in FIG. 2, the process may include the following steps:

Step 202: Acquire first information collected by a first information collection device; the first information is information related to the playback of an emotion-guidance video and contains a first biometric feature of the user.

Step 204: Acquire second information collected by a second information collection device; the second information is information related to the playback of the emotion-guidance video and contains a second biometric feature of the user.

In practical applications, the first information collection device and the second information collection device may be different components of one terminal device, which has at least a camera or a recording function; for example, for a terminal device with a camera and a microphone, the first information collection device may be the camera and the second information collection device may be the microphone. The first and second information collection devices may also be different terminal devices. Whether they belong to the same terminal device can be set according to the actual situation.

An emotion-guidance video is a video with a distinct emotional theme; the content of one emotion-guidance video embodies one distinct emotion, such as excitement, sadness, or fear. The duration of the video can be preset, for example 10 or 15 seconds. The videos can be played in random order so that an attacker cannot predict the emotion corresponding to the video, which increases the difficulty of an attack. Further, to increase the difficulty even more, on top of random playback, two consecutively played emotion-guidance videos can be required to correspond to different emotions.
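As a minimal sketch (not from the patent), the random selection with the constraint that consecutive plays correspond to different emotions might look like the following; the video IDs and emotion labels are illustrative assumptions:

```python
import random

# Hypothetical pool of emotion-guidance videos; IDs and emotion labels are
# illustrative, not taken from the patent.
VIDEO_POOL = [
    {"id": "v1", "emotion": "happy"},
    {"id": "v2", "emotion": "sad"},
    {"id": "v3", "emotion": "fear"},
    {"id": "v4", "emotion": "happy"},
    {"id": "v5", "emotion": "excited"},
]

def pick_guidance_video(last_emotion=None, pool=VIDEO_POOL, rng=random):
    """Randomly pick a video whose emotion differs from the previously
    played one, so an attacker cannot predict the induced emotion."""
    candidates = [v for v in pool if v["emotion"] != last_emotion]
    return rng.choice(candidates)

# Two consecutive picks never share an emotion label.
first = pick_guidance_video()
second = pick_guidance_video(last_emotion=first["emotion"])
```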

In practical applications, the emotion-guidance video can be played by a video player with video playback capability in the terminal device; the video player can be a functional component of the terminal device. For example, a terminal device with a screen and a speaker can play the emotion-guidance video itself. It should be understood that the emotion-guidance video can also be played by another video playback apparatus.

The first information may be information containing the user's first biometric feature collected by the first information collection device during the playback of the emotion-guidance video, or after the playback has finished. Likewise, the second information may be information containing the user's second biometric feature collected during the playback of the emotion-guidance video, or after it.

Step 206: Determine the user's measured emotion type based on the first information and the second information according to a multimodal emotion recognition algorithm.

A multimodal emotion recognition algorithm recognizes a user's emotion from at least two different physiological features. In the embodiments of this specification, the multimodal emotion recognition algorithm can recognize the user's measured emotion type based on the first biometric feature in the first information and the second biometric feature in the second information; the first and second biometric features characterize different biometric features of the user. The multimodal emotion recognition algorithm can be obtained by training on samples that cover at least the emotion categories appearing in the emotion-guidance videos, so that the emotion types it can recognize include at least the emotions expressed in those videos. For example, if the emotion-guidance videos contain at least one of excitement, sadness, fear, happiness, and anger, the emotion types recognized by the multimodal emotion recognition algorithm include at least those emotions.
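One common way to realize such multimodal recognition is score-level (late) fusion of per-modality classifiers. The sketch below is an illustrative assumption, not the patent's algorithm: the emotion label set, the weights, and the classifier score dictionaries are all hypothetical.

```python
# Score-level fusion: take a weighted average of per-modality probability
# distributions over a shared emotion label set, then argmax to obtain the
# measured emotion type.
EMOTIONS = ["excited", "sad", "fear", "happy", "angry"]  # illustrative labels

def fuse_and_classify(modality_scores, weights=None):
    """modality_scores: one dict per modality (e.g. face video, body video,
    audio), each mapping emotion label -> probability."""
    if weights is None:
        weights = [1.0 / len(modality_scores)] * len(modality_scores)
    fused = {e: sum(w * s.get(e, 0.0) for w, s in zip(weights, modality_scores))
             for e in EMOTIONS}
    return max(fused, key=fused.get)

# Hypothetical per-modality outputs for one user.
face_scores = {"happy": 0.6, "sad": 0.1, "fear": 0.1, "excited": 0.1, "angry": 0.1}
audio_scores = {"happy": 0.5, "sad": 0.2, "fear": 0.1, "excited": 0.1, "angry": 0.1}
measured = fuse_and_classify([face_scores, audio_scores])  # "happy"
```

A trained multimodal model could equally fuse at the feature level; the score-level version is shown only because it is the simplest to illustrate.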

Step 208: Judge whether the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video.

Step 210: If the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, determine that the user is a living object.

The server may pre-store the emotion-guidance video and/or its corresponding emotion type, or obtain them from the terminal device, which may store the emotion-guidance video and/or its set emotion type. After obtaining the user's measured emotion type, the server compares it with the set emotion type corresponding to the emotion-guidance video to judge whether the two are consistent. When the measured emotion type is consistent with the set emotion type, the user can be determined to be a living object.
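The comparison step reduces to a simple equality check; the function name and the strictness of the match (exact label equality) below are assumptions for illustration:

```python
def is_living_object(measured_emotion, set_emotion):
    """Return True when the measured emotion type matches the set emotion
    type of the guidance video, i.e. the user is judged a living object."""
    return measured_emotion == set_emotion

# A matching emotion passes the check; a mismatch (e.g. a replayed video of
# a face that does not react to the guidance video) fails it.
assert is_living_object("happy", "happy") is True
assert is_living_object("sad", "happy") is False
```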

It should be understood that the order of some steps in the method described in one or more embodiments of this specification may be exchanged according to actual needs, and some steps may also be omitted or deleted.

In the living body detection method provided by the embodiments of this specification, the first information is collected by the first information collection device and the second information by the second information collection device. Since the first information contains the user's first biometric feature and the second information contains the user's second biometric feature, the user's measured emotion type can be obtained from the two features; after it is compared against the set emotion type corresponding to the emotion-guidance video, whether the user is a living object is obtained, realizing living body detection of the user.

Based on the method of FIG. 2, the embodiments of this specification also provide some specific implementations of the method, described below.

Optionally, the first information may be video information containing at least a partial image of the user's face, and the first biometric information may be information containing at least partial facial features of the user;

the second information may be video information containing an image of the user's body, and the second biometric information may be information containing body features of the user.

The first information may be video information collected while the user watches the emotion-guidance video, or after the user has watched it. It may contain an image of the user's whole face, or a partial image of the face, for example an image containing any one or more of the user's eyes, eyebrows, mouth, cheeks, forehead, or brow. While watching the emotion-guidance video, the user's emotion follows the emotion in the video, causing changes in the user's face. For example, when the emotion set in the emotion-guidance video is happiness, the corners of the user's mouth move upward; when the set emotion is sadness, the user's eyes may fill with tears.

The second information may include any part of the user's fingers, palm, or arm. The video information of the user's body image may be collected while the user watches the emotion-guidance video, or after the user has watched it.

In this embodiment, step 206 may determine the user's measured emotion type based on the information containing at least partial facial features of the user and the information containing body features of the user.

In the embodiments of this specification, the user's measured emotion type may also be determined based on audio information. Optionally, the first information may be video information containing at least a partial image of the user's face, and the first biometric information may be information containing at least partial facial features of the user;

the second information may be audio information of the user; the second biometric information may be information containing voice features of the user; the audio information includes audio collected from the user's voice during or after the playback of the emotion-guidance video.

In this case, the first information may be video information of a partial image of the user's face, and the second information may be audio information of the user. The audio information may be collected during the playback of the emotion-guidance video, for example the sound of the user while watching, such as crying, laughing, sighing, or exclaiming. The audio information may also be collected after the playback, for example: after the emotion-guidance video ends, a random number or random text is displayed on the video player, the user reads it aloud, and the user's voice is captured as audio information.

When step 206 is performed, the user's measured emotion type may be determined based on the information containing at least partial facial features of the user and the information containing voice features of the user.

Optionally, the first information may be video information containing an image of the user's body, and the first biometric information may be information containing body features of the user;

the second information is audio information; the second biometric information may be information containing voice features of the user; the audio information includes audio collected from the user's voice during or after the playback of the emotion-guidance video.

When step 206 is performed, the user's measured emotion type may be determined based on the video information containing the user's body image and the information containing the user's voice features. The audio and video information here are similar to those introduced above and are not repeated.

To improve the accuracy of living body detection, the embodiments of this specification may also determine the user's emotion based on three kinds of biometric information. Optionally, the living body detection method may further include:

acquiring third information collected by a third information collection device, where the third information may be information related to the playback of the emotion-guidance video and may contain a third biometric feature of the user;

the determining, according to the multimodal emotion recognition algorithm, of the user's measured emotion type based on the first information and the second information then specifically includes:

determining, according to the multimodal emotion recognition algorithm, the user's measured emotion type based on the first information, the second information, and the third information.

The third information collection device may be used to collect information about the user while the user watches the emotion-guidance video. It may be a functional component of the terminal device; for example, the third information collection device may be a camera.

When step 206 is performed, the user's measured emotion type is determined based on the first information, the second information, and the third information according to the multimodal emotion recognition algorithm.

为提高活体检测的准确度,可选的,所述第一信息可以为至少包含用户面部的局部图像的视频信息;所述第一生物特征信息可以为至少包含所述用户的局部面部特征的信息;In order to improve the accuracy of living body detection, optionally, the first information may be video information that includes at least a partial image of the user's face; the first biometric information may be information that includes at least a partial facial feature of the user. ;

The second information may be video information containing an image of the user's limb, and the second biometric information may be information containing limb features of the user.

The third information may be audio information of the user, and the third biometric information is information containing voice features of the user. The audio information includes audio collected from the user's voice during or after playback of the emotion-guidance video.

When step 206 is executed, the measured emotion type of the user is determined, according to the multimodal emotion recognition algorithm, based on the video information containing at least a partial image of the user's face, the video information containing the image of the user's limb, and the audio information of the user.

In this embodiment, the first, second, and third information collection devices may be different components of one terminal device; for example, the first information collection device may be the front camera of the terminal device, the second information collection device may be the rear camera, and the third information collection device may be the microphone. It should be understood that the first, second, and third information collection devices may also be different terminal devices; for example, the first information collection device may be one camera device, the second information collection device may be another camera device, and the third information collection device may be a microphone device. Whether or not the three devices belong to the same terminal device, the first information collected by the first information collection device, the second information collected by the second information collection device, and the third information collected by the third information collection device can all be uploaded to the server, so that the server obtains the above information.

Optionally, the audio information may include audio of the user reading a displayed preset text.

In the embodiments of this specification, the audio information may be recorded while the user is watching the emotion-guidance video, or after the video has finished playing. Audio information can be collected as the user reads a preset text aloud. The preset text may be a random number: the user reads the random number aloud, and the information collection device records the user's audio. The preset text may also be a passage of text, which may be related to the content of the emotion-guidance video; preferably, it is unrelated to that content, since reading a text unrelated to the emotion-guidance video better reflects the user's genuine emotion.

Optionally, the user's limb may include a part of the user's finger, palm, or arm.

In the embodiments of this specification, the user's limb may be a part of the user's finger, palm, or arm. In practical applications, the information collection device may be the rear camera of the terminal device, and the user may place the limb at the rear camera to collect video information of the limb image. The user may cover the camera with the limb, press the limb against the camera, or hold it at a certain distance from the camera. In practice, the limb placement position can be adjusted according to actual needs so that images meeting the requirements can be captured.

In practical applications, when a terminal device such as a mobile phone is used to collect information, the front camera may collect the user's facial feature information, the rear camera may collect the user's limb image information, and the microphone may collect the user's audio information.

Optionally, cardiac function information of the user is determined based on the video information containing the image of the user's limb.

In this embodiment, the cardiac function information may be obtained by photoplethysmography (PPG). Based on a light source and a detector, PPG measures the attenuated light after reflection and absorption by human blood vessels and tissue, records the pulsatile state of the blood vessels, and measures the pulse wave.

While the user watches the emotion-guidance video, the user's emotion fluctuates with the content of the video, which in turn affects physiological characteristics such as heartbeat and pulse. When the user's limb covers the camera, the blood vessels distributed in the limb cause the captured limb image to appear red. An algorithm can extract the PPG signal produced while the user watches the emotion-guidance video, and from this signal physiological indicators such as the user's heart rate can be further extracted.
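As one hedged illustration of how heart rate might be recovered from such a limb video, the per-frame mean red-channel intensity can be treated as a raw PPG signal and its dominant periodicity estimated. The signal below is synthetic, and the crossing-counting estimator is a simplification of what the embodiment's "algorithm" might do; a real pipeline would band-pass filter the signal and typically use peak or spectral analysis.

```python
import math

def estimate_heart_rate(red_means, fps):
    """Estimate pulse rate (beats per minute) from the per-frame mean red
    intensity of a finger-over-camera video, by counting rising zero
    crossings of the mean-centered signal. Simplified illustration only."""
    mean = sum(red_means) / len(red_means)
    centered = [v - mean for v in red_means]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_s = len(red_means) / fps
    return crossings * 60.0 / duration_s

# Synthetic 10-second PPG-like signal at 30 fps with a 1.2 Hz pulse (72 bpm).
fps = 30
signal = [100 + 5 * math.sin(2 * math.pi * 1.2 * t / fps + 0.5)
          for t in range(fps * 10)]
bpm = estimate_heart_rate(signal, fps)  # close to 72 for this clean signal
```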

To improve the accuracy of living body detection, in the embodiments of this specification the first information is video information containing at least a partial image of the user's face, and the method further includes:

Sending an instruction to activate a fill-light device, so that the fill-light device is activated to illuminate the captured face of the user.

The fill-light device may be a part of the terminal device; for example, it may be the flash of the terminal device. The fill-light device may also be a separate lighting device that provides light to illuminate the user's face. The lighting device may be communicatively connected to the terminal device and turn on its light source upon receiving an instruction from the terminal device; alternatively, the lighting device may be turned on manually when the terminal device receives the activation instruction.

In practical applications, before sending the instruction to activate the fill-light device, the method of this embodiment further includes:

Judging whether the brightness of the face image in the first detection image information meets a preset brightness condition; if not, sending a first instruction to activate the fill-light device, so that the fill-light device illuminates the user's face.

The first detection image information may be image information containing an image of the user's face collected before the emotion-guidance video is played. It can be used to judge whether the collected image meets a preset condition, so that during playback of the emotion-guidance video, facial video information usable by the multimodal emotion recognition algorithm for emotion type recognition can be collected. The preset brightness condition can be set according to actual needs and is not limited here. When the brightness of the image collected by the terminal device does not meet the preset brightness condition, the first instruction to activate the fill-light device may be sent, so that the fill-light device starts illuminating the user's face.
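A minimal sketch of this brightness pre-check, under the assumption that the face region is available as a grayscale pixel array and that the preset brightness condition is a simple mean-intensity threshold; both are illustrative choices, not mandated by the embodiment:

```python
# Hypothetical preset brightness condition: mean face-region intensity
# (0-255 grayscale) must reach this threshold, otherwise the terminal
# should send the first instruction to activate the fill-light device.
MIN_MEAN_BRIGHTNESS = 80  # tunable per deployment; illustrative value

def needs_fill_light(face_pixels, threshold=MIN_MEAN_BRIGHTNESS):
    """Return True when the mean brightness of the detected face region
    falls below the preset condition (fill light should be activated)."""
    total = sum(sum(row) for row in face_pixels)
    count = sum(len(row) for row in face_pixels)
    return total / count < threshold

dim_face    = [[40, 50], [60, 55]]      # mean 51.25 -> fill light needed
bright_face = [[120, 130], [140, 150]]  # mean 135   -> no fill light
```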

To facilitate the user's operation, the terminal device may issue prompts in the form of text, voice, or the like, so that the user turns on the fill-light device after receiving the prompt information.

To improve the accuracy of living body recognition, in the embodiments of this specification the second information may be video information containing an image of the user's limb, and the method may further include:

Sending an instruction to activate a lighting device, so as to increase the brightness of the user's limb.

The fill-light device and the lighting device may be the same light source; for example, a lamp arranged at the top of the terminal may simultaneously provide light for the user's face and the user's limb. Of course, the fill-light device and the lighting device may also be different light sources.

It should be noted that while the emotion-guidance video is playing, the lighting device may be kept constantly on, so as to capture video of the color changes of the skin of the user's limb. In practical applications, before step 204 is executed, the method of this embodiment further includes:

Acquiring second detection image information collected by the second information collection device, where the second detection image information is image information collected by the second information collection device before the second information is collected;

Judging whether the brightness of the second detection image information meets a preset brightness condition; if not, sending a second instruction to activate the lighting device, so that the lighting device illuminates the limb and increases its brightness.

The second detection image information may be image information containing an image of the user's limb collected before the emotion-guidance video is played. It can be used to judge whether the brightness of the video information containing the limb image meets a preset condition, so that during playback of the emotion-guidance video, limb video information usable by the multimodal recognition algorithm can be collected. The preset brightness condition can be set according to actual needs and is not limited here. When the brightness of the image collected by the terminal device does not meet the preset brightness condition, the second instruction to activate the lighting device may be sent, so that the lighting device starts illuminating the user's limb and improves the translucency of the limb.

Optionally, the first information may be video information containing at least a partial image of the user's face, and before the acquiring of the first information collected by the first information collection device, the method may include:

Sending an instruction to start the first information collection device;

Acquiring first detection image information collected by the first information collection device, where the first detection image information may be image information collected by the first information collection device before the first information is collected;

Based on the first detection image information, judging whether the first detection image information contains user facial features that meet a preset condition;

If the first detection image information does not contain user facial features that meet the preset condition, sending prompt information prompting the user to adjust the face position.

Whether the user is within a preset area is judged by whether the first detection image information contains user facial features meeting the preset condition. The preset condition is one under which the multimodal emotion recognition algorithm can determine the user's measured emotion type, for example, whether the user's eyes and eyebrows appear within the screen. The preset-condition facial features may also require that the proportion of the user's face area to the captured image area reaches a preset ratio.

When the first detection image information contains user facial features meeting the preset condition, the video playing component or device may start playing the emotion-guidance video. When it does not, prompt information may be sent to prompt the user to adjust the face position; for example, the prompt may read "Please move back so that your face is within the screen." It should be understood that other prompts may also be used in practical applications.
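The face-position pre-check described above can be sketched as follows, assuming a face detector has already produced a bounding box. The 20% minimum area ratio and the box format are hypothetical stand-ins for the embodiment's "preset condition":

```python
def face_in_position(face_box, frame_w, frame_h, min_ratio=0.2):
    """Hypothetical preset condition: the detected face box must lie fully
    inside the frame and occupy at least `min_ratio` of the frame area;
    otherwise the terminal should prompt the user to adjust the face."""
    x, y, w, h = face_box
    inside = x >= 0 and y >= 0 and x + w <= frame_w and y + h <= frame_h
    ratio = (w * h) / (frame_w * frame_h)
    return inside and ratio >= min_ratio

# A small, off-center face fails the check, so the prompt would be sent.
ok = face_in_position((100, 80, 200, 260), 640, 480)  # ratio ~0.17 < 0.2
if not ok:
    prompt = "Please move back so that your face is within the screen."
```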

Optionally, the second information may be video information containing an image of the user's limb, and before the acquiring of the second information collected by the second information collection device, the method includes:

Sending an instruction to start the second information collection device;

Acquiring second detection image information collected by the second information collection device, where the second detection image information may be image information collected by the second information collection device before the second information is collected;

Based on the second detection image information, judging whether the second detection image information contains user limb features that meet a preset condition;

If the second detection image information does not contain user limb features that meet the preset condition, sending prompt information prompting the user to adjust the limb position.

Whether the user is within a preset area is judged by whether the second detection image information contains user limb features meeting the preset condition. The preset condition may be that the proportion of the user's limb area to the screen area reaches a preset ratio; for example, the second detection image information may contain image-proportion information, and the proportion may be 80%. When the second detection image information contains user limb features meeting the preset condition, the video playing component or device may start playing the emotion-guidance video. When it does not, prompt information is sent to prompt the user to adjust the limb position; for example, the prompt may read "Please move your limb so that it fully covers the camera." It should be understood that other prompts may also be used in practical applications.
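The 80% coverage check above can be sketched as follows, under the assumption that a skin-segmentation step has already produced a binary limb mask for the frame (the mask representation and the segmentation itself are hypothetical; the embodiment only specifies the ratio condition):

```python
def limb_coverage_ratio(mask):
    """Fraction of frame pixels classified as limb/skin in a binary mask
    (list of rows of 0/1). Hypothetical helper for the preset condition."""
    covered = sum(sum(row) for row in mask)
    total = sum(len(row) for row in mask)
    return covered / total

def limb_in_position(mask, preset_ratio=0.8):
    """Preset condition from the embodiment: the limb must occupy at least
    the preset proportion (e.g. 80%) of the frame, otherwise a prompt to
    adjust the limb position is sent."""
    return limb_coverage_ratio(mask) >= preset_ratio

mask = [[1, 1, 1, 0]] * 4      # 75% coverage, below the 80% preset ratio
ok = limb_in_position(mask)    # False -> prompt the user to adjust the limb
```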

To explain the living body detection method provided in the embodiments of this specification more clearly, FIG. 3 is a swimlane diagram of a living body detection method provided in an embodiment of this specification. Here, the first, second, and third information collection devices are described as different components of one terminal device by way of example; the terminal device is a mobile phone having at least a front camera, a rear camera, a microphone, and a flash. As shown in FIG. 3, the method may include an information collection stage, an information processing stage, and a result display stage, and may specifically include:

Step 301: The terminal acquires a living body detection request.

In practical applications, face recognition may be required when a user handles a business transaction, and living body detection is required during the face recognition process. The request may be sent when the user handles the transaction; for example, during face recognition, the user taps "Start recognition".

Step 303: The terminal plays the emotion-guidance video.

In practical applications, the emotion-guidance video may be sent by the server to the terminal for playback, or may be pre-stored by the terminal in its storage device.

Step 305: The terminal collects the first information, the second information, and the third information.

The first information may be video information containing at least a partial image of the user's face, the second information may be video information containing an image of the user's limb, and the third information may be audio information of the user.

Step 307: The server acquires the first information, the second information, and the third information.

Step 309: The server determines the measured emotion type of the user based on the first information, the second information, and the third information according to a multimodal emotion recognition algorithm.

Step 311: The server judges whether the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video.

Step 313: If the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, the server determines that the user is a living object.

Step 315: The terminal device displays the recognition result.

In practical applications, the terminal may receive prompt information about the judgment result sent by the server, for example, prompts such as "Recognition successful" or "Comparison successful".
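The server-side stages of the swimlane (steps 307 through 313, with the prompt of step 315) can be condensed into a few lines. The recognizer below is a stub standing in for the trained multimodal model, and the result strings are examples; only the match-then-decide logic follows the embodiment:

```python
def recognize_emotion(first_info, second_info, third_info):
    # Stub for the multimodal emotion recognition algorithm of step 309;
    # a real system would run the trained model on the three streams.
    return "happy"

def detect_liveness(first_info, second_info, third_info, set_emotion):
    """Steps 309-313: the user is judged a living object only when the
    measured emotion type matches the set emotion type of the video."""
    measured = recognize_emotion(first_info, second_info, third_info)
    is_live = (measured == set_emotion)
    return "Recognition successful" if is_live else "Recognition failed"

# Uploaded face video, limb video, and audio stand-ins (step 307).
result = detect_liveness(b"face", b"limb", b"audio", "happy")
```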

Based on the same idea, the embodiments of this specification also provide an apparatus corresponding to the above method. FIG. 4 is a schematic structural diagram of a living body detection apparatus provided in an embodiment of this specification. As shown in FIG. 4, the apparatus may include:

a first acquisition module 402, configured to acquire first information collected by a first information collection device, where the first information is information related to the playing process of an emotion-guidance video and contains a first biometric feature of the user;

a second acquisition module 404, configured to acquire second information collected by a second information collection device, where the second information is information related to the playing process of the emotion-guidance video and contains a second biometric feature of the user;

a first determination module 406, configured to determine the measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm;

a judgment module 408, configured to judge whether the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video; and

a second determination module 410, configured to determine that the user is a living object if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video.

Based on the same idea, the embodiments of this specification also provide a device corresponding to the above method.

FIG. 5 is a schematic structural diagram of a living body detection device provided in an embodiment of this specification. As shown in FIG. 5, a device 500 may include:

at least one processor 510; and

a memory 530 communicatively connected to the at least one processor, where

the memory 530 stores instructions 520 executable by the at least one processor 510, and the instructions are executed by the at least one processor 510 to enable the at least one processor 510 to:

acquire first information collected by a first information collection device, where the first information is information related to the playing process of an emotion-guidance video and contains a first biometric feature of the user;

acquire second information collected by a second information collection device, where the second information is information related to the playing process of the emotion-guidance video and contains a second biometric feature of the user;

determine the measured emotion type of the user based on the first information and the second information according to a multimodal emotion recognition algorithm;

judge whether the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video; and

if the measured emotion type is consistent with the set emotion type corresponding to the emotion-guidance video, determine that the user is a living object.

Based on the same idea, the embodiments of this specification also provide a computer-readable medium corresponding to the above method. Computer-readable instructions are stored on the computer-readable medium, and the computer-readable instructions can be executed by a processor to implement the above living body detection method.

The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device shown in FIG. 5 is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the relevant parts of the method embodiment.

In the 1990s, an improvement to a technology could clearly be distinguished as either a hardware improvement (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logical functions are determined by the user's programming of the device. A designer programs a digital system "integrated" onto a single PLD, without asking a chip manufacturer to design and fabricate a dedicated integrated-circuit chip. Moreover, instead of fabricating integrated-circuit chips by hand, this programming is nowadays mostly carried out with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a specific programming language, known as a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art will also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.

The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for realizing various functions can also be regarded as structures within the hardware component. Indeed, the means for realizing various functions can be regarded both as software modules implementing a method and as structures within a hardware component.

The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

For convenience of description, the above apparatus is described with its functions divided into various units. Of course, when the present application is implemented, the functions of the units may be realized in one or more pieces of software and/or hardware.

Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent storage in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.

It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

In this document, relational terms such as first, second, third, and fourth are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations.

Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.

The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application are possible for those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (16)

Translated from Chinese

1. A liveness detection method, comprising:
acquiring first information collected by a first information collection device, wherein the first information is information related to the playing process of an emotion guidance video and contains a first biometric feature of a user;
acquiring second information collected by a second information collection device, wherein the second information is information related to the playing process of the emotion guidance video and contains a second biometric feature of the user;
determining a measured emotion type of the user according to a multimodal emotion recognition algorithm, based on the first information and the second information;
judging whether the measured emotion type is consistent with a set emotion type corresponding to the emotion guidance video; and
if the measured emotion type is consistent with the set emotion type corresponding to the emotion guidance video, determining that the user is a living object.

2. The method according to claim 1, wherein the first information is video information containing at least a partial image of the user's face, and the first biometric information is information containing at least partial facial features of the user; and the second information is video information containing an image of the user's limb, and the second biometric information is information containing limb features of the user.

3. The method according to claim 1, wherein the first information is video information containing at least a partial image of the user's face, and the first biometric information is information containing at least partial facial features of the user; and the second information is audio information of the user, the second biometric information is information containing voice features of the user, and the audio information includes audio collected from the user's voice during or after the playing of the emotion guidance video.

4. The method according to claim 1, wherein the first information is video information containing an image of the user's limb, and the first biometric information is information containing limb features of the user; and the second information is audio information, the second biometric information is information containing voice features of the user, and the audio information includes audio collected from the user's voice during or after the playing of the emotion guidance video.

5. The method according to claim 1, further comprising:
acquiring third information collected by a third information collection device, wherein the third information is information related to the playing process of the emotion guidance video and contains a third biometric feature of the user;
wherein determining the measured emotion type of the user according to the multimodal emotion recognition algorithm, based on the first information and the second information, specifically comprises:
determining the measured emotion type of the user according to the multimodal emotion recognition algorithm, based on the first information, the second information, and the third information.

6. The method according to claim 5, wherein the first information is video information containing at least a partial image of the user's face, and the first biometric information is information containing at least partial facial features of the user; the second information is video information containing an image of the user's limb, and the second biometric information is information containing limb features of the user; and the third information is audio information of the user, the third biometric information is information containing voice features of the user, and the audio information includes audio collected from the user's voice during or after the playing of the emotion guidance video.

7. The method according to claim 3, claim 4, or claim 6, wherein the audio information includes the user's audio for a displayed preset text.

8. The method according to claim 2, claim 4, or claim 6, wherein the user's limb includes a part of the user's finger, palm, or arm.

9. The method according to claim 2, claim 4, or claim 6, comprising:
determining cardiac function information of the user based on the video information containing the image of the user's limb.

10. The method according to claim 1, wherein the first information is video information containing at least a partial image of the user's face, the method further comprising:
sending an instruction to activate a fill-light device, so that the fill-light device supplements light on the captured face of the user.

11. The method according to claim 1, wherein the second information is video information containing an image of the user's limb, the method further comprising:
sending an instruction to activate a lighting device so as to increase the brightness of the user's limb.

12. The method according to claim 1, wherein the first information is video information containing at least a partial image of the user's face, and before acquiring the first information collected by the first information collection device, the method comprises:
sending an instruction to start the first information collection device;
acquiring first detection image information collected by the first information collection device, the first detection image information being image information collected by the first information collection device before the first information is collected;
judging, based on the first detection image information, whether the first detection image information contains user facial features meeting a preset condition; and
if the first detection image information does not contain user facial features meeting the preset condition, sending prompt information prompting the user to adjust the position of the face.

13. The method according to claim 1, wherein the second information is video information containing an image of the user's limb, and before acquiring the second information collected by the second information collection device, the method comprises:
sending an instruction to start the second information collection device;
acquiring second detection image information collected by the second information collection device, the second detection image information being image information collected by the second information collection device before the second information is collected;
judging, based on the second detection image information, whether the second detection image information contains user limb features meeting a preset condition; and
if the second detection image information does not contain user limb features meeting the preset condition, sending prompt information prompting the user to adjust the position of the limb.

14. A liveness detection apparatus, comprising:
a first acquisition module, configured to acquire first information collected by a first information collection device, wherein the first information is information related to the playing process of an emotion guidance video and contains a first biometric feature of a user;
a second acquisition module, configured to acquire second information collected by a second information collection device, wherein the second information is information related to the playing process of the emotion guidance video and contains a second biometric feature of the user;
a first determination module, configured to determine a measured emotion type of the user according to a multimodal emotion recognition algorithm, based on the first information and the second information;
a judgment module, configured to judge whether the measured emotion type is consistent with a set emotion type corresponding to the emotion guidance video; and
a second determination module, configured to determine that the user is a living object if the measured emotion type is consistent with the set emotion type corresponding to the emotion guidance video.

15. A liveness detection device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to:
acquire first information collected by a first information collection device, wherein the first information is information related to the playing process of an emotion guidance video and contains a first biometric feature of a user;
acquire second information collected by a second information collection device, wherein the second information is information related to the playing process of the emotion guidance video and contains a second biometric feature of the user;
determine a measured emotion type of the user according to a multimodal emotion recognition algorithm, based on the first information and the second information;
judge whether the measured emotion type is consistent with a set emotion type corresponding to the emotion guidance video; and
if the measured emotion type is consistent with the set emotion type corresponding to the emotion guidance video, determine that the user is a living object.

16. A computer-readable medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a processor to implement the liveness detection method according to any one of claims 1 to 13.
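The control flow recited in claim 1 is essentially capture, fuse, compare, decide. The Python sketch below illustrates that flow only; it is not the patented implementation. The capture callables, the `recognize_emotion` function, and the "happy" label are hypothetical placeholders standing in for the information collection devices and the multimodal emotion recognition algorithm named in the claims.

```python
# Illustrative sketch of the claim-1 control flow under stated assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class EmotionGuidanceVideo:
    path: str
    expected_emotion: str  # the "set emotion type" the video is designed to elicit


def detect_liveness(
    video: EmotionGuidanceVideo,
    capture_streams: Sequence[Callable[[], bytes]],
    recognize_emotion: Callable[[Sequence[bytes]], str],
) -> bool:
    """Return True if the user is judged to be a living object.

    Mirrors claim 1: collect first/second information during playback of the
    emotion guidance video, fuse the modalities with a multimodal emotion
    recognition algorithm, and compare the measured emotion type with the
    set emotion type corresponding to the video.
    """
    # Each stream stands in for one information collection device
    # (e.g. face video, limb video, or audio, per claims 2-6).
    modalities = [capture() for capture in capture_streams]

    # Multimodal fusion yields a single measured emotion type.
    measured_emotion = recognize_emotion(modalities)

    # Liveness holds only if the elicited emotion matches the expected one:
    # a replayed photo or recording cannot react to the guidance video.
    return measured_emotion == video.expected_emotion


# Toy usage with stub capture and recognition functions (hypothetical values):
video = EmotionGuidanceVideo(path="funny_clip.mp4", expected_emotion="happy")
streams = [lambda: b"face-frames", lambda: b"voice-samples"]
result = detect_liveness(video, streams, recognize_emotion=lambda m: "happy")
print(result)  # True: the measured emotion matches the set emotion type
```

The comparison at the end is the anti-spoofing step: because the expected emotion is chosen per session, a static photo or a prerecorded video of the user should fail the match.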
CN202210508920.9A · Priority date 2022-05-10 · Filing date 2022-05-10 · A living body detection method, device, equipment and medium · Pending · CN114821824A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202210508920.9A · 2022-05-10 · 2022-05-10 · A living body detection method, device, equipment and medium

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202210508920.9A · 2022-05-10 · 2022-05-10 · A living body detection method, device, equipment and medium

Publications (1)

Publication Number · Publication Date
CN114821824A · 2022-07-29

Family

Family ID: 82512435

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202210508920.9A · Pending · CN114821824A (en) · A living body detection method, device, equipment and medium

Country Status (1)

Country · Link
CN (1) · CN114821824A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN105989264A (en) * · 2015-02-02 · 2016-10-05 · 北京中科奥森数据科技有限公司 · Bioassay method and bioassay system for biological characteristics
CN107220591A (en) * · 2017-04-28 · 2017-09-29 · 哈尔滨工业大学深圳研究生院 · Multi-modal intelligent mood sensing system
CN110197108A (en) * · 2018-08-17 · 2019-09-03 · 平安科技(深圳)有限公司 · Identity authentication method, device, computer equipment and storage medium
CN112149610A (en) * · 2020-10-09 · 2020-12-29 · 支付宝(杭州)信息技术有限公司 · Method and system for identifying target object
US20220021742A1 (en) * · 2019-04-09 · 2022-01-20 · Huawei Technologies Co., Ltd. · Content push method and apparatus, and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘中扩: "5G+智慧金融 5G×AI技术驱动下的金融新生态" (5G + Smart Finance: A New Financial Ecology Driven by 5G × AI Technology), China Friendship Publishing Company, 31 January 2022, page 71 *

Similar Documents

Publication · Title
US10311289B2 (en) · Face recognition method and device and apparatus
KR102299764B1 (en) · Electronic device, server and method for outputting voice
US8847884B2 (en) · Electronic device and method for offering services according to user facial expressions
CN109446876A (en) · Sign language information processing method, device, electronic equipment and readable storage medium
Su et al. · LipLearner: Customizable silent speech interactions on mobile devices
WO2019161730A1 (en) · Living body detection method, apparatus and device
TW201220216A (en) · System and method for detecting human emotion and appeasing human emotion
WO2020215590A1 (en) · Intelligent shooting device and biometric recognition-based scene generation method thereof
TW202008115A (en) · Interaction method and device
CN109034827A (en) · Payment method, payment device, wearable device and storage medium
KR20180075875A (en) · Electronic device and method for delivering message thereof
CN115206306B (en) · Voice interaction method, device, equipment and system
CN108495031A (en) · Photographing method based on wearable device and wearable device
CN113450804B (en) · Speech visualization method, device, projection equipment and computer-readable storage medium
CN110908576A (en) · Vehicle system/vehicle application display method and device and electronic equipment
CN111259757A (en) · Image-based living body identification method, device and equipment
TW201826167A (en) · Method for face expression feedback and intelligent robot
US11256909B2 (en) · Electronic device and method for pushing information based on user emotion
CN114821824A (en) · A living body detection method, device, equipment and medium
CN111178151A (en) · Method and device for realizing human face micro-expression change recognition based on AI technology
KR20200144821A (en) · Smart mirror
CN115995114A (en) · ATM intelligent care method and device based on emotion recognition
JPWO2018179972A1 (en) · Information processing apparatus, information processing method and program
CN115543135A (en) · Display screen control method, device and equipment
US20230334641A1 (en) · Skin Care Auxiliary Method, Device, and Storage Medium

Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
