Technical Field
The present invention relates to the field of image processing, and in particular to a method and device for analyzing depression data based on image processing.
Background
With the development of computer technology, intelligent systems have been adopted in many fields; such systems can greatly reduce manual labor and improve work efficiency. In the medical field, depression-related data about a person are generally obtained by a doctor who directly observes the person's facial expressions and their duration and relies on clinical experience. This approach is relatively inefficient and strongly affected by subjective factors. How to use computer technology to capture such facial changes is therefore a major topic of current research.
Summary of the Invention
In view of this, an object of the embodiments of the present invention is to provide a depression data analysis method and device.
An embodiment of the present invention provides a depression-related data analysis method, which includes:
pre-training a first preset deep network model using a face database to obtain a preliminary network model;
training the obtained preliminary network model using a depression video database to obtain a first recognition model;
computing a depression optical-flow atlas from the depression video database;
training a second preset deep network model using the depression optical-flow atlas to obtain a second recognition model;
fusing the first recognition model with the second recognition model to obtain a regression model;
inputting a video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain depression-related data.
An embodiment of the present invention further provides a depression-related data analysis device, which includes:
a first training module, configured to pre-train a first preset deep network model using a face database to obtain a preliminary network model;
a second training module, configured to train the obtained preliminary network model using a depression video database to obtain a first recognition model;
a calculation module, configured to compute a depression optical-flow atlas from the depression video database;
a third training module, configured to train a second preset deep network model using the depression optical-flow atlas to obtain a second recognition model;
a fusion module, configured to fuse the first recognition model with the second recognition model to obtain a regression model;
a recognition module, configured to input a video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain depression-related data.
Compared with the prior art, the depression data analysis method and device of the embodiments of the present invention train preset deep network models on a face database and a depression database to obtain depression-related data corresponding to the facial features in a video to be recognized. This reduces the human effort otherwise required for a user to obtain depression-related data by observing changes in a person's face. In addition, performing recognition on video can greatly improve efficiency and reduce errors that may arise from subjective human judgment.
To make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and therefore should not be regarded as limiting the scope; those of ordinary skill in the art can obtain other related drawings from them without creative effort.
FIG. 1 is a schematic block diagram of an electronic terminal provided by a preferred embodiment of the present invention.
FIG. 2 is a flowchart of a depression-related data analysis method provided by a preferred embodiment of the present invention.
FIG. 3 is a detailed flowchart of step S103 of the depression-related data analysis method provided by a preferred embodiment of the present invention.
FIG. 4 is a detailed flowchart of step S106 of the depression-related data analysis method provided by a preferred embodiment of the present invention.
FIG. 5 is another detailed flowchart of step S106 of the depression-related data analysis method provided by a preferred embodiment of the present invention.
FIG. 6 is a schematic diagram of the functional modules of a depression-related data analysis device provided by a preferred embodiment of the present invention.
FIG. 7 is a detailed block diagram of the calculation module of the depression-related data analysis device provided by a preferred embodiment of the present invention.
FIG. 8 is a detailed block diagram of the recognition module of the depression-related data analysis device provided by a preferred embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the descriptions and should not be understood as indicating or implying relative importance.
FIG. 1 is a schematic block diagram of an electronic terminal 100. The electronic terminal 100 includes a depression-related data analysis device 110, a memory 111, a storage controller 112, a processor 113, a peripheral interface 114, an input/output unit 115, and a display unit 116. Those of ordinary skill in the art will understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the electronic terminal 100. For example, the electronic terminal 100 may include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1. The electronic terminal 100 described in this embodiment may be a computing device with image processing capability, such as a personal computer, an image processing server, or a mobile electronic device.
The memory 111, storage controller 112, processor 113, peripheral interface 114, input/output unit 115, and display unit 116 are electrically connected to one another, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to one another through one or more communication buses or signal lines. The depression-related data analysis device 110 includes at least one software functional module that can be stored in the memory 111 in the form of software or firmware, or embedded in the operating system (OS) of the electronic terminal 100. The processor 113 is configured to execute executable modules stored in the memory, such as the software functional modules or computer programs included in the depression-related data analysis device 110.
The memory 111 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like. The memory 111 is used to store a program, and the processor 113 executes the program after receiving an execution instruction. The method performed by the electronic terminal 100 as defined by the processes disclosed in any embodiment of the present invention may be applied to, or implemented by, the processor 113. In this embodiment, the memory 111 stores a MATLAB application, and the processor 113 may be used to execute the functional modules of the MATLAB application.
The processor 113 may be an integrated circuit chip with signal processing capability. The processor 113 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic block diagrams disclosed in the embodiments of the present invention may be implemented or executed by the processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the storage controller 112 may be implemented in a single chip. In other instances, they may each be implemented by an independent chip.
The input/output unit 115 is used to provide input data to the user. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 116 provides an interactive interface (for example, a user operation interface) between the electronic terminal 100 and the user, or is used to display image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations; that is, the touch display can sense touch operations produced simultaneously at one or more positions on the display and hand the sensed touch operations over to the processor for calculation and processing.
Depression is a complex and far-reaching mood disorder and a mental illness that can occur from childhood to old age; it is clinically manifested as low mood, reduced interest, and pessimism. According to World Health Organization (WHO) data, the incidence of depression in China is 3.02%, and there are already more than 40 million people with depression. In sharp contrast to the high incidence, the consultation rate is less than 10%, and more than 90% of patients with depression have not received professional treatment.
Clinically, the diagnosis of depression is usually made through a clinical interview between the doctor and the patient, or through interviews with the patient's family members or caregivers. Diagnosis by a clinician is a relatively subjective assessment; in the absence of objective measurement standards, misdiagnosis may occur.
With the rapid development of computer technology, face recognition has become a practical application of video face-image analysis in real environments. By analyzing video face images of patients with depression using machine learning methods, depression-related data can be obtained, and doctors only need to spend relatively little time and effort on supplementary diagnostic analysis on the basis of the depression-related data. Moreover, this auxiliary diagnosis method, which obtains depression-related data by analyzing video images, is more objective, stable, and convenient, and even allows self-administered diagnostic assessment at home.
Research has shown that visual, non-verbal information, including changes in facial expression, head movement patterns, and eye activity, can be used for the diagnosis of depression. Research results from a Carnegie Mellon University team show that the facial expressions of patients with depression have very distinctive characteristics; for example, their smiles last for a shorter time and they frown more often than people without depression.
In addition, deep learning has been widely applied to fields such as speech recognition, image recognition, and machine translation, and has achieved breakthroughs. Machine-based recognition related to depression already exists, but most existing studies use hand-crafted features. Because such features rely heavily on doctors' experience, their representational capacity is clearly insufficient, which limits the predictive performance of the recognition systems.
Please refer to FIG. 2, which is a flowchart of a depression-related data analysis method applied to the electronic terminal shown in FIG. 1, provided by a preferred embodiment of the present invention. The specific process shown in FIG. 2 is described in detail below.
Step S101: pre-train a first preset deep network model using a face database to obtain a preliminary network model.
In this embodiment, before the first preset deep network model is trained, face recognition may first be performed on the images in the face database, and the detected faces may be aligned. Specifically, the coordinates of five facial landmark points (the two mouth corners, the nose, and the two eyes) may first be located in each image of the face database, and the located landmarks may then be aligned. In one example, the images in the face database may be cropped so that only the face region is retained.
In one example, the face database may be the CASIA-WebFace database.
The first preset deep network model may be any one of AlexNet, AlexNet-GAP, VGG-A, VGG-A-GAP, GoogleNet, and ResNet.
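As an illustration only (the patent does not prescribe a library or implementation for the preprocessing above), the five-point alignment can be realized with a similarity transform that maps the detected landmarks onto a fixed template and crops the face region in the same step. The sketch below assumes the five landmark coordinates are supplied by some face/landmark detector, such as the Seetaface toolkit mentioned later; the template coordinates and crop size are arbitrary illustrative choices.

```python
import cv2
import numpy as np

# Fixed template positions (left eye, right eye, nose tip, left mouth corner,
# right mouth corner) in a 128x128 crop; these values are illustrative only.
TEMPLATE = np.float32([
    [38, 48], [90, 48], [64, 72], [44, 96], [84, 96]
])

def align_face(image, landmarks, size=128):
    """Warp `image` so that the five detected landmarks match the template.

    `landmarks` is a 5x2 array of (x, y) points in the same order as TEMPLATE.
    """
    src = np.float32(landmarks)
    # Estimate a similarity transform (rotation + scale + translation).
    matrix, _ = cv2.estimateAffinePartial2D(src, TEMPLATE, method=cv2.LMEDS)
    # Align and crop the face region in a single warp.
    return cv2.warpAffine(image, matrix, (size, size))
```

The aligned crops produced this way would then serve as the inputs for the pre-training in step S101.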
Step S102: train the obtained preliminary network model using a depression video database to obtain a first recognition model.
In this embodiment, the depression video database may be the AVEC2013 or AVEC2014 video database.
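A minimal sketch of steps S101 and S102 under the assumption that the preset deep networks are built with PyTorch/torchvision (the patent names candidate architectures but no framework): a ResNet-18 is first pre-trained as an identity classifier on the face database, and its classification head is then replaced with a single-output regression layer and fine-tuned on labeled frames from the depression video database. The data loaders, label formats, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

def pretrain_on_faces(num_identities, face_loader, epochs=10, device="cuda"):
    """Step S101: pre-train a preset deep network (here ResNet-18) on a face database."""
    net = models.resnet18()  # no pre-loaded weights
    net.fc = nn.Linear(net.fc.in_features, num_identities)  # identity classes
    net.to(device)
    opt = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, identity_labels in face_loader:
            opt.zero_grad()
            loss = loss_fn(net(images.to(device)), identity_labels.to(device))
            loss.backward()
            opt.step()
    return net  # preliminary network model

def finetune_on_depression(net, frame_loader, epochs=10, device="cuda"):
    """Step S102: fine-tune the preliminary model as a per-frame depression-score regressor."""
    net.fc = nn.Linear(net.fc.in_features, 1)  # one depression score per frame
    net.to(device)
    opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for frames, scores in frame_loader:
            opt.zero_grad()
            pred = net(frames.to(device)).squeeze(1)
            loss = loss_fn(pred, scores.to(device).float())
            loss.backward()
            opt.step()
    return net  # first recognition model
```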
Step S103: compute a depression optical-flow atlas from the depression video database.
In this embodiment, the concept of optical flow, first proposed by James J. Gibson in the 1940s, refers to the apparent velocity of pattern motion in a time-varying image: when an object moves, the brightness pattern of its corresponding points in the image moves as well.
The optical flow field is a two-dimensional instantaneous velocity field formed by all the pixels in an image, in which each two-dimensional velocity vector is the projection onto the imaging surface of the three-dimensional velocity vector of a visible point on the object in the video to be recognized. The optical flow therefore contains not only the motion information of the object in the video to be recognized, but also information about its three-dimensional structure.
In this embodiment, as shown in FIG. 3, step S103 may include step S1031 and step S1032.
Step S1031: acquire each frame of image in the depression video database.
Step S1032: for each frame, compute a depression optical-flow map consisting of the horizontal component, the vertical component, and the magnitude, and form the depression optical-flow atlas from the optical-flow maps computed for all frames.
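One possible realization of step S1032, assuming OpenCV's dense Farnebäck optical flow is used (the patent does not specify the optical-flow algorithm): for each pair of consecutive frames, the horizontal component, vertical component, and magnitude of the flow are stacked into a three-channel map and collected into the atlas.

```python
import cv2
import numpy as np

def depression_flow_atlas(frames):
    """Compute a [u, v, magnitude] optical-flow map for each consecutive frame pair.

    `frames` is a list of grayscale images taken from one depression video.
    """
    atlas = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        u, v = flow[..., 0], flow[..., 1]           # horizontal / vertical components
        magnitude = np.sqrt(u ** 2 + v ** 2)        # flow amplitude
        atlas.append(np.stack([u, v, magnitude], axis=-1))
    return atlas
```

Because each flow map has three channels, it can be fed to the second preset deep network in step S104 like an ordinary image, without architectural changes.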
Step S104: train a second preset deep network model using the depression optical-flow atlas to obtain a second recognition model.
The second preset deep network model may be any one of AlexNet, AlexNet-GAP, VGG-A, VGG-A-GAP, GoogleNet, and ResNet.
In this embodiment, the second preset deep network model may be the same network model as the first preset deep network model, or a different one.
Step S105: fuse the first recognition model with the second recognition model to obtain a regression model.
In this embodiment, the first recognition model and the second recognition model are fused to form a single training model. For example, if the first recognition model has an A-layer structure and the second recognition model has a B-layer structure, the regression model formed by fusing them has an A+B-layer structure.
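The patent only states that the layers of the two models are combined into one regression model; one common way to realize this, sketched below under the assumption that both streams are the PyTorch models from the earlier sketches, is late fusion: each stream is kept as a feature extractor, the appearance and optical-flow features are concatenated, and a small regression head maps the fused features to a single depression score.

```python
import torch
import torch.nn as nn

class FusedRegressionModel(nn.Module):
    """Regression model obtained by fusing the appearance and optical-flow streams."""

    def __init__(self, appearance_model, flow_model, feat_dim=512):
        super().__init__()
        # Drop each stream's final fc layer and keep it as a feature extractor
        # (feat_dim is 512 for ResNet-18).
        self.appearance = nn.Sequential(*list(appearance_model.children())[:-1])
        self.flow = nn.Sequential(*list(flow_model.children())[:-1])
        self.regressor = nn.Linear(2 * feat_dim, 1)  # fused features -> one score

    def forward(self, frame, flow_map):
        a = self.appearance(frame).flatten(1)   # appearance features
        f = self.flow(flow_map).flatten(1)      # motion (optical-flow) features
        return self.regressor(torch.cat([a, f], dim=1)).squeeze(1)
```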
Step S106: input the video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain the depression-related data.
In this embodiment, the video to be recognized is input into the regression model frame by frame, with each frame represented by its corresponding numerical matrix. The output for the video to be recognized is the depression score of each frame. By combining the depression scores of all the frames, the depression condition of the user in the video to be recognized can be obtained through analysis.
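The per-frame scoring described above can be sketched as the following inference loop, assuming OpenCV for video decoding and the fused PyTorch model from the previous sketch. The name `preprocess_frame` is a hypothetical placeholder for the face detection, alignment, and optical-flow preparation described in steps S1061 and S1032.

```python
import cv2
import torch

def score_video(video_path, model, preprocess_frame, device="cuda"):
    """Run the fused regression model on every frame of the video to be recognized.

    `preprocess_frame(prev, curr)` is a hypothetical callable that performs face
    detection, alignment, and optical-flow preparation, returning a batched
    (frame_tensor, flow_tensor) pair for the current frame.
    """
    scores = []
    capture = cv2.VideoCapture(video_path)
    model.eval().to(device)
    prev = None
    with torch.no_grad():
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if prev is not None:
                frame_t, flow_t = preprocess_frame(prev, frame)
                score = model(frame_t.to(device), flow_t.to(device))
                scores.append(float(score))
            prev = frame
    capture.release()
    return scores  # one depression score per frame
```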
In this embodiment, as shown in FIG. 4, step S106 may include step S1061 and step S1062.
Step S1061: perform face preprocessing on the video to be recognized, so that face detection is performed on each frame of the video and the detected faces are aligned.
In this embodiment, each frame of the video to be recognized is first acquired and subjected to face recognition. In one example, Seetaface may be used for face recognition. The recognition results of the frames are then aligned, so that the face regions of the frames are aligned with one another. Further, before each frame of the video to be recognized is input into the regression model, the image may be cropped: the non-face portions are cut away and only the face portion is input into the regression model, so that the images fed to the regression model contain less interfering content.
Step S1062: input the preprocessed video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain the depression-related data.
In this embodiment, as shown in FIG. 5, step S106 may include steps S1063 to S1065.
Step S1063: acquire each frame of image in the video to be recognized.
Step S1064: input each frame of the video to be recognized into the regression model to obtain the depression score corresponding to each frame.
In this embodiment, the score may lie within the score range of a pre-stored assessment scale. For example, the assessment scale may be the Beck Depression Inventory, and the score may be a value within the score range of the Beck Depression Inventory test. As another example, the assessment scale may be the Hamilton Depression Scale, and the score may be a value within the score range of the Hamilton Depression Scale test.
The assessment scale may be the Beck Depression Inventory or the Hamilton Depression Scale. The Beck Depression Inventory (BDI) has 21 items, each representing a "symptom-attitude type" used to indicate the depression condition. The description of each item is divided into four grades arranged by the severity of the symptom it reflects, from none to very severe, with scores of 0 to 3. The Hamilton Depression Scale (HAMD), developed by Hamilton, is the scale most widely used in clinical assessment of depressive states and has three versions with 17, 21, and 24 items respectively.
Step S1065: compute a calculation result from the depression score of each frame according to a preset calculation rule, and match the calculation result with preset depression categories to obtain the depression-related data.
Specifically, the step of computing a calculation result from the depression score of each frame according to a preset calculation rule and matching the calculation result with preset depression categories to obtain the depression-related data includes: computing the average of the depression scores of all frames; computing the mean absolute error or root mean square error between the average and the depression score of each frame; and matching the mean absolute error or root mean square error with the preset depression categories to obtain the depression-related data.
The calculation process of step S1065 is described by the following formulas:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \bar{y}\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}$$

where MAE (Mean Absolute Error) denotes the mean absolute error; RMSE (Root Mean Square Error) denotes the root mean square error; y_i denotes the depression score of the i-th frame; ȳ denotes the average of the depression scores of all frames; and N denotes the number of frames in the video to be recognized.
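A plain NumPy sketch of the aggregation in step S1065 over the per-frame scores: the mean, MAE, and RMSE are computed as in the formulas above, and the result is matched against preset depression categories. The category boundaries shown are hypothetical placeholders, since the patent does not specify the preset depression classification.

```python
import numpy as np

def aggregate_scores(frame_scores):
    """Step S1065: average, MAE, and RMSE of the per-frame depression scores."""
    scores = np.asarray(frame_scores, dtype=float)
    mean_score = scores.mean()
    mae = np.abs(scores - mean_score).mean()              # mean absolute error vs. the average
    rmse = np.sqrt(((scores - mean_score) ** 2).mean())   # root mean square error vs. the average
    return mean_score, mae, rmse

# Hypothetical category boundaries; the patent does not define them.
CATEGORY_BANDS = [(1.0, "category A"), (3.0, "category B"), (float("inf"), "category C")]

def match_category(value, bands=CATEGORY_BANDS):
    """Match a computed value (e.g. the MAE or RMSE) against preset depression categories."""
    for upper, label in bands:
        if value < upper:
            return label
```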
The depression data analysis method of this embodiment is tested below on several sets of example data. For the above two databases, the deep network models AlexNet, AlexNet-GAP, VGG-A, VGG-A-GAP, GoogleNet, and ResNet are each used for testing; the statistical errors are shown in Table 1 and Table 2:
Table 1. Test results on AVEC2013
Table 2. Test results on AVEC2014
It can be seen from the two tables above that, among the different models, ResNet has the smallest mean absolute error and root mean square error. However, the mean absolute errors obtained with the different models differ little from one another, as do the root mean square errors. Therefore, those skilled in the art may select the first preset network model and the second preset network model as required.
With the depression data analysis method of this embodiment of the present invention, preset deep network models are trained on a face database and a depression database to obtain depression-related data corresponding to the facial features in a video to be recognized, which reduces the human effort otherwise required for a user to obtain depression-related data by observing changes in a person's face. In addition, performing recognition on video can greatly improve efficiency and reduce errors that may arise from subjective human judgment.
Please refer to FIG. 6, which is a schematic diagram of the functional modules of the depression-related data analysis device shown in FIG. 1, provided by a preferred embodiment of the present invention. The modules and units of the depression-related data analysis device in this embodiment are used to perform the steps of the method embodiment above. The depression-related data analysis device 110 includes a first training module 1101, a second training module 1102, a calculation module 1103, a third training module 1104, a fusion module 1105, and a recognition module 1106.
The first training module 1101 is configured to pre-train a first preset deep network model using a face database to obtain a preliminary network model.
The second training module 1102 is configured to train the obtained preliminary network model using a depression video database to obtain a first recognition model.
The calculation module 1103 is configured to compute a depression optical-flow atlas from the depression video database.
The third training module 1104 is configured to train a second preset deep network model using the depression optical-flow atlas to obtain a second recognition model.
The fusion module 1105 is configured to fuse the first recognition model with the second recognition model to obtain a regression model.
The recognition module 1106 is configured to input a video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain depression-related data.
In this embodiment, as shown in FIG. 7, the calculation module 1103 includes a first acquisition unit 11031 and an image calculation unit 11032.
The first acquisition unit 11031 is configured to acquire each frame of image in the depression video database.
The image calculation unit 11032 is configured to compute, for each frame, a depression optical-flow map consisting of the horizontal component, the vertical component, and the magnitude, and to form the depression optical-flow atlas from the optical-flow maps computed for all frames.
In this embodiment, as shown in FIG. 8, the recognition module 1106 includes a preprocessing unit 11061 and a video recognition unit 11062.
The preprocessing unit 11061 is configured to perform face preprocessing on the video to be recognized, so that face detection is performed on each frame of the video and the detected faces are aligned.
The video recognition unit 11062 is configured to input the preprocessed video to be recognized into the regression model to recognize the faces in the video to be recognized and obtain the depression-related data.
In this embodiment, referring again to FIG. 8, the recognition module 1106 may further include a second acquisition unit 11063, an image training unit 11064, and an assignment calculation unit 11065.
The second acquisition unit 11063 is configured to acquire each frame of image in the video to be recognized.
The image training unit 11064 is configured to input each frame of the video to be recognized into the regression model to obtain the depression score corresponding to each frame.
The assignment calculation unit 11065 is configured to compute a calculation result from the depression score of each frame according to a preset calculation rule, and to match the calculation result with preset depression categories to obtain the depression-related data.
In this embodiment, the assignment calculation unit 11065 includes a first calculation subunit and a second calculation subunit.
The first calculation subunit is configured to compute the average of the depression scores of the frames in the video to be recognized.
The second calculation subunit is configured to compute the mean absolute error or root mean square error between the average and each assigned score, and to match the mean absolute error or root mean square error with the preset depression categories to obtain the depression-related data.
For further details of this embodiment, reference may be made to the description of the method embodiment above, which is not repeated here.
With the depression data analysis device of this embodiment of the present invention, preset deep network models are trained on a face database and a depression database to obtain depression-related data corresponding to the facial features in a video to be recognized, which reduces the human effort otherwise required for a user to obtain depression-related data by observing changes in a person's face. In addition, performing recognition on video can greatly improve efficiency and reduce errors that may arise from subjective human judgment.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of the devices, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in a block may occur in an order different from that noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further defined or explained in subsequent figures.
The above is only a specific implementation of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.