
TECHNICAL FIELD
The present invention relates to the technical field of image processing, and in particular to a method and system for processing eye movement video data based on image processing technology.
BACKGROUND
An eye movement heat map visualizes how a person's visual attention shifts while viewing text, images, and other information. By presenting these attention changes in an intuitive visual form, eye movement features can directly and accurately reflect human information processing and thinking.
However, for the continuous video data currently collected by eye trackers, such as eye movement data from web page browsing, browsing behaviors such as rapid scrolling, page turning, and backtracking make it difficult for common eye movement analysis tools (such as Data Viewer, the analysis software for EyeLink eye trackers) to accurately map eye movement data at different time points onto the key pages. Meanwhile, processing such eye movement video data manually is both costly and inaccurate.
SUMMARY OF THE INVENTION
Embodiments of the present invention provide a method and system for processing eye movement video data based on image processing technology, so as to solve the problem in the prior art that methods for obtaining heat maps from video eye movement data have low accuracy.
In one aspect, an embodiment of the present invention provides a method for processing eye movement video data based on image processing technology, including:
acquiring eye movement video data;
extracting eye movement images from the eye movement video data;
clustering the eye movement images;
stitching the clustered eye movement images to obtain a complete image;
performing text line localization on the complete image to obtain global text line position information;
extracting eye movement data from an eye movement data file to form an eye movement data panorama;
intersecting the text line position information with the eye movement data panorama to obtain an eye movement data heat map.
In another aspect, an embodiment of the present invention provides a system for processing eye movement video data based on image processing technology, including:
a data acquisition module, configured to acquire eye movement video data;
an image extraction module, configured to extract eye movement images from the eye movement video data;
an image clustering module, configured to cluster the eye movement images;
an image stitching module, configured to stitch the clustered eye movement images to obtain a complete image;
a text line localization module, configured to perform text line localization on the complete image to obtain global text line position information;
a data extraction module, configured to extract eye movement data from an eye movement data file to form an eye movement data panorama;
a heat map generation module, configured to intersect the text line position information with the eye movement data panorama to obtain an eye movement data heat map.
The method and system for processing eye movement video data based on image processing technology of the present invention have the following advantage:
by classifying and stitching the dynamic eye movement video data to obtain a complete image, and intersecting the complete image with the panorama extracted from the eye movement data, the resulting eye movement data heat map is highly accurate.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for processing eye movement video data based on image processing technology according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of a method for processing eye movement video data based on image processing technology according to an embodiment of the present invention. The method includes:
S100: acquire eye movement video data.
S110: extract eye movement images from the eye movement video data.
Exemplarily, an eye movement video data file currently contains 5000 to 7000 frames of eye movement images. The eye movement images may be extracted frame by frame, or at a set interval.
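The frame-sampling choice above (frame by frame versus a fixed interval) can be sketched as a simple index selector; the interval value used below is illustrative, not one fixed by the patent:

```python
def extract_frame_indices(num_frames, step):
    """Return the frame indices to extract; step=1 means frame by frame,
    step>1 means extraction at a set interval."""
    return list(range(0, num_frames, step))

# A ~6000-frame file sampled every 10th frame yields 600 images to cluster.
indices = extract_frame_indices(6000, 10)
print(len(indices))  # 600
```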
S120: cluster the eye movement images.
Exemplarily, the purpose of clustering is to group the eye movement images; specifically, frame positions and image feature information may be used for the clustering. For example, S120 in this embodiment of the present invention specifically includes: extracting feature information from the eye movement images using the HOG (Histogram of Oriented Gradients) feature extraction algorithm; and, based on the feature information, clustering the eye movement images using the KMeans unsupervised learning algorithm.
The main steps of the HOG feature extraction algorithm are as follows:
(1) Color space normalization, including grayscale conversion and Gamma correction. A color image is converted to a grayscale image according to:
Gray = 0.3*R + 0.59*G + 0.11*B
When the image illumination is uneven, Gamma correction can be applied according to:
Y(x, y) = I(x, y)^γ
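The two formulas above can be sketched directly in NumPy; the gamma value below is an illustrative choice, since the patent does not fix one:

```python
import numpy as np

def normalize_color_space(rgb, gamma=0.5):
    """HOG step (1): grayscale conversion followed by Gamma correction.
    `gamma` is a hypothetical default, not a value from the source."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.3 * r + 0.59 * g + 0.11 * b      # Gray = 0.3*R + 0.59*G + 0.11*B
    return np.power(gray / 255.0, gamma)      # Y(x, y) = I(x, y)^gamma

img = np.full((2, 2, 3), 255.0)               # a pure-white RGB patch
out = normalize_color_space(img)
print(out)  # all 1.0: white maps to gray 255, and (255/255)^gamma = 1.0
```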
(2) Gradient computation: for the color-space-normalized image, compute the gradient magnitude and direction. The gradients can be calculated as:
Gx(x, y) = I(x+1, y) - I(x-1, y)
Gy(x, y) = I(x, y+1) - I(x, y-1)
(3) Compute the histogram of oriented gradients per cell.
(4) Normalize the histograms over overlapping blocks.
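Steps (2) through (4) can be sketched as follows; the cell size and bin count are illustrative defaults (the patent does not specify them), and a production system would more likely use a library routine such as scikit-image's `hog`:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal sketch of HOG steps (2)-(4) on a grayscale image."""
    # Step (2): centered differences Gx(x,y)=I(x+1,y)-I(x-1,y), Gy(x,y)=I(x,y+1)-I(x,y-1)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180            # unsigned orientation in [0, 180)

    # Step (3): magnitude-weighted orientation histogram for each cell
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = (a / (180 / bins)).astype(int) % bins
            np.add.at(hist[i, j], idx, m)                 # accumulate magnitudes per bin

    # Step (4): L2-normalize over overlapping 2x2 blocks of cells
    blocks = []
    for i in range(hist.shape[0] - 1):
        for j in range(hist.shape[1] - 1):
            v = hist[i:i+2, j:j+2].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)

feat = hog_features(np.random.default_rng(0).random((32, 32)))
print(feat.shape)  # (324,): 3x3 overlapping blocks * 4 cells * 9 bins
```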
The KMeans unsupervised learning algorithm proceeds as follows:
(1) Select K points from the samples as initial centroids.
(2) Compute the distance from each sample to every centroid, and assign each sample to the cluster of its nearest centroid.
(3) Compute the mean of all samples in each cluster, and use that mean to update the cluster's centroid.
(4) Repeat steps (2) and (3) until one of the following conditions is met: the change in centroid positions is smaller than a specified threshold, or the maximum number of iterations is reached.
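The four steps above can be sketched in a few lines of NumPy; the tolerance and iteration cap are illustrative defaults, not values from the patent:

```python
import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-4, seed=0):
    """KMeans following steps (1)-(4) above."""
    rng = np.random.default_rng(seed)
    # (1) pick K samples as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # (2) assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # (3) move each centroid to the mean of its cluster (keep it if the cluster is empty)
        new = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else centroids[c] for c in range(k)])
        # (4) stop when the centroid movement falls below the threshold
        if np.linalg.norm(new - centroids) < tol:
            break
        centroids = new
    return labels, centroids

# Two well-separated groups of eye-movement feature vectors (toy 2-D stand-ins).
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels, cents = kmeans(X, k=2)
print(labels)  # first five samples share one label, last five share the other
```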
During grouping, for the n-th eye movement image read, it is first necessary to search for an image group that matches that image; if no such group exists, a new image group is created, and if one exists, the eye movement image is added directly to it. Specifically, the SSIM matching algorithm may be used to determine whether a matching image group exists. This algorithm computes image similarity by combining parameters such as luminance, contrast, structure, and histogram; the newly read eye movement image is considered to match an image group only if its similarity with any eye movement image in that group reaches or exceeds a set threshold.
The SSIM matching algorithm proceeds as follows:
For two input images x and y, the SSIM similarity can be expressed as:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ
where l(x, y), c(x, y), and s(x, y) are the luminance, contrast, and structure comparison terms, respectively, and the exponents α, β, and γ weight their relative contributions.
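With α = β = γ = 1, the product of the three comparison terms collapses to the familiar closed form, which can be sketched as a global (whole-image) similarity; real use would typically compute SSIM over local windows, and the constants below are the usual choices for 8-bit images ((0.01·255)² and (0.03·255)²), not values fixed by the patent:

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM sketch with alpha = beta = gamma = 1."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # luminance * contrast * structure, in closed form
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

a = np.random.default_rng(1).random((16, 16)) * 255
print(round(ssim(a, a), 6))    # identical images score 1.0
print(ssim(a, 255 - a) < 1.0)  # an inverted image scores well below 1.0
```

A frame would then be assigned to the first group containing an image whose SSIM with it meets the threshold, and a new group created otherwise.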
S130: stitch the clustered eye movement images to obtain a complete image.
Exemplarily, S130 specifically includes: registering the eye movement images to obtain the offsets of ROI (Region of Interest) areas; and stitching the eye movement images according to the ROI offsets to obtain a complete image.
In embodiments of the present invention, either of two methods may be used to register the eye movement images: one is to register them using multiple ROI areas to obtain the ROI offsets; the other is to register them using the KCF (Kernelized Correlation Filter) tracking algorithm together with the SIFT feature matching algorithm to obtain the ROI offsets.
Since the eye movement images were grouped in S120, only eye movement images in the same image group, i.e., of the same category, are stitched together.
In embodiments of the present invention, after the complete image is obtained, the current frame position and the corresponding offset are also recorded. Correspondingly, extracting the eye movement data from the eye movement data file to form the eye movement data panorama includes: extracting the eye movement data at the relevant positions from the eye movement data file according to the frame positions and offsets, to form the eye movement data panorama.
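Offset-based stitching for a vertically scrolling page can be sketched as below; representing each registration result as a cumulative vertical offset per frame is an assumption for illustration, not a representation the patent fixes:

```python
import numpy as np

def stitch_vertical(frames, offsets):
    """Paste each frame into a canvas at its registered vertical offset.
    Later frames overwrite overlapping rows; the (frame, offset) pairs are
    also what S150 uses to shift gaze samples into panorama coordinates."""
    h, w = frames[0].shape
    canvas = np.zeros((max(offsets) + h, w))
    for frame, off in zip(frames, offsets):
        canvas[off:off + h, :] = frame
    return canvas

# Two 4-row frames of a page that scrolled down by 2 rows between captures.
frames = [np.full((4, 3), 1.0), np.full((4, 3), 2.0)]
pano = stitch_vertical(frames, offsets=[0, 2])
print(pano.shape)  # (6, 3): 4 + 4 rows with a 2-row overlap
```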
S140: perform text line localization on the complete image to obtain global text line position information.
Exemplarily, OCR (Optical Character Recognition) technology may be used to locate the text lines in the complete image. Specifically, text line localization may be performed with deep learning, or with the traditional approach of binarization followed by connected-component search and merging. If deep learning is used, the classic CTPN (Connectionist Text Proposal Network) may be adopted.
The CTPN algorithm proceeds as follows:
(1) Input the image into a VGG16 model to extract features.
(2) Feed the features obtained in the previous step into a bidirectional LSTM (Long Short-Term Memory) network, which outputs a W×256 result; this result is then fed into a 512-dimensional fully connected layer.
(3) The final output, obtained through classification or regression, consists of three parts: 2k vertical coordinates, giving the height and center y-coordinate of each proposal box; 2k scores, giving the class information of the k anchors (i.e., whether each contains text); and k side-refinement values, giving the horizontal offset of each proposal box.
S150: extract the eye movement data from the eye movement data file to form an eye movement data panorama.
Exemplarily, the eye movement data in the eye movement data file mainly includes fixation data, saccade data, pupil size data, and scan path data. The fixation data includes the number of fixation points, total fixation duration, time to first fixation, fixation position, fixation sequence, and gaze duration; the saccade data includes the number of saccades and saccade amplitude; the pupil size data includes the average normalized right pupil diameter, average normalized left pupil diameter, average right pupil dilation velocity, and average left pupil dilation velocity; and the scan path data includes scan duration and scan path length.
S160: intersect the text line position information with the eye movement data panorama to obtain an eye movement data heat map.
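The intersection in S160 can be sketched as masking a gaze-density map by the text line boxes; the point format (x, y) and box format (x0, y0, x1, y1) are assumptions for illustration:

```python
import numpy as np

def heatmap_from_intersection(shape, gaze_points, text_lines):
    """Keep only the gaze density that falls inside text line regions."""
    density = np.zeros(shape)
    for x, y in gaze_points:
        density[y, x] += 1              # accumulate fixation counts in panorama coordinates
    mask = np.zeros(shape, dtype=bool)
    for x0, y0, x1, y1 in text_lines:
        mask[y0:y1, x0:x1] = True       # rasterize the text line boxes from S140
    return density * mask               # the intersection of the two

hm = heatmap_from_intersection((10, 10), [(2, 2), (8, 8)], [(0, 0, 5, 5)])
print(hm[2, 2], hm[8, 8])  # 1.0 inside a text line, 0.0 outside
```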
An embodiment of the present invention further provides a system for processing eye movement video data based on image processing technology, the system including:
a data acquisition module, configured to acquire eye movement video data;
an image extraction module, configured to extract eye movement images from the eye movement video data;
an image clustering module, configured to cluster the eye movement images;
an image stitching module, configured to stitch the clustered eye movement images to obtain a complete image;
a text line localization module, configured to perform text line localization on the complete image to obtain global text line position information;
a data extraction module, configured to extract eye movement data from an eye movement data file to form an eye movement data panorama;
a heat map generation module, configured to intersect the text line position information with the eye movement data panorama to obtain an eye movement data heat map.
Although preferred embodiments of the present invention have been described, a person skilled in the art may make additional changes and modifications to these embodiments once the basic inventive concept is known. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Apparently, a person skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to cover them.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210871715.9A | 2022-07-22 | 2022-07-22 | An eye movement video data processing method and system based on image processing technology |
| Publication Number | Publication Date |
|---|---|
| CN115100575A | 2022-09-23 |