CN116563378A - A visual positioning method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116563378A
Authority
CN
China
Prior art keywords
frame image, current frame, information, coordinate system, offline map
Prior art date
Legal status
Pending
Application number
CN202310612462.8A
Other languages
Chinese (zh)
Inventor
褚冠宜
吴琅
谢卫健
王楠
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Publication of CN116563378A

Abstract

The present application discloses a visual positioning method and device, an electronic device, and a storage medium. The visual positioning method includes: determining, based on acquired feature information of a current frame image, matching feature information of the current frame image in an offline map; determining transformation parameters between the current coordinate system and the coordinate system of the offline map according to the feature information of the current frame image and its corresponding matching feature information, together with the feature information of historical frame images preceding the current frame image and their corresponding matching feature information; and determining, based on the transformation parameters, whether to output detected position information of the current frame image in the offline map, where the detected position information of the current frame image is determined from the position information, in the offline map, of the matching feature information of the current frame image. The visual positioning method yields a more accurate and stable positioning result and improves positioning accuracy.

Description

Translated from Chinese

A visual positioning method, device, electronic equipment and storage medium

Technical Field

The present application relates to the technical field of artificial intelligence, and in particular to a visual positioning method and device, an electronic device, and a computer-readable storage medium.

Background

Visual localization is an important problem in computer vision and robotics. It has important applications in many fields, such as augmented reality (AR), virtual reality (VR), and robot path planning.

Because visual relocalization often uses only the detection result of a single frame image for positioning, the accuracy of the positioning result may fluctuate considerably, and a single erroneous or imprecise positioning result can easily drive the system into an abnormal state. Moreover, during visual positioning only simple criteria (such as the number of data points fitting the model, or the distribution of the data) are used to judge the positioning result, which leads to inaccurate results.

Summary of the Invention

The present application provides at least a visual positioning method and device, an electronic device, and a storage medium.

A first aspect of the present application provides a visual positioning method, which includes:

determining, based on acquired feature information of a current frame image, matching feature information of the feature information of the current frame image in an offline map;

determining transformation parameters between the current coordinate system and the coordinate system of the offline map according to the position information of the feature information of the current frame image in the current coordinate system and the position information of its corresponding matching feature information in the offline map, as well as the position information of the feature information of historical frame images preceding the current frame image in the current coordinate system and the position information of their corresponding matching feature information in the offline map; and

determining, based on the transformation parameters, whether to output detected position information of the current frame image in the offline map, wherein the detected position information of the current frame image is determined from the position information of the matching feature information of the feature information of the current frame image in the offline map.

Therefore, in this embodiment, the correspondence between the current coordinate system and the coordinate system of the offline map is determined from the position information of the feature information of the current frame image and of each historical frame image in the current coordinate system, together with the position information of their corresponding matching feature information in the coordinate system of the offline map. Because this correspondence is determined from the feature information of multiple frames, the accuracy of the robot's visual positioning is improved.

In some embodiments, the visual positioning method further includes:

in response to the current frame image being the first frame image, determining the transformation parameters between the current coordinate system and the coordinate system of the offline map according to the position information of the feature information of the current frame image in the current coordinate system and the position information of its corresponding matching feature information in the coordinate system of the offline map.

Therefore, in this embodiment, when the current frame image is the first frame image, the correspondence between the current coordinate system and the coordinate system of the offline map is determined from the position information of the feature information of the current frame image in the current coordinate system and the position information of the corresponding matching feature information in the coordinate system of the offline map.

In some embodiments, the visual positioning method further includes:

converting, based on the transformation parameters, the position information of the matching feature information in the coordinate system of the offline map to obtain predicted position information of the matching feature information in the current coordinate system; and

adjusting the transformation parameters based on the error between the predicted position information of the matching feature information in the current coordinate system and the position information of the corresponding feature information in the current coordinate system.

Therefore, in this embodiment, the transformation parameters are optimized using the error between the position information of the feature information of the current frame image and the historical frame images in the current coordinate system and their predicted position information in the current coordinate system obtained from the transformation parameters, which improves the precision of the transformation parameters.

In some embodiments, the feature information includes feature point information, and

determining, based on the acquired feature information of the current frame image, the matching feature information of the feature information of the current frame image in the offline map includes:

performing feature extraction on the acquired current frame image to obtain feature point information of the current frame image;

comparing the feature point information of the current frame image with preset key point information on the offline map; and

in response to the similarity between the feature point information of the current frame image and the preset key point information on the offline map exceeding a similarity threshold, determining the preset key point information corresponding to that similarity as the matching feature information of the feature point information of the current frame image on the offline map.

Therefore, in this embodiment, the matching feature information corresponding to the feature information of the current frame image is determined from the similarity between the feature point information and the preset key point information, which improves the positioning accuracy for the current frame image.

In some embodiments, after the matching feature information of the feature information of the current frame image in the offline map is determined based on the acquired feature information of the current frame image, the method further includes:

forming feature point pairs from the preset key point information and feature point information whose similarity exceeds the similarity threshold; and

screening the feature point pairs corresponding to the current frame image based on a noise reduction algorithm.

Therefore, in this embodiment, the feature point pairs are screened by a noise reduction algorithm, which improves the accuracy of the feature point pairs corresponding to the current frame image and in turn the positioning accuracy for the current frame image.

In some embodiments, the visual positioning method further includes:

in response to the number of feature point pairs of the current frame image exceeding a preset number, determining to archive the current frame image in a historical trajectory image set.

Therefore, in this embodiment, whether the current frame image can be archived in the historical trajectory image set is determined according to the number of feature point pairs in the current frame image, which can improve the positioning accuracy of the robot.

In some embodiments, determining, based on the transformation parameters, whether to output the detected position information of the current frame image in the offline map includes:

determining whether to output the detected position information of the current frame image in the offline map according to the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters.

In some embodiments, determining whether to output the detected position information of the current frame image in the offline map according to the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters includes:

determining a covariance matrix of the feature information from the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters; and

determining, based on an estimated value of the covariance matrix, whether to output the detected position information of the current frame image in the offline map.

In some embodiments, determining the covariance matrix of the feature information from the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters includes:

converting, based on the transformation parameters, the position information of the current frame image and the historical frame images in the current coordinate system into position information in the coordinate system of the offline map;

determining key frame distribution information according to the position information of the current frame image and the historical frame images in the coordinate system of the offline map;

determining feature point distribution information according to the spatial distribution, in the coordinate system of the offline map, of the matching feature information corresponding to the feature point information of the current frame image and of the historical frame images; and

determining the covariance matrix of the feature information based on the key frame distribution information corresponding to the current frame image and the historical frame images and the feature point distribution information corresponding to the feature point information of the current frame image and of the historical frame images.

In some embodiments, determining, based on the estimated value of the covariance matrix, whether to output the detected position information of the current frame image in the offline map includes:

calculating the covariance matrix of the feature information to obtain the estimated value of the covariance matrix; and

in response to the estimated value of the covariance matrix being smaller than a preset estimated value, outputting the detected position information of the current frame image in the offline map.

Therefore, in this embodiment, the covariance matrix of the feature information is determined from the current frame image, the historical frame images, and their respective corresponding feature information, the quality of the positioning results for the current frame image and the historical frame images is evaluated according to the covariance matrix, and the reliability of the positioning results is thereby improved.

A second aspect of the present application provides a visual positioning device, which includes:

a feature matching module configured to determine, based on acquired feature information of a current frame image, matching feature information of the feature information of the current frame image in an offline map;

a processing module configured to determine transformation parameters between the current coordinate system and the coordinate system of the offline map according to the position information of the feature information of the current frame image in the current coordinate system and the position information of its corresponding matching feature information in the offline map, as well as the position information of the feature information of historical frame images preceding the current frame image in the current coordinate system and the position information of their corresponding matching feature information in the offline map; and

an analysis module configured to determine, based on the transformation parameters, whether to output detected position information of the current frame image in the offline map, wherein the detected position information of the current frame image is determined from the position information of the matching feature information of the feature information of the current frame image in the offline map.

Therefore, in this embodiment, the correspondence between the current coordinate system and the coordinate system of the offline map is determined from the position information of the feature information of the current frame image and of each historical frame image in the current coordinate system, together with the position information of their corresponding matching feature information in the coordinate system of the offline map. Because this correspondence is determined from the feature information of multiple frames, the accuracy of the robot's visual positioning is improved.

A third aspect of the present application provides an electronic device, including a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the visual positioning method of the first aspect.

A fourth aspect of the present application provides a computer-readable storage medium on which program instructions are stored, wherein the program instructions, when executed by a processor, implement the visual positioning method of the first aspect.

It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.

Brief Description of the Drawings

The accompanying drawings are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present application and, together with the description, serve to explain the technical solutions of the present application.

FIG. 1 is a schematic flowchart of the visual positioning method provided by the present application;

FIG. 2 is a schematic flowchart of an embodiment of the visual positioning method provided by the present application;

FIG. 3 is a schematic diagram of a specific embodiment of the visual positioning method provided by the present application;

FIG. 4 is a schematic structural diagram of an embodiment of the visual positioning device provided by the present application;

FIG. 5 is a schematic structural diagram of an embodiment of the electronic device of the present application;

FIG. 6 is a schematic structural diagram of an embodiment of the computer-readable storage medium of the present application.

Detailed Description

The solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.

In the following description, specific details such as particular system structures, interfaces, and techniques are set forth for purposes of illustration rather than limitation, in order to provide a thorough understanding of the present application.

The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" herein generally indicates an "or" relationship between the objects it connects. "Multiple" herein means two or more. The term "at least one" herein means any one of multiple items, or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.

Please refer to FIG. 1, which is a schematic flowchart of the visual positioning method provided by the present application. This embodiment provides a visual positioning method whose execution subject may be an image processing device, and the image processing device may be any terminal device, server, or other processing device capable of executing the method embodiments of the present application, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The visual positioning method provided in this embodiment is applicable to robots, unmanned aerial vehicles, and the like. Specifically, the visual positioning method may include the following steps:

Step S11: based on the acquired feature information of the current frame image, determine the matching feature information of the feature information of the current frame image in the offline map.

In an embodiment, the current frame image is acquired by an image acquisition device, and feature points are extracted from the current frame image to obtain the feature point information of the current frame image. In this embodiment, when performing visual positioning of a robot, the current frame image is acquired by the robot, and the position of the current frame image is then determined.

In an embodiment, the feature information includes feature point information. Feature extraction is performed on the acquired current frame image to obtain the feature point information of the current frame image; the feature point information of the current frame image is compared with the preset key point information on the offline map; and in response to the similarity between the feature point information of the current frame image and the preset key point information on the offline map exceeding the similarity threshold, the preset key point information corresponding to that similarity is determined as the matching feature information of the feature point information of the current frame image on the offline map. The preset key point information and feature point information whose similarity exceeds the similarity threshold form feature point pairs, and the feature point pairs corresponding to the current frame image are screened based on a noise reduction algorithm; for example, the noise reduction algorithm may be the RANSAC (RANdom SAmple Consensus) algorithm.
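The matching and screening steps above can be sketched as follows. This is only a minimal illustration, not the patent's implementation: descriptors are compared by cosine similarity against a threshold, and the resulting point pairs are screened by a toy RANSAC that, purely for simplicity, assumes a translation-only motion model (a real system would fit a full pose, e.g. with a PnP or homography solver). All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def match_features(desc_a, desc_b, sim_threshold=0.9):
    """Match descriptors by cosine similarity; keep pairs above the threshold."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                            # pairwise cosine similarities
    pairs = []
    for i, row in enumerate(sim):
        j = int(np.argmax(row))              # best map key point for query point i
        if row[j] > sim_threshold:
            pairs.append((i, j))
    return pairs

def ransac_translation(pts_a, pts_b, iters=200, tol=1.0, seed=0):
    """Toy RANSAC over point pairs: fit a 2-D translation, return inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask, best_t = None, None
    for _ in range(iters):
        k = rng.integers(len(pts_a))         # minimal sample: a single pair
        t = pts_b[k] - pts_a[k]              # candidate translation hypothesis
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        mask = err < tol                     # pairs consistent with the hypothesis
        if best_mask is None or mask.sum() > best_mask.sum():
            # refit the translation on all inliers of the best hypothesis
            best_mask = mask
            best_t = pts_b[mask].mean(axis=0) - pts_a[mask].mean(axis=0)
    return best_mask, best_t
```

Pairs rejected by the mask are the "noise" the embodiment's screening step is meant to remove before the transformation parameters are estimated.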

Step S12: determine the transformation parameters between the current coordinate system and the coordinate system of the offline map according to the position information of the feature information of the current frame image in the current coordinate system and the position information of its corresponding matching feature information in the offline map, as well as the position information of the feature information of the historical frame images preceding the current frame image in the current coordinate system and the position information of their corresponding matching feature information in the offline map.

In an embodiment, based on the transformation parameters, the position information of the matching feature information in the coordinate system of the offline map is converted to obtain the predicted position information of the matching feature information in the current coordinate system, and the transformation parameters are adjusted based on the error between the predicted position information of the matching feature information in the current coordinate system and the position information of the corresponding feature information in the current coordinate system.
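Estimating transformation parameters that minimize the error between predicted and observed positions can be illustrated with a closed-form least-squares fit. The sketch below is an assumption for illustration, using a 2-D rigid transform solved by the Kabsch/SVD method rather than whatever parameterization the patent actually uses; it directly minimizes the summed squared prediction error over all matched points.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares 2-D rigid transform (R, t) such that dst ≈ src @ R.T + t.

    Kabsch/SVD solution: the closed-form minimizer of the summed squared
    error between the transformed source points and the observed targets.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def mean_prediction_error(R, t, src, dst):
    """Average distance between predicted and observed point positions."""
    pred = src @ R.T + t
    return float(np.linalg.norm(pred - dst, axis=1).mean())
```

In a running system, the matched points of the historical frames would be stacked into `src`/`dst` together with those of the current frame, so the estimate is constrained by multiple frames rather than a single one, as the embodiment describes.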

Step S13: based on the transformation parameters, determine whether to output the detected position information of the current frame image in the offline map, where the detected position information of the current frame image is determined from the position information of the matching feature information of the feature information of the current frame image in the offline map.

In an embodiment, whether to output the detected position information of the current frame image in the offline map is determined according to the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters.

Specifically, the covariance matrix of the feature information is determined from the current frame image, the historical frame images, their respective corresponding feature information, and the transformation parameters, and whether to output the detected position information of the current frame image in the offline map is determined based on the estimated value of the covariance matrix.

In a specific embodiment, based on the transformation parameters, the position information of the current frame image and the historical frame images in the current coordinate system is converted into position information in the coordinate system of the offline map; key frame distribution information is determined according to the position information of the current frame image and the historical frame images in the coordinate system of the offline map; feature point distribution information is determined according to the spatial distribution, in the coordinate system of the offline map, of the matching feature information corresponding to the feature point information of the current frame image and of the historical frame images; and the covariance matrix of the feature information is determined based on the key frame distribution information corresponding to the current frame image and the historical frame images and the feature point distribution information corresponding to the feature point information of the current frame image and of the historical frame images. The covariance matrix of the feature information is calculated to obtain its estimated value, and in response to the estimated value of the covariance matrix being smaller than the preset estimated value, the detected position information of the current frame image in the offline map is output.
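The patent does not give a formula for the covariance estimate, so the sketch below is a hypothetical stand-in that shows only the gating logic: a scalar uncertainty is derived from how well the keyframes and feature points are spread out in the map frame (near-degenerate layouts, e.g. almost collinear points, constrain the pose poorly), and the detected position is output only when that estimate falls below a preset threshold, mirroring the comparison against the preset estimated value. Both function names and the uncertainty formula are illustrative assumptions.

```python
import numpy as np

def pose_uncertainty_estimate(keyframe_xy, point_xy):
    """Scalar uncertainty proxy: poorly spread keyframes/points -> larger value.

    A wide spatial distribution constrains the pose well, so we take the
    inverse of the smallest eigenvalue of each 2x2 positional covariance
    matrix; nearly collinear layouts have a tiny smallest eigenvalue and
    therefore a large uncertainty.
    """
    def spread_term(xy):
        cov = np.cov(xy.T)                       # 2x2 positional covariance
        lam_min = np.linalg.eigvalsh(cov)[0]     # smallest eigenvalue
        return 1.0 / (lam_min + 1e-9)
    return spread_term(keyframe_xy) + spread_term(point_xy)

def should_output_pose(keyframe_xy, point_xy, threshold):
    """Output the detected position only when the estimate is below threshold."""
    return pose_uncertainty_estimate(keyframe_xy, point_xy) < threshold
```

The design point mirrored here is that the gate looks at the joint distribution of keyframes and feature points over multiple frames, not at a single frame's match count.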

In another embodiment, in response to the current frame image being the first frame image, the transformation parameters between the current coordinate system and the coordinate system of the offline map are determined according to the position information of the feature information of the current frame image in the current coordinate system and the position information of the corresponding matching feature information in the coordinate system of the offline map.

In the visual positioning method provided in the embodiments of the present disclosure, the correspondence between the current coordinate system and the coordinate system of the offline map is determined from the position information of the feature information of the current frame image and of each historical frame image in the current coordinate system, together with the position information of their corresponding matching feature information in the coordinate system of the offline map. Because this correspondence is determined from the feature information of multiple frames, the accuracy of the robot's visual positioning is improved.

Please refer to FIG. 2 and FIG. 3. FIG. 2 is a schematic flowchart of an embodiment of the visual positioning method provided by the present application; FIG. 3 is a schematic diagram of a specific embodiment of that method. This embodiment provides a visual positioning method whose execution subject may be an image processing apparatus: any terminal device, server, or other processing device capable of executing the method embodiments of the present application. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. Specifically, the visual positioning method may include the following steps:

Step S201: Perform feature extraction on the acquired current frame image to obtain feature information of the current frame image.

In one embodiment, the current frame image is acquired by an image capture device, and feature points are extracted from it to obtain the feature point information of the current frame image. In this embodiment, when the robot is visually positioned, the current frame image is captured by a camera mounted on the robot, and the position of the current frame image is then detected. In other embodiments, a feature map may instead be extracted from the current frame image to obtain its feature information. In a specific embodiment, the feature points of the current frame image are obtained by ORB (Oriented FAST and Rotated BRIEF) feature point detection, where ORB uses the FAST (features from accelerated segment test) algorithm to detect the feature points.
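The FAST segment test mentioned above can be sketched in a few lines: a pixel is a corner if enough contiguous pixels on a radius-3 circle around it are all brighter or all darker than the center by a threshold. The sketch below is illustrative only; the threshold `t` and arc length `n` are hypothetical defaults, not values from the patent:

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """Segment test: (x, y) is a corner if at least n contiguous circle
    pixels are all brighter than center + t or all darker than center - t."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for flags in ([v > center + t for v in ring],
                  [v < center - t for v in ring]):
        run = 0
        for f in flags + flags:  # doubled list handles wrap-around of the ring
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

A production detector (for example OpenCV's ORB) adds non-maximum suppression and an orientation estimate on top of this basic test.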

Step S202: Compare the feature point information of the current frame image with the preset key point information on the offline map.

Specifically, to determine the corresponding position of the current frame image on the offline map, its feature points must be compared against all preset key points annotated on the offline map. The similarity between each feature point of the current frame image and the corresponding preset key points on the offline map is evaluated, and the preset key points that match the feature points of the current frame image are thereby determined.

Step S203: If the similarity between the feature point information of the current frame image and preset key point information on the offline map exceeds a similarity threshold, the preset key point information corresponding to that similarity is taken as the matching feature information of the current frame image's feature point information on the offline map.

Specifically, if the similarity between a feature point of the current frame image and a preset key point on the offline map exceeds the similarity threshold, the feature point is considered to match that preset key point, and the preset key point serves as the matching feature point of the current frame image's feature point on the offline map. The similarity threshold can be set according to the actual situation; for example, it may be 95% or 99%.

In one embodiment, the feature points of the current frame image are coarsely matched against the preset key points on the offline map to obtain candidate feature points. Specifically, the Hamming distance between the binarized gradient feature vectors of feature points in the current frame image and in the offline map is used as the similarity measure. If the Hamming distance between the feature vector of a feature point in the current frame image and that of a preset key point in the offline map is smaller than a preset distance, the feature point is considered to match that preset key point. In other optional embodiments, similarity may also be judged by the Euclidean distance, cosine similarity, or similar measures between the feature points of the current frame image and the preset key points in the offline map.
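As an illustration of the Hamming-distance criterion, the brute-force matcher below treats each binary descriptor as a packed integer; the descriptor widths and the `max_dist` cutoff are made up for the example:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as integers."""
    return bin(d1 ^ d2).count("1")

def coarse_match(frame_descs, map_descs, max_dist=40):
    """For each frame descriptor, keep the nearest map descriptor as a
    candidate pair if its Hamming distance is below max_dist."""
    pairs = []
    for i, d1 in enumerate(frame_descs):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(map_descs)),
                      key=lambda t: t[1])
        if dist < max_dist:
            pairs.append((i, j))
    return pairs
```

Real ORB descriptors are 256-bit, so libraries typically count bits over byte arrays, but the criterion is the same.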

Step S204: Combine the preset key point information and feature point information whose similarity exceeds the similarity threshold into feature point pairs.

Specifically, to improve the accuracy of robot positioning, each feature point of the current frame image is paired with the preset key point it matches on the offline map, yielding candidate feature point pairs between the current frame image and the offline map. Step S203 above performs the coarse matching between the feature points of the current frame image and the preset key points on the offline map that produces these candidates.

Step S205: Filter the feature point pairs of the current frame image with a noise-reduction algorithm.

Specifically, RANSAC is used to eliminate mismatched pairs among the candidate feature point pairs, leaving precisely matched pairs. In other optional embodiments, the feature point pairs of the current frame image may be filtered in other ways to obtain precisely matched pairs and thereby improve the positioning accuracy of the current frame image.
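The RANSAC step can be illustrated with a deliberately simplified model: assume the matched pairs are related by a pure 2-D translation (a real implementation estimates a pose or homography instead), sample one pair per iteration, and keep the largest consensus set. The iteration count and tolerance below are illustrative:

```python
import random

def ransac_filter(pairs, iters=200, tol=2.0, seed=0):
    """pairs: list of ((x, y), (u, v)) candidate matches. Returns the
    largest subset consistent with a single 2-D translation within tol."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x, y), (u, v) = rng.choice(pairs)   # minimal sample: one pair
        tx, ty = u - x, v - y                # hypothesized translation
        inliers = [((px, py), (qx, qy)) for (px, py), (qx, qy) in pairs
                   if abs(qx - px - tx) < tol and abs(qy - py - ty) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

A hypothesis drawn from a mismatched pair explains almost no other pairs, so mismatches are voted out of the consensus set.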

Step S206: If the number of feature point pairs of the current frame image exceeds a preset number, archive the current frame image into the historical trajectory image set.

Specifically, the number of precisely matched feature point pairs of the current frame image is counted and compared against the preset number. If it exceeds the preset number, the current frame image is archived into the historical trajectory image set.

Step S207: Determine whether the current frame image is the first frame image.

In one embodiment, the current coordinate system is established with any feature point in the first frame image of the robot's current run as the coordinate origin, and the coordinates of each feature point of the current frame image in this coordinate system are then determined. The current coordinate system may also be established with some other position as the origin, in which the coordinates of each feature point of the current frame image are determined.

The offline map has its own coordinate system, in which the coordinates of each preset key point can be determined.

Since the current coordinate system differs from the coordinate system of the offline map, the correspondence between the two must be determined. When the current frame image has historical frame images, the feature point information of the current frame image can be combined with that of the historical frame images to determine this correspondence, improving its accuracy.

If the current frame image is the first frame image, it has no historical frame images, and the method proceeds directly to step S208; if it is not the first frame image, it has historical frame images, and the method proceeds directly to step S209.

Step S208: Determine the transformation parameters between the current coordinate system and the offline map's coordinate system from the positions of the current frame's feature information in the current coordinate system and the positions of the corresponding matching feature information in the offline map's coordinate system.

Specifically, when the current frame image is the first frame image, the coordinates of each of its feature points in the camera coordinate system are denoted Pc, and the coordinates of each feature point in the offline coordinate system are denoted Pw1. From the coordinates Pc and Pw1 of the same feature point, the pose Tcw1 of the current frame image in the offline map is obtained. The pose of the current frame image in the coordinate system whose origin is the starting point of the current run is known to be Tcw2.

From the pose Tcw2 of the current frame image in the current coordinate system and its pose Tcw1 in the offline map's coordinate system, the transformation parameter dT between the coordinates of a feature point in the current coordinate system and the coordinates of its matching feature point in the offline map's coordinate system can be determined, where,
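One plausible realization of dT, shown here as an assumption rather than the patent's exact formula, composes the inverse of Tcw1 with Tcw2; under the convention that Pc = Tcw1 · Pw1 = Tcw2 · Pw2, this dT maps a point expressed in the current run's world frame into the offline map's frame:

```python
import numpy as np

def relative_transform(T_cw1, T_cw2):
    """One plausible dT (an assumption, not the patent's formula):
    with Pc = T_cw1 @ Pw1 = T_cw2 @ Pw2 it follows that
    Pw1 = inv(T_cw1) @ T_cw2 @ Pw2, so dT maps current-run world
    coordinates (Pw2) to offline-map coordinates (Pw1)."""
    return np.linalg.inv(T_cw1) @ T_cw2

def to_offline(dT, p_current):
    """Apply dT to a 3-D point expressed in the current coordinate system."""
    return (dT @ np.append(p_current, 1.0))[:3]
```

With 4x4 homogeneous camera-from-world poses, T_cw1 @ dT equals T_cw2, which is a quick sanity check on the composition order.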

Step S209: Determine the transformation parameters between the current coordinate system and the offline map's coordinate system from the positions of the current frame's feature information in the current coordinate system and of its corresponding matching feature information in the offline map, together with the positions of the feature information of the historical frame images before the current frame image in the current coordinate system and of their corresponding matching feature information in the offline map.

Specifically, when the current frame image is not the first frame image, the coordinates of each of its feature points in the camera coordinate system are denoted Pc, and the coordinates of each feature point in the offline coordinate system are denoted Pw1. From the coordinates Pc and Pw1 of the same feature point, the pose Tcw1 of the current frame image in the offline map is obtained. The pose of the current frame image in the coordinate system whose origin is the starting point of the current run is known to be Tcw2.

From the pose Tcw2 of the current frame image in the current coordinate system and its pose Tcw1 in the offline map's coordinate system, together with the poses Tcw2 in the current coordinate system and Tcw1 in the offline map's coordinate system of a preset number of historical frame images before the current frame image, the transformation parameter dT between the coordinates of a feature point in the current coordinate system and the coordinates of its matching feature point in the offline map's coordinate system can be determined, where,

In this embodiment, combining multiple frames to determine the transformation parameters between the current coordinate system and the offline map's coordinate system improves the accuracy of the transformation parameters, which in turn improves the accuracy of the positioning result.

In this embodiment, the current frame image and the historical frame images before it share a single set of transformation parameters.

Step S210: Adjust the transformation parameters based on the error between the predicted positions of the matching feature information in the current coordinate system and the positions of the corresponding feature information in the current coordinate system.

Specifically, to further improve the accuracy of the transformation parameters between the current coordinate system and the offline map's coordinate system, the coordinates of the matching feature information corresponding to each feature point in the offline map's coordinate system are converted through the transformation parameters into predicted coordinates in the current coordinate system. The transformation parameters are then optimized iteratively according to the differences between the actual and predicted coordinates of each feature point in the current coordinate system, until every difference is smaller than a preset value. The preset value can be set according to the actual situation; for example, it may be set to 3 pixels.
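The iterative adjustment can be sketched under a simplifying assumption: only a 2-D translational offset is refined (a full implementation would optimize the rigid transform, typically by nonlinear least squares). The 3-pixel tolerance mirrors the example in the text; everything else is illustrative:

```python
def refine_offset(observed, predicted, max_err=3.0, max_iters=50):
    """observed / predicted: lists of (x, y) pixel positions of the same
    feature points. Shift the prediction by the mean residual each round
    until every residual is below max_err; return the accumulated offset."""
    tx = ty = 0.0
    for _ in range(max_iters):
        resid = [(ox - (px + tx), oy - (py + ty))
                 for (ox, oy), (px, py) in zip(observed, predicted)]
        if all(abs(rx) < max_err and abs(ry) < max_err for rx, ry in resid):
            break  # all predictions agree with observations within tolerance
        tx += sum(rx for rx, _ in resid) / len(resid)
        ty += sum(ry for _, ry in resid) / len(resid)
    return tx, ty
```

For a pure translation this converges in one step; the loop structure is what carries over to the general pose-refinement case.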

Based on the transformation parameters, the coordinates of the current frame image in the current coordinate system are converted into coordinates in the offline map's coordinate system, and the positioning result of the current frame image is thereby determined. The positioning result is archived into the historical trajectory image set and associated with the current frame image.

Step S211: Determine the covariance matrix of the feature information from the current frame image, the historical frame images, their respective feature information, and the transformation parameters.

In a specific embodiment, based on the transformation parameters, the coordinates of the current frame image and the historical frame images in the current coordinate system are converted into coordinates in the offline map's coordinate system; from these coordinates, the distribution information of the current frame image and of all historical frame images before it is determined.

Feature-point distribution information is determined from the spatial distribution, in the offline map's coordinate system, of the matching feature information corresponding to the feature point information of the current frame image and of the historical frame images. The covariance matrix of the feature information is then determined from the frame distribution information of the current and historical frame images together with this feature-point distribution information.

Step S212: Based on the estimated value of the covariance matrix, determine whether to output the detected position information of the current frame image in the offline map.

Specifically, the covariance matrix of the feature information is evaluated to obtain its estimated value. In one embodiment, the modulus of the covariance matrix along its principal direction is used as the criterion for the uncertainty of the positioning result of the current frame image.

If the estimated value of the covariance matrix is smaller than the preset value, the detected position information of the current frame image in the offline map is output. That is, if the modulus of the covariance matrix along the principal direction does not exceed the set value, the positioning result of the current frame image is judged reliable and can be output. If the modulus along the principal direction exceeds the set value, the positioning result is judged unreliable and can be revised until it is judged reliable.
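A loose sketch of this gating criterion, with assumptions stated up front: the covariance is taken here from the spread of recent position estimates in the map frame (a stand-in for the patent's construction from key-frame and feature-point distributions), the principal-direction modulus is the square root of the largest eigenvalue, and the threshold value is made up:

```python
import numpy as np

def principal_modulus(positions):
    """Covariance of 3-D positions; the square root of its largest
    eigenvalue serves as the modulus along the principal direction."""
    cov = np.cov(np.asarray(positions, dtype=float).T)
    return float(np.sqrt(np.max(np.linalg.eigvalsh(cov))))

def accept_result(positions, threshold=1.0):
    """Output the localization result only while the principal-direction
    modulus stays below the preset threshold."""
    return principal_modulus(positions) < threshold
```

A tight cluster of estimates passes the gate; a widely scattered one is rejected as unreliable.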

In this embodiment, evaluating the positioning result with the estimated value of the covariance matrix filters out erroneous and insufficiently precise positioning results.

In the visual positioning method provided by the embodiments of the present disclosure, the correspondence between the current coordinate system and the offline map's coordinate system is determined from the positions, in the current coordinate system, of the feature information of the current frame image and of each historical frame image, together with the positions, in the offline map's coordinate system, of the corresponding matching feature information. Establishing this correspondence from the feature information of multiple frames makes the positioning result more accurate and stable and less sensitive to mismatches and mapping errors, improving the accuracy of the robot's visual positioning. With this method, a robot can relocate itself in the map of an environment it has already explored, and the different coordinate systems established over multiple runs can be unified into the offline map's coordinate system, supporting multi-user AR.

Those skilled in the art will understand that, in the methods of the specific embodiments above, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.

Please refer to FIG. 4, a schematic frame diagram of an embodiment of the visual positioning apparatus of the present application. The visual positioning apparatus 60 includes a feature matching module 61, a processing module 62, and an analysis module 63. The feature matching module 61 determines, based on the acquired feature information of the current frame image, the matching feature information of that feature information in the offline map. The processing module 62 determines the transformation parameters between the current coordinate system and the offline map's coordinate system from the positions of the current frame's feature information in the current coordinate system and of its matching feature information in the offline map, together with the positions of the feature information of the historical frame images before the current frame image in the current coordinate system and of their matching feature information in the offline map. The analysis module 63 determines, based on the transformation parameters, whether to output the detected position information of the current frame image in the offline map; the detected position information of the current frame image is determined from the positions of its matching feature information in the offline map.

In the visual positioning apparatus provided by the embodiments of the present disclosure, the correspondence between the current coordinate system and the offline map's coordinate system is determined from the positions, in the current coordinate system, of the feature information of the current frame image and of each historical frame image, together with the positions, in the offline map's coordinate system, of the corresponding matching feature information. Establishing this correspondence from the feature information of multiple frames improves the accuracy of the robot's visual positioning.

Please refer to FIG. 5, a schematic frame diagram of an embodiment of the electronic device of the present application. The electronic device 80 includes a memory 81 and a processor 82 coupled to each other; the processor 82 executes program instructions stored in the memory 81 to implement the steps of any of the above visual positioning method embodiments. In a specific implementation scenario, the electronic device 80 may include, but is not limited to, a microcomputer or a server; it may also include mobile devices such as notebook computers and tablet computers, which are not limited here.

Specifically, the processor 82 controls itself and the memory 81 to implement the steps in any of the above visual positioning method embodiments. The processor 82 may also be called a CPU (Central Processing Unit). The processor 82 may be an integrated circuit chip with signal processing capability, and may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 82 may be implemented jointly by integrated circuit chips.

Please refer to FIG. 6, a schematic frame diagram of an embodiment of a computer-readable storage medium of the present application. The computer-readable storage medium 90 stores program instructions 901 executable by a processor, and the program instructions 901 are used to implement the steps of any of the above visual positioning method embodiments.

In some embodiments, the functions of, or modules included in, the apparatus provided by the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the description of those method embodiments, which is not repeated here for brevity.

The descriptions of the various embodiments above tend to emphasize their differences; for their common or similar parts, the embodiments may be cross-referenced, and details are not repeated here for brevity.

In the several embodiments provided in this application, it should be understood that the disclosed methods and apparatuses may be implemented in other ways. For example, the apparatus implementations described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.

If implemented as software functional units and sold or used as independent products, the integrated units may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

If the technical solution of this application involves personal information, any product applying it clearly informs users of the personal information processing rules and obtains their independent consent before processing personal information. If the technical solution involves sensitive personal information, any product applying it obtains the individual's separate consent before processing such information and also satisfies the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and prominent sign informs people that they have entered the collection range and that personal information will be collected; an individual who voluntarily enters the range is deemed to consent to the collection of their personal information. Alternatively, on a device that processes personal information, where the processing rules are communicated through clear signs or notices, individual authorization is obtained through pop-up messages or by asking individuals to upload their personal information themselves. The personal information processing rules may include information such as the identity of the personal information processor, the purpose and method of processing, and the types of personal information processed.

Claims (13)

CN202310612462.8A (priority date 2023-03-24, filed 2023-05-26) — A visual positioning method, device, electronic equipment and storage medium — Pending — CN116563378A (en)

Applications Claiming Priority (2)

Application Number | Priority Date
CN202310323919 | 2023-03-24
CN202310323919.3 | 2023-03-24

Publications (1)

Publication Number | Publication Date
CN116563378A | 2023-08-08

Family

ID=87492891

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310612462.8A (pending, published as CN116563378A) | A visual positioning method, device, electronic equipment and storage medium | 2023-03-24 | 2023-05-26

Country Status (1)

Country | Link
CN (1) | CN116563378A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110032965A * | 2019-04-10 | 2019-07-19 | Nanjing University of Science and Technology | Vision positioning method based on remote sensing images
CN111508021A * | 2020-03-24 | 2020-08-07 | Guangzhou Shiyuan Electronic Technology Co., Ltd. | A pose determination method, device, storage medium and electronic device
CN111780764A * | 2020-06-30 | 2020-10-16 | Hangzhou Hikrobot Technology Co., Ltd. | Visual positioning method and device based on visual map
CN112560769A * | 2020-12-25 | 2021-03-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method for detecting obstacle, electronic device, road side device and cloud control platform
CN113096185A * | 2021-03-29 | 2021-07-09 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Visual positioning method, visual positioning device, storage medium and electronic equipment
CN113393505A * | 2021-06-25 | 2021-09-14 | Zhejiang SenseTime Technology Development Co., Ltd. | Image registration method, visual positioning method, related device and equipment

Similar Documents

Publication | Title
CN110322500B (en) | Optimization method and device, medium and electronic equipment for real-time positioning and map construction
CN105701766B (en) | Image matching method and device
CN108960211B (en) | Multi-target human body posture detection method and system
CN109559330B (en) | Visual tracking method and device for moving target, electronic equipment and storage medium
US10147015B2 (en) | Image processing device, image processing method, and computer-readable recording medium
US20190095745A1 (en) | Systems and methods to improve visual feature detection using motion-related data
CN110096929A (en) | Object detection based on neural network
CN110956131B (en) | Single-target tracking method, device and system
CN111914921A (en) | A method and system for similarity image retrieval based on multi-feature fusion
CN114902299B (en) | Method, device, equipment and storage medium for detecting associated objects in images
CN115345905A (en) | Target object tracking method, device, terminal and storage medium
CN113624222A (en) | A map updating method, robot and readable storage medium
KR20220057691A | Image registration method and apparatus using siamese random forest
CN111951211B (en) | Target detection method, device and computer readable storage medium
CN116051873A (en) | Key point matching method and device and electronic equipment
CN112669277A (en) | Vehicle association method, computer equipment and device
US20240127567A1 (en) | Detection-frame position-accuracy improving system and detection-frame position correction method
CN110689556A (en) | Tracking method and device and intelligent equipment
WO2019100348A1 (en) | Image retrieval method and device, and image library generation method and device
US20230222686A1 (en) | Information processing apparatus, information processing method, and program
CN116824609B (en) | Document format detection method and device and electronic equipment
CN116563378A (en) | A visual positioning method, device, electronic equipment and storage medium
US11175148B2 (en) | Systems and methods to accommodate state transitions in mapping
CN113409365B (en) | Image processing method, related terminal, device and storage medium
CN113822146B (en) | Target detection method, terminal device and computer storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
