CN110428465A - Vision- and touch-based robotic arm grasping method, system and device - Google Patents

Vision- and touch-based robotic arm grasping method, system and device

Info

Publication number
CN110428465A
CN110428465A (application CN201910629058.5A)
Authority
CN
China
Prior art keywords
image
target
tactile
grasped
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910629058.5A
Other languages
Chinese (zh)
Inventor
李玉苹
蒋应元
乔红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of SciencefiledCriticalInstitute of Automation of Chinese Academy of Science
Priority to CN201910629058.5A
Publication of CN110428465A
Legal status: Pending (Current)


Abstract

The invention belongs to the field of industrial robots and relates in particular to a vision- and touch-based robotic arm grasping method, system and device, aiming to solve the problem of the low success rate of robotic arms grasping industrial parts under different lighting conditions. The method of the invention includes: acquiring the illumination intensity; if the illumination intensity is within the set threshold range, extracting the image of the target to be grasped and obtaining the target's first pose by matching it against the corresponding global model; performing shadow removal on the target image; starting from the first pose and using the iterative closest point and Gauss-Newton algorithms, obtaining from the global model the final recognized pose that matches the shadow-free target image; if the illumination intensity is outside the threshold range, acquiring a tactile image of the target and obtaining the final recognized pose by matching it against a pre-built knowledge base of tactile-image-to-pose correspondences for the target; the robotic arm then grasps the target according to the final recognized pose and position information. The invention improves the success rate of robotic arms grasping industrial parts under different lighting conditions.

Description

Robotic arm grasping method, system and device based on vision and touch

Technical Field

The invention belongs to the field of industrial robots and specifically relates to a vision- and touch-based robotic arm grasping method, system and device.

Background

With the rapid development of robotics, industrial robots are ever more widely used in manufacturing. Robotic operations play an important role in automated production in areas such as automobile and auto-parts manufacturing, machining, electrical and electronic production, rubber and plastics manufacturing, food processing, and wood and furniture manufacturing. Grasping industrial parts is a common task in manufacturing automation. At present, visual guidance and positioning have become the main means by which industrial robots obtain information about their working environment.

Although visual guidance and positioning are widely used, they have shortcomings. For example, a binocular vision system has a strong ability to recover three-dimensional information, but its measurement accuracy is closely tied to the calibration accuracy of the cameras. Moreover, under abnormal lighting conditions the target may be lost because the illumination is insufficient or too strong. Therefore, other sensors are needed in addition to computer vision to compensate for the visual guidance and positioning system. The present invention applies vision techniques to build spherical multi-angle models of the industrial-part images to be processed and to dynamically remove shadows in real time, and it adds a tactile sensor as an effective supplement for the cases where the lighting is too dark to capture the part's pose, or where strong illumination produces reflections on the part's surface that disturb positioning. Considering these possible situations comprehensively is of great significance for grasping industrial parts in complex environments.

Summary of the Invention

To solve the above problem in the prior art, namely the low success rate of robotic arms grasping industrial parts under different lighting conditions, a first aspect of the present invention proposes a vision- and touch-based robotic arm grasping method, which includes:

Step S10, acquiring the light intensity of the target to be grasped; if the light intensity is within the set threshold range, executing step S20, otherwise executing step S50;

Step S20, extracting the target image from the captured image of the target to be grasped to obtain a first image, and obtaining the pose of the target, as the first pose, by view matching against the global model corresponding to the target;

Step S30, performing shadow removal on the first image to obtain a second image;

Step S40, taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image; taking the second pose as the final recognized pose and executing step S60;

Step S50, acquiring a tactile image of the target to be grasped and, based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtaining a third pose of the target by tactile image matching; taking the third pose as the final recognized pose;

Step S60, the robotic arm grasping the target according to the final recognized pose and the acquired position information of the target.

In some preferred embodiments, the method of step S20, "obtaining the pose of the target by view matching against the global model corresponding to the target", is: based on the global model corresponding to the target, obtaining the set of 2D projection views from different viewpoints generated with a virtual sphere, finding the view that matches the first image by image matching, and taking its corresponding pose as the pose of the target.

In some preferred embodiments, the method of step S30, "performing shadow removal on the first image", is: computing the variance of the first image, and treating points whose variance is below a set threshold as shadow points and removing them.

In some preferred embodiments, the variance is computed as follows (written here in the standard local-variance form consistent with the definitions below):

$$V(x,y)=\frac{1}{N_V^{2}}\sum_{|i-x|\le\frac{N_V-1}{2}}\;\sum_{|j-y|\le\frac{N_V-1}{2}}\bigl[I(i,j)-g(i,j)\bigr]^{2}$$

where V(x,y) is the variance of pixel (x,y), g(x,y) is the average gray value at pixel (x,y), I(x,y) is the gray value of a particular pixel, N_V is the side length of the region over which the variance is computed, and x, y are the pixel's two-dimensional coordinates.

In some preferred embodiments, the average gray value is computed as:

$$g(x,y)=\frac{1}{N_A^{2}}\sum_{|i-x|\le\frac{N_A-1}{2}}\;\sum_{|j-y|\le\frac{N_A-1}{2}} I(i,j)$$

where N_A is the side length of the region over which the average gray value is computed.

In some preferred embodiments, in step S40, "taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image", the following formula is obtained by fusing the iterative closest point algorithm with the Gauss-Newton algorithm, and the pose is solved iteratively until a preset convergence condition is reached:

$$p_{t+1}=p_t+\Delta p$$

where, with the update vector written in the standard Gauss-Newton form consistent with these definitions,

$$\Delta p=-\bigl(J_\varepsilon^{\top}J_\varepsilon\bigr)^{-1}J_\varepsilon^{\top}\varepsilon$$

p is the pose estimate, Δp is the update vector, ε is the difference vector, J_ε is the Jacobian of ε with respect to p, t and t+1 are consecutive time steps, and T is the iteration period.

In some preferred embodiments, the tactile image in step S50 is acquired by placing the target to be grasped on the surface of a tactile sensor.

In some preferred embodiments, the tactile sensor is an array tactile sensor.

A second aspect of the present invention proposes a vision- and touch-based robotic arm grasping system, which includes an image acquisition device, a tactile sensor device, a placement platform, a processor and a robotic arm;

the image acquisition device is arranged at a set position above the robotic arm and is used to capture images of the target to be grasped on the placement platform;

the tactile sensor device is arranged on the upper part of the placement platform and is used to acquire tactile images of the target placed on the platform;

the placement platform is arranged at a set position within the grasping radius of the robotic arm and is used to hold the target to be grasped;

the processor is used to generate grasping instructions for the robotic arm, via the vision- and touch-based grasping method described above, from the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device;

the robotic arm is used to grasp the target on the placement platform according to the grasping instructions output by the processor.

In some preferred embodiments, the grasping system further includes a display device for displaying the image of the target to be grasped on the placement platform.

A third aspect of the present invention proposes a storage device storing a plurality of programs, the programs being adapted to be loaded and executed by a processor to implement the above vision- and touch-based robotic arm grasping method.

A fourth aspect of the present invention proposes a processing device, including a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above vision- and touch-based robotic arm grasping method.

Beneficial effects of the invention:

The invention improves the success rate of robotic arms grasping industrial parts under different lighting conditions. Under normal lighting, a global model library in a spherical coordinate system is built, enabling real-time 3D localization and grasping of industrial parts; shadows are dynamically removed from the captured pictures, which improves matching accuracy. A tactile sensor is also introduced: when the lighting conditions are not ideal, the tactile sensor is activated to acquire the contact-surface image and position information of the industrial part, which is sent to a computer where the corresponding processing algorithms process and match the picture, so that the industrial part can be accurately located and grasped, raising the grasping success rate.

Brief Description of the Drawings

Other features, objects and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings.

Fig. 1 is a schematic flowchart of the vision- and touch-based robotic arm grasping method according to an embodiment of the present invention;

Fig. 2 illustrates the virtual view sphere modeling principle of the method according to an embodiment of the present invention;

Fig. 3 shows an example of the relationship between average gray value and variance in the method according to an embodiment of the present invention;

Fig. 4 shows examples of tactile sensor matching in the method according to an embodiment of the present invention;

Fig. 5 shows an example hardware structure of the vision- and touch-based robotic arm grasping system according to an embodiment of the present invention.

Detailed Description

To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

The application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.

It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.

The vision- and touch-based robotic arm grasping method of the present invention, as shown in Fig. 1, includes the following steps:

Step S10, acquiring the light intensity of the target to be grasped; if the light intensity is within the set threshold range, executing step S20, otherwise executing step S50;

Step S20, extracting the target image from the captured image of the target to be grasped to obtain a first image, and obtaining the pose of the target, as the first pose, by view matching against the global model corresponding to the target;

Step S30, performing shadow removal on the first image to obtain a second image;

Step S40, taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image; taking the second pose as the final recognized pose and executing step S60;

Step S50, acquiring a tactile image of the target to be grasped and, based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtaining a third pose of the target by tactile image matching; taking the third pose as the final recognized pose;

Step S60, the robotic arm grasping the target according to the final recognized pose and the acquired position information of the target.

To describe the vision- and touch-based robotic arm grasping method of the present invention more clearly, each step of an embodiment of the method is elaborated below with reference to the drawings.

Step S10, acquiring the light intensity of the target to be grasped; if the light intensity is within the set threshold range, executing step S20, otherwise executing step S50.

The primary purpose of the present invention is to provide a method by which, under various complex lighting environments, the robotic arm obtains the pose information of industrial parts from a tactile sensor or a visual sensor and grasps the parts according to that pose information. In this embodiment, the lighting condition of the current grasping environment is judged first: the light intensity of the target to be grasped is acquired, and if it is within the set threshold range the lighting is considered normal. In that case the visual sensor is used, with spherical multi-angle modeling of the industrial parts to be processed and real-time dynamic shadow-removal preprocessing. If the lighting is too dark to capture the part's pose, or strong illumination produces reflections on the part's surface that interfere with positioning, the pose information of the industrial part is obtained through the tactile sensor instead. Various possible situations are thus considered comprehensively.
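The dispatch of step S10 can be summarized in a few lines. Below is a minimal Python sketch; the callables read_light_sensor, vision_pipeline and tactile_pipeline and the threshold values are hypothetical placeholders, not part of the patent text.

```python
# Minimal sketch of the step S10 dispatch. All names and thresholds
# here are illustrative assumptions, not from the patent.
LUX_MIN, LUX_MAX = 50.0, 2000.0  # assumed "set threshold range"

def estimate_pose(read_light_sensor, vision_pipeline, tactile_pipeline):
    """Select the vision or the tactile branch based on illumination."""
    lux = read_light_sensor()
    if LUX_MIN <= lux <= LUX_MAX:
        return vision_pipeline()   # steps S20-S40: view matching + refinement
    return tactile_pipeline()      # step S50: tactile image matching
```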

Step S20, extracting the target image from the captured image of the target to be grasped to obtain a first image, and obtaining the pose of the target, as the first pose, by view matching against the global model corresponding to the target.

For model-based pose estimation, a global gallery (3D shape model) must be generated from the model, containing the 2D projections of the 3D object seen from different viewpoints. In this embodiment, a virtual view sphere is used to generate the 2D views of a given 3D object. Virtual cameras are placed around the object model, and the 3D object model is projected onto the image plane at each camera position to obtain an image. The parameters of the virtual camera equal the intrinsic parameters of the input camera. The 2D shape representations of all views are stored in the 3D shape model. A local virtual view sphere limits the range of poses allowed by the shape model, thereby minimizing the number of 2D projections that must be computed and stored in the 3D shape model. To specify the pose range, a sphere is set around the object, positioned by placing its center at the center of the object's bounding box. As shown in Fig. 2, the xz plane of the object-centered coordinate system defines the equatorial plane of the sphere; the north pole lies on the negative y axis; latitude and longitude both range over [-90, 90] degrees; and "pose Range" denotes the pose range. A camera is placed on the surface of the sphere to observe the object. In addition, the minimum and maximum camera-to-object distances must be specified, i.e., the sphere radii corresponding to the distances from the camera to the object center. Besides defining the pose range, the roll angle of the camera and the allowed range of the virtual camera's rotation about the z axis must also be set.
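To make the view-sphere construction concrete, here is a small Python sketch that enumerates virtual-camera positions over a latitude/longitude grid and a set of radii, following the Fig. 2 convention (equator in the xz plane, north pole on the negative y axis). The grid step and the radii are illustrative assumptions; rendering a 2D projection of the CAD model from each position is left to the rendering pipeline.

```python
import numpy as np

def sample_view_sphere(lat_range=(-90.0, 90.0), lon_range=(-90.0, 90.0),
                       step_deg=10.0, radii=(0.4, 0.6)):
    """Enumerate virtual-camera positions on the view sphere (Fig. 2).

    Latitude/longitude are in degrees in the object-centred frame whose
    xz plane is the equator and whose north pole lies on the -y axis.
    Returns Cartesian camera positions; one 2D projection of the model
    would be rendered from each of them.
    """
    views = []
    lats = np.arange(lat_range[0], lat_range[1] + 1e-9, step_deg)
    lons = np.arange(lon_range[0], lon_range[1] + 1e-9, step_deg)
    for r in radii:                              # camera-to-object distances
        for lat in np.deg2rad(lats):
            for lon in np.deg2rad(lons):
                x = r * np.cos(lat) * np.sin(lon)
                y = -r * np.sin(lat)             # north pole on -y
                z = r * np.cos(lat) * np.cos(lon)
                views.append(np.array([x, y, z]))
    return views
```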

An offline global model library of each industrial part is generated over 360 degrees with the configured virtual view sphere. In this embodiment, the target image is extracted from the captured image of the target to be grasped, and the preset global model library is used to obtain the first pose information of the industrial part, i.e., a rough pose estimate of the part in the 2D image.
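The coarse matching itself can be sketched as a brute-force search over the offline view library. The snippet below uses normalized cross-correlation via OpenCV; the assumption that the library views and the extracted first image are grayscale images of compatible size and dtype is an implementation choice, not something the patent specifies.

```python
import cv2
import numpy as np

def coarse_pose(first_image, view_library):
    """Return the pose of the library view that best matches the image.

    view_library: iterable of (projection_image, pose) pairs generated
    offline from the view sphere. Each projection must be no larger
    than first_image and of the same dtype.
    """
    best_score, best_pose = -np.inf, None
    for view, pose in view_library:
        res = cv2.matchTemplate(first_image, view, cv2.TM_CCOEFF_NORMED)
        score = float(res.max())
        if score > best_score:
            best_score, best_pose = score, pose
    return best_pose  # the "first pose" of step S20
```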

Step S30, performing shadow removal on the first image to obtain a second image.

In this example, the variance of the first image is computed, and points whose variance is below a set threshold are treated as shadow points and removed, yielding the second image.

We first compute the average gray value of the grayscale image corresponding to the 2D image of the industrial part. The gray value of the background is comparatively high, while the gray values of the object region and of the shadow are comparatively low; this is effectively a smoothing process. The average gray value is computed as in formula (1):

$$g(x,y)=\frac{1}{N_A^{2}}\sum_{|i-x|\le\frac{N_A-1}{2}}\;\sum_{|j-y|\le\frac{N_A-1}{2}} I(i,j) \qquad (1)$$

where g(x,y) is the average gray value of pixel (x,y) over an N_A x N_A region, I(x,y) is the gray value of a particular pixel (the current pixel), N_A is the side length of the averaging region, and x, y are the pixel's two-dimensional coordinates.

For each pixel of the image, the average variance within its neighborhood is computed. Intuitively, if a region of the image is smooth, its variance is low; if a region is rough, its variance is large. We can expect the shadow region to have a small variance because it is smooth. The variance is computed as in formula (2):

$$V(x,y)=\frac{1}{N_V^{2}}\sum_{|i-x|\le\frac{N_V-1}{2}}\;\sum_{|j-y|\le\frac{N_V-1}{2}}\bigl[I(i,j)-g(i,j)\bigr]^{2} \qquad (2)$$

where V(x,y) is the variance of pixel (x,y) and N_V is the side length of the region over which the variance is computed.

As shown in Fig. 3, the horizontal axis (Variance) is the variance of the pixels in the image and the vertical axis (Average Gray) is their average gray value. Each point of the plot corresponds to a pixel of the 2D image, and the intensity at a point indicates how many pixels share that feature: the more pixels with a given feature, the brighter the corresponding point. Because the pixels inside a shadow share the same characteristics, they cluster in a certain region of the plot, and the same holds for the pixels in the object region and in the background. Three regions are marked with rectangles and an ellipse: points in the left rectangle correspond to the background, which is bright and smooth; points in the lower-left rectangle correspond to the shadow, which is dark and smooth; and points in the ellipse on the right correspond to the object, whose region is dark and rough.

In an industrial environment the background is relatively simple: the background of an industrial image is usually bright, and sufficient illumination is enough. We use an almost pure-white background, so removing a detected shadow is simple: just set the gray value of the shadow region to white. This is only the preferred shadow-removal approach for an industrial environment in this embodiment; other shadow-removal methods are also applicable within the present invention.
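Putting formulas (1) and (2) together, shadow removal reduces to a local mean, a local variance, a threshold test and a repaint. The sketch below uses box filters for both statistics; the variance is computed in the equivalent E[I^2] - E[I]^2 form, and the window sizes and thresholds are assumptions for illustration.

```python
import cv2
import numpy as np

def remove_shadow(gray, n_a=7, n_v=7, var_thresh=40.0, dark_thresh=128.0):
    """Shadow removal following formulas (1) and (2).

    A pixel is treated as shadow when its local variance is low (smooth)
    and its local mean is dark; detected shadow pixels are set to white,
    matching the bright industrial background. Window sizes and both
    thresholds are illustrative assumptions.
    """
    g = gray.astype(np.float32)
    mean_a = cv2.blur(g, (n_a, n_a))                  # formula (1): local mean
    mean_v = cv2.blur(g, (n_v, n_v))
    var = cv2.blur(g * g, (n_v, n_v)) - mean_v ** 2   # formula (2), E[I^2]-E[I]^2
    shadow = (var < var_thresh) & (mean_a < dark_thresh)
    out = g.copy()
    out[shadow] = 255.0                               # repaint shadow as white
    return out.astype(np.uint8)
```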

Step S40, taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image; taking the second pose as the final recognized pose and executing step S60.

In this embodiment, a continuous optimization method is needed to refine the pose estimate and obtain a more accurate pose. We combine the iterative closest point (ICP) algorithm with Gauss-Newton optimization.

Two or more sets of point-cloud data in different coordinate systems are unified into the same reference frame through rotation and translation transformations. This process can be completed through a set of correspondences; the method of recovering camera pose information from two sets of 3D points is commonly called Iterative Closest Point (ICP). The Gauss-Newton algorithm approximates the nonlinear regression model with a Taylor-series expansion and then iterates, repeatedly correcting the regression coefficients so that they approach the best coefficients of the nonlinear model, finally minimizing the residual sum of squares of the original model.

The optimization proceeds as follows. The initial state is p0, i.e., the initial pose. Fusing the iterative closest point algorithm with the Gauss-Newton algorithm yields formula (3), which is solved iteratively until a preset convergence condition is reached, i.e., the pose matching the second image is obtained from the global model corresponding to the target:

$$p_{t+1}=p_t+\Delta p \qquad (3)$$

The update vector Δp can be expressed as formula (4):

$$\Delta p=-\bigl(J_\varepsilon^{\top}J_\varepsilon\bigr)^{-1}J_\varepsilon^{\top}\varepsilon \qquad (4)$$

where ε is the difference vector, J_ε is the Jacobian of ε with respect to p, t and t+1 are consecutive time steps, T is the iteration period, and p is the pose estimate (the formula is written here in the standard Gauss-Newton form consistent with these definitions). As in the ICP algorithm, the correspondence and minimization problems are solved iteratively until convergence; the pose matching the second image obtained from the global model corresponding to the target, i.e., the second pose, is taken as the final recognized pose.
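A compact sketch of the refinement loop of formulas (3) and (4) follows. The residual and jacobian callbacks stand in for the correspondence step of the ICP-style matching (re-associating model points with image points at the current pose); they are placeholders, since the patent does not spell out their implementation.

```python
import numpy as np

def refine_pose(p0, residual, jacobian, max_iter=50, tol=1e-6):
    """Gauss-Newton pose refinement, formulas (3)-(4).

    residual(p): difference vector eps between the projected model and
    the shadow-free second image at pose p (after re-establishing
    correspondences, as in ICP).
    jacobian(p): J_eps, the Jacobian of eps with respect to p.
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        eps = residual(p)
        J = jacobian(p)
        # formula (4): dp = -(J^T J)^{-1} J^T eps
        dp = -np.linalg.solve(J.T @ J, J.T @ eps)
        p = p + dp                         # formula (3)
        if np.linalg.norm(dp) < tol:       # preset convergence condition
            break
    return p                               # the "second pose" / final pose
```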

Step S50, acquiring a tactile image of the target to be grasped and, based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtaining a third pose of the target by tactile image matching; taking the third pose as the final recognized pose.

In this example, a knowledge base of contact surfaces is built in advance by collecting a large amount of information about contact-surface shapes; the knowledge base stores the correspondences between the target's tactile images and poses. When the lighting conditions are not ideal, the industrial part is placed on the surface of the tactile sensor, which senses and generates a picture of the contact-surface information; the pose of the part is analyzed and judged from the contact information, and the picture is sent back to the computer through the corresponding equipment. The returned pictures take the form shown in Fig. 4, which gives schematic contact surfaces for two different types of industrial parts: panel 1 shows the contact-surface pictures formed by two ways of placing the first type of part; panel 2 shows the contact surface formed when the second type of part stands upright on the sensing surface; and panel 3 shows the contact-surface information formed when the second type of part lies obliquely on the surface. The computer knowledge base processes and matches the returned pictures, including target extraction and acquisition of pixel-coordinate information, and obtains the third pose of the industrial part as the final recognized pose.
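The knowledge-base lookup of step S50 can be sketched as a nearest-neighbour search over the stored contact images. The scoring function (zero-mean normalized correlation) and the requirement that the images be equally sized are implementation assumptions; the patent only states that the returned picture is processed and matched against the knowledge base.

```python
import numpy as np

def tactile_pose(tactile_image, knowledge_base):
    """Match a tactile image against stored (contact_image, pose) pairs.

    Returns the pose of the best-scoring reference image; the similarity
    measure used here is an assumption for illustration.
    """
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    img = np.asarray(tactile_image, dtype=np.float32)
    scores = [ncc(img, np.asarray(ref, dtype=np.float32))
              for ref, _ in knowledge_base]
    return knowledge_base[int(np.argmax(scores))][1]  # the "third pose"
```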

Step S60, the robotic arm grasping the target according to the final recognized pose and the acquired position information of the target.

In this example, if the lighting is normal, vision techniques are used with spherical multi-angle modeling and real-time dynamic shadow-removal preprocessing of the industrial parts to obtain comparatively precise pose information as the final recognized pose; if the lighting is too dark to capture the part's pose, or reflections on the part's surface under strong illumination interfere with positioning, the precise pose information of the part's contact surface is obtained through the tactile sensor as the final recognized pose. The final recognized pose is expressed in camera coordinates; a fixed transformation matrix relates the world coordinate system to the camera coordinate system, and after each picture is acquired the computer automatically converts the image coordinates into actual coordinates. The robotic arm then locates and grasps the industrial part according to the converted position information and the final recognized pose.
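The final coordinate conversion of step S60 is a single homogeneous transform. A minimal sketch follows, assuming a 4x4 world-from-camera matrix T_wc obtained beforehand from calibration (the patent only says the transform is fixed; its origin is an assumption here).

```python
import numpy as np

def camera_to_world(p_cam, T_wc):
    """Map a 3D point from camera coordinates to world coordinates.

    T_wc: fixed 4x4 homogeneous world-from-camera matrix (assumed to
    come from hand-eye calibration). p_cam: length-3 point.
    """
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous point
    return (T_wc @ p_h)[:3]
```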

A vision- and touch-based robotic arm grasping system according to a second embodiment of the present invention includes an image acquisition device, a tactile sensor device, a placement platform, a processor and a robotic arm.

Fig. 5 shows the hardware structure of the system, including a parts placement table 1, a robotic arm 2, a camera 3, a part-status display screen 4, a tactile sensor 5 and an industrial part 6; there is also a remote control computer, not marked in the figure. The parts placement table is a conveyor belt with tactile sensors on its surface, so the pose of the target to be grasped can be recognized in advance, its position then tracked, and the grasping action performed when it reaches the grasping position.

The image acquisition device, i.e., the camera, is arranged at a set position above the robotic arm and captures images of the target to be grasped on the placement platform. The tactile sensor device, i.e., the tactile sensor, is arranged on the upper part of the placement platform; this may be the upper surface, or below the placement surface, as long as the tactile image of the target can be detected (in some other embodiments the tactile image of the target may also be obtained in other ways); it acquires the tactile image of the target placed on the platform. The placement platform, i.e., the parts placement table, is arranged at a set position within the grasping radius of the robotic arm and holds the target to be grasped. The processor, i.e., the remote control computer, generates grasping instructions for the robotic arm, via the vision- and touch-based grasping method, from the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device. The robotic arm grasps the target on the placement platform according to the grasping instructions output by the processor. The camera, the remote control computer and the robotic arm are electrically connected in sequence.

The grasping system further includes a display device, i.e., the part-status display screen, for displaying the image of the target to be grasped on the placement platform.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.

It should be noted that the vision- and touch-based grasping system provided by the above embodiment is illustrated only with the division of functional modules described above. In practical applications, the above functions can be assigned to different functional modules as needed; that is, the modules or steps in the embodiments of the present invention can be decomposed or combined. For example, the modules of the above embodiment can be merged into one module, or further split into multiple sub-modules, to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only used to distinguish the modules or steps and are not to be regarded as improper limitations of the present invention.

A storage device according to a third embodiment of the present invention stores a plurality of programs, the programs being adapted to be loaded by a processor to implement the above vision- and touch-based robotic arm grasping method.

A processing device according to a fourth embodiment of the present invention includes a processor and a storage device; the processor is adapted to execute programs; the storage device is adapted to store a plurality of programs; the programs are adapted to be loaded and executed by the processor to implement the above vision- and touch-based robotic arm grasping method.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the storage device and the processing device described above and the related explanations may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.

Those skilled in the art should be aware that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; programs corresponding to software modules and method steps can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium known in the technical field. In order to clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described generally in terms of function in the above description. Whether these functions are performed in electronic hardware or in software depends on the specific application and design constraints of the technical solution. Those skilled in the art may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.

The terms "first", "second", etc. are used to distinguish similar items and are not used to describe or represent a particular order or sequence.

The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, so that a process, method, article, or device/apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device/apparatus.

The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art can make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the protection scope of the present invention.

Claims (12)

1. A vision- and touch-based robotic arm grasping method, characterized in that the method comprises:
Step S10, acquiring the light intensity of the target to be grasped; if the light intensity is within the set threshold range, executing step S20, otherwise executing step S50;
Step S20, extracting the target image from the captured image of the target to be grasped to obtain a first image, and obtaining the pose of the target, as the first pose, by view matching against the global model corresponding to the target;
Step S30, performing shadow removal on the first image to obtain a second image;
Step S40, taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image; taking the second pose as the final recognized pose and executing step S60;
Step S50, acquiring a tactile image of the target to be grasped and, based on the pre-built knowledge base of tactile-image-to-pose correspondences for the target, obtaining a third pose of the target by tactile image matching; taking the third pose as the final recognized pose;
Step S60, the robotic arm grasping the target according to the final recognized pose and the acquired position information of the target.

2. The vision- and touch-based robotic arm grasping method according to claim 1, characterized in that in step S20, "obtaining the pose of the target by view matching against the global model corresponding to the target" is performed by: based on the global model corresponding to the target, obtaining the set of 2D projection views from different viewpoints generated with a virtual sphere, finding the view matching the first image by image matching, and taking its corresponding pose as the pose of the target.

3. The vision- and touch-based robotic arm grasping method according to claim 1, characterized in that in step S30, "performing shadow removal on the first image" is performed by: computing the variance of the first image, and treating points whose variance is below a set threshold as shadow points and removing them.

4. The vision- and touch-based robotic arm grasping method according to claim 3, characterized in that the variance is computed as:

$$V(x,y)=\frac{1}{N_V^{2}}\sum_{|i-x|\le\frac{N_V-1}{2}}\;\sum_{|j-y|\le\frac{N_V-1}{2}}\bigl[I(i,j)-g(i,j)\bigr]^{2}$$

where V(x,y) is the variance of pixel (x,y), g(x,y) is the average gray value at pixel (x,y), I(x,y) is the gray value of a particular pixel, N_V is the side length of the region over which the variance is computed, and x, y are the pixel's two-dimensional coordinates.

5. The vision- and touch-based robotic arm grasping method according to claim 4, characterized in that the average gray value is computed as:

$$g(x,y)=\frac{1}{N_A^{2}}\sum_{|i-x|\le\frac{N_A-1}{2}}\;\sum_{|j-y|\le\frac{N_A-1}{2}} I(i,j)$$

where N_A is the side length of the region over which the average gray value is computed.

6. The vision- and touch-based robotic arm grasping method according to claim 1, characterized in that in step S40, "taking the first pose as the initial pose and using the iterative closest point algorithm and the Gauss-Newton algorithm to obtain, from the global model corresponding to the target, a second pose matching the second image", the following formula is obtained by fusing the iterative closest point algorithm with the Gauss-Newton algorithm and is solved iteratively until a preset convergence condition is reached:

$$p_{t+1}=p_t+\Delta p,\qquad \Delta p=-\bigl(J_\varepsilon^{\top}J_\varepsilon\bigr)^{-1}J_\varepsilon^{\top}\varepsilon$$

where p is the pose estimate, Δp is the update vector, ε is the difference vector, J_ε is the Jacobian of ε with respect to p, t and t+1 are consecutive time steps, and T is the iteration period.

7. The vision- and touch-based robotic arm grasping method according to claim 1, characterized in that the tactile image in step S50 is acquired by placing the target to be grasped on the surface of a tactile sensor.

8. The vision- and touch-based robotic arm grasping method according to claim 7, characterized in that the tactile sensor is an array tactile sensor.

9. A vision- and touch-based robotic arm grasping system, characterized in that the system comprises an image acquisition device, a tactile sensor device, a placement platform, a processor and a robotic arm;
the image acquisition device is arranged at a set position above the robotic arm and is used to capture images of the target to be grasped on the placement platform;
the tactile sensor device is arranged on the upper part of the placement platform and is used to acquire tactile images of the target placed on the platform;
the placement platform is arranged at a set position within the grasping radius of the robotic arm and is used to hold the target to be grasped;
the processor is used to generate grasping instructions for the robotic arm, via the vision- and touch-based grasping method according to any one of claims 1-8, from the captured image of the target acquired by the image acquisition device and/or the tactile image of the target acquired by the tactile sensor device;
the robotic arm is used to grasp the target on the placement platform according to the grasping instructions output by the processor.

10. The vision- and touch-based robotic arm grasping system according to claim 9, characterized in that the system further comprises a display device for displaying the image of the target to be grasped on the placement platform.

11. A storage device storing a plurality of programs, characterized in that the programs are adapted to be loaded and executed by a processor to implement the vision- and touch-based robotic arm grasping method according to any one of claims 1-8.

12. A processing device, comprising a processor and a storage device, the processor being adapted to execute programs and the storage device being adapted to store a plurality of programs, characterized in that the programs are adapted to be loaded and executed by the processor to implement the vision- and touch-based robotic arm grasping method according to any one of claims 1-8.
CN201910629058.5A, filed 2019-07-12 (priority 2019-07-12): Vision- and touch-based robotic arm grasping method, system and device. Status: Pending. Published as CN110428465A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910629058.5A | 2019-07-12 | 2019-07-12 | Vision- and touch-based robotic arm grasping method, system and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910629058.5A | 2019-07-12 | 2019-07-12 | Vision- and touch-based robotic arm grasping method, system and device

Publications (1)

Publication Number | Publication Date
CN110428465A | 2019-11-08

Family

ID=68410466

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910629058.5A (Pending; published as CN110428465A) | Vision- and touch-based robotic arm grasping method, system and device | 2019-07-12 | 2019-07-12

Country Status (1)

Country | Link
CN | CN110428465A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111055279A (en)* | 2019-12-17 | 2020-04-24 | 清华大学深圳国际研究生院 | Multi-mode object grabbing method and system based on combination of touch sense and vision
CN111204476A (en)* | 2019-12-25 | 2020-05-29 | 上海航天控制技术研究所 | Vision-touch fusion fine operation method based on reinforcement learning
CN111913204A (en)* | 2020-07-16 | 2020-11-10 | 西南大学 | A robotic arm guidance method based on RTK positioning
CN112809679A (en)* | 2021-01-25 | 2021-05-18 | 清华大学深圳国际研究生院 | Method and device for grabbing deformable object and computer readable storage medium
CN113808198A (en)* | 2021-11-17 | 2021-12-17 | 季华实验室 | Method, device, electronic device and storage medium for labeling suction surface
CN114851227A (en)* | 2022-06-22 | 2022-08-05 | 上海大学 | Device based on machine vision and sense of touch fuse perception
CN114872054A (en)* | 2022-07-11 | 2022-08-09 | 深圳市麦瑞包装制品有限公司 | Method for positioning robot hand for industrial manufacturing of packaging container
CN115147411A (en)* | 2022-08-30 | 2022-10-04 | 启东赢维数据信息科技有限公司 | Labeller intelligent positioning method based on artificial intelligence
CN115625713A (en)* | 2022-12-05 | 2023-01-20 | 开拓导航控制技术股份有限公司 | Manipulator grabbing method based on touch-vision fusion perception and manipulator
CN115760805A (en)* | 2022-11-24 | 2023-03-07 | 中山大学 | Positioning method for processing surface depression of element based on visual touch sense
CN115824470A (en)* | 2022-10-25 | 2023-03-21 | 中国科学院自动化研究所 | A kind of tactile sensor, preparation method and point cloud reconstruction method
CN119681901A (en)* | 2025-01-27 | 2025-03-25 | 大连理工大学 | A robot grasping posture optimization method based on visual-tactile fusion
CN120213952A (en)* | 2025-05-27 | 2025-06-27 | 因湃电池科技有限公司 | A battery defect detection method and system based on multimodal sensor


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101097131A (en) * | 2006-06-30 | 2008-01-02 | Langfang Zhitong Robot System Co., Ltd. | Method for calibrating a workpiece coordinate system
US20080027580A1 (en) * | 2006-07-28 | 2008-01-31 | Hui Zhang | Robot programming method and apparatus with both vision and force
US20100131235A1 (en) * | 2008-11-26 | 2010-05-27 | Canon Kabushiki Kaisha | Work system and information processing method
CN102622763A (en) * | 2012-02-21 | 2012-08-01 | Rui Ting | Method for detecting and eliminating shadows
US20140277588A1 (en) * | 2013-03-15 | 2014-09-18 | Eli Robert Patt | System and method for providing a prosthetic device with non-tactile sensory feedback
US9579801B2 (en) * | 2013-06-11 | 2017-02-28 | Somatis Sensor Solutions LLC | Systems and methods for sensing objects
CN103530886A (en) * | 2013-10-22 | 2014-01-22 | Shanghai Ankuila Information Technology Co., Ltd. | A low-computation background removal method for video analysis
CN107921622A (en) * | 2015-08-25 | 2018-04-17 | Kawasaki Heavy Industries, Ltd. | Robot system
CN205121556U (en) * | 2015-10-12 | 2016-03-30 | Institute of Automation, Chinese Academy of Sciences | Robot grasping system
CN105930854A (en) * | 2016-04-19 | 2016-09-07 | Donghua University | Manipulator vision system
CN106845354A (en) * | 2016-12-23 | 2017-06-13 | Institute of Automation, Chinese Academy of Sciences | Part view library construction method, and part positioning and grasping method and device
CN108537841A (en) * | 2017-03-03 | 2018-09-14 | Ricoh Co., Ltd. | Robot picking implementation method, device and electronic equipment
CN107234625A (en) * | 2017-07-07 | 2017-10-10 | Institute of Automation, Chinese Academy of Sciences | Visual-servoing positioning and grasping method
CN107972069A (en) * | 2017-11-27 | 2018-05-01 | Hu Mingjian | Design method for time-varying mutual mapping between computer vision and mechanical touch
CN108297083A (en) * | 2018-02-09 | 2018-07-20 | Institute of Electronics, Chinese Academy of Sciences | Robotic arm system
CN108638054A (en) * | 2018-04-08 | 2018-10-12 | Henan Institute of Science and Technology | Control method for the five-fingered dexterous hand of an intelligent explosive-disposal robot

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Chao Ma et al.: "Flexible Robotic Grasping Strategy with Constrained Region in Environment", International Journal of Automation and Computing *
Di Guo et al.: "Robotic grasping using visual and tactile sensing", Information Sciences *
J. Li et al.: "Slip Detection with Combined Tactile and Visual Information", 2018 IEEE International Conference on Robotics and Automation (ICRA) *
Wu Yutian et al.: "Medical Ultrasound Equipment: Principles, Design and Applications", Scientific and Technical Documentation Press, 30 April 2012 *
Lu Danling: "Research on Target Grasping by a Robotic Arm Based on Visual-Tactile Fusion", China Masters' Theses Full-text Database, Information Science and Technology *
Sun Shuifa et al.: "Video Foreground Detection and Its Application in Hydropower Project Monitoring", National Defense Industry Press, 31 December 2014 *
Luo Shiguang et al.: "Experimental Design and Data Processing", China Railway Publishing House, 30 April 2018 *
Guo Yingda: "Research on Visual-Tactile Fusion Technology in Robotic Grasping", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111055279A (en) * | 2019-12-17 | 2020-04-24 | Tsinghua Shenzhen International Graduate School | Multi-modal object grasping method and system based on the combination of touch and vision
CN111055279B (en) * | 2019-12-17 | 2022-02-15 | Tsinghua Shenzhen International Graduate School | Multi-modal object grasping method and system based on the combination of touch and vision
CN111204476A (en) * | 2019-12-25 | 2020-05-29 | Shanghai Aerospace Control Technology Institute | Visual-tactile fusion fine-manipulation method based on reinforcement learning
CN111204476B (en) * | 2019-12-25 | 2021-10-29 | Shanghai Aerospace Control Technology Institute | Visual-tactile fusion fine-manipulation method based on reinforcement learning
CN111913204A (en) * | 2020-07-16 | 2020-11-10 | Southwest University | Robotic arm guidance method based on RTK positioning
CN111913204B (en) * | 2020-07-16 | 2024-05-03 | Southwest University | Robotic arm guidance method based on RTK positioning
CN112809679A (en) * | 2021-01-25 | 2021-05-18 | Tsinghua Shenzhen International Graduate School | Method and device for grasping deformable objects, and computer-readable storage medium
CN113808198A (en) * | 2021-11-17 | 2021-12-17 | Jihua Laboratory | Suction-surface labeling method and device, electronic device and storage medium
CN113808198B (en) * | 2021-11-17 | 2022-03-08 | Jihua Laboratory | Suction-surface labeling method and device, electronic device and storage medium
CN114851227B (en) * | 2022-06-22 | 2024-02-27 | Shanghai University | Device based on fused machine-vision and tactile perception
CN114851227A (en) * | 2022-06-22 | 2022-08-05 | Shanghai University | Device based on fused machine-vision and tactile perception
CN114872054A (en) * | 2022-07-11 | 2022-08-09 | Shenzhen Mairui Packaging Products Co., Ltd. | Robot hand positioning method for industrial manufacturing of packaging containers
CN115147411A (en) * | 2022-08-30 | 2022-10-04 | Qidong Yingwei Data Information Technology Co., Ltd. | Intelligent labeling-machine positioning method based on artificial intelligence
CN115824470A (en) * | 2022-10-25 | 2023-03-21 | Institute of Automation, Chinese Academy of Sciences | Tactile sensor, preparation method, and point cloud reconstruction method
CN115760805A (en) * | 2022-11-24 | 2023-03-07 | Sun Yat-sen University | Visual-tactile method for locating depressions on machined component surfaces
CN115760805B (en) * | 2022-11-24 | 2024-02-09 | Sun Yat-sen University | Visual-tactile method for locating depressions on machined component surfaces
CN115625713A (en) * | 2022-12-05 | 2023-01-20 | Kaituo Navigation Control Technology Co., Ltd. | Manipulator grasping method based on tactile-visual fusion perception, and manipulator
CN119681901A (en) * | 2025-01-27 | 2025-03-25 | Dalian University of Technology | Robot grasping pose optimization method based on visual-tactile fusion
CN120213952A (en) * | 2025-05-27 | 2025-06-27 | Yinpai Battery Technology Co., Ltd. | Battery defect detection method and system based on multi-modal sensors
CN120213952B (en) * | 2025-05-27 | 2025-08-08 | Yinpai Battery Technology Co., Ltd. | Battery defect detection method and system based on multi-modal sensors

Similar Documents

Publication | Title
CN110428465A (en) | Robotic arm grasping method, system and device based on vision and touch
CN115213896B (en) | Object grasping method, system, device and storage medium based on a robotic arm
CN109255813B (en) | Real-time pose detection method for hand-held objects, oriented to human-machine cooperation
CN107590836B (en) | Kinect-based method and system for dynamic identification and positioning of charging piles
CN111745640B (en) | Object detection method, object detection device, and robot system
CN106940704B (en) | Positioning method and device based on a grid map
Song et al. | CAD-based pose estimation design for random bin picking using a RGB-D camera
JP5455873B2 (en) | Method for determining the posture of an object in a scene
CN110211180A (en) | Autonomous grasping method for a robotic arm based on deep learning
WO2020034872A1 (en) | Target acquisition method and device, and computer-readable storage medium
WO2018049581A1 (en) | Method for simultaneous localization and mapping
JP2015090560A (en) | Image processing apparatus and image processing method
CN108416385A (en) | Simultaneous localization and mapping method based on an improved image-matching strategy
CN116249607A (en) | Method and device for robotically gripping three-dimensional objects
JP7171294B2 (en) | Information processing device, information processing method and program
WO2024021104A1 (en) | Robot arm control method, apparatus and system, electronic device and storage medium
CN113269723A (en) | Disordered part-grasping system combining three-dimensional visual positioning with a cooperating robotic arm
CN116021519A (en) | TOF-camera-based hand-eye calibration method and device for a picking robot
CN114638891A (en) | Target detection and positioning method and system based on image and point cloud fusion
CN117103245A (en) | Object grasping method and device, robot, readable storage medium and chip
CN116921932A (en) | Welding track recognition method, device, equipment and storage medium
JP6041710B2 (en) | Image recognition method
Kim et al. | Structured light camera based 3D visual perception and tracking application system with robot grasping task
JP2018146347A (en) | Image processing device, image processing method, and computer program
Zhang et al. | Object detection and grabbing based on machine vision for service robot

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-11-08

