CN118216332A - Kiwi bionic bud thinning claw and control method thereof - Google Patents

Kiwi bionic bud thinning claw and control method thereof

Info

Publication number
CN118216332A
CN118216332A
Authority
CN
China
Prior art keywords
claw
bud
main
finger
bionic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311407040.3A
Other languages
Chinese (zh)
Inventor
程玉柱
李赵春
余伟
林原灵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Forestry University
Original Assignee
Nanjing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Forestry University
Priority to CN202311407040.3A
Publication of CN118216332A
Legal status: Pending

Abstract

The invention provides a kiwifruit bionic bud thinning claw and a control method thereof. The claw comprises an inverted T-shaped frame, a clamping claw, left and right suction claws, an image detection system, and a control system. The control method adopts a deep fusion of YOLOv8 and Mask RCNN models to extract the main peduncle, the left and right side buds, and their angle and azimuth features, thereby detecting the kiwifruit peduncle and buds and controlling the clamping claw and the suction claws, with an effect equivalent to a person's "watch, clamp, and pick" manner of thinning by hand.

Description

Translated from Chinese
Kiwifruit bionic bud thinning claw and control method thereof

Technical Field

The invention relates to the field of kiwifruit bud thinning machinery, and in particular to a kiwifruit bionic bud thinning claw and a control method thereof.

Background Art

The quality of management at each stage of kiwifruit cultivation affects the economic value of the fruit, and bud thinning is the stage that most directly affects kiwifruit quality. Bud thinning removes excess flower buds during the bud stage to reduce the tree's nutrient consumption and ensure sufficient nutrients for the main buds, so that no energy is wasted and yield is maximized. It is a preliminary step: the kiwifruit flowering period is very short while the bud period is long, so growers usually thin buds rather than flowers. Only with good bud thinning can kiwifruit yield be increased.

At present, kiwifruit bud thinning is done manually: a worker observes the distribution of peduncles and buds on each branch, identifies unnecessary side buds and diseased buds, and twists or pulls them off by hand. Usually a main peduncle carries one main bud and two side buds, and only the main bud in the middle is retained. Kiwifruit buds are small, delicate, and easily damaged, which makes the operation difficult. At the same time, as labor costs rise and the workload of manual thinning remains large, bud thinning has become an increasingly prominent practical problem in the development of the kiwifruit industry.

Summary of the Invention

In order to overcome the above defects and shortcomings of the prior art, the present invention provides a kiwifruit bionic bud thinning claw and a control method thereof.

To solve the above technical problems, the present invention provides a kiwifruit bionic bud thinning claw comprising an inverted T-shaped frame, a clamping claw, a left suction claw, a right suction claw, an image detection system, and a motion control system, characterized in that: the clamping claw is located at the middle of the inverted T-shaped frame and connected to it, and the left and right suction claws are symmetrically located at the left and right of the inverted T-shaped frame and connected to it. The inverted T-shaped frame comprises a horizontal frame support, a vertical frame support, a rotating motor, and a fixed base; the rotating motor is arranged at the middle of the horizontal support and mounted on the fixed base; the vertical support is arranged on the upper side of the middle of the horizontal support, and the clamping claw is provided on one side of the vertical support. The left and right suction claws are provided at the two ends of the horizontal support. A main RGBD camera is arranged on one side of the middle of the horizontal support, and the left and right suction claws each carry an RGBD camera; the main RGBD camera and the two claw cameras are all connected to the image detection system. The image detection system collects images of the buds and peduncles, uses deep learning for detection, and calculates the pose of the peduncles and buds in real time; the control system separately controls the rotation of the inverted T-shaped frame, the clamping and locking of the main peduncle by the clamping claw, and the bud thinning operations of the left and right suction claws.

Preferably, the clamping claw comprises a clamping claw auxiliary motor, a main electric telescopic rod, a supporting base plate, spherical pairs, an intermediate support rod, an intermediate main electric push rod, side connecting rods, and arc-shaped finger surface splints. One end of the main electric telescopic rod is connected to the vertical frame support through the clamping claw auxiliary motor, and the other end is connected to the supporting base plate. The intermediate support rod and two spherical pairs are arranged on the supporting base plate, and each spherical pair is connected to one side connecting rod; the intermediate main electric push rod is arranged between the two side connecting rods and drives them to open or close around the spherical pairs. The end of each side connecting rod is connected to an arc-shaped finger surface splint; a flexible sensor on the middle surface of each splint measures the clamping pressure in real time, and pneumatic flexible locking bags at the two ends of each splint fix the main peduncle.

Preferably, the left and right suction claws have the same structure, each comprising a side motor, a main electric push rod, a hinge, a slave electric push rod, a joint rotation motor, a finger adjustment base plate, a finger adjustment base, finger adjustment push rods, and three fingers. One end of the main electric push rod is connected to the horizontal frame support through the side motor, and the other end is connected to the slave electric push rod through the hinge; the joint rotation motor is arranged in the hinge. A rotating motor is provided at the telescopic end of the slave electric push rod; the finger adjustment base plate is mounted on the rotating motor, and the finger adjustment base is arranged on the base plate. The finger adjustment base is provided with sliding grooves, and each finger adjustment push rod is slidably connected in a sliding groove through a limit pin. A finger is provided at one end of each finger adjustment push rod; the finger comprises a finger root and a fingertip, with one end of the finger root connected to the push rod and the other end connected to the fingertip.

Preferably, a bionic nail is provided on the fingertip, a bionic muscle is provided on one side of the bionic nail, and a bionic thread is provided on one side of the bionic muscle. The bionic thread comprises a positive electrode, a negative electrode, and a PDMS matrix; the positive and negative electrodes are arranged on the PDMS matrix and together form a self-/mutual-capacitance proximity sensor used to measure the distance between a side bud and the bionic thread and the contact force on the side bud surface.

Preferably, a control method for the kiwifruit bionic bud thinning claw comprises the following steps:

Step S1: the kiwifruit bionic bud thinning claw moves in front of the kiwifruit buds;

Step S2: angle visual servo control is performed on the inverted T-shaped frame so that the line direction of the main peduncle coincides with the vertical direction of the image;

Step S3: the clamping claw is controlled. The clamping claw uses simple translation control without fine adjustment: the main RGBD camera measures the distance between the main peduncle and the clamping claw, and the claw is moved directly so that the main peduncle sits right in front of it, allowing the claw to clamp the peduncle tightly and keep it from shaking;

Step S4: the left and right suction claws are controlled by the same method: first side-bud distance visual servo control, then side-bud azimuth visual servo control. The distance control adjusts the distance between each suction claw's end and its side bud, and the azimuth control brings the side bud target to the center of the suction claw. When a suction claw reaches the position of its side bud, it clamps the bud; if it has not reached a suitable position, control returns to the start of the distance servo stage and the distance and azimuth are adjusted again until the best position is reached;

Step S5: the clamping claw clamps the main peduncle, the left suction claw sucks and holds the left side bud, and the right suction claw sucks and holds the right side bud;

Step S6: the clamping claw and the left and right suction claws, acting together or individually, remove the left and right side buds.

Preferably, the angle visual servo control of the inverted T-shaped frame proceeds as follows: the image detection system first obtains the line position of the main peduncle and compares it with the vertical direction of the image to obtain the angle deviation e(k); the deviation signal is fed to a fractional-order PID controller to obtain the voltage control signal u(k), which drives the PWM pulse generator and turns the rotating motor through the required angle. The image detection system dynamically monitors the line position of the main peduncle so that the clamping claw tracks the peduncle and stays perpendicular to it, allowing the peduncle to be clamped accurately.
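The loop above (angle error e(k) → fractional-order PID → voltage u(k) → motor) can be sketched in a few lines. This is an illustrative toy, not the patent's implementation: the fractional integral and derivative are approximated with a truncated Grünwald–Letnikov series, and all gains, orders, and the one-line motor model are assumed values.

```python
# Hypothetical sketch of the angle visual-servo loop with a fractional-order
# PID approximated by truncated Grunwald-Letnikov (GL) series.
def gl_weights(alpha, n):
    """GL binomial weights w_j = (-1)^j * C(alpha, j), j = 0..n-1."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (alpha - j + 1) / j * -1)
    return w

class FractionalPID:
    def __init__(self, kp, ki, kd, lam, mu, dt, memory=64):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.dt = lam, mu, dt  # integral order lam, derivative order mu
        self.wi = gl_weights(-lam, memory)        # weights of the fractional integral
        self.wd = gl_weights(mu, memory)          # weights of the fractional derivative
        self.errs = []                            # most recent error first

    def update(self, e):
        self.errs.insert(0, e)
        self.errs = self.errs[: len(self.wi)]
        i_term = self.dt ** self.lam * sum(w * x for w, x in zip(self.wi, self.errs))
        d_term = self.dt ** -self.mu * sum(w * x for w, x in zip(self.wd, self.errs))
        return self.kp * e + self.ki * i_term + self.kd * d_term

# Servo loop: drive the detected peduncle angle toward the image vertical (0 rad).
pid = FractionalPID(kp=2.0, ki=0.5, kd=0.1, lam=0.9, mu=0.8, dt=0.05)
angle = 0.3                   # initial tilt of the main peduncle (rad), assumed
for _ in range(200):
    e = 0.0 - angle           # e(k): vertical reference minus measured angle
    u = pid.update(e)         # u(k): voltage command to the PWM generator
    angle += 0.025 * u        # toy motor: rotation proportional to the command
```

After 200 iterations the simulated tilt has been servoed to nearly zero; in the real device the "plant" update would be replaced by the motor plus the camera's re-detection of the peduncle line.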

Preferably, the side-bud distance visual servo control proceeds as follows: the distance error is computed from the reference distance R_S and the actual distance fed back by the main RGBD displacement estimator, and the distance error signal is input to the master controller transfer function G2(s). The displacement error is computed from the reference displacement and the actual displacement fed back by the slave RGBD displacement estimator, and the displacement error signal is input to the slave controller transfer function G1(s). The slave controller voltage signal U1(s) then acts through the slave electric push rod transfer function L1(s), and the final distance output Y2(s) is computed from U2(s), the load disturbance D(s), and the master electric push rod transfer function L2(s) as Y2(s) = U2(s)·L2(s) + D(s), where U2(s) is the inner-loop output voltage (equivalently, the outer-loop input voltage) of the master push rod; the s-domain product corresponds to convolution in the time domain.
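One plausible reading of this cascade is an outer distance loop whose controller output serves as the reference for an inner push-rod displacement loop. The toy simulation below reduces G1 and G2 to proportional gains and the rods L1, L2 to first-order lags, with a constant load disturbance D; every numeric value is an assumption for illustration, not a value from the patent.

```python
# Cascaded distance servo, reduced to a discrete-time toy simulation.
def lag_step(state, u, tau, dt):
    """One Euler step of a first-order lag 1/(tau*s + 1) driven by input u."""
    return state + dt * (u - state) / tau

dt = 0.01
g1, g2 = 5.0, 12.0        # G1(s), G2(s) reduced to proportional gains (assumed)
tau1, tau2 = 0.05, 0.2    # time constants standing in for L1(s), L2(s) (assumed)
gap0, load = 0.20, 0.002  # initial claw-to-bud gap (m) and disturbance D (assumed)
ref = 0.05                # R_S: reference stand-off distance (m)

slave = 0.0               # slave rod displacement (slave RGBD feedback)
approach = 0.0            # master-stage travel toward the bud
distance = gap0 + load

for _ in range(4000):
    e2 = distance - ref                            # outer error: gap still to close
    r1 = g2 * e2                                   # G2 output = inner-loop reference
    e1 = r1 - slave                                # inner displacement error
    slave = lag_step(slave, g1 * e1, tau1, dt)     # U1 acting through L1
    approach = lag_step(approach, slave, tau2, dt) # slave drives the master stage L2
    distance = gap0 + load - approach              # Y2: distance output with D
```

With purely proportional controllers the loop settles near, but not exactly at, the reference (a small steady-state offset remains); the patent's transfer-function controllers would remove this residual.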

Preferably, the side-bud azimuth visual servo control proceeds as follows: the set reference center coordinates are compared with the actual side-bud center coordinates obtained by the RGBD sensor, and the resulting error is sent to the vision controller for adjustment; the set reference angular displacement is then compared with the actual angular displacement measured by the joint angular displacement sensor, and the computed displacement error is input to the joint controller, which drives the joint rotation motor to adjust the azimuth.

Preferably, in the inverted T-shaped frame angle visual servo control, the left/right side-bud distance visual servo control, and the side-bud azimuth visual servo control, a target detection and information fusion method based on YOLOv8 and Mask RCNN is adopted to perform deep-learning target detection and feature extraction. The specific steps are: first collect a number of RGB images, augment them, and annotate them; then build a database comprising a training set, a validation set, and a test set. Select the target detection models and train the YOLOv8 and Mask RCNN models on the training set, evaluating them with performance metrics; if training does not meet the standard, rebuild the model and reselect parameters for the next round of training until the requirements are met. If training meets the standard, move to the validation set and evaluate the validation results; if the validation metrics are not met, return to modify the model and parameters, retrain, and validate again until the metrics are satisfied. Once the model meets the requirements, move to the test set. During testing, the YOLOv8 model is applied first to obtain target detection and instance segmentation maps of the main peduncle, main bud, left side bud, and right side bud, together with the main peduncle inclination angle and the center coordinates of the left and right side buds; the detected regions are then used as initial proposal regions, and the Mask RCNN model is applied to the image to obtain a second set of detection and segmentation maps, inclination angle, and side-bud center coordinates. Finally, the main peduncle inclination angles and side-bud center coordinates obtained by the YOLOv8 and Mask RCNN models are fused by the formula

θ_f = w_yolo·θ_yolo + w_mask·θ_mask,  tc_f = wc_yolo·tc_yolo + wc_mask·tc_mask,

where θ_yolo and θ_mask are the main peduncle inclination angles obtained by the YOLOv8 and Mask RCNN models respectively, w_yolo and w_mask are their fusion weights, and θ_f is the fused inclination angle; tc_yolo and tc_mask are the left/right side-bud center coordinates obtained by the YOLOv8 and Mask RCNN models respectively, wc_yolo and wc_mask are their fusion weights, and tc_f is the fused center coordinates. Finally, the fused parameters are transmitted to the visual servo controllers.
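The fusion step itself is just a pair of convex weighted averages, as the formula above shows. A minimal sketch, with illustrative weights (the patent does not fix their values):

```python
# Weighted fusion of the peduncle tilt angle and side-bud centre coordinates
# reported by the two detectors, per the formula in the text.
def fuse(theta_yolo, theta_mask, tc_yolo, tc_mask,
         w_yolo=0.6, w_mask=0.4, wc_yolo=0.5, wc_mask=0.5):
    # Weights are assumed to sum to 1 so the fused values stay in range.
    assert abs(w_yolo + w_mask - 1.0) < 1e-9
    assert abs(wc_yolo + wc_mask - 1.0) < 1e-9
    theta_f = w_yolo * theta_yolo + w_mask * theta_mask
    tc_f = tuple(wc_yolo * a + wc_mask * b for a, b in zip(tc_yolo, tc_mask))
    return theta_f, tc_f

# Example: tilt angles in degrees, bud centres as (x, y) pixel coordinates.
theta_f, tc_f = fuse(12.0, 14.0, (310.0, 205.0), (318.0, 201.0))
```

With the weights above, the fused tilt is 0.6·12 + 0.4·14 = 12.8 degrees and the fused centre is the midpoint (314, 203); in practice the weights would be tuned to reflect each model's measured reliability.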

Preferably, the Mask RCNN model operates in two stages: the first stage scans the entire image and generates proposal regions, and the second stage classifies the proposed regions and outputs bounding boxes and masks.
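The train/validate/test gating described earlier hinges on performance metrics for the detected boxes. As a small self-contained illustration, here is the standard intersection-over-union (IoU) score one would use to compare a predicted bud or peduncle box against its annotation; the box coordinates and the 0.5 acceptance threshold are example values, not from the patent.

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (100.0, 100.0, 200.0, 200.0)   # hypothetical predicted side-bud box
gt = (150.0, 100.0, 250.0, 200.0)     # hypothetical annotated box
score = iou(pred, gt)                 # 5000 overlap / 15000 union = 1/3
accepted = score >= 0.5               # example acceptance threshold
```

Metrics such as mean average precision, used to judge whether training "meets the standard", are built on exactly this per-box IoU test.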

Beneficial technical effects of the invention: in use, the kiwifruit bionic bud thinning claw uses the image detection and control systems to detect, locate, and identify the main peduncle, main bud, and side buds. The flexible clamping claw, through its arc-shaped finger surface splints, adaptively and dynamically senses the clamping force and drives the splints to grip the peduncle; the left and right suction claws simultaneously grasp the side buds on both sides, hold them with the fingertip threads, and remove them by rotating or pulling back. The effect is equivalent to human picking: observing while thinning, fixing the peduncle with one hand and removing buds with the other, while adaptively adjusting finger force so that the delicate buds are not damaged and the excess buds are removed. The control method provided by the invention comprises inverted T-shaped frame angle visual servo control, clamping claw control, and left/right suction claw control. The overall angle control of the frame makes the vertical frame support parallel to the main peduncle, so the clamping claw can simply translate and grip it; the suction claw control comprises distance and azimuth visual servo control, which respectively drive the telescopic motion of the dual electric push rods and the rotation of the joint motor. Visual feature detection uses the YOLOv8 and Mask RCNN deep fusion model to extract the main peduncle, the left and right side buds, and their angle and azimuth features; the controller uses a fractional-order PID model whose fractional orders are optimized with the Tyrannosaurus Rex hunting algorithm. Through deep-learning image detection and instance segmentation, the method detects the kiwifruit peduncle and buds and controls the clamping and suction claws, achieving an effect equivalent to a person's "watch, clamp, and pick" operation.

Brief Description of the Drawings

Figure 1 is a perspective schematic view of the kiwifruit bionic bud thinning claw;

Figure 2 is a left side view of the kiwifruit bionic bud thinning claw;

Figure 3 is a top view of the kiwifruit bionic bud thinning claw;

Figure 4 is an enlarged view of area A of Figure 2;

Figure 5 is an enlarged view of area B of Figure 2;

Figure 6 is an enlarged view of area C of Figure 3;

Figure 7 is a schematic diagram of the bionic thread structure;

Figure 8 is a flow chart of the control method of the kiwifruit bionic bud thinning claw;

Figure 9 is a flow chart of the inverted T-shaped frame angle visual servo control;

Figure 10 is a flow chart of the side-bud distance visual servo control;

Figure 11 is a flow chart of the side-bud azimuth visual servo control;

Figure 12 is a flow chart of target detection fusion;

Figure 13 is an RGB image;

Figure 14 shows image augmentation: Figure 14(a) is the image rotated 90 degrees counterclockwise, Figure 14(b) rotated 90 degrees clockwise, Figure 14(c) flipped vertically, and Figure 14(d) flipped horizontally;

Figure 15 is an image annotation;

Figure 16 shows the side-bud center coordinate values.

Detailed Description

The present invention is further described below in conjunction with specific embodiments. The following embodiments are only intended to illustrate the technical solutions of the present invention more clearly and do not limit its scope of protection.

The present invention is further described below in conjunction with the accompanying drawings and embodiments.

As shown in Figures 1 to 7, a kiwifruit bionic bud thinning claw comprises an inverted T-shaped frame, a clamping claw, a left suction claw, a right suction claw, an image detection system, and a control system. The clamping claw is located at the middle of the inverted T-shaped frame and connected to it, and the left and right suction claws are symmetrically located at its left and right and connected to it. The inverted T-shaped frame comprises a horizontal frame support 3, a vertical frame support 4, a rotating motor 2, and a fixed base 1; the rotating motor 2 is arranged at the middle of the horizontal support 3 and mounted on the fixed base 1; the vertical support 4 is arranged on the upper side of the middle of the horizontal support 3, and the clamping claw is provided on one side of the vertical support 4. The left and right suction claws are provided at the two ends of the horizontal support 3. The image detection system includes a main RGBD camera 5 and two RGBD cameras 16: the main RGBD camera 5 is arranged on one side of the middle of the horizontal support, and the left and right suction claws each carry an RGBD camera 16. All three cameras are connected to the image detection system, which collects images of the buds and peduncles, uses deep learning for detection, and calculates the pose of the peduncles and buds in real time; the control system separately controls the rotation of the inverted T-shaped frame, the clamping and locking of the main peduncle by the clamping claw, and the bud thinning operations of the left and right suction claws.

The clamping claw comprises a clamping claw auxiliary motor 6, a main electric telescopic rod 7, a supporting base plate 8, spherical pairs 9, an intermediate support rod 10, an intermediate main electric push rod 11, side connecting rods 12, and arc-shaped finger surface splints 13. One end of the main electric telescopic rod 7 is connected to the vertical support 4 of the inverted T-shaped frame through the clamping claw auxiliary motor 6, and the other end is connected to the supporting base plate 8. The intermediate support rod 10 and two spherical pairs 9 are arranged on the supporting base plate 8, and each spherical pair 9 is connected to one side connecting rod; the intermediate main electric push rod 11 is arranged between the two side connecting rods and drives them to open or close around the spherical pairs. The end of each side connecting rod is connected to an arc-shaped finger surface splint 13. A flexible sensor 32 on the middle surface of each splint 13 measures the pressure in real time and, by feedback, drives the splints to clamp the main peduncle; pneumatic flexible locking bags 31 at the two ends of each splint fix the main peduncle.

The left and right suction claws have the same structure, each comprising a side motor 24, a main electric push rod 23, a hinge 22, a slave electric push rod 20, a joint rotation motor 21, a finger adjustment base plate 19, a finger adjustment base 18, finger adjustment push rods 17, and three fingers. One end of the main electric push rod 23 is connected to the horizontal frame support through the side motor 24, and the other end is connected to the slave electric push rod 20 through the hinge 22; the joint rotation motor 21 is arranged in the hinge. A rotating motor 25 is provided at the telescopic end of the slave electric push rod 20; the finger adjustment base plate 19 is mounted on the rotating motor 25, and the finger adjustment base 18 is arranged on the base plate 19. The base 18 is provided with sliding grooves 27, and each finger adjustment push rod 17 is slidably connected in a sliding groove 27 through a limit pin 26. A finger is provided at one end of each push rod 17; the finger comprises a finger root 15 and a fingertip 14, with one end of the finger root 15 connected to the push rod 17 and the other end connected to the fingertip 14. A bionic nail 28 is provided on the fingertip; a bionic muscle 29 is provided on one side of the bionic nail 28, and a bionic thread 30, which has an electrostatic adsorption function for holding the side buds, is provided on one side of the bionic muscle 29. The bionic thread 30 comprises a positive electrode 34, a negative electrode 35, and a PDMS matrix 33; the electrodes are arranged on the PDMS matrix 33 and together form a self-/mutual-capacitance proximity sensor for measuring the distance between a side bud and the bionic thread and the contact force on the side bud surface.

The rotating motor 2, clamping claw auxiliary motor 6, main electric telescopic rod 7, intermediate main electric push rod 11, finger adjustment push rods 17, slave electric push rod 20, joint rotation motor 21, main electric push rod 23, side motor 24, rotating motor 25, pneumatic flexible locking bags 31, flexible sensors 32, positive electrode 34, and negative electrode 35 are all connected to the motion control system.

The kiwifruit bionic bud-thinning claw moves as a whole to a position directly in front of the main peduncle. The main RGBD camera 5 and the RGBD cameras 16 detect and identify the main peduncle, the left and right peduncles, and the left, right and main buds in real time, and the poses of the flexible clamping claw and the left and right adsorption claws are adjusted dynamically. The arc-shaped finger splints 13 move to the main peduncle and clamp it; the flexible sensor 32 measures the surface pressure on the splints, and the image detection and control systems use this pressure feedback to regulate the optimal distance between the two splints 13 so that the main peduncle is not damaged, after which the pneumatic flexible locking bag 31 locks the main peduncle so that it cannot slide. The left and right adsorption claws adjust the relative position of each finger's adjustment push rod 17 so that the bionic thread 30 approaches and contacts the side bud; the motion control system measures the distance between the side bud and the bionic thread as well as the surface contact force, keeping the fingertip pressure on the side bud controllable. The fingers of the left and right adsorption claws clamp the left and right side buds respectively, and an electric field applied across the positive electrode 34 and negative electrode 35 of the bionic thread 30 adsorbs the side buds, so that the buds on both sides neither fall off nor are crushed. The adsorption claws are then moved slightly to separate the side buds a short distance from the main peduncle, which facilitates the subsequent bud thinning without disturbing the main bud.
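The pressure-feedback clamping step described above can be sketched as a simple closed loop. All numbers below (the safe pressure band, jaw gap, step size, and the toy pressure model) are illustrative assumptions, not values from this disclosure:

```python
def clamp_peduncle(read_pressure, move_jaws, p_min=0.2, p_max=0.5,
                   step=0.5, max_iter=20):
    """Close the two arc-shaped finger splints until the flexible-sensor
    pressure falls inside [p_min, p_max] (arbitrary units)."""
    gap = 10.0  # current jaw gap, assumed starting value (mm)
    for _ in range(max_iter):
        p = read_pressure(gap)
        if p < p_min:          # too loose: the peduncle could slip
            gap -= step
        elif p > p_max:        # too tight: risk of damaging the peduncle
            gap += step
        else:                  # within the safe band: lock with the pneumatic bag
            return gap, p
        move_jaws(gap)
    return gap, read_pressure(gap)

# Toy pressure model: pressure rises linearly as the gap closes below 8 mm.
model = lambda g: max(0.0, (8.0 - g) * 0.1)
gap, p = clamp_peduncle(model, lambda g: None)
```

With this toy model the loop stops at a gap where the measured pressure first enters the safe band, mirroring the feedback adjustment of the two splints described above.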

指尖的仿生螺纹30具有侧蕾距离、触摸力感知和静电吸附功能,阳电极34与阴电极35构成自-互电容互补式接近传感器。所述阳电极34与阴电极35工作于“自电容”独立方式时,可测量侧蕾与仿生螺纹的距离,用来判别指尖仿生螺纹与两侧的花蕾是否接触;当指尖仿生螺纹与侧蕾表面接触后,所述阳电极34与阴电极35工作于“互电容”方式时,可测量侧蕾表面的触摸力大小,保证吸附爪的手指既能夹住又不压碎侧蕾。所述阳电极34与阴电极35形成螺旋状并半嵌入到PDMS基质33中,当仿生螺纹工作疏蕾采摘方式时,电极构成静电吸附仿生螺纹,螺纹表面与侧蕾接触面处于零度,工作表面吸附力为The bionic thread 30 of the fingertip has the functions of side bud distance, touch force perception and electrostatic adsorption, and the positive electrode 34 and the negative electrode 35 constitute a self-mutual capacitance complementary proximity sensor. When the positive electrode 34 and the negative electrode 35 work in the "self-capacitance" independent mode, the distance between the side buds and the bionic thread can be measured to determine whether the bionic thread of the fingertip is in contact with the buds on both sides; when the bionic thread of the fingertip contacts the surface of the side buds, the positive electrode 34 and the negative electrode 35 work in the "mutual capacitance" mode, and the touch force on the surface of the side buds can be measured to ensure that the finger of the adsorption claw can clamp the side buds without crushing them. The positive electrode 34 and the negative electrode 35 form a spiral shape and are half-embedded in the PDMS matrix 33. When the bionic thread works in the bud thinning and picking mode, the electrodes constitute an electrostatic adsorption bionic thread, the contact surface between the thread surface and the side buds is at zero degrees, and the adsorption force on the working surface is

F = τS + Rw

where τ is the friction (shear) stress, S is the effective adsorption area, R is the adsorption surface energy, and w is the effective adsorption width. Both τ and R depend on the applied voltage V, whose magnitude is set by the motion control system; controlling V therefore controls F, realizing the electrostatic adsorption function of the fingertip bionic thread. This completes the suction grip on the side bud and ensures that it neither falls off nor is crushed when the adsorption claw subsequently rotates or moves backward.
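A minimal sketch of this voltage-to-force relation. The text only states that τ and R depend on V; a quadratic dependence (typical of electrostatic adhesion) is assumed here as a placeholder, and all coefficients are illustrative:

```python
def adsorption_force(V, S=1e-4, w=5e-3, k_tau=2.0, k_R=0.05):
    """F = tau*S + R*w from the text above.
    tau (friction shear stress, Pa) and R (adsorption surface energy, J/m^2)
    are modelled here as proportional to V^2; this scaling is an assumption,
    not stated in the disclosure."""
    tau = k_tau * V**2   # assumed voltage model for the shear stress
    R = k_R * V**2       # assumed voltage model for the surface energy
    return tau * S + R * w

# Under this quadratic model, doubling the voltage quadruples the holding force.
F1, F2 = adsorption_force(100.0), adsorption_force(200.0)
```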

疏蕾有两种方式,即旋转方式和后移方式。所述旋转方式以右侧疏蕾为例,右吸附爪的手指吸紧右侧蕾,运动小段距离后,控制系统驱动旋转电机旋转,从右花梗上分离右侧蕾,从而实现右侧疏蕾目的,等效于人手指抓住花蕾并旋转手腕将其摘除;所述后移方式以左侧疏蕾为例,左吸附爪的手指吸附抓住左侧蕾,运动小段距离后,控制系统驱动吸附爪的从电动推杆收缩,从左花梗上拽离左侧蕾,从而实现左侧疏蕾目的,等效于人手指抓紧花蕾并移动手指将其拽开。There are two ways to thin out the buds, namely the rotation method and the backward movement method. The rotation method takes the thinning of the right buds as an example. The fingers of the right suction claw suck the right buds tightly. After moving a short distance, the control system drives the rotary motor to rotate and separate the right buds from the right pedicel, thereby achieving the purpose of thinning the buds on the right side, which is equivalent to human fingers grabbing the buds and rotating the wrist to remove them; the backward movement method takes the thinning of the left buds as an example. The fingers of the left suction claw suck and grab the left buds. After moving a short distance, the control system drives the suction claw to retract the electric push rod and pull the left buds away from the left pedicel, thereby achieving the purpose of thinning the buds on the left side, which is equivalent to human fingers grabbing the buds and moving the fingers to pull them away.

猕猴桃仿生疏蕾爪的运动过程主要由图像检测系统与控制系统控制,而图像检测是实现准确控制的前提,通过基于深度学习的花梗、花蕾图像检测进而实现电机和推杆精准控制。The movement process of the kiwifruit bionic bud-thinning claw is mainly controlled by the image detection system and the control system. Image detection is the prerequisite for accurate control. The motor and push rod are precisely controlled through the image detection of peduncles and buds based on deep learning.

如图8所示,仿生疏蕾爪仿生疏蕾爪被机器人本体控制到猕猴桃花蕾前,进行疏蕾精确控制。首先,对倒T型框架进行角度视觉伺服控制,使得主花梗直线方向与图像的垂直方向一致,具体控制流程如图9所示;然后,分别对左吸附爪、夹持爪、右吸附爪进行控制,夹持爪采用简单的平移控制,无需过多的精细控制,只要主RGBD相机检测主花梗与夹持爪的距离,就可以直接移动夹持爪,由图像检测保证主花梗在夹持爪的正前方,便于夹持爪夹紧主花梗,使得主花梗不晃动;左右吸附爪对称设置且控制方法相同,左右吸附爪控制包括两部分,即距离视觉伺服控制、侧蕾方位视觉伺服控制。距离视觉伺服控制如图10所示,主要控制主从两支电动推杆,实现末端仿生手指指尖与侧蕾的距离调整。侧蕾方位视觉伺服控制如图11所示,主要控制两个关节旋转电机,实现侧蕾目标位于末端仿生手指的中心位置,便于吸附抓取疏蕾。当末端手指仿生螺纹移动到侧蕾的位置时,就可以收紧三指的指尖仿生螺纹并夹紧左右侧蕾。如果末端手指仿生螺纹未移动到合适的位置,程序返回到距离视觉伺服控制前,再重复调整与侧蕾的距离及方位,直到符合最佳的位置为止;最后,当夹持爪夹紧主花梗,左右吸附爪吸紧左右侧蕾后,三爪协同控制将左右侧蕾疏除,完成仿生疏蕾爪的功能。需要说明的是,左右吸附爪可单独工作,也可以一起工作。本发明的视觉图像检测与分割采用深度学习方法,模型为YOLOv8和Mask RCNN,其目标检测与信息融合如图12所示。As shown in Figure 8, the bionic bud thinning claw is controlled by the robot body to the front of the kiwi buds for precise bud thinning control. First, the angle visual servo control of the inverted T-shaped frame is performed so that the straight direction of the main peduncle is consistent with the vertical direction of the image. The specific control process is shown in Figure 9; then, the left suction claw, the clamping claw, and the right suction claw are controlled respectively. The clamping claw adopts simple translation control without too much fine control. As long as the main RGBD camera detects the distance between the main peduncle and the clamping claw, the clamping claw can be moved directly. The image detection ensures that the main peduncle is in front of the clamping claw, which is convenient for the clamping claw to clamp the main peduncle so that the main peduncle does not shake; the left and right suction claws are symmetrically set and the control method is the same. The control of the left and right suction claws includes two parts, namely, distance visual servo control and side bud orientation visual servo control. The distance visual servo control is shown in Figure 10, which mainly controls the master and slave electric push rods to achieve the distance adjustment between the end bionic finger tip and the side bud. 
The visual servo control of the side-bud orientation is shown in Figure 11; it mainly drives the two joint rotation motors so that the side-bud target lies at the centre of the end bionic fingers, which facilitates adsorbing and grasping the bud to be thinned. When the fingertip bionic threads reach the side bud, the three fingertips are tightened to clamp the left or right side bud. If the fingertip threads have not reached a suitable position, the program returns to the point before the distance visual servo control and repeatedly re-adjusts the distance and orientation to the side bud until the optimal position is reached. Finally, once the clamping claw grips the main peduncle and the left and right adsorption claws hold the left and right side buds, the three claws are controlled cooperatively to remove the side buds, completing the function of the bionic bud-thinning claw. It should be noted that the left and right adsorption claws can work individually or together. The visual image detection and segmentation of the invention adopt deep-learning methods, namely YOLOv8 and Mask RCNN; their target detection and information fusion are shown in Figure 12.

The flow chart of the inverted T-frame angle visual servo control is shown in Figure 9. For the angle control of the kiwifruit bionic bud-thinning claw, the image detection system first obtains the line position of the main peduncle and subtracts it from the vertical direction of the image to obtain the angle deviation e(k) (θ in Figure 9). The deviation signal is fed to a fractional-order PID controller, whose control signal u(k) drives the PWM pulse generator and rotates the rotating motor by a certain angle. The image detection system monitors the line position of the main peduncle dynamically, so that the flexible clamping claw tracks the main peduncle and remains perpendicular to it, allowing the main peduncle to be clamped accurately.

The angle controller is a fractional-order PID controller with the expression GFPID(s) = Kp + KI·s^(−λ) + KD·s^μ, where Kp, KI and KD are the proportional, integral and derivative gains, λ and μ are the exponents of the integral and derivative terms respectively, and s is the complex frequency.
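The fractional-order terms can be approximated numerically with the Grünwald–Letnikov expansion. The sketch below is an illustrative discretisation of GFPID(s) = Kp + KI·s^(−λ) + KD·s^μ; the gains and orders are placeholders, not the optimized values discussed later:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grünwald-Letnikov binomial coefficients for fractional order alpha."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_pid(err, dt, Kp=1.0, Ki=0.5, Kd=0.1, lam=0.9, mu=0.8):
    """u(k) = Kp*e(k) + Ki*D^(-lam) e + Kd*D^(mu) e (GL discretisation).
    All gains and fractional orders here are illustrative placeholders."""
    n = len(err)
    ci = gl_coeffs(-lam, n)   # weights of the fractional integral
    cd = gl_coeffs(mu, n)     # weights of the fractional derivative
    u = np.zeros(n)
    for k in range(n):
        hist = err[k::-1]     # e(k), e(k-1), ..., e(0)
        I = dt**lam * np.dot(ci[:k + 1], hist)
        D = dt**(-mu) * np.dot(cd[:k + 1], hist)
        u[k] = Kp * err[k] + Ki * I + Kd * D
    return u

u = fractional_pid(np.ones(50), dt=0.01)
```

For lam = 1 and Kd = 0 the integral term reduces to the ordinary rectangular sum, which gives a quick sanity check of the discretisation.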

Let the transfer function of the image detection of the main-peduncle line position be the first-order model G(s) = Ks/(Ts·s + 1), with image detection gain Ks and time constant Ts. To obtain the optimal fractional orders, the Tyrannosaurus Rex hunting algorithm can be used. First, define the objective function:

Jobj = (1 − β)·(wo·Mp + Ess) + β·(Tset − Tr) (1)

where wo is the weight coefficient, Mp is the overshoot, Ess is the steady-state error, Tset is the settling time, β is the weight factor, and Tr is the rise time.

然后根据公式(2)生成N个初始猎物地点解,Then, according to formula (2), N initial prey location solutions are generated:

Xi=rand(np,di)*(ub-lb)+lb (2)Xi =rand(np,di)*(ub-lb)+lb (2)

式中Xi=[x1,x2,x3...xn]是猎物的位置,np是种群数量,n是维度,di是搜索空间的维数,ub、lb分别为上限值和下限值,rand是产生随机数函数。WhereXi = [x1 ,x2 ,x3 ...xn ] is the position of the prey, np is the population size, n is the dimension, di is the dimension of the search space, ub and lb are the upper and lower limits respectively, and rand is a function for generating random numbers.

When a Tyrannosaurus Rex sees the prey closest to it, it attempts to hunt; the prey may defend itself or flee. A hunt involves young dinosaurs chasing and catching prey, so the Tyrannosaurus hunts randomly, and the new hunting position is given by:

式中Er是到达分散猎物的距离估计,rand()为随机函数,Random为随机数。即当霸王龙开始捕猎时,猎物开始分散,并通过更新猎物位置来捕食猎物:Where Er is the estimated distance to the dispersed prey, rand() is a random function, and Random is a random number. That is, when the Tyrannosaurus Rex starts hunting, the prey starts to disperse, and it preys by updating the prey position:

xnew=x+rand()*sr*(tpos*tr-targ*pr) (4)xnew =x+rand()*sr*(tpos*tr-targ*pr) (4)

where sr is the hunting success rate, lying in [0.1, 1]; tpos is the position of the Tyrannosaurus Rex; x represents the original hunting position; targ is the closest prey position relative to the Tyrannosaurus; tr is the running speed of the Tyrannosaurus Rex, taken in [0.067, 0.3]; and pr is the running speed of the prey, lying in [0, 1] (the prey should be slower than the Tyrannosaurus). If the success rate is 0, the prey has escaped, the hunt has failed, and the prey position is updated accordingly. The selection between the current and previous positions of the target prey depends on their fitness: if the hunt fails because the prey escapes or defends itself, the prey position is reset to zero. This is implemented by comparing the fitness functions:

X ← Xnew if f(Xnew) < f(X), otherwise X is retained (5), where f(X) is the fitness function of the initial random prey position and f(Xnew) is the fitness function of the updated prey position. Iterating the objective function, the hunting positions and the minimum distances yields the optimal orders λ and μ.
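The loop of formulas (2), (4) and (5) can be sketched as below, with a toy quadratic standing in for Jobj over candidate orders (λ, μ). The dispersal step of formula (3) is not reproduced in the text, so a uniform re-scatter inside the bounds is assumed here as a stand-in; all hyperparameters are placeholders:

```python
import numpy as np

def trex_optimize(f, lb, ub, np_pop=20, iters=100, seed=0):
    """Sketch of the Tyrannosaurus-hunting search described above.
    Greedy selection (formula (5)) keeps each candidate's best position."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.random((np_pop, dim)) * (ub - lb) + lb        # formula (2)
    fit = np.array([f(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(np_pop):
            sr = rng.uniform(0.1, 1.0)                    # hunting success rate
            tr = rng.uniform(0.067, 0.3)                  # T-rex running speed
            pr = rng.uniform(0.0, 1.0)                    # prey running speed
            # pursuit step in the spirit of formula (4)
            x_new = np.clip(X[i] + rng.random(dim) * sr * (best * tr - X[i] * pr),
                            lb, ub)
            fn = f(x_new)
            if fn < fit[i]:                               # successful hunt
                X[i], fit[i] = x_new, fn
            else:                                         # prey escaped: re-scatter (assumed)
                x_esc = rng.random(dim) * (ub - lb) + lb
                f_esc = f(x_esc)
                if f_esc < fit[i]:
                    X[i], fit[i] = x_esc, f_esc
        best = X[np.argmin(fit)].copy()
    return best, fit.min()

# Toy objective standing in for J_obj: minimum at orders (0.9, 0.8).
J = lambda x: (x[0] - 0.9) ** 2 + (x[1] - 0.8) ** 2
orders, val = trex_optimize(J, np.array([0.0, 0.0]), np.array([2.0, 2.0]))
```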

左右侧蕾视觉距离伺服控制流程图如图10所示,仿生疏蕾爪的每只吸附爪可等效为两连杆机械臂,包括主电动推杆和从电动推杆,末端吸附爪位于从电动推杆上。吸附爪的距离控制如图10所示。从电动推杆对末端吸附爪进行精细微距离控制,主电动推杆对吸附爪主体进行距离粗控制,两者配合完成吸附爪与左右侧蕾的距离控制。The flow chart of the visual distance servo control of the left and right side buds is shown in Figure 10. Each suction claw of the bionic bud-sparse claw can be equivalent to a two-link mechanical arm, including a master electric push rod and a slave electric push rod, and the end suction claw is located on the slave electric push rod. The distance control of the suction claw is shown in Figure 10. The slave electric push rod performs fine micro-distance control on the end suction claw, and the master electric push rod performs coarse distance control on the suction claw body. The two cooperate to complete the distance control between the suction claw and the left and right side buds.

G1(s)和G2(s)分别表示与从控制器和主控制器相关的传递函数。此外,L1(s)和L2(s)分别是与内环和外环的对象相关的传递函数。系统的最终距离Y2(s)受到负载扰动D(s),计算公式如下:G1 (s) and G2 (s) represent the transfer functions associated with the slave and master controllers, respectively. In addition, L1 (s) and L2 (s) are the transfer functions associated with the objects of the inner and outer loops, respectively. The final distance Y2 (s) of the system is subject to the load disturbance D(s) and is calculated as follows:

Y2(s)=U2(s)*L2(s)+D(s) (6)Y2 (s)=U2 (s)*L2 (s)+D(s) (6)

where U2(s) is the inner-loop output and also the outer-loop input: U2(s) drives Y2(s) to track the reference R(s), and U1(s) is the slave controller's voltage control signal. Similarly, the inner-loop output Y1(s) is obtained as:

Y1(s)=U2(s)=U1(s)*L1(s) (7)Y1 (s)=U2 (s)=U1 (s)*L1 (s) (7)

控制系统利用两个分数阶控制器构建级联的系统。FPI控制器与TDμ控制器级联,其中FPI控制器构成主控制器G2(s)和TDμ控制器从机G1(s)。The control system uses two fractional order controllers to construct a cascade system. The FPI controller is cascaded with the TDμ controller, where the FPI controller constitutes the master controller G2 (s) and the TDμ controller is the slave G1 (s).

Therefore, for this standard cascade structure, the closed-loop transfer function of the system is:

Y2(s)/R(s) = G2(s)G1(s)L1(s)L2(s) / (1 + G1(s)L1(s) + G2(s)G1(s)L1(s)L2(s)) (8)

两种分数阶级联控制器的阶数也用霸王龙捕猎算法优化计算得到。The orders of the two fractional cascade controllers are also optimized and calculated using the Tyrannosaurus Rex hunting algorithm.
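A minimal discrete sketch of the master/slave cascade: first-order plants stand in for L1(s) and L2(s), and plain proportional controllers stand in for the fractional FPI/TDμ pair, so a steady-state offset remains that the integral action of the real controllers would remove. All gains and time constants are illustrative assumptions:

```python
def simulate_cascade(r=1.0, steps=400, dt=0.01):
    """Outer (master) loop commands the inner (slave) loop; the inner plant
    is fast (T1) and the outer plant slow (T2), as in coarse/fine distance
    control of the push rods. P-only controllers are used here as stand-ins."""
    y1 = y2 = 0.0          # inner and outer plant outputs
    T1, T2 = 0.05, 0.5     # assumed plant time constants
    K1, K2 = 4.0, 5.0      # assumed slave and master gains
    for _ in range(steps):
        u2 = K2 * (r - y2)               # master controller = inner setpoint
        u1 = K1 * (u2 - y1)              # slave controller output
        y1 += (u1 - y1) * dt / T1        # inner plant L1 = 1/(T1*s + 1)
        y2 += (y1 - y2) * dt / T2        # outer plant L2 = 1/(T2*s + 1)
    return y2

y = simulate_cascade()
```

With these P-only stand-ins the output settles near 0.8 for r = 1 (a 20% offset); this is exactly the residual error that the integral/fractional action of the FPI master controller is there to eliminate.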

The flow chart of the visual servo control of the adsorption-claw side-bud orientation is shown in Figure 11. When the end effector of the adsorption claw approaches the side-bud target, the orientation of the claw must be controlled precisely: orientation detection directly affects the thinning result, and the accuracy of side-bud orientation detection and claw control is the prerequisite for successfully removing the side bud. The orientation control of the adsorption claw comprises a visual controller and a joint controller. The visual controller compares the side-bud centre coordinates acquired by the RGBD sensor with the set reference centre coordinates, and the resulting error is fed back for correction. The joint controller adopts a PD control strategy to drive the joint rotation motors, and a Lyapunov function is used to establish its stability. Together they realize a "hand-eye linkage": the visual controller observes the orientation error and tells the joint controller how much to adjust.

视觉控制器的任务就是利用末端执行器上的RGBD相机跟踪侧蕾目标的中心坐标,要建立图像特征变化和关节角度变化之间的联系。图像表达的信息首先被处理,然后根据理想针孔照相机模型被转换为相对于照相机的位置,并使用物体和照相机之间的关系被进一步转换为相对于基础帧的坐标。这样,从对象点的坐标(X,Y,Z)表示到相应的图像点(u,v)写为:The task of the visual controller is to use the RGBD camera on the end effector to track the center coordinates of the side bud target, and to establish the connection between the image feature changes and the joint angle changes. The information expressed by the image is first processed, and then converted to the position relative to the camera according to the ideal pinhole camera model, and further converted to the coordinates relative to the base frame using the relationship between the object and the camera. In this way, the coordinates (X, Y, Z) of the object point are expressed as the corresponding image point (u, v) as follows:

where the RGBD camera intrinsic matrix K = [fx ρ u0; 0 fy v0; 0 0 1] represents the relationship between the camera frame and the image frame and can be measured or computed from a given field of view; fx and fy are the effective focal lengths of the camera along the xc and yc axes, in pixels; ρ is the camera skew factor; and (u0, v0) is the offset between the camera centre and the image centre. The rotation matrix R represents the relationship between the object frame and the camera frame and can be determined from the equivalent angle-axis representation constructed from the polar and azimuth angles, and t is the translational displacement from the camera to the object.

The design of the visual controller and the choice of its gains require the image Jacobian matrix, which relates feature velocities in image coordinates to camera velocities. Let vc and ωc be the linear and angular velocities of the camera relative to the image frame. A point P(X, Y, Z) in the camera frame and its projection p(u, v) in image space are then related through the image Jacobian matrix J:

Feedback control of the adsorption claw requires the error in the image frame. If the desired image position of the side-bud centre is defined as (ud, vd) = (u0, v0) and the desired centre is taken as O(0, 0), one can set u0 = 0 and v0 = 0, so that the RGBD camera on the adsorption claw continuously detects and tracks the side bud and the claw stays aligned with the side-bud centre, which facilitates grasping. Since the angle and distance are driven by the other controllers, this visual controller considers only the side-bud orientation error, defined as:

(e1,e2)=(u-u0,v-v0) (13)(e1 , e2 )=(uu0 , vv0 ) (13)

where e1 is the horizontal error term, e2 is the vertical error term, u0 is the horizontal coordinate of the image centre (set to 0) and v0 is the vertical coordinate of the image centre (set to 0). To align the side-bud centre with the centre of the image plane, the visual controller adopts a PD control strategy:

where (vx, vy) is the translational velocity relative to the current camera frame and kpi, kdi (i = 1, 2) are positive gains. Differentiating the error and combining the image Jacobian J with the controller equation shows that the error dynamics converge to 0.

where f is the effective focal length of the camera and Zc is the depth distance.
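The PD law of Eqs. (13)-(14) on the pixel error can be sketched as follows. The gains, the toy camera model that maps commanded velocity to pixel motion, and the initial pixel coordinates are all illustrative assumptions:

```python
def ibvs_pd_step(u, v, u_prev, v_prev, dt, kp=0.8, kd=0.1):
    """PD visual-servo law on the pixel error (e1, e2) = (u - u0, v - v0)
    with the image centre as target (u0 = v0 = 0).
    Returns the commanded translational velocity (vx, vy)."""
    e1, e2 = u, v                       # u0 = v0 = 0
    de1 = (u - u_prev) / dt             # finite-difference error derivative
    de2 = (v - v_prev) / dt
    vx = -(kp * e1 + kd * de1)
    vy = -(kp * e2 + kd * de2)
    return vx, vy

# Simulated side bud drifting toward the image centre under the PD command.
u = v = 40.0                            # initial pixel error (assumed)
u_prev, v_prev, dt = 42.0, 42.0, 0.1
for _ in range(100):
    vx, vy = ibvs_pd_step(u, v, u_prev, v_prev, dt)
    u_prev, v_prev = u, v
    u += vx * dt          # toy model: pixel error shrinks with commanded velocity
    v += vy * dt
```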

Neglecting friction and gravity effects, the equation of the adsorption claw is

D(q)q̈ + C(q, q̇)q̇ = τ

where D is the positive-definite inertia matrix, C is the centrifugal and Coriolis term, q̈ is the joint acceleration and q̇ is the joint velocity. The joint controller again uses a PD control law

τ = Kp·e − Kd·q̇

where the tracking error is e = qd − q; under set-point control qd is constant, so q̇d = 0 and ė = −q̇, and the adsorption-claw equation becomes

D q̈ + C q̇ = Kp·e − Kd·q̇

where Kd is the derivative-term coefficient of the controller and Kp is the proportional-term coefficient.

Take the Lyapunov function as

V = (1/2)q̇ᵀD q̇ + (1/2)eᵀKp e

Since D and Kp are positive definite, V is globally positive definite, and

V̇ = q̇ᵀD q̈ + (1/2)q̇ᵀḊ q̇ − q̇ᵀKp e = −q̇ᵀKd q̇ + (1/2)q̇ᵀ(Ḋ − 2C)q̇

where Ḋ − 2C is skew-symmetric, so

V̇ = −q̇ᵀKd q̇ ≤ 0

Because V̇ is negative semidefinite and Kd is positive definite, the controlled adsorption claw is globally asymptotically stable.
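A single-joint numerical sketch of this PD set-point law and its Lyapunov function (D scalar, C = 0 for one joint); all gains, the inertia value and the time step are illustrative assumptions:

```python
def joint_pd(qd, q0, steps=5000, dt=0.001, Kp=25.0, Kd=10.0, D=1.0):
    """Single-joint sketch of the set-point law tau = Kp*e - Kd*qdot applied
    to D*qddot = tau. Also records the Lyapunov function
    V = 0.5*D*qdot^2 + 0.5*Kp*e^2, which should decay along the trajectory."""
    q, qdot = q0, 0.0
    V_hist = []
    for _ in range(steps):
        e = qd - q
        V_hist.append(0.5 * D * qdot**2 + 0.5 * Kp * e**2)
        tau = Kp * e - Kd * qdot
        qdot += (tau / D) * dt        # semi-implicit Euler integration
        q += qdot * dt
    return q, V_hist

q_final, V = joint_pd(qd=1.0, q0=0.0)
```

With these gains the joint is critically damped and the recorded Lyapunov energy shrinks toward zero, matching the stability argument above.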

The visual target detection flow chart is shown in Figure 12. The inverted T-frame angle visual servo control, the side-bud distance visual servo control and the side-bud orientation visual servo control all require feature extraction and target detection from the image, covering the main peduncle, the main bud, the left and right side buds, the offset angle between the main-peduncle line and the vertical direction of the image, the RGBD distance values, and the centre coordinates of the left and right side buds. A target detection and information fusion method based on YOLOv8 and Mask RCNN is therefore adopted to perform deep-learning target detection and feature extraction and to provide visual feedback signals for the control algorithms.

First, a number of RGB images are collected (Figure 13) and augmented (Figure 14; including but not limited to 90° left rotation, 90° right rotation, vertical flipping and horizontal flipping). The images are then annotated to obtain the main peduncle, the main bud, the left and right side buds (Figure 15) and the side-bud centre coordinates. Figure 16 illustrates how the centre coordinates of the left and right side buds are obtained: with the image centre O(0, 0) as origin, four coordinate values such as t1_left, t1_right, t2_left and t2_right can be read off. Taking the Y axis as the principal direction yields the offset angle between the main-peduncle line and the vertical direction of the image, e.g. θ1 and θ2. Next, training, validation and test databases are constructed. The selected YOLOv8 and Mask RCNN models are trained on the training database and evaluated with the performance indicators (formulas 13-16); if training falls short, the model is rebuilt and the parameters reselected for the next round until the requirements are met. Once training passes, the models are validated on the validation set; if the validation indicators fall short, the model and parameters are revised and training repeated, with validation continuing until the indicators are met. A qualified model then proceeds to the test set.

Testing involves both models. The YOLOv8 model is applied first, yielding target detection and instance segmentation maps for the main peduncle, main bud, left bud and right bud, together with the main-peduncle tilt angle and the centre coordinates of the left and right side buds. The regions of the main peduncle, main bud, left bud and right bud then serve as the initial "proposal regions", and the Mask RCNN model is applied to the image to obtain new detection and instance-segmentation results along with its own estimates of the main-peduncle tilt angle and the side-bud centre coordinates. The main-peduncle tilt angles and side-bud centre coordinates obtained by the two models are then fused; the fusion formulas are:

θf=wyoloyolo+wmaskmask (22)θf =wyoloyolo +wmaskmask (22)

tcf=wcyolo*tcyolo+wcmask*tcmask (23)tcf =wcyolo *tcyolo +wcmask *tcmask (23)

其中θyolo与θmask分别为YOLOv8模型和Mask RCNN模型得到的主花梗倾斜角度,wyolo与wmask为融合权重,θf为融合后的主花梗倾斜角度。tcyolo与tcmask分别为YOLOv8模型和MaskRCNN模型得到的左右侧蕾中心坐标值,wcyolo与wcmask为融合权重,tcf为融合后的左右侧蕾中心坐标值。最后,将融合后的参数传送给视觉伺服控制器。Among them, θyolo and θmask are the main peduncle tilt angles obtained by the YOLOv8 model and the Mask RCNN model, wyolo and wmask are the fusion weights, and θf is the main peduncle tilt angle after fusion. tcyolo and tcmask are the left and right side bud center coordinates obtained by the YOLOv8 model and the MaskRCNN model, wcyolo and wcmask are the fusion weights, and tcf is the left and right side bud center coordinates after fusion. Finally, the fused parameters are transmitted to the visual servo controller.
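Equations (22)-(23) amount to a convex combination of the two models' estimates. A small sketch follows; the weight values are illustrative assumptions (the disclosure does not fix them, and they are assumed here to sum to 1):

```python
def fuse_detections(theta_yolo, theta_mask, tc_yolo, tc_mask,
                    w_yolo=0.6, w_mask=0.4):
    """Weighted fusion of the YOLOv8 and Mask RCNN outputs, Eqs. (22)-(23):
    theta_f = w_yolo*theta_yolo + w_mask*theta_mask, and likewise per
    coordinate for the side-bud centres."""
    theta_f = w_yolo * theta_yolo + w_mask * theta_mask
    tc_f = tuple(w_yolo * a + w_mask * b for a, b in zip(tc_yolo, tc_mask))
    return theta_f, tc_f

# Example: the two models disagree slightly on angle and centre coordinates.
theta_f, tc_f = fuse_detections(10.0, 12.0, (5.0, -3.0), (6.0, -2.0))
```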

利用YOLOv8的目标检测和实例分割的双重功能,满足实时性检测,将检测问题转换成回归问题。对于仿生疏蕾爪关注四类对象:主花梗、主花蕾、左侧蕾、右侧蕾。每个边界盒包括类别属性、回归框坐标、置信度、主花梗倾斜角度θ、左右侧蕾中心坐标t。The dual functions of target detection and instance segmentation of YOLOv8 are used to meet real-time detection and convert the detection problem into a regression problem. For the bionic bud claw, four types of objects are concerned: main peduncle, main bud, left bud, and right bud. Each bounding box includes category attributes, regression box coordinates, confidence, main peduncle tilt angle θ, and left and right bud center coordinates t.

The loss function measures the distance between the network's predictions and the expected information (labels); the closer the prediction, the smaller the loss. The YOLO loss function Lyolo is defined with five parts: the class loss error Eclass, the bounding-box coordinate error Ecorrd, the regression IOU loss error Eiou, the main-peduncle tilt-angle error Eθ, and the left/right side-bud centre-coordinate error Ecenter.

Lyolo=w1*Eclass+w2*Ecorrd+w3*Eiou+w4*Eθ+w5*Ecenter (24)Lyolo =w1 *Eclass +w2 *Ecorrd +w3 *Eiou +w4 *Eθ +w5 *Ecenter (24)

式中,w1,w2,w3,w4,w5分别是五种误差的权重,可根据重要性程度设置,且满足w1+w2+w3+w4+w5=1。Wherein, w1 , w2 , w3 , w4 , and w5 are weights of five kinds of errors respectively, which can be set according to the degree of importance and satisfy w1 +w2 +w3 +w4 +w5 =1.
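Equation (24) is a weighted sum whose weights must sum to 1. A minimal sketch; equal weights are used purely as a placeholder:

```python
def combined_yolo_loss(E_class, E_coord, E_iou, E_theta, E_center,
                       w=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Weighted sum of Eq. (24). The five weights reflect the relative
    importance of each error term and must sum to 1."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights must sum to 1"
    terms = (E_class, E_coord, E_iou, E_theta, E_center)
    return sum(wi * ti for wi, ti in zip(w, terms))

# Example error values for one training batch (illustrative only).
L = combined_yolo_loss(0.5, 1.0, 0.8, 0.2, 0.3)
```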

where L is the grid size of the partition and N is the number of target classes; t′x, t′y, t′w, t′h, c′, p′(c), θ′x, t′cx, t′cy are the label parameters, namely the horizontal and vertical coordinates, width and height of the regression boxes of the four object classes, the confidence of the regression box, the target class probability, the main-peduncle tilt angle, and the left/right side-bud centre coordinates. The corresponding network predictions are tx, ty, tw, th, c, p(c), θx, tcx, tcy. λcorrd is the coordinate-error weight and λno is the regression-loss weight for non-targets; the indicator for the j-th predicted box in the i-th grid cell equals 1 when that box is assigned the label information. p′i(c) and c′i are the target class probability and the regression-box confidence of the i-th grid cell, and pi(c) and ci are the corresponding predicted values.

Mask RCNN为两阶段框架,第一个阶段遍历整个图像并产生“建议区域”,第二个阶段对推荐的区域进行目标分类,得到边界框和掩码。其中“建议区域”由YOLOv8检测主花梗、主花蕾、左侧蕾、右侧蕾得到,第二阶段进行像素分类与检测,得到实例分割结果、掩膜、边界框架,亦可得到主花梗倾斜角度θ、左右侧蕾中心坐标t。可定义相应的损失函数进行训练,而多任务损失函数Lmask-rcnn由三部分组成:类别损失Lcls,包围盒丢失Lbox,以及预测掩码丢失Lmask。Lcls是主花梗、主花蕾、侧蕾类别的预测值和实际值之间的差值;Lbox表示每个实例的预测位置参数和实际位置参数(原点、宽度和高度、中心坐标t、倾斜程度θ)之间的距离;和Lmask表示前景对象(主花梗及花蕾等)和背景中每个像素的二元分类中的模型置信度,这是用于像素分类的二元交叉熵。Mask RCNN is a two-stage framework. The first stage traverses the entire image and generates "recommended regions". The second stage classifies the recommended regions and obtains bounding boxes and masks. The "recommended regions" are obtained by detecting the main peduncles, main buds, left buds, and right buds by YOLOv8. The second stage performs pixel classification and detection to obtain instance segmentation results, masks, and bounding frames. The main peduncles' inclination angle θ and the center coordinates t of the left and right side buds can also be obtained. The corresponding loss function can be defined for training, and the multi-task loss function Lmask-rcnn consists of three parts: category loss Lcls , bounding box loss Lbox , and prediction mask loss Lmask . Lcls is the difference between the predicted value and the actual value of the main peduncles, main buds, and side buds categories; Lbox represents the distance between the predicted position parameters and the actual position parameters (origin, width and height, center coordinate t, inclination θ) of each instance; and Lmask represents the model confidence in the binary classification of each pixel in the foreground object (main peduncles and buds, etc.) and the background, which is the binary cross entropy for pixel classification.

Lmask-rcnn=Lcls+Lbox+Lmask (29)Lmask-rcnn =Lcls +Lbox +Lmask (29)

where Lcls is the category loss, Lbox is the bounding-box loss and Lmask is the predicted-mask loss; pi and p′i are the predicted probability and the ground truth for anchor i; Nreg is the number of pixels in the feature map; ti and t′i are the predicted and ground-truth bounding-box coordinates of the peduncles and buds; tc and t′c are the predicted and ground-truth centre coordinates of the left and right side buds; tθ and t′θ are the predicted and ground-truth main-peduncle tilt angles; and R(·) is the smooth-L1 function. For the mask branch, each ROI produces an output of size m²; y′ij denotes the ground truth at coordinate (i, j) in the m×m region, and yij the prediction.

The network performance of YOLOv8 and Mask RCNN is evaluated with average precision (AP), average recall, F1 score, and frames per second (FPS). The first three assess the accuracy of peduncle and bud position detection and segmentation; FPS measures the speed of the algorithm. Precision is the proportion of correctly predicted positive samples among all predicted positives, and recall is the proportion of correctly predicted positive samples among all actual positives. These indicators are computed as in formulas (13-16).

where TP is the number of true positives; FP the number of false positives; FN the number of false negatives; NmF the total number of inference images; and TT the total inference time.
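From the TP/FP/FN counts above, the indicators of formulas (13-16) can be sketched directly; the function and parameter names below are ours, not the patent's:

```python
def detection_metrics(tp, fp, fn, n_images=None, total_time=None):
    """Precision, recall, F1, and (optionally) FPS from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # TP / (TP + FP)
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # TP / (TP + FN)
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)            # harmonic mean
    # FPS = NmF / TT when image count and total time are supplied
    fps = n_images / total_time if (n_images and total_time) else None
    return precision, recall, f1, fps
```

For example, 80 correct detections with 20 false positives and 20 misses over 100 images in 4 s gives precision, recall, and F1 of 0.8 each at 25 FPS.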

The invention has been disclosed above through preferred embodiments, which are not intended to limit it; any technical solution obtained by equivalent replacement or equivalent transformation falls within the protection scope of the invention.

Claims (10)

1. A kiwi fruit bionic bud thinning claw, comprising an inverted T-shaped frame, a clamping claw, a left adsorption claw, a right adsorption claw, an image detection system, and a control system, characterized in that: the clamping claw is mounted at the middle of the inverted T-shaped frame and connected to it, and the left and right adsorption claws are symmetrically mounted at the left and right ends of the inverted T-shaped frame and connected to it; the inverted T-shaped frame comprises a horizontal frame bracket, a vertical frame bracket, a rotating motor, and a fixed base, the rotating motor being mounted on the fixed base at the middle of the horizontal bracket, the vertical bracket being mounted on the upper side of the middle of the horizontal bracket with the clamping claw on one side of it; the left and right adsorption claws are mounted at the two ends of the horizontal bracket; a main RGBD camera is mounted on one side of the middle of the horizontal bracket, and one RGBD camera is mounted on each of the left and right adsorption claws; the main RGBD camera and the two RGBD cameras are connected to the image detection system, which collects bud and peduncle images, performs detection with deep learning, and computes the pose of the peduncle and buds in real time; the control system respectively controls the rotation of the inverted T-shaped frame, the clamping and locking of the main peduncle by the clamping claw, and the bud thinning operation of the left and right adsorption claws.
2. The kiwi fruit bionic bud thinning claw according to claim 1, wherein: the clamping claw comprises a clamping claw auxiliary motor, a main electric telescopic rod, a supporting bottom plate, spherical pairs, a middle supporting rod, a middle main electric push rod, side connecting rods, and arc-shaped finger-surface clamping plates; one end of the main electric telescopic rod is connected to the vertical frame bracket through the clamping claw auxiliary motor, and the other end is connected to the supporting bottom plate; the middle supporting rod and two spherical pairs are mounted on the supporting bottom plate, each spherical pair being connected to one side connecting rod; the middle main electric push rod is mounted between the two side connecting rods and drives them to open or close about the spherical pairs; the end of each side connecting rod is connected to an arc-shaped finger-surface clamping plate; a flexible sensor on the middle surface of each clamping plate measures pressure in real time, and pneumatic flexible locking bags at the two ends of each clamping plate fix the main peduncle.
3. The kiwi fruit bionic bud thinning claw according to claim 1, wherein: the left and right adsorption claws have the same structure, each comprising a side motor, a main electric push rod, a hinge, an auxiliary electric push rod, a joint rotating motor, a finger adjusting bottom plate, a finger adjusting base, finger adjusting push rods, and three fingers; one end of the main electric push rod is connected to the horizontal frame bracket through the side motor, and the other end is connected to the auxiliary electric push rod through the hinge, in which the joint rotating motor is mounted; a rotating motor is mounted at the telescopic end of the auxiliary electric push rod and carries the finger adjusting bottom plate, on which the finger adjusting base is mounted; the finger adjusting base has a sliding groove in which each finger adjusting push rod is slidably held by a limit pin; one end of each finger adjusting push rod carries a finger consisting of a finger root and a finger tip, one end of the finger root being connected to the finger adjusting push rod and the other end to the finger tip.
Step S4: controlling the left and right adsorption claws, both by the same method: first side-bud distance visual servo control, then side-bud azimuth visual servo control. The distance control adjusts the distance between the adsorption claw at the end and the side bud; the azimuth control brings the side-bud target to the centre of the adsorption claw. When an adsorption claw has moved to the side-bud position, the left and right side buds are clamped; if it has not reached the proper position, control returns to the distance visual servo step and the distance and azimuth are adjusted repeatedly until the optimal position is reached;
7. The control method according to claim 5, characterized in that the side-bud distance visual servo control comprises: computing the distance error from the reference distance RS and the actual distance fed back by the main RGBD distance estimator, and feeding the error signal into the transfer function G2(s) of the main controller; computing the displacement error from the reference displacement and the actual displacement fed back by the RGBD displacement estimator, and feeding the error signal into the transfer function G1(s) of the auxiliary controller; obtaining the response of the auxiliary electric push rod transfer function L1(s) from the auxiliary controller voltage control signal U1(s); and computing the final distance output Y2(s) = U2(s)·L2(s) + D(s) from U2(s), the load disturbance D(s), and the transfer function L2(s) of the main electric push rod, where U2(s) is the inner-loop output voltage, i.e. the outer-loop input voltage, of the main electric push rod.
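The two-loop structure of this claim (an inner displacement loop nested inside an outer distance loop, with the disturbance D added at the output) can be illustrated with a discrete-time sketch; the proportional gains and first-order push-rod models below are assumptions for illustration, not parameters from the patent:

```python
def simulate_cascade(rs, d, steps=500, dt=0.01,
                     kp_outer=4.0, kp_inner=8.0, tau=0.1):
    """Outer distance loop (G2) driving an inner displacement loop (G1).

    L1(s) and L2(s) are modelled as first-order lags 1/(tau*s + 1);
    the final output follows Y2 = U2*L2 + D from the claim.
    All gains and time constants are illustrative assumptions.
    """
    y1 = 0.0  # inner-loop displacement output
    y2 = 0.0  # outer-loop distance output
    for _ in range(steps):
        u2 = kp_outer * (rs - y2)        # outer controller G2 on distance error
        u1 = kp_inner * (u2 - y1)        # inner controller G1 on displacement error
        y1 += dt / tau * (u1 - y1)       # auxiliary push rod L1(s)
        y2 += dt / tau * (y1 + d - y2)   # main push rod L2(s) plus disturbance D
    return y2
```

With proportional-only controllers the loop settles with a steady-state offset from the reference RS; a disturbance d shifts the settled output upward, which is what the outer loop is there to counteract.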
9. The control method according to claim 5, characterized in that: in the inverted T-shaped frame angle visual servo control, the left and right side-bud distance visual servo control, and the side-bud azimuth visual servo control, target detection and feature extraction by deep learning are realized with a target detection and information fusion method based on YOLOv8 and Mask RCNN, with the following specific steps: first, a number of RGB images are collected, augmented, and labelled; a database is built comprising a training set, a validation set, and a test set; the YOLOv8 and Mask RCNN target detection models are trained on the training set and evaluated with the performance indicators, and if training does not meet the standard, the model and parameters are re-selected and training repeated until the requirements are met; the validation set is then used for verification, and if the validation indicators are not met, the model and parameters are modified and training and validation repeated until the indicator requirements are satisfied; the model is then tested on the test set: the YOLOv8 model produces target detection and instance segmentation of the main peduncle, main bud, and left and right side buds, together with the main peduncle inclination angle and the centre coordinates of the left and right side buds; these regions serve as the initial proposal regions, and the Mask RCNN model then tests the images to obtain new target detection and instance segmentation results for the main peduncle, main bud, and left and right side buds, a main peduncle inclination angle, and left and right side-bud centre coordinates; the main peduncle inclination angles and left and right side-bud centre coordinates obtained by the YOLOv8 and Mask RCNN models respectively are then fused, the fusion formula being
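The fusion formula itself is truncated in this text. Purely as a hypothetical illustration of fusing the two models' angle or coordinate estimates, a confidence-weighted average could be sketched as follows (the function name and weights are ours, not the patent's):

```python
import numpy as np

def fuse_estimates(est_yolo, est_mask, w_yolo=0.5, w_mask=0.5):
    """Weighted average of two estimates of the same quantity
    (e.g. tilt angle theta, or a side-bud centre coordinate).

    w_yolo / w_mask are illustrative confidence weights; the patent's
    actual fusion formula is cut off in the source text.
    """
    est_yolo = np.asarray(est_yolo, dtype=float)
    est_mask = np.asarray(est_mask, dtype=float)
    return (w_yolo * est_yolo + w_mask * est_mask) / (w_yolo + w_mask)
```

Equal weights reduce this to a plain average; weighting one model more heavily biases the fused estimate toward it.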
Priority Applications (1)

Application Number: CN202311407040.3A | Priority Date: 2023-10-27 | Filing Date: 2023-10-27 | Title: Kiwi bionic bud thinning claw and control method thereof | Status: Pending

Publications (1)

Publication Number: CN118216332A | Publication Date: 2024-06-21

Family ID: 91500116

Country Status (1): CN | CN118216332A (en)

Cited By (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date / Publication date | Assignee | Title
CN118609576A* | 2024-07-31 / 2024-09-06 | 西安工程大学 | Bird sound target recognition method based on three-channel deep neural network under low signal-to-noise ratio
CN120206543A* | 2025-05-28 / 2025-06-27 | 福建省曾志环保科技有限公司 | Intelligent garbage sorting method and system based on multimodal recognition

Citations (16)

* Cited by examiner, † Cited by third party

Publication number | Priority date / Publication date | Assignee | Title
US20180243902A1 (en)* | 2015-08-25 / 2018-08-30 | Kawasaki Jukogyo Kabushiki Kaisha | Robot system
EP3373203A1 (en)* | 2017-03-09 / 2018-09-12 | Panasonic Corporation | Equipment of localization of peduncle and method of localization of peduncle
CN108605508A (en)* | 2018-05-08 / 2018-10-02 | 湖州佳创自动化科技有限公司 | A kind of end effector of spheral fruit picking robot
CN108811738A (en)* | 2018-07-27 / 2018-11-16 | 浙江机电职业技术学院 | Adjustable apple assists picker
CN109773832A (en)* | 2017-11-15 / 2019-05-21 | 精工爱普生株式会社 | Sensors and Robots
CN112425373A (en)* | 2020-12-02 / 2021-03-02 | 陕西中建建乐智能机器人股份有限公司 | Kiwi fruit picking and sorting robot and kiwi fruit sorting method thereof
CN112606011A (en)* | 2020-12-11 / 2021-04-06 | 华南农业大学 | Banana bud breaking method based on visual recognition, bionic banana bud breaking mechanism, banana bud breaking robot and application
CN113733142A (en)* | 2021-11-05 / 2021-12-03 | 广东电网有限责任公司江门供电局 | Manipulator system and control method for manipulator system
CN215011703U (en)* | 2021-03-22 / 2021-12-07 | 新疆大学 | Rose picking robot
CN114842187A (en)* | 2022-03-08 / 2022-08-02 | 中国农业科学院茶叶研究所 | Tea tender shoot picking point positioning method based on fusion of thermal image and RGB image
CN115643903A (en)* | 2022-05-20 / 2023-01-31 | 广西师范大学 | Automatic apple picking device based on machine vision and control method thereof
CN218680166U (en)* | 2022-08-10 / 2023-03-24 | 东北林业大学 | Kiwi fruit picking robot
CN116038745A (en)* | 2022-12-29 / 2023-05-02 | 睿尔曼智能科技(北京)有限公司 | A multifunctional dexterous mechanical claw
CN116326372A (en)* | 2023-04-13 / 2023-06-27 | 山东农业大学 | Comb-tooth type grape flower thinning end effector and control method
CN219628381U (en)* | 2022-05-06 / 2023-09-05 | 南京农业大学 | A three-arm picking robot
CN116724778A (en)* | 2023-08-03 / 2023-09-12 | 西北农林科技大学 | Kiwi fruit precise bud thinning robot based on machine vision and laser


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

庹会英: "Study on the effect of flower and fruit thinning on fruit size and yield of Hongyang kiwifruit" (疏花疏果对红阳猕猴桃果实大小和产量的影响研究), 四川农业科技 (Sichuan Agricultural Science and Technology), no. 03, 15 March 2015*



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
