Technical Field
The present invention relates to the field of control of modular robotic arms, and in particular to an intelligent sorting method for a modular robotic arm.
Background Art
Compared with traditional robotic arms, modular robotic arms have the advantages of a small footprint, low cost, diverse functions, and high flexibility. In recent years, more and more robotics companies and research institutes at home and abroad have carried out research on modular robotic arms and achieved notable results.
Integrating three-dimensional object recognition, human-computer interaction, and reasoning into a modular robotic arm sorting platform greatly improves the intelligence level of the sorting system.
A literature search found the following related patent: the invention patent "Intelligent Sorting System and Sorting Method" (application number CN201410723309.3, published on April 22, 2015) discloses an intelligent sorting method that can identify objects of multiple colors, perform guided learning, and automatically place objects of different colors at designated positions.
However, the above patent involves only color recognition and cannot recognize the size, shape, or three-dimensional coordinates of objects. Moreover, it relies on a learning module to perform guided learning in advance and then repeat the learned actions, which limits the intelligence of the system to some extent: no reasoning method is used, and neither rhetorical-question guidance nor expectation analysis is employed.
Summary of the Invention
The present invention is made to solve the above problems, with the purpose of integrating three-dimensional object recognition, human-computer interaction, and reasoning into a modular robotic arm sorting platform, and proposes an intelligent sorting method for a modular robotic arm.
The intelligent sorting method for a modular robotic arm provided by the present invention is characterized by including the following steps:
Step 1: complete target detection and recognition through a somatosensory sensor, obtain accurate spatial position information of each object in the scene, and generate a three-dimensional scene semantic map description file;
Step 2: determine the user's intention in the form of a human-computer dialogue, and derive the sorting rules by reasoning; and
Step 3: receive the solution, convert the solution into robot instructions through natural language programming, parse, compile, and execute the instructions, and control the robotic arm to perform intelligent sorting.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: in Step 1, the depth information and color information collected by the somatosensory sensor are fused to generate three-dimensional point cloud data; after the data are acquired by a computer, the three-dimensional scene semantic map description file is obtained through preprocessing, key point extraction, feature descriptor computation, matching of the obtained feature descriptors against a model library, and generation and verification of transformation hypotheses.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: in Step 2, the human-computer dialogue consists of a speech recognition part, an inference engine part, and a speech synthesis part. First, in the speech recognition part, a microphone array denoises the speech signal input by the user, and a predetermined algorithm is used for feature extraction; then, combining an HMM acoustic model and an N-gram language model, a speech decoding search algorithm converts the speech signal into text, which is sent to the inference engine part. The inference engine part receives the text and, using a predetermined reasoning mechanism, retrieves the most similar case from the case library, and performs map matching, expectation analysis, and guidance in combination with the three-dimensional scene semantic map description file, so as to refine the user's expectation and finally generate a solution. The guidance information for the user is sent as text to the speech synthesis part, which generates the corresponding speech signal through three steps of text analysis, prosody modeling, and speech synthesis, and outputs the interactive speech.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: in Step 3, the natural language solution is first obtained through a natural language acquisition module; a natural language interpretation module then interprets the solution into robot instructions and sends them to a parsing and compiling module, which parses and compiles the robot instructions in a predetermined order; finally, an executor module receives and executes the executable instructions.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: the predetermined algorithm is the MFCC algorithm.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: the predetermined reasoning mechanism is an improved CBR-BDI reasoning mechanism.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: the speech synthesis uses TTS technology.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: the natural language acquisition module communicates via UDP transmission to obtain the solution, which contains natural language commands, object coordinates, and the end effector pose.
The intelligent sorting method for a modular robotic arm provided by the present invention may further have the following feature: the specific steps by which the natural language interpretation module interprets the solution into robot instructions are performing word segmentation, lexical analysis, syntactic analysis, and semantic analysis on the natural language command to obtain the robot language instructions.
Function and Effect of the Invention
According to the intelligent sorting method for a modular robotic arm involved in the present invention, a real-time three-dimensional scene semantic map description file is first obtained through a somatosensory sensor; sorting rules are then established in the form of a dialogue; finally, the coordinates of the objects in the map file are sent to the modular robotic arm, which grabs each object upon receiving its coordinates and places it into the corresponding basket, thereby realizing intelligent sorting. With the development of robotic arms, especially lightweight modular robotic arms, their control systems are becoming increasingly important. As a modular robotic arm sorting system, the present system has vision and reasoning capabilities, supports human-computer interaction and refinement of the user's expectations, and can control the robotic arm from a natural language solution, thereby realizing automatic programming.
Brief Description of the Drawings
Fig. 1 is a step diagram of the intelligent sorting method for a modular robotic arm in an embodiment of the present invention;
Fig. 2 is an overall structural diagram of point cloud collection and object recognition in an embodiment of the present invention;
Fig. 3 is an overall flowchart of the spatial point cloud object recognition and understanding system for multi-target scenes in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the speech recognition part in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the improved CBR-BDI reasoning mechanism in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the speech synthesis unit in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the robotic arm control module in an embodiment of the present invention; and
Fig. 8 is a flowchart of the parsing and compiling module in an embodiment of the present invention.
Detailed Description
The intelligent sorting method for a modular robotic arm involved in the present invention is described in detail below with reference to the drawings and embodiments.
Fig. 1 is a step diagram of the intelligent sorting method for a modular robotic arm in an embodiment of the present invention.
As shown in Fig. 1, the intelligent sorting method for a modular robotic arm has the following steps:
Step 1: complete target detection and recognition through the somatosensory sensor, obtain accurate spatial position information of each object in the scene, generate the three-dimensional scene semantic map description file, and proceed to Step 2.
Fig. 2 is an overall structural diagram of point cloud collection and object recognition in an embodiment of the present invention.
As shown in Fig. 2, the depth information and color information collected by the somatosensory sensor are fused to generate three-dimensional point cloud data. After the data are acquired by a computer, the three-dimensional scene semantic map description file is obtained through preprocessing, key point extraction, feature descriptor computation, matching of the obtained feature descriptors against the model library, and generation and verification of transformation hypotheses.
Fig. 3 is an overall flowchart of the spatial point cloud object recognition and understanding system for multi-target scenes in an embodiment of the present invention.
The object recognition and understanding system mainly consists of an offline part and an online part.
Offline process: building the model library is an offline process; the key technologies involved are preprocessing, image segmentation, feature description, self-motion estimation of the somatosensory sensor, and dense three-dimensional point cloud model generation. First, data filtering and preprocessing are performed, followed by object detection, that is, segmenting a single cluster for each viewpoint from the scene; feature points and feature descriptors are then extracted. Using a checkerboard calibration algorithm, the 4×4 rigid transformation matrix between frames of data is obtained from the matching of feature descriptors, and the data from different viewpoints are aligned and accumulated to obtain the complete point cloud of the three-dimensional object, which captures the object's geometry; an object model identifier is then assigned manually.
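The accumulation step above can be sketched in a few lines: each view's points are mapped through its 4×4 rigid transformation (in homogeneous coordinates) and merged into one model cloud. This is only a minimal illustration of applying and accumulating the transformations; it is not the checkerboard calibration or descriptor matching itself, and the example matrices are fabricated.

```python
def transform_points(T, points):
    """Apply a 4x4 rigid transformation T to a list of (x, y, z) points."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)  # homogeneous coordinates
        q = [sum(T[r][c] * p[c] for c in range(4)) for r in range(3)]
        out.append(tuple(q))
    return out

def merge_views(views):
    """Align each view's cloud into the model frame and accumulate."""
    model = []
    for T, cloud in views:
        model.extend(transform_points(T, cloud))
    return model

# Identity for the reference view; a hypothetical 1 m translation along x
# stands in for the matrix recovered by calibration for the second view.
I = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
Tx = [[1, 0, 0, 1.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
cloud = merge_views([(I, [(0, 0, 0)]), (Tx, [(0, 0, 0)])])
```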
Online process: object recognition and pose estimation are performed online. The key technologies involved are distance-threshold-based feature matching, transformation hypothesis generation and verification, and pose matrix coordinate transformation. A frame of point cloud data is acquired from the somatosensory sensor in real time, and a 3D recognition algorithm based on local surface features assigns a suitable object model class to each object in the scene; in addition, the relative pose transformation matrix from the object model to the corresponding points in the scene is obtained. Finally, the three-dimensional geometric features and image texture information of the recognized objects are written into an XML file to construct the three-dimensional scene semantic map description file.
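Writing the recognized objects into an XML description file might look like the following sketch. The element and attribute names (`scene`, `object`, `class`, `position`, `pose`) are illustrative assumptions, since the patent does not publish its XML schema.

```python
import xml.etree.ElementTree as ET

def build_semantic_map(objects):
    """Serialize recognized objects into an XML scene description.

    `objects` is a list of dicts holding a model class, a 3-D position,
    and a flattened 4x4 pose matrix. Tag and attribute names are
    hypothetical -- the source does not define the schema.
    """
    scene = ET.Element("scene")
    for obj in objects:
        node = ET.SubElement(scene, "object", attrib={"class": obj["cls"]})
        pos = ET.SubElement(node, "position")
        pos.text = " ".join(f"{v:.3f}" for v in obj["xyz"])
        pose = ET.SubElement(node, "pose")
        pose.text = " ".join(f"{v:.3f}" for v in obj["pose"])
    return ET.tostring(scene, encoding="unicode")

xml_text = build_semantic_map([
    {"cls": "cube", "xyz": (0.1, 0.2, 0.3),
     "pose": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]},
])
```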
Step 2: determine the user's intention in the form of a human-computer dialogue, derive the sorting rules by reasoning, and proceed to Step 3.
The human-computer dialogue consists of a speech recognition part, an inference engine part, and a speech synthesis part.
First, in the speech recognition part, the microphone array denoises the speech signal input by the user, and the MFCC algorithm is used for feature extraction; then, combining the HMM acoustic model and the N-gram language model, a speech decoding search algorithm converts the speech signal into text, which is sent to the inference engine part. The inference engine part receives the text and, using the improved CBR-BDI reasoning mechanism, matches the text against the cases in the case library to find the most similar case, and performs map matching, expectation analysis, and guidance in combination with the three-dimensional scene semantic map description file, so as to refine the user's expectation and finally generate a solution. The guidance information for the user is sent as text to the speech synthesis part, which generates the corresponding speech signal through three steps of text analysis, prosody modeling, and speech synthesis, and outputs the interactive speech.
Fig. 4 is a schematic structural diagram of the speech recognition part in an embodiment of the present invention.
As shown in Fig. 4, in the speech recognition part, when the user speaks to the robot, the microphone first receives the speech signal; the preprocessing part of the system then denoises the signal and uses the MFCC algorithm for feature extraction; finally, combining the acoustic model and the language model, the system converts the speech signal into a text sentence through a speech decoding search algorithm.
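The role of the N-gram language model in decoding can be illustrated with a toy bigram model that rescores candidate transcripts: among acoustically confusable hypotheses, the one better supported by the language model wins. The corpus and the candidate sentences below are fabricated for illustration only and are unrelated to any real recognizer.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over a toy corpus of token lists."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def logprob(sentence, uni, bi, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a token list."""
    toks = ["<s>"] + sentence
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        lp += math.log((bi[(a, b)] + alpha) / (uni[a] + alpha * vocab_size))
    return lp

corpus = [["sort", "the", "red", "cube"],
          ["sort", "the", "blue", "ball"],
          ["sort", "the", "red", "ball"]]
uni, bi = train_bigram(corpus)
V = len(uni)

# Two confusable candidates; the in-domain transcript scores higher.
good = logprob(["sort", "the", "red", "cube"], uni, bi, V)
bad = logprob(["sore", "the", "red", "cube"], uni, bi, V)
```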
Fig. 5 is a schematic structural diagram of the improved CBR-BDI reasoning mechanism in an embodiment of the present invention.
As shown in Fig. 5, the inference engine part takes the CBR-BDI reasoning mechanism as its core, and adds map matching, expectation analysis, and guidance to it, thereby realizing the improvement of the CBR-BDI reasoning mechanism.
After the inference engine unit receives the text sentence, it computes semantic similarity and sentence structure similarity. If a similar case is retrieved from the case library, the system combines the case with the rules to compute its task attributes and determines whether the task attributes are complete. If the task attributes are complete, i.e., m_num>0, the process proceeds to the next step; otherwise, the system raises a rhetorical question according to certain rules and enters guided questioning.
The case with complete task attributes is matched against the current real-time map file to obtain m_match, whose judgment rules are as follows:
m_match = 0: there is no object in the scene that meets the requirements;
0 < m_match < 1: the number of such objects in the scene is less than the number the user expects;
m_match = 1: the two numbers in the scene are exactly equal;
m_match > 1: the number of such objects in the scene is greater than the number the user expects.
Only when m_match ≥ 1 can the case task be executed in the current environment, in which case the process proceeds to the next step, expectation analysis; otherwise, the corresponding rhetorical-question guidance is carried out according to the rules.
The case task is then matched against the mandatory entries of the currently set operation rules (expectation analysis) to determine whether the user's intention is feasible with respect to the current operation rule entries. If feasible, a natural language solution is generated; otherwise, the system performs rule-oriented rhetorical-question guidance and asks the user to complete the rules.
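The m_match decision logic above maps directly onto a small function. Defining m_match as the count of matching objects in the scene divided by the count the user expects is an assumption, but it is consistent with the four cases listed; the dialogue-action strings are illustrative placeholders.

```python
def compute_m_match(scene_count, expected_count):
    """Ratio of matching objects in the scene to the number the user expects."""
    return scene_count / expected_count

def dialogue_action(m_match):
    """Map an m_match value onto the next dialogue step per the rules above."""
    if m_match == 0:
        return "ask: no such object in the scene"
    if m_match < 1:
        return "ask: fewer objects than expected"
    # m_match >= 1: the case task is executable in the current environment,
    # so the dialogue proceeds to expectation analysis.
    return "proceed to expectation analysis"
```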
Fig. 6 is a schematic structural diagram of the speech synthesis unit in an embodiment of the present invention.
As shown in Fig. 6, the guiding rhetorical-question information from the inference engine is passed to the speech synthesis unit in the form of text, and is finally output in the form of a speech signal.
Step 3: receive the solution, convert the solution into robot instructions through natural language programming, parse, compile, and execute the instructions, and control the robotic arm to perform intelligent sorting.
Fig. 7 is a schematic structural diagram of the robotic arm control module in an embodiment of the present invention.
As shown in Fig. 7, after the solution from Step 2 is received, the natural language programming, automatic parsing and execution, and motion control of the modular robotic arm are carried out. The automatic programming and parsing-based motion control of the modular robotic arm is divided into four modules: a natural language acquisition module, a natural language interpretation module, a parsing and compiling module, and an executor module.
First, the natural language solution is obtained through the natural language acquisition module, which communicates via UDP transmission. The solution contains natural language commands, object coordinates, and the end effector pose.
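The UDP acquisition step might look like the following loopback sketch. The JSON layout of the datagram (`command`, `coordinates`, `pose` keys) is a hypothetical encoding, since the patent does not define a wire format; only the use of UDP itself comes from the source.

```python
import json
import socket

def recv_solution(sock):
    """Receive one solution datagram and decode it (assumed JSON layout)."""
    data, _addr = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))

# Loopback demonstration: the inference-engine side sends, the arm side receives.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # let the OS pick a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
solution = {"command": "put the red cube into basket 1",
            "coordinates": [0.12, 0.30, 0.05],
            "pose": [0.0, 90.0, 0.0]}
client.sendto(json.dumps(solution).encode("utf-8"), ("127.0.0.1", port))

received = recv_solution(server)
client.close()
server.close()
```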
The natural language interpretation module then interprets the solution into robot instructions and sends them to the parsing and compiling module, which parses and compiles them in a predetermined order; finally, the executor module receives and executes the compiled statements.
The specific steps by which the natural language interpretation module interprets the solution into robot instructions are performing word segmentation, lexical analysis, syntactic analysis, and semantic analysis on the natural language command to obtain the robot language instructions.
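The interpretation chain (word segmentation → lexical analysis → syntactic analysis → semantic analysis) can be sketched with a toy grammar. The lexicon, the English test commands, and the instruction mnemonics (MOVE, HAND) are all illustrative assumptions; the patent names the stages but not their implementation.

```python
def interpret(command, coords):
    """Toy natural-language-to-robot-instruction interpreter.

    Stages mirror the module described above: word segmentation,
    lexical tagging, a syntactic check, and semantic mapping to
    robot language instructions. The tiny lexicon is hypothetical.
    """
    # 1. Word segmentation (whitespace split for this English toy input).
    words = command.lower().split()
    # 2. Lexical analysis: tag each word.
    lexicon = {"grab": "VERB", "place": "VERB", "the": "DET",
               "red": "ADJ", "blue": "ADJ", "cube": "NOUN", "ball": "NOUN"}
    tags = [(w, lexicon.get(w, "UNK")) for w in words]
    # 3. Syntactic analysis: require at least a verb and an object noun.
    verbs = [w for w, t in tags if t == "VERB"]
    nouns = [w for w, t in tags if t == "NOUN"]
    if not verbs or not nouns:
        raise ValueError("cannot parse command: " + command)
    # 4. Semantic analysis: map to robot language instructions.
    instr = [("MOVE", coords)]
    instr.append(("HAND", "on" if verbs[0] == "grab" else "off"))
    return instr

program = interpret("grab the red cube", (0.12, 0.30, 0.05))
```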
Fig. 8 is a flowchart of the parsing and compiling module in an embodiment of the present invention.
As shown in Fig. 8, after the robot language instructions are obtained, they are sent to the parsing and compiling module, which parses them in order, reading and parsing the instruction text line by line. For a statement containing move, the inverse kinematics function is called to compute the joint axis angles for the given coordinates, movep or movel determines whether the curve interpolation function or the linear interpolation function is used, and each interpolation point is then compiled into the corresponding CAN instruction. For a statement containing hand, the end effector control function is called, hand on or hand off determines whether the end effector is opened or closed, and the statement is compiled into the corresponding open or close CAN instruction for the end effector. For a statement containing round, the first joint axis rotation function is called to rotate by the angle following round, the resulting angle of each joint axis is compiled into the corresponding CAN instruction, and the forward kinematics function is called to compute the position after rotation. A statement containing Nop is a no-op. For a statement containing Time, the time value that follows is read and the delay is compiled. An end statement indicates the end of parsing; the end identifier is compiled and the module returns.
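A minimal sketch of the statement dispatcher described above follows. The `inverse_solution` stand-in and the symbolic "CAN_*" tuples are hypothetical placeholders for the real inverse/forward kinematics, interpolation functions, and CAN bus frames, none of which the patent publishes; only the statement types and their dispatch come from the source.

```python
def compile_program(lines):
    """Compile robot-language lines into symbolic CAN-like instructions.

    `inverse_solution` is a stand-in for the real inverse kinematics;
    the tuple tags stand in for actual CAN bus frames.
    """
    def inverse_solution(x, y, z):
        return (x * 10.0, y * 10.0, z * 10.0)  # fake joint angles

    out = []
    for line in lines:
        parts = line.split()
        op = parts[0]
        if op in ("movep", "movel"):            # curve vs. linear interpolation
            joints = inverse_solution(*map(float, parts[1:4]))
            out.append(("CAN_MOVE", op, joints))
        elif op == "hand":
            out.append(("CAN_HAND", parts[1]))  # "on" opens, "off" closes
        elif op == "round":
            out.append(("CAN_ROUND", float(parts[1])))
        elif op == "Nop":
            pass                                # no operation
        elif op == "Time":
            out.append(("DELAY", float(parts[1])))
        elif op == "end":
            out.append(("END",))                # end identifier: stop parsing
            break
    return out

program = compile_program(
    ["movel 0.1 0.2 0.3", "hand on", "Time 0.5", "Nop", "end"])
```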
After the robot language instructions are parsed and compiled, the executor module receives and executes the executable instructions: for move and round statements, it sends the corresponding CAN instructions to the robotic arm; for hand statements, it sends the corresponding CAN instructions to the end effector; for Time statements, it delays for the corresponding time; and for the end identifier, execution is complete and the module returns.
Function and Effect of the Embodiment
According to the intelligent sorting method for a modular robotic arm involved in this embodiment, a real-time three-dimensional scene semantic map description file is first obtained through a somatosensory sensor; sorting rules are then established in the form of a dialogue; finally, the coordinates of the objects in the map file are sent to the modular robotic arm, which grabs each object upon receiving its coordinates and places it into the corresponding basket, thereby realizing intelligent sorting. With the development of robotic arms, especially lightweight modular robotic arms, their control systems are becoming increasingly important. As a modular robotic arm sorting system, the present system has vision and reasoning capabilities, supports human-computer interaction and refinement of the user's expectations, and can control the robotic arm from a natural language solution, thereby realizing automatic programming.
The above embodiments are preferred examples of the present invention and are not intended to limit the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610212575.9A | 2016-04-07 | 2016-04-07 | The intelligent sorting method of modular mechanical arm |
| Publication Number | Publication Date |
|---|---|
| CN105931218A | 2016-09-07 |
| CN105931218B | 2019-05-17 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610212575.9AActiveCN105931218B (en) | 2016-04-07 | 2016-04-07 | The intelligent sorting method of modular mechanical arm |
| Country | Link |
|---|---|
| CN (1) | CN105931218B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106681323A (en)* | 2016-12-22 | 2017-05-17 | 北京光年无限科技有限公司 | Interactive output method used for robot and the robot |
| CN107127757A (en)* | 2017-05-24 | 2017-09-05 | 西安科技大学 | Dynamic task allocation method is equipped in a kind of multi-robot Cooperation Wire driven robot dirt extraction |
| CN107622523A (en)* | 2017-09-21 | 2018-01-23 | 深圳市晟达机械设计有限公司 | A kind of intelligent robot |
| CN107742311A (en)* | 2017-09-29 | 2018-02-27 | 北京易达图灵科技有限公司 | A kind of method and device of vision positioning |
| CN108044621A (en)* | 2017-09-20 | 2018-05-18 | 广东拓斯达科技股份有限公司 | A kind of robot of computer readable storage medium and the application medium |
| CN108247601A (en)* | 2018-02-09 | 2018-07-06 | 中国科学院电子学研究所 | Semantic crawl robot based on deep learning |
| CN109146163A (en)* | 2018-08-07 | 2019-01-04 | 上海大学 | Optimization method, equipment and the storage medium of Automated Sorting System sorting distance |
| CN110253588A (en)* | 2019-08-05 | 2019-09-20 | 江苏科技大学 | A New Dynamic Grabbing System of Robotic Arm |
| CN110666806A (en)* | 2019-10-31 | 2020-01-10 | 湖北文理学院 | Article sorting method, article sorting device, robot and storage medium |
| CN111260761A (en)* | 2020-01-15 | 2020-06-09 | 北京猿力未来科技有限公司 | Method and device for generating mouth shape of animation character |
| CN112232141A (en)* | 2020-09-25 | 2021-01-15 | 武汉云极智能科技有限公司 | Mechanical arm interaction method and equipment capable of identifying spatial position of object |
| CN112667823A (en)* | 2020-12-24 | 2021-04-16 | 西安电子科技大学 | Semantic analysis method and system for task execution sequence of mechanical arm and computer readable medium |
| CN112809689A (en)* | 2021-02-26 | 2021-05-18 | 同济大学 | Language-guidance-based mechanical arm action element simulation learning method and storage medium |
| CN113021333A (en)* | 2019-12-25 | 2021-06-25 | 沈阳新松机器人自动化股份有限公司 | Object grabbing method and system and terminal equipment |
| CN113742458A (en)* | 2021-09-18 | 2021-12-03 | 苏州大学 | Natural language instruction disambiguation method and system for mechanical arm grabbing |
| TWI801629B (en)* | 2018-07-17 | 2023-05-11 | 美商艾提史畢克斯有限責任公司 | Method, system, and computer program product for communication with an intelligent industrial assistant and industrial machine |
| CN118528258A (en)* | 2024-05-28 | 2024-08-23 | 湖大粤港澳大湾区创新研究院(广州增城) | Intelligent robot control method for recycling garbage |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1293752A (en)* | 1999-03-19 | 2001-05-02 | 松下电工株式会社 | Three-D object recognition method and pin picking system using the method |
| CN102615052A (en)* | 2012-02-21 | 2012-08-01 | 上海大学 | Machine visual identification method for sorting products with corner point characteristics |
| CN103324938A (en)* | 2012-03-21 | 2013-09-25 | 日电(中国)有限公司 | Method for training attitude classifier and object classifier and method and device for detecting objects |
| WO2014140129A1 (en)* | 2013-03-12 | 2014-09-18 | Centre National D'etudes Spatiales | Method of measuring the direction of a line of sight of an imaging device |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1293752A (en)* | 1999-03-19 | 2001-05-02 | 松下电工株式会社 | Three-D object recognition method and pin picking system using the method |
| CN102615052A (en)* | 2012-02-21 | 2012-08-01 | 上海大学 | Machine visual identification method for sorting products with corner point characteristics |
| CN103324938A (en)* | 2012-03-21 | 2013-09-25 | 日电(中国)有限公司 | Method for training attitude classifier and object classifier and method and device for detecting objects |
| WO2014140129A1 (en)* | 2013-03-12 | 2014-09-18 | Centre National D'etudes Spatiales | Method of measuring the direction of a line of sight of an imaging device |
| Title |
|---|
| 付维 等: "基于Julius的机器人语音识别系统构建", 《单片机与嵌入式系统应用》* |
| 吴凡 等: "一种实时的三维语义地图生成方法", 《计算机工程与应用》* |
| 周昊天 等: "改进的分拣作业机械臂基于范例推理 - 信念期望意图推理机制", 《计算机应用》* |
| 熊志恒 等: "基于自然语言的分拣机器人解析器技术研究", 《计算机工程与应用》* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106681323A (en)* | 2016-12-22 | 2017-05-17 | Beijing Guangnian Wuxian Technology Co., Ltd. | Interactive output method for a robot, and the robot |
| CN106681323B (en)* | 2016-12-22 | 2020-05-19 | Beijing Guangnian Wuxian Technology Co., Ltd. | Interactive output method for robot and robot |
| CN107127757A (en)* | 2017-05-24 | 2017-09-05 | Xi'an University of Science and Technology | Dynamic task allocation method for multi-robot cooperative flexible-cable-driven gangue picking equipment |
| CN107127757B (en)* | 2017-05-24 | 2023-03-31 | Xi'an University of Science and Technology | Dynamic task allocation method for multi-robot cooperation flexible cable driven gangue picking equipment |
| CN108044621A (en)* | 2017-09-20 | 2018-05-18 | Guangdong Topstar Technology Co., Ltd. | Computer-readable storage medium and robot applying the medium |
| CN107622523A (en)* | 2017-09-21 | 2018-01-23 | Shenzhen Shengda Machinery Design Co., Ltd. | An intelligent robot |
| CN107742311A (en)* | 2017-09-29 | 2018-02-27 | Beijing Yida Turing Technology Co., Ltd. | Visual positioning method and device |
| CN107742311B (en)* | 2017-09-29 | 2020-02-18 | Beijing Yida Turing Technology Co., Ltd. | Visual positioning method and device |
| CN108247601A (en)* | 2018-02-09 | 2018-07-06 | Institute of Electronics, Chinese Academy of Sciences | Semantic grasping robot based on deep learning |
| US11651034B2 (en) | 2018-07-17 | 2023-05-16 | iT SpeeX LLC | Method, system, and computer program product for communication with an intelligent industrial assistant and industrial machine |
| TWI801629B (en)* | 2018-07-17 | 2023-05-11 | iT SpeeX LLC | Method, system, and computer program product for communication with an intelligent industrial assistant and industrial machine |
| CN109146163B (en)* | 2018-08-07 | 2021-12-07 | Shanghai University | Method and equipment for optimizing sorting distance of automatic sorting system and storage medium |
| CN109146163A (en)* | 2018-08-07 | 2019-01-04 | Shanghai University | Sorting distance optimization method, device and storage medium for an automated sorting system |
| CN110253588A (en)* | 2019-08-05 | 2019-09-20 | Jiangsu University of Science and Technology | A new dynamic grasping system for a robotic arm |
| CN110666806A (en)* | 2019-10-31 | 2020-01-10 | Hubei University of Arts and Science | Article sorting method, article sorting device, robot and storage medium |
| CN110666806B (en)* | 2019-10-31 | 2021-05-14 | Hubei University of Arts and Science | Article sorting method, article sorting device, robot and storage medium |
| CN113021333A (en)* | 2019-12-25 | 2021-06-25 | Shenyang Siasun Robot & Automation Co., Ltd. | Object grabbing method and system and terminal equipment |
| CN111260761A (en)* | 2020-01-15 | 2020-06-09 | Beijing Yuanli Weilai Technology Co., Ltd. | Method and device for generating mouth shapes of animated characters |
| CN112232141A (en)* | 2020-09-25 | 2021-01-15 | Wuhan Yunji Intelligent Technology Co., Ltd. | Mechanical arm interaction method and equipment capable of identifying spatial position of object |
| CN112232141B (en)* | 2020-09-25 | 2023-06-20 | Wuhan Yunji Intelligent Technology Co., Ltd. | Mechanical arm interaction method and equipment capable of identifying object space position |
| CN112667823A (en)* | 2020-12-24 | 2021-04-16 | Xidian University | Semantic analysis method and system for task execution sequence of mechanical arm, and computer-readable medium |
| CN112809689B (en)* | 2021-02-26 | 2022-06-14 | Tongji University | Language-guided action meta-imitation learning method and storage medium for robotic arm |
| CN112809689A (en)* | 2021-02-26 | 2021-05-18 | Tongji University | Language-guided action meta-imitation learning method for a mechanical arm, and storage medium |
| CN113742458A (en)* | 2021-09-18 | 2021-12-03 | Soochow University | Natural language instruction disambiguation method and system for mechanical arm grabbing |
| CN118528258A (en)* | 2024-05-28 | 2024-08-23 | Greater Bay Area Innovation Research Institute of Hunan University (Guangzhou Zengcheng) | Intelligent robot control method for recycling garbage |
| Publication number | Publication date |
|---|---|
| CN105931218B (en) | 2019-05-17 |
| Publication | Title |
|---|---|
| CN105931218A (en) | Intelligent sorting method of modular mechanical arm |
| CN106056207B (en) | Robot deep interaction and inference method and device based on natural language |
| Chen et al. | A joint network for grasp detection conditioned on natural language commands | |
| CN118744426A (en) | Human-computer interactive assembly method and system based on multimodal large model and reinforcement learning | |
| Perzanowski et al. | Integrating natural language and gesture in a robotics domain | |
| CN106095109 (en) | Method for robot online teaching based on gesture and voice |
| CN102323817 (en) | A service robot control platform system and its method for realizing multimodal intelligent interaction and intelligent behavior |
| CN101187990A (en) | A conversational robot system | |
| CN106023993A (en) | Robot control system based on natural language and control method thereof | |
| Fan et al. | A vision-language-guided robotic action planning approach for ambiguity mitigation in human–robot collaborative manufacturing | |
| JP2009066692 (en) | Trajectory search device |
| CN118744425A (en) | Industrial robot assembly method and system based on multimodal large model | |
| CN118502456A (en) | A method and system for realizing perception and control of unmanned aerial vehicle flight assistant | |
| CN111369991B (en) | Mobile control method supporting natural language instruction and system thereof | |
| CN119610090A (en) | A natural language control method for humanoid robot | |
| CN119772883A (en) | Indoor mobile service robot interactive task execution method, device and storage medium, indoor mobile service robot system | |
| CN106055244B (en) | Man-machine interaction method based on Kinect and voice | |
| CN119115928A (en) | A humanoid robot multimodal interaction method and device based on large language model | |
| Fardana et al. | Controlling a mobile robot with natural commands based on voice and gesture | |
| Omeed et al. | Integrating Computer Vision and language model for interactive AI-Robot | |
| Giachos et al. | A contemporary survey on intelligent human-robot interfaces focused on natural language processing | |
| CN110434859B (en) | An intelligent service robot system for commercial office environment and its operation method | |
| Ahn et al. | Natural-language-based robot action control using a hierarchical behavior model | |
| CN116993965A (en) | A robot target object grabbing method, system, equipment and storage medium | |
| Kim et al. | SGGNet 2: Speech-scene graph grounding network for speech-guided navigation |
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||