CN114131635A - Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception - Google Patents

Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Info

Publication number
CN114131635A
Authority
CN
China
Prior art keywords
manipulator
grasping
tactile
degree
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111492543.6A
Other languages
Chinese (zh)
Other versions
CN114131635B (en)
Inventor
李可
胡元栋
李光林
魏娜
田新诚
李贻斌
宋锐
侯莹
何文晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202111492543.6A
Publication of CN114131635A
Application granted
Publication of CN114131635B
Legal status: Active (current)
Anticipated expiration

Abstract

Translated from Chinese



The invention provides a multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception, comprising: an EEG signal acquisition module for acquiring the user's EEG signals; a visual information acquisition module for acquiring visual information; a multi-degree-of-freedom assisted grasping external limb robot with a multi-degree-of-freedom manipulator, the five fingers of which are each provided with sensors for detecting the corresponding tactile information; and a control system configured to extract the position of the object to be grasped and the candidate positions from the visual information, process and analyze the EEG signals to obtain the motion intention, control the manipulator to perform the grasping action according to that intention so as to move the target object to the target position, receive the tactile information fed back by the manipulator during grasping, and control the grasping force of the manipulator according to the difference between the tactile information and a preset threshold. The invention effectively fuses motion intention, machine vision and machine touch to establish active perception of the grasped object and its environment, thereby realizing grasping control of the external limb.


Description

Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception
Technical Field
The invention belongs to the technical field of auxiliary robots, and particularly relates to a multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active sensing.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Robots that assist disabled people with the reaching and grasping movements required by daily activities have been widely applied in recent years. However, the input devices commonly used to control such robots, such as joysticks, keyboards or touch pads, are not suitable for people with physical disabilities. Brain-computer interface technology is an emerging mode of human-computer interaction: brain signals are recorded and interpreted through technologies such as electroencephalography (EEG), so that direct control of the robot can be achieved without traditional input equipment, and movement intentions can be recognized through real-time, accurate decoding and classification of the EEG signals. Because real operational tasks involve objects of different shapes and complex, changeable environments, real-time decoding and accurate classification of EEG remains one of the key technical problems that require continued attention and breakthrough.
In order for the robot to better assist human life, it is also important to give it active perception capability. The core of active perception is that, rather than passively receiving environmental information, the robot collects and integrates it using multi-modal sensors such as vision and touch, thereby forming active cognition, understanding and adaptation to the environment. Visual information is acquired by using a camera and a computer in place of the human eye to identify, track and measure targets, and by further image processing to provide the computer with usable digital information. This digital information includes not only two-dimensional pictures but also three-dimensional scenes, video sequences, and the like. Through this digital information the robot can fully understand the environment and the manipulated object, and prior experience can be invoked to control the robot conveniently. In addition to visual information, the robot also needs tactile perception capability when touching an object.
Humans are able to perform tasks efficiently in large part because of the mechanoreceptors densely distributed in the human hand. These receptors transmit contact information such as pressure and vibration, generated when the hand touches an object, to the central nervous system through sensory nerve pathways. By processing and analyzing this tactile information, the central nervous system builds up knowledge of key properties of the manipulated object such as the contact position, the magnitude and direction of the contact force, its shape, weight, center of mass, surface texture and friction coefficient. This information allows a human to operate and recognize an object with great skill by touch alone. Giving robots the same tactile mechanism is a hot spot in current robotics research; its core is to accurately identify, in real time, the contact information between the fingers and objects of different sizes and shapes when the robot's executing mechanism (such as a manipulator) grips them, to establish contact cognition through comprehensive analysis and judgment, and thereby to control the motion through higher-level decisions.
Robot vision and touch both play an important role in actively sensing the environment. However, how to effectively fuse the perception information of these two different modalities, and how to realize motion decision and control of the robot by analyzing and fusing the visual and tactile information in combination with the subjective will of the disabled user, are key problems that restrict the use of such robots in daily care for the disabled.
Disclosure of Invention
The invention aims to solve the problems and provides a multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual sense and tactile sense active sensing.
According to some embodiments, the invention adopts the following technical scheme:
a multi-degree-of-freedom auxiliary grasping outer limb robot system fusing visual sense and tactile active perception comprises:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals of a user;
the visual information acquisition module is used for acquiring visual information;
a multi-degree-of-freedom auxiliary grasping outer limb robot having a multi-degree-of-freedom manipulator, wherein sensors are respectively arranged on the five fingers of the manipulator and used for detecting the corresponding tactile information;
a control system, configured to extract the position of the object to be grasped and the candidate positions from the visual information, process and analyze the electroencephalogram signals to acquire a movement intention, control the manipulator to execute a grasping action according to the movement intention so as to move the target object to the target position, receive tactile information fed back by the manipulator during the grasping process, and control the grasping force of the manipulator according to the difference between the tactile information and a preset threshold value.
As an alternative embodiment, the process of processing and analyzing the brain electrical signals by the control system comprises the following steps: after filtering the electroencephalogram signals, extracting the characteristics of the set frequency band, classifying the extracted characteristics, determining the movement intention, and combining preset manipulator control instructions according to the movement intention.
As an alternative embodiment, the movement intent includes a target object, a target location, and an action instruction.
As an alternative embodiment, the visual information acquisition module comprises an imaging device arranged in front of the user for detecting the object to be gripped and the environmental information.
As an alternative embodiment, the specific process by which the control system extracts the position of the object to be gripped and the candidate positions from the visual information includes: identifying the image acquired by the imaging device, determining the approximate area of each candidate position, and extracting the coordinates of each candidate position; extracting key points from the point cloud data of the area where the object to be grasped is located, calculating a three-dimensional fast point feature histogram of the key points, describing the relative direction of the normal between two points, comparing this histogram with the histogram of a possible target to be identified from a known model to obtain a point-to-point correspondence, and determining the position of the object to be grasped.
Further, when the control system extracts the position of the object to be gripped and the position to be selected from the visual information, the position of the manipulator needs to be determined, and the position of the manipulator is determined by a positioning module on the manipulator;
or key points are extracted from the point cloud data of the area where the manipulator is located in the visual information, the three-dimensional fast point feature histogram of the key points is calculated, the relative direction of the normal between two points is described and compared with the histogram of the target to be identified from the known model to obtain a point-to-point correspondence and determine the position of the manipulator.
In an alternative embodiment, the control system takes visual information and movement intention as priority control input before the manipulator performs the gripping action, the manipulator selects a gripping control mode according to a classification result after decoding the movement intention, calculates the distance and the pose of the manipulator relative to the target object by combining the visual information, and determines the movement path of the manipulator according to the distance and the pose.
In an alternative embodiment, when the manipulator performs the gripping action, the control system takes the tactile information fed back by the manipulator as a priority control input, and completes the gripping control of the object by combining with the inherent motion mode preset by the manipulator.
As an alternative embodiment, in the process that the manipulator performs the gripping action and moves the target object, the control system uses the movement intention and the visual information acquired in real time as a feedforward control basis, and uses the manipulator touch information as a feedback control source, so as to realize the movement control of the target object.
In an alternative embodiment, the sensors are force sensors, and the control system compares the detection value of each force sensor with its threshold value, and when the difference exceeds a set range, adjusts the motion of the corresponding manipulator finger to increase or decrease the gripping force until the difference between the detection value and its threshold value is within the set range.
Compared with the prior art, the invention has the beneficial effects that:
the invention effectively fuses machine vision and machine touch to establish active perception of the grasped object and environment, thereby realizing grasping control of the external limb. The invention can provide important assistance for personnel with inconvenient limb activities, provides important technical support for assisting the required personnel to complete daily life, and has wide application value.
In the implementation process, the control logics of the visual information, the tactile information and the user movement intention are organically coordinated according to different process links, different control modes are adopted according to different grasping stages, high-efficiency operation can be realized, and the working efficiency and the accuracy of the whole system are ensured.
According to the invention, a pressure sensor is attached to each fingertip at the end of the outer limb manipulator, a suitable threshold value is set for each sensor, the measured values during gripping are monitored in real time and the finger positions are adjusted until the pressure sensor readings settle near the set thresholds, ensuring that the grip becomes anthropomorphic during the gripping process.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
FIG. 1 is a schematic diagram of a system operating environment;
FIG. 2 is a schematic diagram of the overall control strategy of the system;
FIGS. 3(a) and (b) are decoding diagrams of the motion intentions;
FIG. 4 is a schematic view of tactile detection;
FIG. 5 is a block diagram of a control strategy for the outer limb;
fig. 6 shows the overall flow chart.
Wherein 1 is a 7-DOF external limb mechanical arm with a manipulator at its end; 2 is a force sensor arranged at the end (on the fingertips) for real-time tactile detection; 3 is an electroencephalogram acquisition device for acquiring the user's EEG signals; 4 is imaging equipment for acquiring object and environment information in real time; 5, 6 and 7 are the objects to be held, respectively a handbag, a mobile phone and a water cup.
The specific implementation mode is as follows:
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
A brain-control multi-degree-of-freedom auxiliary grasping system integrating visual sense and tactile sense active perception mainly comprises the following parts:
the method includes the steps of collecting electroencephalogram signals, preprocessing and feature extraction, obtaining Spatial features of different frequency bands by adopting a Filter Bank Common Spatial Pattern (FBCSP) algorithm, and finally classifying the processed signals by adopting Linear Discriminant Analysis (LDA).
The second part is to obtain visual information and pass this information to the control system for use. In order to properly interact with the user and the environment, the robotic system needs to perceive them. Firstly, a camera is used for shooting an experimental scene in real time, and then an object to be operated is identified in real time from the acquired RGB-D data. The homogeneous transformation matrix between the frame fixed on the camera (which may be replaced by an RGB sensor in some embodiments) and the frame fixed on the robot base is preliminarily calibrated, and the detection of the position and the direction of the marker in the scene can be realized by using the database.
The third part is the processing of tactile information, which the outer limb acquires and uses to make the grip anthropomorphic. In order to give the outer limb better performance during gripping, a pressure sensor is attached to each fingertip at the end of the outer limb manipulator, a suitable threshold is set for each sensor, the measured value during gripping is monitored in real time, and the finger positions are adjusted until the pressure sensor readings reach the set thresholds.
The fourth part is that the visual information, the tactile information and the movement intention are combined to cooperatively send commands to control the movement of the outer limb. The method comprises the steps of writing required instructions including object grasping, moving to a desired position, starting and stopping control and the like into an upper computer or a control system in advance, and selecting and combining after decoding of the movement intention is completed, so that the outer limbs are mobilized to realize corresponding movement control. The joint angle limitation is set in the motion process of the outer limbs, so that the continuity in the motion is ensured.
In order to make the grasping process of the outer limb coherent and efficient, the control logics of visual information, tactile information and user intention need to be coordinated well, and different control modes are adopted according to different grasping stages: (1) before an object is grasped, establishing control input taking visual information and user intention as main control input, selecting a specific grasping control mode for grasping the outer limb according to a classification result obtained after decoding the movement intention at the moment, realizing feedforward control based on the movement mode, and simultaneously calculating the distance, the pose, the movement path and the like of the outer limb relative to the grasped object by combining the visual information to realize control based on the vision; (2) when the external limb grasps the object, feedback control input with tactile information as a main information source is adopted, and grasping control of the object is completed by combining an inherent motion mode arranged in the external limb; (3) in the moving process of the outer limb gripping object, the user intention and the visual information acquired in real time are used as a feedforward control basis, and the real-time acquired tactile information is used as a feedback control source, so that the movement control of the gripped object is realized.
As a typical embodiment, as shown in fig. 1, a brain-controlled multi-degree-of-freedom assisted grasping system integrating visual and tactile active sensing includes an external limb mechanical arm (also called the outer limb), a manipulator at its executing end, and a force sensor disposed on each finger of the manipulator;
the electroencephalogram acquisition device is used for acquiring electroencephalogram signals of a user;
and the imaging equipment acquires the object to be gripped and the environmental information in real time.
First, the processing of the electroencephalogram signals and the identification of the movement intention comprise: signal preprocessing, feature extraction, intention identification and control command selection. The first step of preprocessing is to filter the 32-channel EEG signals with a 50 Hz notch filter and a 0.5 Hz high-pass filter. For feature extraction, as the first stage of the FBCSP algorithm, four band-pass filters are applied to obtain the spatial features of the alpha and beta bands of 5-10 Hz, 10-15 Hz, 15-20 Hz and 20-25 Hz for each channel.
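A minimal sketch of this preprocessing stage, assuming a SciPy-based implementation, is given below; the sampling rate, filter orders and array shapes are illustrative assumptions, while the 50 Hz notch, 0.5 Hz high-pass and the four filter-bank bands follow the text above.

```python
# Sketch of the EEG preprocessing and filter bank described above.
# Assumed: 250 Hz sampling rate, 4th-order Butterworth filters, and input
# arrays of shape (n_channels, n_samples); only the cut-off frequencies
# and band edges come from the patent text.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 250.0                                        # assumed sampling rate (Hz)
BANDS = [(5, 10), (10, 15), (15, 20), (20, 25)]   # filter-bank bands from the text

def preprocess(eeg):
    """Apply the 50 Hz notch and 0.5 Hz high-pass filters to (channels, samples) data."""
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=FS)
    x = filtfilt(b_notch, a_notch, eeg, axis=1)
    b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=FS)
    return filtfilt(b_hp, a_hp, x, axis=1)

def filter_bank(eeg):
    """Return one band-passed copy of the signal per FBCSP band."""
    out = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
        out.append(filtfilt(b, a, eeg, axis=1))
    return np.stack(out)                          # (n_bands, n_channels, n_samples)
```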
Discriminant features associated with each movement intention are extracted from each preprocessed data window for classifier training and testing. The second stage of the FBCSP algorithm is applied to the signal in each filtered band, and the spatial filter is designed to enhance the differences between the different classes and modes. An N × T dimensional EEG signal X is acquired, where N is the number of channels and T is the number of samples, and a spatial filter matrix W is calculated. The 10 classes to be distinguished in this case are: the desired object is a cup ($X_1$), mobile phone ($X_2$) or handbag ($X_3$); the desired position is the mouth ($X_4$), ear ($X_5$) or hand ($X_6$); the functional instruction is start ($X_7$), pause ($X_8$), refresh ($X_9$) or stop ($X_{10}$). The normalized covariance matrix of each class is

$\bar{C}_i = \dfrac{X_i X_i^T}{\mathrm{trace}(X_i X_i^T)}$    (1)

where $\mathrm{trace}(\cdot)$ denotes the trace of a matrix, i.e. the sum of the elements of its main diagonal.
The composite spatial covariance matrix is the sum of these average normalized covariance matrices and can be decomposed as

$C = \sum_i \bar{C}_i = U_0 A U_0^T$    (2)

where $U_0$ and $A$ are the eigenvector matrix and the diagonal matrix of eigenvalues, respectively.
The whitening transform (3) converts the mean normalized covariance matrices into (4):

$P = A^{-1/2} U_0^T$    (3)

$S_i = P \bar{C}_i P^T$    (4)

The matrices $S_i$ decompose as in (5); they share a common eigenvector matrix U, and the sum of their eigenvalue matrices is the identity matrix, as in (6):

$S_i = U A_i U^T, \quad i \in [1,10]$    (5)

$\sum_{i=1}^{10} A_i = I$    (6)

The projection matrix is obtained as

$W = U^T P$    (7)

The original signal X is projected through the projection matrix W to obtain the feature matrix

$Z = W X$    (8)

The resulting signal Z has the same dimensions as X. The discriminative information is mainly concentrated in the first and last components of the feature matrix, while the information in the middle components is not significant and can be neglected, so the first m rows and the last m rows of Z (2m < N) are selected as features of the original input data, denoted $Z_p$; only the variance of these components is therefore considered for feature extraction.
The variance of $Z_p$ is $\mathrm{var}(Z_p)$, and it is expressed by logarithmic normalization as

$y_i = \log\left(\dfrac{\mathrm{var}(Z_p)}{\sum_{p=1}^{2m} \mathrm{var}(Z_p)}\right)$    (9)

where $y_i$ is the normalized feature of the i-th sample.
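The spatial-filter computation of equations (1)-(9) can be sketched in NumPy as below for the classical two-class case; the patent applies the same idea across ten classes, and the trial array shapes and the choice of m are illustrative assumptions.

```python
# Two-class CSP sketch following equations (1)-(9): normalized covariances,
# whitening, joint diagonalization, and log-normalized variance features.
# Trials are assumed to be arrays of shape (n_trials, n_channels, n_samples).
import numpy as np

def mean_normalized_cov(trials):
    """Average normalized spatial covariance of one class, eq. (1)."""
    covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, m=2):
    Ca = mean_normalized_cov(trials_a)
    Cb = mean_normalized_cov(trials_b)
    C = Ca + Cb                                    # composite covariance, eq. (2)
    evals, U0 = np.linalg.eigh(C)
    P = np.diag(evals ** -0.5) @ U0.T              # whitening transform, eq. (3)
    Sa = P @ Ca @ P.T                              # eq. (4)
    evals_a, U = np.linalg.eigh(Sa)                # shared eigenvectors, eqs. (5)-(6)
    U = U[:, np.argsort(evals_a)[::-1]]            # sort by discriminative power
    W = U.T @ P                                    # projection matrix, eq. (7)
    return np.vstack([W[:m], W[-m:]])              # keep the first m and last m filters

def csp_features(W, trial):
    Z = W @ trial                                  # projected signal, eq. (8)
    var = Z.var(axis=1)
    return np.log(var / var.sum())                 # log-normalized variance, eq. (9)
```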
And classifying the result by using the LDA as a classification method.
LDA is the projection of data in a low dimension, after which it is desirable that the projected points of each category of data are as close as possible, while the distance between the category centers of the different categories of data is as large as possible.
Given a data set $y = (y_1, y_2, \ldots, y_i, \ldots, y_{2m})$, for $y_i$ the mean vector is $\mu_i$ and the covariance matrix is $\Sigma_i$. Take any two classes of samples, defined as $X_0$ and $X_1$; the projections of the two class centers onto the line are $\omega^T \mu_0$ and $\omega^T \mu_1$, and the covariances of the two classes of samples are $\omega^T \Sigma_0 \omega$ and $\omega^T \Sigma_1 \omega$.
To make the projection points of samples of the same class as close as possible, the value $\omega_1$ should be as small as possible:

$\omega_1 = \omega^T \Sigma_0 \omega + \omega^T \Sigma_1 \omega$    (10)

To make the projection points of samples of different classes as dispersed as possible, the value $\omega_2$ should be as large as possible:

$\omega_2 = \lVert \omega^T \mu_0 - \omega^T \mu_1 \rVert_2^2$    (11)

Thus the quotient J is maximized:

$J = \dfrac{\lVert \omega^T \mu_0 - \omega^T \mu_1 \rVert_2^2}{\omega^T \Sigma_0 \omega + \omega^T \Sigma_1 \omega}$    (12)

The within-class divergence matrix is defined as

$S_\omega = \Sigma_0 + \Sigma_1$    (13)

and the between-class divergence matrix as

$S_b = (\mu_0 - \mu_1)(\mu_0 - \mu_1)^T$    (14)

In this embodiment there are 10 classes, so the within-class divergence matrix is

$S_\omega = \sum_{i=1}^{10} \Sigma_i$    (15)

and the between-class divergence matrix is

$S_b = \sum_{i=1}^{10} (\mu_i - \mu)(\mu_i - \mu)^T$    (16)

where $\mu$ is the overall mean vector. J therefore simplifies to

$J = \dfrac{\omega^T S_b \omega}{\omega^T S_\omega \omega}$    (17)

Because the goal is to maximize J, let $\omega^T S_\omega \omega = 1$; the Lagrange multiplier method then yields (18), from which the matrix value of ω is obtained by further calculation:

$S_b \omega = \lambda S_\omega \omega$    (18)

After the value of ω is obtained, the sample set is mapped with ω to obtain new samples with the best separability: projections of samples of the same class are close together, projections of samples of different classes are dispersed as much as possible, and each dense region of projection points corresponds to one class. An EEG classifier capable of distinguishing the ten categories is thus obtained, realizing multi-class classification of the user's intentions.
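In practice the ten-class LDA stage can be realized with a library implementation rather than the explicit derivation above; the sketch below uses scikit-learn, and the integer label encoding of the ten intents is an illustrative assumption.

```python
# Sketch of the LDA classification stage: FBCSP log-variance features in,
# one of the ten intent classes out. Labels are assumed to be integers 0-9
# in the order listed below.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

INTENTS = ["cup", "phone", "handbag",            # objects to grasp
           "mouth", "ear", "hand",               # candidate positions
           "start", "pause", "refresh", "stop"]  # function instructions

def train_intent_classifier(features, labels):
    """features: (n_trials, n_features); labels: integers indexing INTENTS."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, labels)
    return clf

def decode_intent(clf, feature_vector):
    idx = int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])
    return INTENTS[idx]
```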
In this embodiment, the classification results correspond to three objects to be grasped, three candidate positions, and four function instructions. The three objects to be grasped are a cup, a mobile phone and a handbag; the three candidate positions are the mouth, the ear and the hand; and the four function instructions are refresh, start, pause and stop, as shown in fig. 3(a).
Of course, in other embodiments, the object to be grasped, the alternative/candidate location, and the function instruction may all be changed according to the user requirement, and are not described herein again.
And after the movement intention generated by the user is obtained, combining preset control instructions of the outer limbs according to the classification result. For example, if the user desires to move the cup to the mouth, the user selects a combination of the instruction to grip the cup and the instruction to move to the mouth to wait for the next operation.
Active perception of visual information. First, the position of the user's face is detected: an image at full high-definition resolution is acquired from the sensor and the positions of the mouth and ears are detected on the two-dimensional plane. The face in the scene is identified using Haar-like features, two regions of the face image containing the mouth and the ears respectively are cropped out, and the Haar feature algorithm is then applied within these two regions to find the coordinates of the mouth and ears. The second step is to acquire full high-definition point cloud data from the sensor to estimate the distance between the mouth/ears and the sensor; the cloud is filtered with a voxel grid filter to reduce the number of points that need to be computed, points are then extracted from the selected regions of the image, and the x, y, z coordinates of the center of the mouth and the center of the ears are computed.
Next, the position of the user's hand is detected by processing the acquired image, as sketched below. The palm region is first detected with a skin-color detection algorithm and the pixel region is dilated so that the fingers are not cut off; after binarization the image is filtered to remove background noise, and the largest contour is selected. The largest contour is approximated as a polygon and the position of its center point is calculated; the distance between the palm and the sensor is estimated with the full high-definition point cloud data, and finally the x, y, z coordinates of the palm center are computed.
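The palm-detection step can be sketched with OpenCV as below; the HSV skin-colour range, kernel sizes and the assumption of a BGR input image are illustrative, not values from the patent.

```python
# Sketch of palm detection: skin-colour thresholding, dilation so fingers are
# not cut off, noise filtering, largest contour, polygon approximation and
# centroid. The 3D position would then be looked up in the point cloud.
import cv2
import numpy as np

def palm_center(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # assumed skin-colour range
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))      # expand the pixel region
    mask = cv2.medianBlur(mask, 5)                          # remove background noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)            # maximum outline
    poly = cv2.approxPolyDP(largest, 0.01 * cv2.arcLength(largest, True), True)
    m = cv2.moments(poly)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # (x, y) of the palm centre
```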
Finally, the object is identified using the point cloud database. Key points are first extracted from the scene point cloud, uniformly sampled in the area where the object is located; the three-dimensional fast point feature histograms of the key points are then computed, describing the relative direction of the normal between pairs of points, and compared with the histograms of the possible targets to be identified from the known models to obtain point-to-point correspondences. These correspondences are then merged to enforce geometric consistency between them, and, if a fixed reference is not found, the model is aligned with the instances in the scene. The correspondences are processed with a random sample consensus estimator to find the best transformation between them. Finally, hypothesis verification is performed and the geometric information of the object is used to reduce the error.
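A minimal sketch of this keypoint/FPFH matching step is given below, assuming the Open3D library; the voxel size, search radii and RANSAC parameters are illustrative assumptions, not values from the patent.

```python
# Sketch of FPFH-based object localization: downsample, estimate normals,
# compute fast point feature histograms, and align the known object model
# to the scene with RANSAC over feature correspondences.
import open3d as o3d

VOXEL = 0.005  # assumed sampling distance in metres

def preprocess_cloud(pcd):
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 5, max_nn=100))
    return down, fpfh

def locate_object(model_pcd, scene_pcd):
    """Return the rigid transform placing the known object model in the scene."""
    model, model_fpfh = preprocess_cloud(model_pcd)
    scene, scene_fpfh = preprocess_cloud(scene_pcd)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        model, scene, model_fpfh, scene_fpfh,
        mutual_filter=True,
        max_correspondence_distance=VOXEL * 3,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=4,
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```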
After the processing, the position information of the object and the expected moving position information can be obtained, and the outer limb can start actual movement by combining the combined instruction selected by the movement intention.
Active perception of tactile information. The sensor used has a capacitive structure, as shown in fig. 4: the upper gray layer is made of a conductive material and the middle black layer of an elastic insulating material. When force is applied to the upper layer, the middle layer deforms, so that the capacitance of the structure changes and the force value can be obtained; empirically, the capacitance and the force value have an approximately linear relationship within a certain range.
For the sensed tactile information, the contact force of each of the five fingers when a human hand grips the three objects is first collected and recorded as $F_i$, where $i \in [1,5]$ denotes the five fingers. These values are then used as the thresholds of the pressure sensors. The force sensors are arranged on the five fingers at the end of the outer limb manipulator, and the sensor data are monitored in real time while an object is grasped. When a sensor first reaches its threshold, the position of the corresponding fingertip motor is finely adjusted: when the measured value is slightly larger than the threshold, the motor rotates outwards so that the measurement decreases; when the measurement is slightly smaller than the threshold, the motor rotates inwards so that the measurement increases. Finally each motor settles at a position where its measured value stays near the threshold; the operation is repeated until all sensors satisfy this condition and the detection is finished. An image of the force values detected in the actual process is shown in fig. 4.
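A minimal sketch of this threshold-seeking adjustment is given below; the tolerance value and the sensor/motor interfaces (read_force, step_motor) are illustrative stand-ins for the real hardware drivers.

```python
# Sketch of per-finger force regulation: each fingertip motor is nudged
# outwards when the measured force exceeds its recorded threshold and inwards
# when it falls below, until every finger settles near its threshold.
TOLERANCE = 0.05  # assumed acceptable deviation from the threshold (N)

def regulate_grip(thresholds, read_force, step_motor, fingers=(1, 2, 3, 4, 5)):
    """thresholds: dict finger -> F_i; read_force(i) -> newtons; step_motor(i, direction)."""
    settled = set()
    while len(settled) < len(fingers):
        for i in fingers:
            if i in settled:
                continue
            error = read_force(i) - thresholds[i]
            if error > TOLERANCE:
                step_motor(i, "out")     # measured value above threshold: rotate outwards
            elif error < -TOLERANCE:
                step_motor(i, "in")      # measured value below threshold: rotate inwards
            else:
                settled.add(i)           # reading is near the threshold, finger done
```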
Through this operation, the outer limb can adjust its gripping posture by actively sensing tactile information during actual movement, making the gripping process more anthropomorphic.
The embodiment establishes the active perception of fusion visual touch, and controls the multi-degree-of-freedom outer limb mechanical arm to carry out auxiliary grasping by combining a brain-computer interface so as to meet the grasping requirement of the disabled in daily life.
In the motion control of the outer limb, joint angle limits are added in addition to the basic motion, to prevent the motion pauses caused by singular points generated during movement. For the 7-degree-of-freedom outer limb robotic arm, given a task control objective, i.e. grasping an object and moving it, the task (system) state is $\sigma(q)$, where $q$ is the joint state.
The relationship between the joint velocity and the task-space velocity is

$\dot{\sigma} = J(q)\,\dot{q}$

where $J(q) = \partial \sigma / \partial q$ is the Jacobian matrix and $\dot{q}$ is the joint velocity vector.
After the corresponding position coordinates are obtained, the expected motion path can be obtained by combining them with the preset motion command, and the task value σ is driven to the expected value $\sigma_d$ by means of a closed-loop inverse kinematics algorithm:

$\dot{q} = J^{*}\left(\dot{\sigma}_d + K\,\tilde{\sigma}\right)$

where K is a positive definite matrix of chosen gains, $\tilde{\sigma} = \sigma_d - \sigma$ is the task error, and $J^{*} = J^T (J J^T)^{-1}$ is the Moore-Penrose pseudo-inverse of J.
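One closed-loop inverse kinematics update following the law above can be sketched in NumPy as follows; the forward_kinematics and jacobian callables stand in for the robot model of the 7-DOF arm, and the gain matrix and time step are illustrative assumptions.

```python
# Sketch of one CLIK control step: drive the task value sigma towards the
# desired sigma_d using the Moore-Penrose pseudo-inverse of the Jacobian.
import numpy as np

def clik_step(q, sigma_d, sigma_d_dot, forward_kinematics, jacobian, K, dt):
    sigma = forward_kinematics(q)                   # current task value sigma(q)
    sigma_err = sigma_d - sigma                     # task error
    J = jacobian(q)                                 # task Jacobian, shape (m, 7)
    J_pinv = J.T @ np.linalg.inv(J @ J.T)           # J* = J^T (J J^T)^-1
    q_dot = J_pinv @ (sigma_d_dot + K @ sigma_err)  # closed-loop inverse kinematics law
    return q + dt * q_dot                           # integrate one control step
```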
The control framework for the outer limb contains two feedforward control terms and one feedback control term. The two feedforward terms are the object information $x_t$ obtained after decoding the movement intention and the position information $x_p$ in the environment; the feedback term is the contact force information $x_f$ during gripping, as shown in detail in fig. 5. $x_t$ and $x_p$ are added to the control task to calculate the corresponding joint velocity $\dot{q}$, which is sent to the controller; $x_f$ is used for feedback control of the joint state of the manipulator at the end of the outer limb. Through their combination, $x_t$, $x_p$ and $x_f$ finally output the corresponding joint torque τ, so that the outer limb performs the corresponding movement.
Different controls dominate at different stages of the grip: (1) before gripping the object, the feedforward terms $x_t$ and $x_p$ play the main control role, where $x_t$ provides information such as the object's shape and center of mass, $x_p$ provides the object position information, and together they control the outer limb to invoke its inherent motion mode and generate movement; (2) while gripping the object, the feedback term $x_f$ plays the main control role, providing the contact force information and controlling the manipulator at the end of the outer limb to adjust its posture so as to grip the object better; (3) while moving the gripped object, the feedforward term $x_p$ and the feedback term $x_f$ play the main control role, where $x_p$ provides the position where the user expects the object finally to be placed and controls the outer limb to move the gripped object to that position, and $x_f$ provides the contact force information and ensures a continuous and stable gripping posture during the movement, as shown in fig. 5 and sketched below.
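A minimal sketch of this stage-dependent selection of control inputs follows; the stage names and data structures are illustrative assumptions, and the actual blending into joint torques τ is left to the controller of fig. 5.

```python
# Sketch of which information sources dominate the controller in each phase
# of the grasp: x_t (object info), x_p (position info), x_f (contact force).
def dominant_inputs(stage, x_t, x_p, x_f):
    """Return (feedforward_terms, feedback_terms) for the given grasp stage."""
    if stage == "approach":    # (1) before gripping: intent + vision feedforward
        return {"object_info": x_t, "object_position": x_p}, {}
    if stage == "grasp":       # (2) while gripping: tactile feedback dominates
        return {"object_info": x_t}, {"contact_force": x_f}
    if stage == "move":        # (3) moving the object: target position + tactile feedback
        return {"target_position": x_p}, {"contact_force": x_f}
    raise ValueError(f"unknown grasp stage: {stage}")
```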
The overall process is as follows: the outer limb acquires object and environment information in real time and waits for the user to participate; the user wears the EEG cap, and after the movement intention is decoded, the corresponding area of the target/position column in the interactive interface is highlighted; if the highlighted area is not the object or the expected position desired by the user, the operation is repeated. The interactive interface is shown in fig. 3(b). After a successful selection, the corresponding object position and expected position are searched for in the visual information acquired in real time. The outer limb then has the coordinate information and invokes the combined instruction to complete the action of grasping the object. When the manipulator of the outer limb just touches the object, the gripping posture is adjusted in real time by combining the sensed tactile information. After the grip is completed, the movement continues to the desired position. If the user needs the outer limb to stay at its current position during the movement, a pause instruction is invoked after decoding the movement intention; the outer limb then pauses its movement and waits for the next operation.
In this process, in addition to the three objects (cup, mobile phone, handbag) and the three positions (mouth, ear, hand), the classification results also correspond to four functional instructions. The start command resumes the motion of the outer limb and is used after a pause command; the pause command lets the user pause the movement of the outer limb in certain situations; the refresh command ends the current task and restarts it (if issued in the target selection stage, the object to be grasped is reselected; if issued in the position selection stage, the expected position is reselected); and the stop command makes the outer limb stop the current task and no longer move, for emergency situations. See fig. 3(a).
The working flow of the system is shown in fig. 6: the user wears the EEG cap, the outer limb then grasps according to the user's intention and its own visual-tactile active perception, and once the desired object has been moved to the desired position, one cycle of the flow ends.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (10)

Translated from Chinese
1. A multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception, characterized by comprising: an EEG signal acquisition module for acquiring the user's EEG signals; a visual information acquisition module for acquiring visual information; a multi-degree-of-freedom assisted grasping external limb robot having a multi-degree-of-freedom manipulator, the five fingers of the manipulator being respectively provided with sensors for detecting the corresponding tactile information; and a control system configured to extract the position of the object to be grasped and the candidate positions from the visual information, process and analyze the EEG signals to obtain the motion intention, control the manipulator to perform a grasping action according to the motion intention so as to move the target object to the target position, receive the tactile information fed back by the manipulator during the grasping process, and control the grasping force of the manipulator according to the difference between the tactile information and a preset threshold.
2. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that the control system's processing and analysis of the EEG signals comprises: after filtering the EEG signals, extracting features of the set frequency bands, classifying the extracted features, determining the motion intention, and combining the preset manipulator control instructions according to the motion intention.
3. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that the motion intention comprises a target object, a target position and an action instruction.
4. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that the visual information acquisition module comprises an imaging device arranged in front of the user for detecting the object to be grasped and the environmental information.
5. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that the specific process by which the control system extracts the position of the object to be grasped and the candidate positions from the visual information comprises: recognizing the image acquired by the imaging device, determining the approximate area of each candidate position, and extracting the coordinates of each candidate position; and extracting key points from the point cloud data of the area where the object to be grasped is located, calculating the three-dimensional fast point feature histogram of the key points, describing the relative direction of the normal between two points, comparing it with the histogram of the possible target to be recognized from a known model to obtain a point-to-point correspondence, and determining the position of the object to be grasped.
6. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 5, characterized in that when the control system extracts the position of the object to be grasped and the candidate positions from the visual information, the position of the manipulator also needs to be determined, the position of the manipulator being determined by a positioning module on the manipulator; or key points are extracted from the point cloud data of the area where the manipulator is located in the visual information, the three-dimensional fast point feature histogram of the key points is calculated, the relative direction of the normal between two points is described and compared with the histogram of the possible target to be recognized from the known model to obtain a point-to-point correspondence, and the position of the manipulator is determined.
7. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that before the manipulator performs the grasping action, the control system takes the visual information and the motion intention as the priority control inputs; the manipulator selects the grasping control mode according to the classification result obtained after decoding the motion intention, calculates the distance and pose of the manipulator relative to the target object in combination with the visual information, and determines the motion path of the manipulator according to the distance and pose.
8. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that when the manipulator performs the grasping action, the control system takes the tactile information fed back by the manipulator as the priority control input and completes the grasping control of the object in combination with the inherent motion mode preset for the manipulator.
9. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that in the process of the manipulator performing the grasping action and moving the target object, the control system uses the motion intention and the visual information acquired in real time as the feedforward control basis and the tactile information of the manipulator as the feedback control source, so as to realize the movement control of the target object.
10. The multi-degree-of-freedom assisted grasping external limb robot system integrating visual and tactile active perception according to claim 1, characterized in that the sensors are force sensors, and the control system computes the difference between the detected value of each force sensor and its threshold; when the difference exceeds a set range, the motion of the corresponding manipulator finger is adjusted to increase or decrease the grasping force until the difference between the detected value and its threshold is within the set range.
CN202111492543.6A | 2021-12-08 | 2021-12-08 | Multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual touch active sensing | Active | CN114131635B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111492543.6A | 2021-12-08 | 2021-12-08 | Multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual touch active sensing

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111492543.6A | 2021-12-08 | 2021-12-08 | Multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual touch active sensing

Publications (2)

Publication Number | Publication Date
CN114131635A | 2022-03-04
CN114131635B (en) | 2024-07-12

Family

ID=80385205

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111492543.6A (Active) | Multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual touch active sensing | 2021-12-08 | 2021-12-08

Country Status (1)

Country | Link
CN (1) | CN114131635B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1824472A * | 2002-10-29 | 2006-08-30 | 松下电器产业株式会社 | Robot grasping control device and robot grasping control method
CN106994689A * | 2016-01-23 | 2017-08-01 | 鸿富锦精密工业(武汉)有限公司 | The intelligent robot system and method controlled based on EEG signals
CN106671084A * | 2016-12-20 | 2017-05-17 | 华南理工大学 | Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN208601545U * | 2018-06-21 | 2019-03-15 | 东莞理工学院 | A pressure-sensing manipulator and robot
CN109366508A * | 2018-09-25 | 2019-02-22 | 中国医学科学院生物医学工程研究所 | A kind of advanced machine arm control system and its implementation based on BCI
WO2020094205A1 * | 2018-11-08 | 2020-05-14 | Mcs Free Zone | An enhanced reality underwater maintenance system by using a virtual reality manipulator (VRM)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张娜,等: "精确抓握力量控制的脑动力学研究", 《中国生物医学工程学报》, vol. 39, no. 6, pages 711 - 718*

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116945146A * | 2022-04-13 | 2023-10-27 | 深圳市越疆科技有限公司 | Tactile control system, method, robotic arm, robot and chip for robotic arm
CN115081522A * | 2022-06-13 | 2022-09-20 | 中国科学院计算技术研究所 | Environmental state discrimination method and system based on multi-modal perception
CN115284321A * | 2022-07-22 | 2022-11-04 | 广东技术师范大学 | Bionic manipulator controlled by brain waves
CN115463003A * | 2022-09-09 | 2022-12-13 | 燕山大学 | A control method of upper limb rehabilitation robot based on information fusion
CN119407801A * | 2025-01-09 | 2025-02-11 | 启元实验室 | Control method, system and storage medium based on biped robot door operation
CN119974028A * | 2025-04-16 | 2025-05-13 | 绵阳师范学院 | A method and system for adaptively adjusting the center of gravity of a handling robot

Also Published As

Publication number | Publication date
CN114131635B (en) | 2024-07-12

Similar Documents

Publication | Publication Date | Title
CN114131635B (en) | 2024-07-12 | Multi-degree-of-freedom auxiliary grasping outer limb robot system integrating visual touch active sensing
Qi et al. | Computer vision-based hand gesture recognition for human-robot interaction: a review
Rabhi et al. | A facial expression controlled wheelchair for people with disabilities
Shi et al. | Computer vision-based grasp pattern recognition with application to myoelectric control of dexterous hand prosthesis
Ahuja et al. | Static vision based Hand Gesture recognition using principal component analysis
CN109993073B (en) | A complex dynamic gesture recognition method based on Leap Motion
CN112990074B (en) | VR-based multi-scene autonomous control mixed brain-computer interface online system
Tang et al. | Wearable supernumerary robotic limb system using a hybrid control approach based on motor imagery and object detection
US20130335318A1 (en) | Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
Ahuja et al. | Hand gesture recognition using PCA
CN114495273B (en) | A robot gesture remote operation method and related device
CN107621880A (en) | An interactive control method for a robot wheelchair based on an improved head pose estimation method
Zhang et al. | Robotic control of dynamic and static gesture recognition
Noh et al. | A decade of progress in human motion recognition: A comprehensive survey from 2010 to 2020
Huda et al. | Real-time hand-gesture recognition for the control of wheelchair
Nandwana et al. | A survey paper on hand gesture recognition
Xiong et al. | Robotic telemanipulation with EMG-driven strategy-assisted shared control method
CN112149574A (en) | Accompanying robot-oriented intention flexible mapping method and device
Wei et al. | Fusing EMG and visual data for hands-free control of an intelligent wheelchair
CN117523669A (en) | Gesture recognition method, device, electronic device and storage medium
Chu et al. | Hands-free assistive manipulator using augmented reality and tongue drive system
Xu et al. | A Powered Prosthetic Hand with Vision System for Enhancing the Anthropopathic Grasp
CN113552945A (en) | A human-computer interaction glove system
Qiu et al. | Research on Intention Flexible Mapping Algorithm for Elderly Escort Robot
Zendehdel et al. | Hands-Free UAV Control: Real-Time Eye Movement Detection Using EOG and LSTM Networks

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
