





Technical Field

The present invention relates to the technical fields of machine monocular vision and medical equipment, and in particular to a gesture-based hand marker tracking method and system.
Background

Traditional Chinese medicine (TCM) manipulation is an important method for treating cervical spondylotic radiculopathy, and rotational manipulation is generally considered its key component. However, non-standard execution of rotational manipulation has seriously hindered its clinical application and promotion.

TCM rotational manipulation can be divided into four steps: self-positioning, preloading, rapid action, and recovery. Taking right-side rotational manipulation as an example: the patient sits upright with the neck naturally relaxed; the practitioner relaxes the soft tissues of the neck for 5-10 minutes using pressing, kneading, rolling, and similar techniques; the patient actively rotates the head horizontally to the limit angle and then, after maximum flexion, rotates further until a sense of fixation is reached; the practitioner supports the patient's jaw with the elbow and applies gentle upward traction for 3-5 s; the patient relaxes the muscles, and the practitioner quickly pulls upward with a short burst of force from the elbow; one or more audible clicks indicate a successful operation; finally, the neck muscles are relaxed again with lifting, grasping, and similar techniques.

During actual operation, the patient must adjust the head position to the physiological limit under the practitioner's guidance, a process that relies on the practitioner's visual judgment and experience. In current TCM rotational-manipulation training, robots are used to simulate patients with cervical spondylosis and provide practitioners with a practice platform, but the robot is mainly controlled with a remote controller that turns the robot's head to the required position. Existing schemes rely either on pure marker tracking or on target tracking after vision-based gesture recognition. At the current state of the art, realizing such interaction solely with visual algorithms is subject to many unstable factors: illumination changes, viewpoint switching, occlusion, and shadows remain difficult technical problems, whereas a stable and reliable form of interaction and operation is the foundation for product application and promotion.
Summary of the Invention

The purpose of the present invention is to provide a gesture-based hand marker tracking method and system, so that a robot can track the movement of a subject's hand marker according to the subject's gesture, realistically simulating the actual interaction between doctor and patient and providing doctors with a platform for practicing treatment manipulations.

To achieve the above purpose, the present invention provides the following scheme:

A gesture-based hand marker tracking method, comprising:

determining the gesture of a subject with a wearable gesture measurement device;

acquiring a color image of the subject's hand;

determining the spatial position of a hand marker from the color image;

when the gesture is a set gesture, the robot starting to track the marker and perform head rotation according to the spatial position of the marker;

when the gesture is not the set gesture, the robot stopping the head rotation, the robot's head being fixed at the stop position.
Optionally, the wearable gesture measurement device comprises a wearable glove, finger attitude measurement units located at the fingertips of the wearable glove, and a wireless communication module and a processor module located on the back of the wearable glove; the finger attitude measurement units are connected to the processor module through the wireless communication module.

The finger attitude measurement units are used to measure the finger attitude data of the subject; the processor module is used to process the finger attitude data to obtain the subject's gesture.

Optionally, each finger attitude measurement unit comprises a three-axis accelerometer and a three-axis gyroscope; the wireless communication module is a Zigbee wireless communication module; the processor module is an MSP430 microcontroller, in which a Kalman filtering algorithm, a first-order smoothing filter algorithm, and an outlier rejection algorithm are embedded.
Optionally, determining the spatial position of the hand marker from the color image specifically comprises:

establishing difference feature histograms, the difference feature histograms comprising a marker difference feature histogram and a background difference feature histogram;

converting the color image into the corresponding color name space to obtain a first color name image;

calculating, from the marker difference feature histogram and the background difference feature histogram, the probability that each pixel of the first color name image is a marker pixel;

selecting the pixels whose probability exceeds a preset probability to form a marker image;

calculating the center position of the marker image with the image moment formula;

determining the spatial position of the marker from the center position.
Optionally, establishing the difference feature histograms specifically comprises:

mapping a pre-collected color image of the hand from the RGB color space into the color name space to obtain a second color name image;

establishing a first histogram from the second color name image;

performing one back-projection pass on the second color name image to obtain the optimal estimation region of the marker, and establishing a target optimal estimation histogram from the optimal estimation region;

determining a background feature optimal estimation histogram of the marker based on the first histogram and the target optimal estimation histogram;

calculating feature values from the target optimal estimation histogram and the background feature optimal estimation histogram, and selecting the two color names that maximize the feature value;

establishing a position encoding function according to the size of the marker's search window;

transforming the second color name image with the position encoding function;

establishing the difference feature histograms from the transformed second color name image and the two selected color names.
Optionally, determining the spatial position of the marker from the center position specifically comprises:

calculating the current distance between the marker and the camera's optical center from the marker's distance to the optical center at calibration time and the marker's scale;

calculating the three-dimensional spatial coordinates of the marker from the current distance between the marker and the optical center and the center position, the three-dimensional spatial coordinates being the spatial position.

Optionally, when the gesture is the set gesture, the robot starting to track the marker and perform head rotation according to the spatial position of the marker specifically comprises:

determining the rotation direction of the robot head from the three-dimensional spatial coordinates;

controlling the robot head to rotate in that rotation direction.
The present invention also provides a gesture-based hand marker tracking system, comprising:

a gesture determination module for determining the gesture of the subject;

a color image acquisition module for acquiring a color image of the subject's hand;

a marker spatial position determination module for determining the spatial position of the hand marker from the color image;

a tracking module for causing the robot, when the gesture is the set gesture, to start tracking the marker and perform head rotation according to the spatial position of the marker;

a fixing module for causing the robot, when the gesture is not the set gesture, to stop the head rotation and fix its head at the stop position.
According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects:

The present invention provides a gesture-based hand marker tracking method and system. The method comprises: determining the gesture of a subject with a wearable gesture measurement device; acquiring a color image of the subject's hand; determining the spatial position of a hand marker from the color image; when the gesture is a set gesture, the robot starts tracking the marker and performs head rotation according to the spatial position of the marker; when the gesture is not the set gesture, the robot stops the head rotation and its head is fixed at the stop position. The present invention enables the robot to track the subject's gesture and position and move to the angle and position required for practice, realistically simulating the actual interaction between doctor and patient and providing doctors with a platform for practicing treatment manipulations.
Brief Description of the Drawings

To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a flowchart of the gesture-based hand marker tracking method provided by the present invention;

FIG. 2 is a schematic structural diagram of the wearable gesture measurement device provided by the present invention;

FIG. 3 is a schematic diagram of calculating the center position of the marker image provided by the present invention;

FIG. 4 is a schematic diagram of marker positions and their corresponding distances provided by the present invention;

FIG. 5 is a schematic diagram of position calculation in the camera coordinate system provided by the present invention;

FIG. 6 is a schematic diagram of the steady-state region of the image plane provided by the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The purpose of the present invention is to provide a gesture-based hand marker tracking method and system, so that a robot can track the movement of a subject's hand marker according to the subject's gesture, realistically simulating the actual interaction between doctor and patient and providing doctors with a platform for practicing treatment manipulations.

To make the above purpose, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in FIG. 1, the gesture-based hand marker tracking method provided by the present invention comprises the following steps:

Step 101: the gesture of the subject is determined by the wearable gesture measurement device.

The wearable gesture measurement device comprises a wearable glove, finger attitude measurement units at the fingertips of the glove, and a wireless communication module and a processor module on the back of the glove; the finger attitude measurement units are connected to the processor module through the wireless communication module. The finger attitude measurement units measure the subject's finger attitude data, and the processor module processes the finger attitude data to obtain the subject's gesture.

As shown in FIG. 2, each finger attitude measurement unit comprises a three-axis accelerometer and a three-axis gyroscope; the wireless communication module is a Zigbee module; the processor module is an MSP430 microcontroller, in which a Kalman filtering algorithm, a first-order smoothing filter algorithm, and an outlier rejection algorithm are embedded.
Before the experiment begins, the operator wearing the gesture measurement device fixes the hand in a given posture and keeps it still for 5 seconds. The fingertip angular velocity is measured by the micro three-axis gyroscope, and the a priori estimate of the fingertip attitude is

$$\hat{x}_k^- = F \hat{x}_{k-1} + B u_{k-1}$$

where $F$ is the state matrix, $B$ is the input matrix, $\hat{x}_{k-1}$ is the attitude measured by the system at the previous moment, and $u_{k-1}$ is the data measured by the finger attitude measurement unit at the previous moment.

The attitude observation $z_k$ is then

$$z_k = H_k x_k + v_k$$

where $H_k$, $v_k$, and $x_k$ denote the observation matrix, the measurement error, and the optimal estimate of the system at time $k$, respectively.

The a priori estimate $P_k^-$ of the system covariance matrix is obtained from

$$P_k^- = F P_{k-1} F^{T} + Q_{k-1}$$

where $Q_{k-1}$ is the noise covariance matrix at the previous moment and $T$ denotes transposition. From this the Kalman gain matrix $K_k$ is obtained:

$$K_k = P_k^- H_k^{T} \left( H_k P_k^- H_k^{T} + R_k \right)^{-1}$$

where $R_k$ is the noise matrix of the measurement process.

The optimal estimate $\hat{x}_k$ of $x_k$ is therefore

$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H_k \hat{x}_k^- \right), \qquad P_k = \left( I - K_k H_k \right) P_k^-$$

where $H_k \hat{x}_k^-$ is the a priori estimate of the attitude observation and $I$ is the identity matrix.

The current gesture is defined from the finger attitude so computed.
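The per-step Kalman recursion above can be sketched as follows. This is a minimal NumPy illustration of the standard predict/update cycle; the matrices F, B, H, Q, R and the one-dimensional gyro example are illustrative placeholders, not the values embedded in the MSP430 firmware.

```python
import numpy as np

def kalman_step(x_prev, P_prev, u_prev, z, F, B, H, Q, R):
    """One predict/update cycle of the Kalman recursion for the
    fingertip attitude (symbols as in the text above)."""
    # A priori (predicted) state and covariance
    x_pred = F @ x_prev + B @ u_prev
    P_pred = F @ P_prev @ F.T + Q
    # Kalman gain
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # A posteriori (optimal) estimate and covariance
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 1-D example: angle integrated from a gyro rate reading.
dt = 0.01                                   # sample period (assumed)
F = np.array([[1.0]]); B = np.array([[dt]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[1e-2]])
x, P = np.array([0.0]), np.array([[1.0]])
# one step: gyro rate 5 rad/s, direct angle observation 0.06 rad
x, P = kalman_step(x, P, np.array([5.0]), np.array([0.06]), F, B, H, Q, R)
```

In the glove, one such filter per axis fuses the gyroscope prediction with the accelerometer-derived attitude observation.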
Step 102: a color image of the subject's hand is acquired.

Step 103: the spatial position of the hand marker is determined from the color image. This specifically comprises:

Step 1031: establishing the difference feature histograms, comprising the marker difference feature histogram and the background difference feature histogram.

Step 1032: converting the color image into the corresponding color name space to obtain the first color name image.

Step 1033: calculating, from the marker difference feature histogram and the background difference feature histogram, the probability that each pixel of the first color name image is a marker pixel.

Step 1034: selecting the pixels whose probability exceeds the preset probability to form the marker image.

Step 1035: calculating the center position of the marker image with the image moment formula.

Step 1036: determining the spatial position of the marker from the center position.
Step 1031 specifically comprises:

Step 10311: mapping the pre-collected color image of the hand from the RGB color space into the color name space to obtain the second color name image.

Step 10312: establishing the first histogram from the second color name image.

Step 10313: performing one back-projection pass on the second color name image to obtain the optimal estimation region of the marker, and establishing the target optimal estimation histogram from the optimal estimation region.

Step 10314: determining the background feature optimal estimation histogram of the marker based on the first histogram and the target optimal estimation histogram.

Step 10315: calculating feature values from the target optimal estimation histogram and the background feature optimal estimation histogram, and selecting the two color names that maximize the feature value.

Step 10316: establishing the position encoding function according to the size of the marker's search window.

Step 10317: transforming the second color name image with the position encoding function.

Step 10318: establishing the difference feature histograms from the transformed second color name image and the two selected color names.
Step 1036 specifically comprises:

Step 10361: calculating the current distance between the marker and the camera's optical center from the marker's distance to the optical center at calibration time and the marker's scale.

Step 10362: calculating the three-dimensional spatial coordinates of the marker from the current distance between the marker and the optical center and the center position, the three-dimensional spatial coordinates being the spatial position.
In practical application, the gesture measurement glove has been painted in advance with a predetermined marker color (i.e., the marker). The current color image is first mapped from the RGB color space to the color name space by a mapping function that assigns each RGB triple a color name. The RGB image thus becomes a color name image (i.e., the second color name image), which is still a color image. The color name information over the whole image is then counted to establish the first histogram $h_w$.
One back-projection pass is performed on the second color name image to obtain the optimal estimation region of the glove marker, from which the target optimal estimation histogram $o$ is computed; the background feature optimal estimation histogram $b$ of the whole color name image is then obtained by removing the target contribution from the whole-image histogram:

$$b = h_w - o$$

Comparing the target optimal estimation histogram $o$ with the background feature optimal estimation histogram $b$, a feature value $L(j)$ is computed for each color name $j$ as the discriminability of the target component relative to the background component, e.g.

$$L(j) = \frac{o(j)}{b(j)}$$

and the two color names that maximize $L$ are selected.
Let the current search window of the marker have size $(w, h)$, where $w$ is the width of the search window and $h$ its height, and establish over it a position encoding function $E(x, y)$ that weights each pixel according to its position in the window. Let $(x_0, y_0)$ be the top-left corner of the search window; every pixel of the window is transformed by expressing its position relative to $(x_0, y_0)$ and applying $E$.

The difference feature histograms, namely the marker difference feature histogram and the background difference feature histogram, are then established from the transformed second color name image and the two selected color names. The specific process is as follows:

For any pixel inside the search window, after mapping it to the color name space, check whether its value is one of the two selected color names; if so, add the value of the position encoding function to the corresponding histogram component and move on to the next pixel; otherwise do not accumulate and move on directly, until all pixels inside the search window have been processed, yielding the marker difference feature histogram. For any pixel outside the search window, map it to the color name space and check whether its value is one of the two selected color names; if so, add the value of the position encoding function to the corresponding histogram component, otherwise add one directly to the corresponding component, until all pixels have been processed, yielding the background difference feature histogram. Finally, normalize both the marker difference feature histogram and the background difference feature histogram.
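The histogram-building loop above can be sketched as follows. Since the concrete position encoding function is not reproduced here, the sketch accepts any user-supplied `encode(x, y)` weight function (a center-weighted kernel would be a typical choice); the function and variable names are assumptions for illustration.

```python
import numpy as np

def difference_histograms(cn_image, window, selected, encode):
    """Build the marker / background difference-feature histograms.
    cn_image : 2-D array of color-name indices
    window   : (x0, y0, w, h) search window
    selected : set of the two selected color-name indices
    encode   : assumed position-encoding function E(x, y) -> weight."""
    n_names = int(cn_image.max()) + 1
    h_obj = np.zeros(n_names)
    h_bg = np.zeros(n_names)
    x0, y0, w, h = window
    rows, cols = cn_image.shape
    for y in range(rows):
        for x in range(cols):
            name = cn_image[y, x]
            inside = (x0 <= x < x0 + w) and (y0 <= y < y0 + h)
            if inside:
                if name in selected:                  # encoded weight only
                    h_obj[name] += encode(x - x0, y - y0)
            else:
                if name in selected:                  # encoded weight
                    h_bg[name] += encode(x - x0, y - y0)
                else:
                    h_bg[name] += 1.0                 # plain count
    # normalize both histograms
    h_obj /= max(h_obj.sum(), 1e-9)
    h_bg /= max(h_bg.sum(), 1e-9)
    return h_obj, h_bg

# Toy example: a 2x2 patch of color name 1 inside a 4x4 image,
# with a trivial (constant) position encoding.
cn = np.zeros((4, 4), dtype=int)
cn[1:3, 1:3] = 1
h_obj, h_bg = difference_histograms(cn, (1, 1, 2, 2), {1}, lambda x, y: 1.0)
```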
A new image is acquired and converted into the color name space to obtain the first color name image. Dense sampling is performed inside the search window, and the probability that a sampled pixel $I(x, y)$ belongs to the tracked target is estimated as

$$p(x, y) = \frac{h_o\left(I(x, y)\right)}{h_o\left(I(x, y)\right) + h_b\left(I(x, y)\right)}$$

where $h_o$ is the marker difference feature histogram, $h_b$ is the background difference feature histogram, and $p(x, y)$ is the probability that the pixel at position $(x, y)$ belongs to the marker. The pixels whose probability exceeds the preset probability are selected to form the marker image.
The image moment formula is used to compute $M_{00}$, $M_{10}$, and $M_{01}$. $M_{00}$ can be regarded as the number of marker pixels contained in the marker image; $M_{10}$ is a first-order moment, the weighted offset of all pixels of the marker image along the x-axis, and likewise $M_{01}$ is the weighted offset along the y-axis. The x- and y-axes here follow the conventional image coordinate system: the x-axis runs along the columns and the y-axis along the rows. The center position of the marker, $(x_c, y_c) = (M_{10}/M_{00},\ M_{01}/M_{00})$, is obtained by the weighted-average (centroid) method from physics; the calculation process is shown in FIG. 3.
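Steps 1033-1035 — back-projecting the histograms into a per-pixel probability map, thresholding it, and taking the moment centroid — can be sketched as below; the probability threshold `p_thresh` is an assumed parameter.

```python
import numpy as np

def marker_centroid(cn_image, h_obj, h_bg, p_thresh=0.5):
    """Per-pixel marker probability p = h_obj / (h_obj + h_bg),
    thresholded into a marker image, followed by the moment centroid."""
    p = h_obj[cn_image] / (h_obj[cn_image] + h_bg[cn_image] + 1e-9)
    mask = (p > p_thresh).astype(float)          # marker image
    ys, xs = np.mgrid[0:cn_image.shape[0], 0:cn_image.shape[1]]
    M00 = mask.sum()                             # zero-order moment (pixel count)
    M10 = (xs * mask).sum()                      # first-order moment along x
    M01 = (ys * mask).sum()                      # first-order moment along y
    if M00 == 0:
        return None
    return M10 / M00, M01 / M00, M00             # (x_c, y_c, pixel count)

# Toy example: a 2x2 blob of color name 1 whose histogram mass is
# entirely on the target side.
cn = np.zeros((5, 5), dtype=int)
cn[2:4, 2:4] = 1
h_obj = np.array([0.0, 1.0]); h_bg = np.array([1.0, 0.0])
xc, yc, m00 = marker_centroid(cn, h_obj, h_bg)
```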
Using the pre-calibrated prior zero-order moment distribution M and the corresponding calibration distances z, the distance $z_c$ between the current marker to be tracked and the camera's optical center can be obtained. As shown in FIG. 4, $X_0$, $X_1$, $X_2$, $X_3$ are positions of the marker at given moments, $z = \{z_0, z_1, z_2, z_3\}$ are the corresponding distances, and $M = \{M_0, M_1, M_2, M_3\}$ is the corresponding prior zero-order moment distribution. By the pinhole imaging principle, the farther an object is from the optical center, the smaller its image on the image plane and the fewer pixels it covers; the number of pixels is related to the distance of the target from the optical center by

$$\frac{M_{00}}{M_q} = \left( \frac{z}{z_c} \right)^{2}$$

where $M_{00}$ is the zero-order moment of the marker in the image at its current spatial position (i.e., its number of pixels). In the calculation, the zero-order moment $M_q$, $q = 0, 1, 2$ or $3$, with the smallest absolute error from $M_{00}$ is selected from the calibrated prior distribution $M = \{M_0, M_1, M_2, M_3\}$; z is the distance from the marker corresponding to the selected $M_q$ to the camera's optical center, and $z_c$ is the distance from the current marker to the optical center. The scale of the marker is defined as

$$s = \sqrt{\frac{M_{00}}{M_q}}$$

where s is the scale of the current marker. As the marker moves arbitrarily in space, the nearest calibrated position is selected according to its pixel count ($M_{00}$) and the corresponding distance z is obtained, from which the distance $z_c$ of the marker at any position can be estimated as

$$z_c = \frac{z}{s}$$
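The distance estimation just described can be sketched as follows; the calibration values (pixel counts and distances) are made-up illustrative numbers.

```python
import numpy as np

def estimate_distance(M00, M_calib, z_calib):
    """Pick the calibrated zero-order moment closest to M00, then scale
    the calibrated distance: s = sqrt(M00 / Mq), z_c = z / s
    (pixel count falls off with the square of the distance)."""
    M_calib = np.asarray(M_calib, float)
    q = int(np.argmin(np.abs(M_calib - M00)))    # nearest calibration point
    s = np.sqrt(M00 / M_calib[q])                # current marker scale
    return z_calib[q] / s

# Illustrative calibration: 400 px at 1.0 m, 100 px at 2.0 m.
# A 225-px observation is nearest to the 100-px point, s = 1.5.
z_c = estimate_distance(225, [400.0, 100.0], [1.0, 2.0])
```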
Then, using the camera imaging model, the coordinates of the marker in three-dimensional space under the camera coordinate system can be computed:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = z_c K^{-1} \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}$$

where K is the camera intrinsic matrix, easily obtained by prior calibration, and $(X_c, Y_c, Z_c)$ are the three-dimensional coordinates of the marker's center point in the camera coordinate system.
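Lifting the marker's pixel center to camera-frame coordinates with the intrinsic matrix can be sketched as below; the intrinsics used in the example are illustrative, not calibrated values.

```python
import numpy as np

def backproject(K, xc, yc, zc):
    """Camera-frame 3-D coordinates of the marker center:
    [Xc, Yc, Zc]^T = zc * K^-1 * [xc, yc, 1]^T."""
    pix = np.array([xc, yc, 1.0])
    return zc * np.linalg.inv(K) @ pix

# Illustrative intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
point_3d = backproject(K, 420.0, 240.0, 2.0)   # 100 px right of center, 2 m away
```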
Step 104: when the gesture is the set gesture, the robot starts tracking the marker and performs head rotation according to the spatial position of the marker.

The rotation-angle errors about the x- and y-axes of the camera coordinate system are obtained by trigonometry; the spatial relationship is shown in FIG. 5. Point P is the coordinate of the marker in the camera coordinate system, $P_f$ is the projection of P on the image plane, f is the camera focal length, the pixel coordinates of $P_f$ are $(x_c, y_c)$, $P_{xz}$ and $P_{yz}$ are the projections of P on the XZ and YZ planes respectively, $P_0$ is the intersection of the camera's principal optical axis with the image plane, with pixel coordinates $(u_0, v_0)$, and $P_1$ is the projection of P on the z-axis. From the trigonometric relations, the angular position errors (i.e., rotation angles) of the robot head, $\theta_x$ and $\theta_y$, are

$$\theta_x = \arctan\frac{y_c - v_0}{f}, \qquad \theta_y = \arctan\frac{x_c - u_0}{f}$$
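The angular-error computation can be sketched as follows, with the focal length expressed in pixels; the sign convention is an assumption, since the figure is not reproduced here.

```python
import math

def head_angle_errors(xc, yc, u0, v0, f):
    """Angular errors of the robot head from the pixel offset of the
    marker center relative to the principal point (pinhole geometry):
    theta = atan(offset / focal length), both in radians."""
    theta_x = math.atan((yc - v0) / f)   # about the x-axis (vertical offset)
    theta_y = math.atan((xc - u0) / f)   # about the y-axis (horizontal offset)
    return theta_x, theta_y

# Marker 500 px right of a (320, 240) principal point, f = 500 px:
# a 45-degree yaw error and no pitch error.
theta_x, theta_y = head_angle_errors(820.0, 240.0, 320.0, 240.0, 500.0)
```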
Because the computation of the distance $z_c$ above is an estimate with considerable error, and the scale estimation during image processing is not sufficiently stable, a steady-state region $\Omega$ is defined in the current image plane to avoid jitter during the robot head's movement, as shown in FIG. 6:

$$\Omega = \left\{ (x, y) : \left| x - u_s \right| \le \frac{w_s}{2},\ \left| y - v_s \right| \le \frac{h_s}{2} \right\}$$

where $(u_s, v_s)$ are the pixel coordinates of the center point of the state space and $(w_s, h_s)$ is the size of the state space. When the coordinate position of the marker lies within the steady-state region, the tracking position is considered reached and the robot head position is no longer adjusted.
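The steady-state test can be sketched as a simple box check around the region center; the center and size values in the example are illustrative.

```python
def in_steady_region(xc, yc, center, size):
    """True when the marker center lies within half the region size
    of the region's center point (exact thresholds are assumptions)."""
    us, vs = center
    ws, hs = size
    return abs(xc - us) <= ws / 2 and abs(yc - vs) <= hs / 2

# A 10x10-pixel region around (320, 240): small offsets are ignored,
# larger ones trigger a head adjustment.
inside = in_steady_region(322.0, 241.0, (320.0, 240.0), (10.0, 10.0))
outside = in_steady_region(340.0, 240.0, (320.0, 240.0), (10.0, 10.0))
```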
Step 105: when the gesture is not the set gesture, the robot stops the head rotation and its head is fixed at the stop position.
The present invention also provides a gesture-based hand marker tracking system, comprising:

a gesture determination module for determining the gesture of the subject;

a color image acquisition module for acquiring a color image of the subject's hand;

a marker spatial position determination module for determining the spatial position of the hand marker from the color image;

a tracking module for causing the robot, when the gesture is the set gesture, to start tracking the marker and perform head rotation according to the spatial position of the marker;

a fixing module for causing the robot, when the gesture is not the set gesture, to stop the head rotation and fix its head at the stop position.

The method and system provided by the present invention enable the robot to track the subject's gesture and position and move to the angle and position required for practice, realistically simulating the actual interaction between doctor and patient and providing doctors with a platform for practicing treatment manipulations.
The various embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be cross-referenced. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
Specific examples are used herein to explain the principles and implementations of the present invention; the above description of the embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, in accordance with the idea of the present invention, make changes to the specific implementation and scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211075387.8A (CN115145403B) | 2022-09-05 | 2022-09-05 | A gesture-based hand marker tracking method and system |
| Publication Number | Publication Date |
|---|---|
| CN115145403A | 2022-10-04 |
| CN115145403B | 2022-12-02 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211075387.8A (Active, granted as CN115145403B) | A gesture-based hand marker tracking method and system | 2022-09-05 | 2022-09-05 |
| Country | Link |
|---|---|
| CN (1) | CN115145403B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130096575A1 (en)* | 2009-07-22 | 2013-04-18 | Eric S. Olson | System and method for controlling a remote medical device guidance system in three-dimensions using gestures |
| CN107204005A (en)* | 2017-06-12 | 2017-09-26 | 北京理工大学 | A hand marker tracking method and system |
| CN107247466A (en)* | 2017-06-12 | 2017-10-13 | 中山长峰智能自动化装备研究院有限公司 | Robot head gesture control method and system |
| CN107263541A (en)* | 2017-06-19 | 2017-10-20 | 中山长峰智能自动化装备研究院有限公司 | Robot and control method and system for force tracking error of robot |
| Publication number | Publication date |
|---|---|
| CN115145403B (en) | 2022-12-02 |
| Publication | Title |
|---|---|
| JP6000387B2 | Master finger tracking system for use in minimally invasive surgical systems |
| JP5702797B2 | Method and system for manual control of remotely operated minimally invasive slave surgical instruments |
| CN110931121A | Remote operation guiding device based on HoloLens and operation method |
| JP2015186651A | Method and system for detecting the presence of a hand in a minimally invasive surgical system |
| CN102245100A | Graphic representation |
| CA2831618A1 | Gesture operated control for medical information systems |
| Placidi et al. | Overall design and implementation of the virtual glove |
| CN109781104B | Motion attitude determination and positioning method and device, computer equipment and medium |
| CN114037738B | Control method of human-vision-driven upper limb auxiliary robot |
| JP7700968B2 | Chest X-ray system and method |
| Chen et al. | Measurement of body joint angles for physical therapy based on mean shift tracking using two low-cost Kinect images |
| CN107247466B | Robot head gesture control method and system |
| Jovanov et al. | Avatar: a multi-sensory system for real-time body position monitoring |
| Niu et al. | A survey on IMU- and vision-based human pose estimation for rehabilitation |
| CN109620142B | A system and method for measuring cervical spine mobility based on machine vision |
| CN110638461A | Human body posture recognition method and system on electric hospital bed |
| Weidenbacher et al. | Detection of head pose and gaze direction for human-computer interaction |
| CN115145403B | A gesture-based hand marker tracking method and system |
| CN110646014B | IMU installation error calibration method based on human joint position capture equipment |
| Wang et al. | Spatially compact visual navigation system for automated suturing robot towards oral and maxillofacial surgery |
| Luo et al. | An interactive therapy system for arm and hand rehabilitation |
| CN107204005B | Hand marker tracking method and system |
| WO2019152566A1 | Systems and methods for subject-specific kinematic mapping |
| Zaccardi et al. | HoloMoCap: real-time clinical motion capture with HoloLens 2 |
| CN116725664A | Registration method and related device |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||