
Augmented dimensional gesture recognition and interaction system and method based on 2D lidar

Info

Publication number
CN111488823B
Authority
CN
China
Prior art keywords: module, laser radar, dimensional, gesture recognition, swing
Prior art date: 2020-04-09
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010271685.9A
Other languages
Chinese (zh)
Other versions
CN111488823A (en)
Inventor
李建微
占家旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2020-04-09
Filing date: 2020-04-09
Publication date: 2022-07-08
Application filed by Fuzhou University
Priority to CN202010271685.9A
Publication of CN111488823A: 2020-08-04
Application granted
Publication of CN111488823B: 2022-07-08
Legal status: Active


Abstract

The invention provides a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar. Data are acquired by a swinging laser radar; the three-dimensional point cloud reconstruction module extracts period information from the data, estimates the third-dimensional component from it, and obtains a three-dimensional point cloud. Each period's point cloud is rendered as an image by the graphing module. The gesture recognition and feature extraction module recognizes gestures with a trained network and extracts features from the rendered image. The multi-frame comparison module recognizes long-duration gestures and combinations of multiple gestures. The interaction control module completes intention recognition and interaction control. The invention has a simple structure, high precision, and high cost-effectiveness; the interaction method can be applied in many scenarios and applications, with high flexibility.

Description

(Translated from Chinese)

Augmented dimensional gesture recognition and interaction system and method based on 2D lidar

Technical Field

The invention belongs to the fields of human-computer interaction and laser radar and is applied to gesture recognition for interaction in demonstrations and interactive settings; it relates in particular to a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar.

Background Art

As multimedia technology penetrates daily life and AR and VR technology develops, the interaction technologies used with them are also changing and developing rapidly, placing higher demands on reliability, ease of use, unobtrusiveness, and the immersiveness of the user experience. Human-computer interaction is no longer limited to simple key operations with a keyboard and mouse; more advanced interaction technologies such as somatosensory interaction, gesture interaction, facial-expression interaction, and posture interaction have been proposed and applied, and even brain-computer interfaces have attracted wide attention and research.

With the development of lidar technology, lidar has been widely adopted thanks to its high precision and good stability. Low-cost two-dimensional lidar has removed the cost barrier to applying lidar technology in many fields.

Interaction based on gesture recognition is a very promising technology for demonstrations and interactive applications. Image-based methods are the most popular; compared with gesture recognition based on camera images, lidar provides more accurate position and angle information, which makes higher-precision interaction possible.

Summary of the Invention

In view of the deficiencies of the prior art, the purpose of the present invention is to provide a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar. A swinging attachment added to the two-dimensional lidar increases the radar's dimensionality; analysis of the resulting three-dimensional point cloud provides gesture recognition and intention analysis, and thereby interaction. The system comprises a lidar swing module, a three-dimensional point cloud reconstruction module, a graphing module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module. Data are obtained from the swinging lidar; the three-dimensional point cloud reconstruction module extracts period information from the data, uses it to estimate the third-dimensional component, and obtains the three-dimensional point cloud. Each period's point cloud is rendered as an image by the graphing module. The gesture recognition and feature extraction module performs gesture recognition with a trained network and extracts features from the rendered image. The multi-frame comparison module recognizes long-duration gestures and combinations of multiple gestures. The interaction control module completes intention recognition and interaction control. The invention has a simple structure, high precision, and high cost-effectiveness. The interaction method can be applied in many scenarios and applications, with high flexibility.

The present invention specifically adopts the following technical solutions:

A dimension-increasing gesture recognition and interaction system based on a two-dimensional laser radar, characterized in that it comprises: a lidar swing module, a three-dimensional point cloud reconstruction module, a graphing module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module;

The lidar swing module controls the lidar to perform fixed-angle reciprocating swings on a swing plane that forms an included angle with the rotation plane;

The three-dimensional point cloud reconstruction module takes the lidar's scan data as input and performs three-dimensional point cloud reconstruction; it comprises a period calculation module, a third-dimension interpolation module, a coordinate conversion module, and a region identification module. The period calculation module calibrates and records the start of each swing period of the lidar; the third-dimension interpolation module estimates the swing angle of each two-dimensional point to obtain the third-dimensional component; the coordinate conversion module converts polar coordinates into rectangular coordinates; the region identification module distinguishes, among all scan points, those lying within the gesture recognition region, for use in gesture recognition;

The graphing module uses the information carried by the three-dimensional point cloud to render the points that have undergone third-dimension interpolation and been selected by the region identification module, preserving the gesture information, including the gesture's position and shape, so that the subsequent gesture recognition and feature extraction module can obtain accurate information.

The gesture recognition and feature extraction module recognizes gestures from the rendered point cloud image with a neural network trained by machine learning, and extracts the gesture's position and angle features;

The multi-frame comparison module compares continuous actions, or actions composed of multiple gestures, based on the gestures recognized by the gesture recognition and feature extraction module;

The interaction control module converts the results of the gesture recognition and feature extraction module and/or the multi-frame comparison module into corresponding intentions and completes interaction control according to a preset interaction mode. It can be combined with other supporting software and hardware to complete the desired interactive operations.

Preferably, the rotation plane and the swing plane are perpendicular to each other.

Preferably, the third-dimension interpolation module interpolates the swing-angle value θ according to the following formula:

[The interpolation formula is rendered as an image in the original publication.]

where N is the total number of points scanned in one period, n is the index of the current scan point within the period, and θmax is the maximum swing angle.

Preferably, the gesture recognition region is a cuboid region. It can be mapped onto a screen or another viewing area, so that gestures and gesture positions interact with the screen.

Preferably, the graphing module divides the cuboid recognition region from its bottom height to its top height into a number of levels corresponding to a number of grayscale values; the height of each point in the point cloud is mapped to one of these grayscale values according to its proportion between the bottom and top heights, and at the point's (x, y) position a pixel with the grayscale value converted from z is drawn, forming a grayscale image.

Preferably, the interaction control module defines a confirmation operation plane within the cuboid region; that is, for some operations, an operation is confirmed only after the hand passes beyond this plane. The control module sets up an interaction language library according to the content to be controlled; the interaction language is an interaction scheme recognized and used by both the interaction control module and the object being interacted with, through which the corresponding functions can be realized. If a gesture input exists in the interaction language library, the corresponding interaction control is carried out through the interaction control module.

Preferably, the lidar swing module only provides the swing motion and does not provide swing-angle data. The device comprises: a lidar fixture, a support, a movable sleeve, and a turntable. The lidar fixture is mounted on the support through a first hinge; the rotation plane of the turntable is perpendicular to the support; the sleeve end of the movable sleeve is eccentrically connected to the turntable, and its connecting-rod end is connected to the edge of the lidar fixture through a second hinge.

A specially designed structure causes the distances of a subset of the scanned points to change abruptly at a particular phase of the swing period. The period calculation submodule obtains the period by analyzing these periodic abrupt changes in the scanned distances.

Preferably, the period calculation module calibrates the start of each swing period of the lidar based on a first opening in the lidar fixture in the lidar's scanning direction and a second opening at the corresponding position on the support.

Preferably, during operation of the lidar swing module and the lidar, the working process of the three-dimensional point cloud reconstruction module comprises the following steps:

Step S1: the period calculation module analyzes the collected scan points, takes the maximum-jump scan point generated when the positions of the first opening and the second opening coincide as a reference, treats two adjacent maximum-jump points as one period, and records the total number of points scanned within the period;

Step S2: the third-dimension interpolation module performs interpolation using the following formula:

[The interpolation formula is rendered as an image in the original publication.]

where θ is the swing angle, N is the total number of points scanned in one period, n is the index of the current scan point within the period, and θmax is the maximum swing angle;

Step S3: the coordinate conversion module uses the polar coordinates formed by the lidar swing angle θ, the lidar rotation angle α, and the lidar ranging distance l, together with the coordinate conversion formulas:

x = l cos α sin θ, y = l sin α, z = l cos α cos θ;

to convert the polar coordinate system into a rectangular coordinate system;

Step S4: the region identification module identifies the points located in the gesture recognition region.

Compared with the prior art, the present invention and its preferred schemes have the following beneficial effects:

1. The solution is built on a two-dimensional lidar as the measurement sensor, which is relatively inexpensive, has high data precision, and yields reliable measurements;

2. The invention does not require the swing module to provide the third dimension, i.e. the swing angle, as a parameter; three-dimensional scanning relies entirely on the data of a stand-alone two-dimensional lidar. The swing attachment can be a simple mechanical structure with no additional measurement and transmission components, which simplifies the hardware and reduces cost;

3. The invention not only performs gesture recognition but also obtains gesture position and angle information, allowing finer-grained interaction;

4. The interaction method of the present invention can be applied in many scenarios and applications, with high flexibility.

Description of the Drawings

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:

Fig. 1 is a block diagram of the system structure of an embodiment of the present invention;

Fig. 2 is a schematic diagram of the lidar swing module of an embodiment of the present invention;

Fig. 3 is a schematic diagram of the angle representation and rotation directions of an embodiment of the present invention;

Fig. 4 is a schematic diagram of the interaction region for gesture recognition of an embodiment of the present invention;

In the figures: 1 - first hinge; 2 - second hinge; 3 - sliding sleeve; 4 - turntable; 5 - lidar; 6 - opening used for period calculation (second opening); 7 - lidar fixture; 8 - opening on the lidar fixture (first opening); 9 - lidar rotation direction; 10 - lidar swing direction; 11 - coordinate origin; 12 - confirmation operation plane; 13 - recognition region.

Detailed Description

To make the features and advantages of this patent more evident and understandable, specific embodiments are described in detail below:

As shown in Fig. 1, the system for the two-dimensional lidar-based dimension-increasing gesture recognition and interaction method provided by this embodiment comprises: a lidar swing module, a three-dimensional point cloud reconstruction module, a graphing module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module, wherein:

The lidar swing module is shown in Fig. 2. The lidar is attached to a lidar fixture 7 with a light-transmitting area. The swing angle is defined as zero degrees when the opening 8 on the fixture 7 coincides with the opening 6 used for period calculation; the lidar swings around the first hinge 1. The lidar's rotation is indicated by 9 and its swing by 10. The swing axis of the swing structure intersects the lidar's rotation axis perpendicularly, i.e. the rotation axis of the first hinge 1 passes through the lidar's rotation axis, which keeps the coordinate calculation as simple as possible. The second hinge 2 takes one directional component of the circular motion and transmits it to the lidar fixture 7 and thence to the lidar 5. The sliding sleeve 3 transmits the rotation while accommodating the changing distance between the connection points. The turntable 4 rotates at a constant speed, providing one motion component of the circular motion.

As a preferred arrangement, the lidar 5 of this embodiment is a Slamtec RPLIDAR A2, connected with a male plug of the XH2.54-5P specification and, via the RPLIDAR A2's companion adapter module, linked to the computer with a USB data cable; the scan point information enters the three-dimensional point cloud reconstruction module.

The three-dimensional point cloud reconstruction module comprises: a period calculation module, a third-dimension interpolation module, a coordinate conversion module, and a region identification module. The scan data arriving over the data cable are first received and stored, and then three-dimensional point cloud reconstruction is performed. The specific steps are:

Step 1: the period calculation module analyzes the stored scan points. Because of the special structure of the lidar fixture 7, when the lidar rotates to the angle of the opening 8 on the fixture, the measured distance jumps from a continuous value to a higher value. When the opening 8 on the fixture 7 exactly coincides with the opening 6 used for period calculation, this jump is largest; the period is obtained from two successive maximum-jump points, and the total number of points scanned within one period is recorded.
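
As an illustration of this step (not part of the patent text), the following minimal sketch detects period boundaries from the distance jumps; the function name and the jump threshold are assumptions:

```python
import numpy as np

def find_period_boundaries(distances: np.ndarray, jump_threshold: float) -> np.ndarray:
    """Return indices where the measured range jumps sharply upward, i.e.
    where the beam passes through opening 8 while it lines up with opening 6.
    Two adjacent boundary indices delimit one swing period."""
    jumps = np.diff(distances)                    # range change between neighbouring scan points
    return np.where(jumps > jump_threshold)[0] + 1

# distances = np.asarray(raw_ranges)                   # stream of range readings in scan order
# boundaries = find_period_boundaries(distances, 0.5)  # threshold in metres (assumed value)
# N = int(np.diff(boundaries).mean())                  # total points scanned per swing period
```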

Step 2: the third-dimension interpolation module interpolates using the following formula.

[The interpolation formula is rendered as an image in the original publication.] The meaning of θ is shown in Fig. 3.

where N is the total number of points scanned in one period, n is the index of the current scan point within the period, and θmax is the maximum swing angle, corresponding to the included angle indicated by mark 10 in Fig. 2.
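
The exact interpolation formula is only available as an image in the original; the sketch below therefore assumes one plausible reading, a triangular (linear out-and-back) swing profile starting from θ = 0 where the openings coincide. It is a sketch under that assumption, not the patent's formula:

```python
def swing_angle(n: int, N: int, theta_max: float) -> float:
    """Estimate the swing angle theta of the n-th of N scan points in one
    period, assuming a triangular out-and-back profile (an assumption;
    the patent's formula is an image and may differ)."""
    phase = n / N                                 # position within the period, 0..1
    if phase <= 0.5:
        return theta_max * 2.0 * phase            # outward half: 0 -> theta_max
    return theta_max * (2.0 - 2.0 * phase)        # return half: theta_max -> 0
```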

Step 3: the coordinate conversion module takes over. As shown in Fig. 3, θ is the lidar swing angle, α the lidar rotation angle, and l the lidar ranging distance. In Fig. 2, the swing angle θ corresponds to mark 10 and the rotation angle α to mark 9. θ, α, and l form the polar coordinates. The specific coordinate conversion is:

x = l cos α sin θ, y = l sin α, z = l cos α cos θ. These formulas convert the polar coordinate system into the rectangular coordinate system.
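
The conversion formulas above translate directly into code; this minimal sketch (the function name is an assumption) applies them with angles in radians:

```python
import math

def polar_to_cartesian(l: float, alpha: float, theta: float) -> tuple[float, float, float]:
    """Apply the patent's conversion: x = l*cos(alpha)*sin(theta),
    y = l*sin(alpha), z = l*cos(alpha)*cos(theta)."""
    ca = math.cos(alpha)
    return (l * ca * math.sin(theta),
            l * math.sin(alpha),
            l * ca * math.cos(theta))
```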

Step 4: the region identification module takes over. As shown in Fig. 4, the coordinate origin 11 is the lidar's position; its detectable region is the solid formed by rotating a sector 360 degrees about its apex. Only points inside the recognition region 13 are processed for recognition, and a confirmation is made only when the gesture position passes beyond the confirmation operation plane 12.
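
A minimal sketch of the region test, assuming an axis-aligned cuboid for region 13 and modelling the confirmation plane 12 as a plane of constant z; the axis choice, the bounds layout, and the crossing direction are all assumptions:

```python
def in_recognition_region(point, bounds) -> bool:
    """Axis-aligned cuboid membership test for recognition region 13.
    `bounds` is a dict of min/max limits (illustrative layout)."""
    x, y, z = point
    return (bounds["xmin"] <= x <= bounds["xmax"] and
            bounds["ymin"] <= y <= bounds["ymax"] and
            bounds["zmin"] <= z <= bounds["zmax"])

def past_confirmation_plane(point, z_confirm: float) -> bool:
    """An operation is confirmed once the hand crosses plane 12; the plane
    orientation and crossing direction here are assumptions."""
    return point[2] > z_confirm
```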

The graphing module converts the height z of each coordinate-converted scan point into one of 256 grayscale values and, at the (x, y) position, draws a pixel with the grayscale value converted from z; the resulting image spans an area corresponding to the length h1 and width h2 in Fig. 4, with pixel grayscale values derived from the height z. The grayscale conversion formula is as follows:

G = (z - z') / h3

where G is the grayscale value, ranging from 0 to 1; h3, as shown in Fig. 4, is the height of the cuboid recognition region, and z' is the bottom height of the cuboid recognition region. (The formula is rendered as an image in the original; the expression above is reconstructed from these definitions.)
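
A minimal rasterisation sketch of this graphing step, assuming points given in region-local coordinates with the origin at one corner of the recognition region; the grid resolution and the clipping are assumptions:

```python
import numpy as np

def render_grayscale(points, z_bottom, h3, h1, h2, res=0.01):
    """Rasterise one period's selected points into a grayscale image:
    pixel position from (x, y), intensity G = (z - z') / h3 as above,
    quantised to 256 grey levels."""
    w, h = int(h1 / res), int(h2 / res)
    img = np.zeros((h, w), dtype=np.float32)
    for x, y, z in points:
        col, row = int(x / res), int(y / res)
        if 0 <= row < h and 0 <= col < w:
            img[row, col] = np.clip((z - z_bottom) / h3, 0.0, 1.0)
    return (img * 255).astype(np.uint8)
```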

The gesture recognition and feature extraction module analyzes one period's point cloud as rendered by the graphing module. Using rendered point cloud images labeled with their gestures, a gesture recognition network with high accuracy is trained by machine learning. The scheme of this embodiment proposes no new machine learning algorithm; many existing machine learning algorithms for image recognition can achieve the purpose of the invention. The gesture recognition network does not directly output the gesture: it outputs a probability for each gesture, and if no gesture's probability exceeds a set threshold, the input is recognized as some other gesture and is not processed.
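
A minimal sketch of the thresholding behaviour just described; the gesture label set and the threshold value are assumptions:

```python
import numpy as np

GESTURES = ["fist", "open_palm", "closed_palm", "point", "slide", "poke"]  # assumed label set

def classify(probs: np.ndarray, threshold: float = 0.6) -> str:
    """The trained network outputs one probability per gesture; if no
    probability exceeds the set threshold, the frame is treated as an
    'other' gesture and left unprocessed."""
    best = int(np.argmax(probs))
    return GESTURES[best] if probs[best] >= threshold else "other"
```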

As a preferred arrangement, the labeled gestures may include the relatively static fist, open palm, closed palm, and single-finger pointing, as well as the relative motions of sliding and poking.

As a preferred arrangement, the information mainly extracted by feature extraction is the gesture position: the grayscale value encodes the gesture's up-down information, while the gesture's position in the grayscale image encodes its left-right and front-back information. The grayscale variation together with the gesture's position also encodes the gesture's angle information, which can be extracted if needed.

As a preferred arrangement, gestures may be labeled manually to improve labeling accuracy.

The multi-frame comparison module uses a store-and-compare approach: certain operations are first defined as particular combinations of gestures, and if the multi-frame comparison module detects such a combination, it sends the information to the interaction control module. This mainly applies to relative-motion gestures such as sliding and poking.
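
A minimal store-and-compare sketch; the operation definitions and the history depth are assumptions:

```python
from collections import deque

# Hypothetical operation definitions: an operation is a short gesture sequence.
OPERATIONS = {
    ("point", "poke"): "click",
    ("open_palm", "slide"): "swipe",
}

class MultiFrameComparator:
    """Keep the last few recognised gestures and report when they end in a
    sequence that has been defined as an operation."""

    def __init__(self, depth: int = 4):
        self.history = deque(maxlen=depth)

    def push(self, gesture: str):
        self.history.append(gesture)
        recent = tuple(self.history)
        for pattern, operation in OPERATIONS.items():
            if recent[-len(pattern):] == pattern:
                return operation   # forwarded to the interaction control module
        return None
```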

The interaction control module receives input from the gesture recognition and feature extraction module and the multi-frame comparison module. If gesture display is desired, the output of the graphing module can also be fed in for display. Its main functions are intention recognition and interaction control.

As a preferred arrangement, the interaction control module sets up an interaction language library according to the content to be controlled. The interaction language is an interaction scheme recognized and used by both the interaction control module and the object being interacted with, through which the corresponding functions can be realized. If a gesture input exists in the interaction language library, the corresponding interaction control is carried out through the interaction control module.
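
A minimal sketch of an interaction-language lookup; every library entry here is illustrative, since the patent leaves the library's contents to the deployment:

```python
from typing import Optional

# Hypothetical interaction-language library: (gesture, confirmed-past-plane)
# mapped to an intent that the controlled application understands.
INTERACTION_LIBRARY = {
    ("point", True): "click",
    ("slide", False): "scroll",
    ("open_palm", False): "hover",
}

def dispatch(gesture: str, confirmed: bool) -> Optional[str]:
    """Look the gesture up in the library; inputs not in the library are ignored."""
    intent = INTERACTION_LIBRARY.get((gesture, confirmed))
    if intent is not None:
        print(f"intent: {intent}")   # stand-in for driving the controlled application
    return intent
```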

As a preferred arrangement, a confirmation operation plane is provided in the cuboid recognition region, shown as mark 12 in Fig. 4; an operation is confirmed only after the hand passes beyond this plane. In particular, for operations such as clicking, position adjustment takes place before the confirmation plane, and the operation is confirmed by passing beyond it. The interaction control module confirms operations from information such as the gesture's position.

This patent is not limited to the best embodiment described above; inspired by this patent, anyone may derive other forms of two-dimensional lidar-based dimension-increasing gesture recognition and interaction systems and methods. All equivalent changes and modifications made within the scope of the patent claims of the present invention shall fall within the scope of this patent.

Claims (5)

1. A dimension-increasing gesture recognition and interaction system based on a two-dimensional laser radar, characterized by comprising: a laser radar swing module, a three-dimensional point cloud reconstruction module, a graphing module, a gesture recognition and feature extraction module, a multi-frame comparison module and an interaction control module;
the laser radar swing module controls the laser radar to swing at a fixed angle on a swing plane which forms an included angle with the rotation plane;
the three-dimensional point cloud reconstruction module takes scanning data of the laser radar as input to carry out three-dimensional point cloud reconstruction, and comprises: a period calculation module, a third-dimension interpolation module, a coordinate conversion module and a region identification module; the period calculation module is used for calibrating and recording the start of each swing period of the laser radar; the third-dimension interpolation module is used for estimating the swing angle of each two-dimensional point to obtain a third-dimensional component; the coordinate conversion module is used for converting the polar coordinates into rectangular coordinates; the region identification module is used for identifying points in the gesture recognition region from all the scanning points;
the graphing module uses the information carried by the three-dimensional point cloud to render the point cloud which has undergone the third-dimension interpolation and been selected by the region identification module, and retains the information of the gesture, including the position and the form of the gesture;
the gesture recognition and feature extraction module recognizes gestures from the rendered point cloud image through a neural network trained by machine learning, and extracts the positions of the gestures and the angle features of the gestures;
the multi-frame comparison module compares continuous actions or actions formed by a plurality of gestures according to the gestures identified by the gesture identification and feature extraction module;
the interaction control module converts the results of the gesture recognition and feature extraction module and/or the multi-frame comparison module into corresponding intentions, and completes interaction control according to a preset interaction mode;
the rotating plane and the swinging plane are perpendicular to each other;
the third-dimension interpolation module inserts the value θ of the swing angle, given by the following formula:
[formula rendered as an image in the original]
wherein N is the total number of points scanned in one period, n is the index of the current scan point within the period, and θmax is the maximum swing angle;
the laser radar swing module comprises: a laser radar fixing device, a support, a movable sleeve and a turntable; the laser radar fixing device is mounted on the support through a first hinge; the rotating plane of the turntable is perpendicular to the support; the sleeve end of the movable sleeve is eccentrically connected with the turntable, and the connecting-rod end is connected with the edge of the laser radar fixing device through a second hinge;
the period calculation module calibrates the start of each swing period of the laser radar based on a first opening of the laser radar fixing device in the scanning direction of the laser radar and a second opening arranged at the corresponding position on the support;
in the working process of the laser radar swing module and the laser radar, the working process of the three-dimensional point cloud reconstruction module comprises the following steps:
step S1: the period calculation module analyzes the collected scanning points, takes the maximum-jump scanning point generated when the positions of the first opening and the second opening coincide as a reference, takes two adjacent maximum-jump points as one period, and records the total number of points scanned in the period;
step S2: the third-dimension interpolation module adopts the following formula to perform interpolation:
[formula rendered as an image in the original]
wherein θ is the value of the swing angle, N is the total number of points scanned in one period, n is the index of the current scan point within the period, and θmax is the maximum swing angle;
step S3: the coordinate conversion module, according to the polar coordinates formed by the laser radar swing angle θ, the laser radar rotation angle α and the laser radar ranging distance l, applies the coordinate conversion formulas:
x = l cos α sin θ, y = l sin α, z = l cos α cos θ;
converting the polar coordinate system into a rectangular coordinate system;
step S4: the region recognition module recognizes points located in the gesture recognition region.
2. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 1, wherein: the rotation plane and the swing plane are perpendicular to each other.
3. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 1, wherein: the gesture recognition area is a cuboid area.
4. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 3, wherein: the graphing module divides the cuboid recognition region from the bottom height to the top height into a number of levels corresponding to a number of grayscale values; the height information of each point of the point cloud is mapped to one of the grayscale values according to its proportion between the bottom height and the top height, and at the point's (x, y) position a pixel with the grayscale value converted from z is drawn, forming a grayscale image.
5. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 3, wherein: the interaction control module defines a confirmation operation plane within the cuboid region.
Application: CN202010271685.9A
Priority date: 2020-04-09 | Filing date: 2020-04-09
Title: Augmented dimensional gesture recognition and interaction system and method based on 2D lidar
Status: Active | Granted publication: CN111488823B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010271685.9A | 2020-04-09 | 2020-04-09 | Augmented dimensional gesture recognition and interaction system and method based on 2D lidar

Publications (2)

Publication Number | Publication Date
CN111488823A (en) | 2020-08-04
CN111488823B (en) | 2022-07-08

Family

ID: 71798257

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010271685.9A (Active, granted as CN111488823B) | Augmented dimensional gesture recognition and interaction system and method based on 2D lidar | 2020-04-09 | 2020-04-09

Country Status (1)

Country | Link
CN | CN111488823B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112363156A (en)* | 2020-11-12 | 2021-02-12 | 苏州矽典微智能科技有限公司 | Air gesture recognition method and device and intelligent equipment
CN112241204B (en)* | 2020-12-17 | 2021-08-27 | 宁波均联智行科技股份有限公司 | Gesture interaction method and system of vehicle-mounted AR-HUD
CN112904999A (en)* | 2020-12-30 | 2021-06-04 | 江苏奥格视特信息科技有限公司 | Augmented reality somatosensory interaction method and system based on laser radar
CN115436894A (en)* | 2021-06-01 | 2022-12-06 | Fujitsu Ltd. | Key point identification device and method based on wireless radar signal
CN114245542B (en)* | 2021-12-17 | 2024-03-22 | 深圳市恒佳盛电子有限公司 | Radar sensor light and control method thereof


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP6499154B2 (en)* | 2013-03-11 | 2019-04-10 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103455144A (en)* | 2013-08-22 | 2013-12-18 | 深圳先进技术研究院 | Vehicle-mounted man-machine interaction system and method
CN104808192A (en)* | 2015-04-15 | 2015-07-29 | 中国矿业大学 | Swing device for three-dimensional laser scanning and its coordinate conversion method
CN106199626A (en)* | 2016-06-30 | 2016-12-07 | 上海交通大学 | Indoor three-dimensional point cloud map generation system and method based on a swinging laser radar
CN108535736A (en)* | 2017-03-05 | 2018-09-14 | 苏州中德睿博智能科技有限公司 | Three-dimensional point cloud data acquisition method and acquisition system
CN108361780A (en)* | 2018-01-25 | 2018-08-03 | 宁波隔空智能科技有限公司 | Cooker hood controller and control method based on microwave-radar gesture recognition
CN108873715A (en)* | 2018-07-04 | 2018-11-23 | 深圳众厉电力科技有限公司 | Intelligent home control system based on gesture recognition
CN110784253A (en)* | 2018-07-31 | 2020-02-11 | 深圳市白麓嵩天科技有限责任公司 | Information interaction method based on gesture recognition and Beidou satellite

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"EMG signal gesture recognition based on a combined RNN network"; Zhou Xufeng et al.; Optics and Precision Engineering; 2020-02-15 (No. 02); full text *

Also Published As

Publication number | Publication date
CN111488823A (en) | 2020-08-04

Similar Documents

Publication | Title
CN111488823B (en) | Augmented dimensional gesture recognition and interaction system and method based on 2D lidar
US10832039B2 (en) | Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN112819947B (en) | Three-dimensional face reconstruction method, device, electronic device and storage medium
CN108776773B (en) | Three-dimensional gesture recognition method and interaction system based on depth images
CN112771539B (en) | Using 3D data predicted from 2D images with neural networks for 3D modeling applications
CN101989326B (en) | Human posture recognition method and device
Li et al. | One-shot high-fidelity talking-head synthesis with deformable neural radiance field
CN102638653B (en) | Automatic face tracking method based on Kinect
CN100407798C (en) | 3D geometric modeling system and method
CN104317391B (en) | Three-dimensional palm gesture recognition interaction method and system based on stereoscopic vision
JP2023549821A (en) | Deformable neural radiance field
WO2021004257A1 (en) | Line-of-sight detection method and apparatus, video processing method and apparatus, and device and storage medium
CN108573527A (en) | Expression picture generation method, device, and storage medium
US12008159B2 (en) | Systems and methods for gaze-tracking
CN109145802B (en) | Kinect-based multi-person gesture human-computer interaction method and device
CN108363973A (en) | Unconstrained 3D expression transfer method
US20170161903A1 (en) | Method and apparatus for gesture recognition
CN117372604B (en) | 3D face model generation method, device, equipment and readable storage medium
WO2024118586A1 (en) | 3D generation of diverse categories and scenes
CN109215128B (en) | Method and system for synthesizing images of object motion gestures
CN115309113 (en) | Guidance method for parts assembly and related equipment
Ekmen et al. | From 2D to 3D real-time expression transfer for facial animation
CN116797713A (en) | Three-dimensional reconstruction method and terminal equipment
CN110032270A (en) | Human-computer interaction method based on gesture recognition
Liu | Semantic mapping: a semantics-based approach to virtual content placement for immersive environments

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
