CN111914790A - Real-time human body rotation angle recognition method in different scenarios based on dual cameras - Google Patents

Real-time human body rotation angle recognition method in different scenarios based on dual cameras
Download PDF

Info

Publication number
CN111914790A
CN111914790A
Authority
CN
China
Prior art keywords
human body
key point
camera
rotation angle
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010816048.5A
Other languages
Chinese (zh)
Other versions
CN111914790B (en)
Inventor
陈安成
李若铖
陈林
张康
王权泳
吴哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202010816048.5A
Publication of CN111914790A
Application granted
Publication of CN111914790B
Expired - Fee Related
Anticipated expiration

Abstract

Translated from Chinese


The invention discloses a real-time human body rotation angle recognition method under different scenes based on dual cameras, which is inexpensive, easy to implement, and not easily affected by the environment. The invention sets up two cameras with identical parameters, obtains real-time images of the human body through the two cameras respectively, then uses human-skeleton key-point recognition to obtain the coordinates of the skeleton key points in each image, and finally calculates the relative rotation angle of the human body from the difference between the key-point coordinates obtained by the left and right cameras. The invention is applicable to different application scenarios and acquires the rotation angle of the human body accurately and in real time.


Description

Translated from Chinese

Real-time human body rotation angle recognition method in different scenarios based on dual cameras

Technical Field

The invention belongs to the field of computer vision, and in particular relates to a real-time human body rotation angle recognition method in different scenarios based on dual cameras.

Background Art

Current posture-recognition research mainly starts from the detection of human skeleton key points: in theory, once the three-dimensional coordinates of the skeleton key points are obtained, the rotation angle of a joint or of the entire body can be calculated. Two-dimensional human key-point recognition has matured, and its accuracy is very high in both single-person and multi-person scenes. There are also methods that recognize three-dimensional skeleton key points from a single image, but their accuracy is difficult to guarantee. The key difficulty therefore lies in how to obtain accurate depth information. The commonly used depth-image acquisition methods mainly include binocular stereo vision, structured light, and lidar. Although binocular stereo vision has the advantages of modest camera-hardware requirements and low cost, it is very sensitive to ambient illumination, unsuitable for monotonous scenes lacking texture, computationally complex, and limited in measurement range by the camera baseline, shortcomings that greatly restrict its applicable domains.
The Kinect device launched by Microsoft developed light-coding technology on the basis of structured light, and its low price and real-time, high-resolution depth-image capture led to its rapid adoption in consumer electronics. However, Kinect's effective ranging span is only 800 mm to 4000 mm, and for objects outside that range Kinect cannot guarantee accurate depth values. Depth images captured by Kinect contain regions of missing depth, represented by a depth value of zero, meaning that Kinect could not obtain the depth of those regions; in addition, the edges of the depth image do not correspond to the edges of the color image, and the depth data is noisy. Lidar, owing to its wide ranging span and high measurement accuracy, is widely used in artificial-intelligence systems for outdoor three-dimensional perception; however, the three-dimensional information it captures is non-uniform and sparse when expressed in the color-image coordinate system. Because the number of points the laser can scan in a unit period is limited, projecting the lidar points into the color-image coordinate system yields a depth image whose values appear only as discrete points, so the depth of many regions is unknown. This means that some pixels in the color image have no corresponding depth information.

SUMMARY OF THE INVENTION

In view of the above deficiencies in the prior art, the present invention provides a dual-camera method for real-time recognition of the human body rotation angle in different scenarios, which solves the prior-art problems of susceptibility to environmental influence and high computational complexity.

In order to achieve the above purpose of the invention, the technical solution adopted by the present invention is a real-time human body rotation angle recognition method in different scenarios based on dual cameras, comprising the following steps:

S1. Mount two cameras with identical parameters on one wall so that the line connecting them is horizontal;

S2. Capture images of the human body in real time with the two cameras, and obtain the coordinates of the human skeleton key points with a human key-point recognition algorithm, the skeleton key-point coordinates including the left-shoulder key-point coordinates and the right-shoulder key-point coordinates;

S3. Obtain the first steering-recognition parameter K1 from the skeleton key-point coordinates captured by the first camera, and the second steering-recognition parameter K2 from the skeleton key-point coordinates captured by the second camera;

S4. Judge whether the abscissa of the left-shoulder key point captured by the first camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S5, otherwise go to step S6;

S5. Judge whether the abscissa of the left-shoulder key point captured by the second camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S7, otherwise go to step S8;

S6. Judge whether the abscissa of the left-shoulder key point captured by the second camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S10, otherwise go to step S9;

S7. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, judge the process to be the first left-turn process and calculate the rotation angle to obtain the human-body rotation-angle recognition result; otherwise, judge it to be the first right-turn process and calculate the rotation angle to obtain the recognition result;

S8. Judge the current turning process to be the second left-turn process, and calculate the rotation angle to obtain the human-body rotation-angle recognition result;

S9. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, judge the process to be the third right-turn process and calculate the rotation angle to obtain the recognition result; otherwise, judge it to be the third left-turn process and calculate the rotation angle to obtain the recognition result;

S10. Judge the current turning process to be the second right-turn process, and calculate the rotation angle to obtain the human-body rotation-angle recognition result.

Further, in step S1 the angle between the lens axis of each of the two cameras and the wall is 90-α degrees, the two lens axes intersect and are parallel to the horizontal plane, and the distance between the lenses of the two cameras is d meters, where α is a constant with α ∈ (0, 45).

Further, in step S2 the two cameras comprise a first camera and a second camera. The human-skeleton key-point coordinates captured by the first camera include the left-shoulder key-point coordinates (xlj1, ylj1), the right-shoulder key-point coordinates (xrj1, yrj1), the left-hip key-point coordinates (xlk1, ylk1), and the right-hip key-point coordinates (xrk1, yrk1); those captured by the second camera include the left-shoulder key-point coordinates (xlj2, ylj2), the right-shoulder key-point coordinates (xrj2, yrj2), the left-hip key-point coordinates (xlk2, ylk2), and the right-hip key-point coordinates (xrk2, yrk2).
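The eight coordinates per camera can come from any 2-D pose estimator. As a minimal illustration (the dict layout and keypoint names below are assumptions, not part of the patent; a real system would map them from a pose model such as OpenPose or MediaPipe):

```python
# One frame from one camera, reduced to the four keypoints the method uses.
# Coordinates are pixels in that camera's image plane (illustrative values).
frame_cam1 = {
    "left_shoulder":  (420.0, 210.0),   # (xlj1, ylj1)
    "right_shoulder": (310.0, 212.0),   # (xrj1, yrj1)
    "left_hip":       (415.0, 380.0),   # (xlk1, ylk1)
    "right_hip":      (318.0, 381.0),   # (xrk1, yrk1)
}

# Step S4's test is then a direct comparison of abscissas:
faces_wall_in_cam1 = frame_cam1["left_shoulder"][0] > frame_cam1["right_shoulder"][0]
```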

Further, in step S3 the first steering-recognition parameter K1 obtained from the human-skeleton key-point coordinates captured by the first camera is:

K1 = D1/D2 = |xlj1 - xrj1| / |ylk1 - ylj1|

where D1 denotes the difference between the abscissas of the projections onto the wall of the left-shoulder and right-shoulder key points captured by the first camera, D2 denotes the difference between the ordinates of the projections onto the wall of the left-hip and left-shoulder key points captured by the first camera, xlj1 denotes the abscissa of the left-shoulder key point lj captured by the first camera, xrj1 denotes the abscissa of the right-shoulder key point rj captured by the first camera, ylk1 denotes the ordinate of the left-hip key point lk captured by the first camera, and ylj1 denotes the ordinate of the left-shoulder key point lj captured by the first camera;

In step S3, the second steering-recognition parameter K2 obtained from the human-skeleton key-point coordinates captured by the second camera is:

K2 = D3/D4 = |xlj2 - xrj2| / |ylk2 - ylj2|

where D3 denotes the difference between the abscissas of the projections onto the wall of the left-shoulder and right-shoulder key points captured by the second camera, D4 denotes the difference between the ordinates of the projections onto the wall of the left-hip and left-shoulder key points captured by the second camera, xlj2 denotes the abscissa of the left-shoulder key point lj captured by the second camera, xrj2 denotes the abscissa of the right-shoulder key point rj captured by the second camera, ylk2 denotes the ordinate of the left-hip key point lk captured by the second camera, and ylj2 denotes the ordinate of the left-shoulder key point lj captured by the second camera.
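In code, K1 and K2 are the same ratio applied to each camera's keypoints. A minimal sketch, assuming the K = D1/D2 form reconstructed above (the function name is illustrative):

```python
def steering_param(x_lj, x_rj, y_lk, y_lj):
    """K = D1/D2: horizontal shoulder span divided by vertical
    shoulder-to-hip span. Normalizing by torso height makes K
    insensitive to the person's distance from the camera."""
    d1 = abs(x_lj - x_rj)   # D1 (D3 for the second camera)
    d2 = abs(y_lk - y_lj)   # D2 (D4 for the second camera)
    if d2 == 0:
        raise ValueError("degenerate pose: shoulder and hip at the same height")
    return d1 / d2

# K1 from the first camera's keypoints (illustrative pixel values)
K1 = steering_param(420.0, 310.0, 380.0, 210.0)   # 110 / 170
```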

Further, the first left-turn process in step S7 is specifically the process of turning left by an angle in the range (0, 90-α) degrees, starting with the human body facing the wall; the first right-turn process is the process of turning right by an angle in the range (0, 90-α) degrees, starting with the human body facing the wall;

Step S7 comprises the following sub-steps:

S71. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, go to step S72, otherwise go to step S73;

S72. Judge the turning process at this moment to be the first left-turn process, and calculate the rotation angle as:

angle = arccos(K2/Kmax)/PI*180 - α

S73. Judge the turning process at this moment to be the first right-turn process, and calculate the rotation angle as:

angle = arccos(K1/Kmax)/PI*180 - α

where Kmax denotes the maximum distance between the left- and right-shoulder key points, and PI denotes the circular constant pi.
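The two formulas above differ only in which parameter feeds the arccosine. A sketch with an assumed helper name; the clamp is an added safeguard for keypoint noise pushing K slightly above Kmax:

```python
import math

def first_turn_angle(k, k_max, alpha_deg):
    """angle = arccos(K/Kmax)/PI*180 - alpha, in degrees.
    Pass K2 for a first left turn, K1 for a first right turn."""
    ratio = min(k / k_max, 1.0)   # clamp against measurement noise
    return math.degrees(math.acos(ratio)) - alpha_deg

# Example with alpha = 15 degrees: K = Kmax/2 gives arccos(0.5) = 60 degrees,
# so the body has turned 60 - 15 = 45 degrees.
angle = first_turn_angle(0.325, 0.65, 15.0)
```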

Further, the second left-turn process in step S8 is specifically the process of turning left within the range (90-α, 90+α) degrees on the left side of the human body, taking the body facing the wall as the reference;

The rotation angle in step S8 is calculated as:

angle = arccos(K1/Kmax)/PI*180 + α.

Further, the third left-turn process in step S9 is specifically the process of turning left within the range (90+α, 180) degrees on the left side of the human body, taking the body facing the wall as the reference; the third right-turn process is the process of turning right within the range (90+α, 180) degrees on the right side of the human body, taking the body facing the wall as the reference;

Step S9 comprises the following sub-steps:

S91. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, go to step S92, otherwise go to step S93;

S92. Judge the turning process to be the third left-turn process, and calculate the rotation angle as:

angle = 180 - arccos(K1/Kmax)/PI*180 + α

S93. Judge the turning process to be the third right-turn process, and calculate the rotation angle as:

angle = 180 - arccos(K2/Kmax)/PI*180 + α.

Further, the second right-turn process in step S10 is specifically the process of turning right within the range (90-α, 90+α) degrees on the right side of the human body, taking the body facing the wall as the reference;

The rotation angle in step S10 is calculated as:

angle = arccos(K2/Kmax)/PI*180 + α.

Further, the maximum distance Kmax between the left- and right-shoulder key points is initialized to 0.65. Before each calculation of the rotation angle, Kmax is updated as follows: judge whether the first steering-recognition parameter K1 is greater than Kmax; if so, set the value of Kmax to the value of K1; otherwise leave Kmax unchanged. This completes the update of Kmax.
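The running-maximum update above is a one-liner in code; a sketch with illustrative names:

```python
K_MAX_INIT = 0.65   # initial value given in the description

def update_k_max(k_max, k1):
    """K1 peaks when the person squarely faces the first camera, so
    keeping the running maximum of K1 self-calibrates Kmax per person."""
    return k1 if k1 > k_max else k_max

k_max = K_MAX_INIT
for k1 in (0.40, 0.72, 0.55):   # successive frames (illustrative values)
    k_max = update_k_max(k_max, k1)
```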

The beneficial effects of the present invention are:

(1) The present invention proposes a real-time human body rotation angle recognition method in different scenarios based on dual cameras, which is inexpensive, easy to implement, and not easily affected by the environment.

(2) The present invention sets up two cameras with identical parameters, obtains real-time images of the human body through the two cameras respectively, then uses human-skeleton key-point recognition to obtain the coordinates of the skeleton key points in each image, and finally calculates the relative rotation angle of the human body from the difference between the key-point coordinates obtained by the left and right cameras.

(3) The present invention is applicable to different application scenarios and obtains the rotation angle of the human body accurately and in real time.

Brief Description of the Drawings

FIG. 1 is a flowchart of the real-time human body rotation angle recognition method in different scenarios based on dual cameras proposed by the present invention.

FIG. 2 is a schematic diagram of the camera installation in the present invention.

Detailed Description of the Embodiments

The specific embodiments of the present invention are described below to help those skilled in the art understand the present invention, but it should be clear that the present invention is not limited to the scope of the specific embodiments. To those of ordinary skill in the art, all changes that fall within the spirit and scope of the present invention as defined and determined by the appended claims are obvious, and every invention or creation that makes use of the inventive concept is within the scope of protection.

The embodiments of the present invention are described in detail below with reference to the accompanying drawings.

As shown in FIG. 1, a real-time human body rotation angle recognition method in different scenarios based on dual cameras comprises the following steps:

S1. Mount two cameras with identical parameters on one wall so that the line connecting them is horizontal;

S2. Capture images of the human body in real time with the two cameras, and obtain the coordinates of the human skeleton key points with a human key-point recognition algorithm, the skeleton key-point coordinates including the left-shoulder key-point coordinates and the right-shoulder key-point coordinates;

S3. Obtain the first steering-recognition parameter K1 from the skeleton key-point coordinates captured by the first camera, and the second steering-recognition parameter K2 from the skeleton key-point coordinates captured by the second camera;

S4. Judge whether the abscissa of the left-shoulder key point captured by the first camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S5, otherwise go to step S6;

S5. Judge whether the abscissa of the left-shoulder key point captured by the second camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S7, otherwise go to step S8;

S6. Judge whether the abscissa of the left-shoulder key point captured by the second camera is greater than the abscissa of the right-shoulder key point it captured; if so, go to step S10, otherwise go to step S9;

S7. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, judge the process to be the first left-turn process and calculate the rotation angle to obtain the human-body rotation-angle recognition result; otherwise, judge it to be the first right-turn process and calculate the rotation angle to obtain the recognition result;

S8. Judge the current turning process to be the second left-turn process, and calculate the rotation angle to obtain the human-body rotation-angle recognition result;

S9. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, judge the process to be the third right-turn process and calculate the rotation angle to obtain the recognition result; otherwise, judge it to be the third left-turn process and calculate the rotation angle to obtain the recognition result;

S10. Judge the current turning process to be the second right-turn process, and calculate the rotation angle to obtain the human-body rotation-angle recognition result.

As shown in FIG. 2, in step S1 the angle between the lens axis of each of the two cameras and the wall is 90-α degrees, the two lens axes intersect and are parallel to the horizontal plane, and the distance between the lenses of the two cameras is d meters, where α is a constant with α ∈ (0, 45).

In step S2, the two cameras comprise a first camera and a second camera. The human-skeleton key-point coordinates captured by the first camera include the left-shoulder key-point coordinates (xlj1, ylj1), the right-shoulder key-point coordinates (xrj1, yrj1), the left-hip key-point coordinates (xlk1, ylk1), and the right-hip key-point coordinates (xrk1, yrk1); those captured by the second camera include the left-shoulder key-point coordinates (xlj2, ylj2), the right-shoulder key-point coordinates (xrj2, yrj2), the left-hip key-point coordinates (xlk2, ylk2), and the right-hip key-point coordinates (xrk2, yrk2).

In step S3, the first steering-recognition parameter K1 obtained from the human-skeleton key-point coordinates captured by the first camera is:

K1 = D1/D2 = |xlj1 - xrj1| / |ylk1 - ylj1|

where D1 denotes the difference between the abscissas of the projections onto the wall of the left-shoulder and right-shoulder key points captured by the first camera, D2 denotes the difference between the ordinates of the projections onto the wall of the left-hip and left-shoulder key points captured by the first camera, xlj1 denotes the abscissa of the left-shoulder key point lj captured by the first camera, xrj1 denotes the abscissa of the right-shoulder key point rj captured by the first camera, ylk1 denotes the ordinate of the left-hip key point lk captured by the first camera, and ylj1 denotes the ordinate of the left-shoulder key point lj captured by the first camera;

In step S3, the second steering-recognition parameter K2 obtained from the human-skeleton key-point coordinates captured by the second camera is:

K2 = D3/D4 = |xlj2 - xrj2| / |ylk2 - ylj2|

where D3 denotes the difference between the abscissas of the projections onto the wall of the left-shoulder and right-shoulder key points captured by the second camera, D4 denotes the difference between the ordinates of the projections onto the wall of the left-hip and left-shoulder key points captured by the second camera, xlj2 denotes the abscissa of the left-shoulder key point lj captured by the second camera, xrj2 denotes the abscissa of the right-shoulder key point rj captured by the second camera, ylk2 denotes the ordinate of the left-hip key point lk captured by the second camera, and ylj2 denotes the ordinate of the left-shoulder key point lj captured by the second camera.

The first left-turn process in step S7 is specifically the process of turning left by an angle in the range (0, 90-α) degrees, starting with the human body facing the wall; the first right-turn process is the process of turning right by an angle in the range (0, 90-α) degrees, starting with the human body facing the wall;

Step S7 comprises the following sub-steps:

S71. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, go to step S72, otherwise go to step S73;

S72. Judge the turning process at this moment to be the first left-turn process, and calculate the rotation angle as:

angle = arccos(K2/Kmax)/PI*180 - α

S73. Judge the turning process at this moment to be the first right-turn process, and calculate the rotation angle as:

angle = arccos(K1/Kmax)/PI*180 - α

where Kmax denotes the maximum distance between the left- and right-shoulder key points, and PI denotes the circular constant pi.

The second left-turn process in step S8 is specifically the process of turning left within the range (90-α, 90+α) degrees on the left side of the human body, taking the body facing the wall as the reference;

The rotation angle in step S8 is calculated as:

angle = arccos(K1/Kmax)/PI*180 + α.

The third left-turn process in step S9 is specifically the process of turning left within the range (90+α, 180) degrees on the left side of the human body, taking the body facing the wall as the reference; the third right-turn process is the process of turning right within the range (90+α, 180) degrees on the right side of the human body, taking the body facing the wall as the reference;

Step S9 comprises the following sub-steps:

S91. Judge whether the first steering-recognition parameter K1 is greater than the second steering-recognition parameter K2; if so, go to step S92, otherwise go to step S93;

S92. Judge the turning process to be the third left-turn process, and calculate the rotation angle as:

angle = 180 - arccos(K1/Kmax)/PI*180 + α

S93. Judge the turning process to be the third right-turn process, and calculate the rotation angle as:

angle = 180 - arccos(K2/Kmax)/PI*180 + α.

The second right-turn process in step S10 is specifically the process of turning right within the range (90-α, 90+α) degrees on the right side of the human body, taking the body facing the wall as the reference;

The rotation angle in step S10 is calculated as:

angle = arccos(K2/Kmax)/PI*180 + α.

The maximum distance Kmax between the left- and right-shoulder key points is initialized to 0.65. Before each calculation of the rotation angle, Kmax is updated as follows: judge whether the first steering-recognition parameter K1 is greater than Kmax; if so, set the value of Kmax to the value of K1; otherwise leave Kmax unchanged. This completes the update of Kmax.
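Steps S4 to S10 form a small decision tree over the two shoulder-abscissa tests and the comparison of K1 with K2. A condensed sketch combining them with the four angle formulas (keypoint layout and names are assumptions; for S9 the sub-steps S91 to S93 are followed, which label the left and right branches oppositely to the one-sentence summary of S9):

```python
import math

def rotation_angle(kp1, kp2, k_max, alpha_deg):
    """kp1/kp2: keypoint dicts per camera with 'left_shoulder',
    'right_shoulder', 'left_hip' -> (x, y). Returns (label, degrees)."""
    def k(kp):  # steering-recognition parameter K = D1/D2
        return (abs(kp["left_shoulder"][0] - kp["right_shoulder"][0])
                / abs(kp["left_hip"][1] - kp["left_shoulder"][1]))

    def acos_deg(kv):  # arccos(K/Kmax) in degrees, clamped against noise
        return math.degrees(math.acos(min(kv / k_max, 1.0)))

    k1, k2 = k(kp1), k(kp2)
    s4 = kp1["left_shoulder"][0] > kp1["right_shoulder"][0]   # S4
    s56 = kp2["left_shoulder"][0] > kp2["right_shoulder"][0]  # S5 / S6

    if s4 and s56:                       # S7: torso still faces the wall
        if k1 > k2:
            return "first left turn", acos_deg(k2) - alpha_deg
        return "first right turn", acos_deg(k1) - alpha_deg
    if s4 and not s56:                   # S8
        return "second left turn", acos_deg(k1) + alpha_deg
    if not s4 and s56:                   # S10
        return "second right turn", acos_deg(k2) + alpha_deg
    if k1 > k2:                          # S9 -> S92
        return "third left turn", 180 - acos_deg(k1) + alpha_deg
    return "third right turn", 180 - acos_deg(k2) + alpha_deg
```

Called once per synchronized frame pair, the function yields a continuous stream of (turn label, angle) results, which is the real-time output the method claims.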

Claims (9)

1. A real-time human body rotation angle identification method based on two cameras under different scenes is characterized by comprising the following steps:
S1, arranging, on one wall, two cameras having identical parameters, the line connecting the two cameras being horizontal;
S2, acquiring human body images in real time through the two cameras, and obtaining human skeleton key point coordinates with a human key point identification algorithm, the skeleton key point coordinates comprising left shoulder key point coordinates and right shoulder key point coordinates;
S3, obtaining a first steering identification parameter K1 according to the human skeleton key point coordinates acquired by the first camera, and a second steering identification parameter K2 according to the human skeleton key point coordinates acquired by the second camera;
S4, judging whether the abscissa of the left shoulder key point acquired by the first camera is greater than the abscissa of the right shoulder key point acquired by the first camera; if so, proceeding to step S5, otherwise proceeding to step S6;
S5, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S7, otherwise proceeding to step S8;
S6, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S10, otherwise proceeding to step S9;
S7, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a first left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result; otherwise, judging it to be a first right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S8, judging that the current turning process is a second left-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result;
S9, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a third left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result; otherwise, judging it to be a third right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S10, judging that the current turning process is a second right-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result.
2. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 1, wherein in step S1 the included angle between the lens axis of each camera and the wall is 90-α degrees, the lens axes of the two cameras intersect and are parallel to the horizontal plane, and the distance between the lenses of the two cameras is d meters, where α is a constant and α ∈ (0, 45).
3. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 2, wherein the two cameras in step S2 comprise a first camera and a second camera; the human skeleton key point coordinates acquired by the first camera comprise left shoulder key point coordinates (x_lj1, y_lj1), right shoulder key point coordinates (x_rj1, y_rj1), left hip key point coordinates (x_lk1, y_lk1), and right hip key point coordinates (x_rk1, y_rk1); the human skeleton key point coordinates acquired by the second camera comprise left shoulder key point coordinates (x_lj2, y_lj2), right shoulder key point coordinates (x_rj2, y_rj2), left hip key point coordinates (x_lk2, y_lk2), and right hip key point coordinates (x_rk2, y_rk2).
4. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 2, wherein in step S3 the first steering identification parameter K1, obtained according to the human skeleton key point coordinates acquired by the first camera, is:

K1 = |D1| / |D2| = |x_lj1 - x_rj1| / |y_lk1 - y_lj1|

wherein D1 represents the difference between the horizontal coordinates of the projections, on the wall, of the left and right shoulder key point coordinates acquired by the first camera; D2 represents the difference between the vertical coordinates of the projections, on the wall, of the left hip and left shoulder key points acquired by the first camera; x_lj1 is the abscissa of the left shoulder key point lj acquired by the first camera; x_rj1 is the abscissa of the right shoulder key point rj acquired by the first camera; y_lk1 is the ordinate of the left hip key point lk acquired by the first camera; and y_lj1 is the ordinate of the left shoulder key point lj acquired by the first camera;
in step S3, the second steering identification parameter K2, obtained according to the human skeleton key point coordinates acquired by the second camera, is:

K2 = |D3| / |D4| = |x_lj2 - x_rj2| / |y_lk2 - y_lj2|

wherein D3 represents the difference between the horizontal coordinates of the projections, on the wall, of the left and right shoulder key point coordinates acquired by the second camera; D4 represents the difference between the vertical coordinates of the projections, on the wall, of the left hip and left shoulder key points acquired by the second camera; x_lj2 is the abscissa of the left shoulder key point lj acquired by the second camera; x_rj2 is the abscissa of the right shoulder key point rj acquired by the second camera; y_lk2 is the ordinate of the left hip key point lk acquired by the second camera; and y_lj2 is the ordinate of the left shoulder key point lj acquired by the second camera.
5. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 4, wherein the first left-turn process in step S7 is specifically: a process of rotating left by an angle within (0, 90-α) degrees, starting from the human body facing the wall; and the first right-turn process is specifically: a process of rotating right by an angle within (0, 90-α) degrees, starting from the human body facing the wall;
step S7 comprises the following sub-steps:
S71, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S72, otherwise proceeding to step S73;
S72, judging that the steering process is the first left-turn process, and calculating the rotation angle as:
angle=arccos(K2/Kmax)/PI*180-α
S73, judging that the steering process is the first right-turn process, and calculating the rotation angle as:
angle=arccos(K1/Kmax)/PI*180-α
wherein Kmax represents the maximum distance between the left and right shoulder key points, and PI represents the circumference-to-diameter ratio π.
6. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 5, wherein the second left-turn process in step S8 is specifically: a process of turning left within the range of (90-α, 90+α) degrees on the left side of the body, with the human body facing the wall as reference;
in step S8, the rotation angle is calculated as:
angle=arccos(K1/Kmax)/PI*180+α.
7. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 5, wherein the third left-turn process in step S9 is specifically: a process of turning left within the range of (90+α, 180) degrees on the left side of the body, with the human body facing the wall as reference; and the third right-turn process is specifically: a process of turning right within the range of (90+α, 180) degrees on the right side of the body, with the human body facing the wall as reference;
step S9 comprises the following sub-steps:
S91, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S92, otherwise proceeding to step S93;
S92, judging that the steering process is the third left-turn process, and calculating the rotation angle as:
angle=180-arccos(K1/Kmax)/PI*180+α
S93, judging that the steering process is the third right-turn process, and calculating the rotation angle as:
angle=180-arccos(K2/Kmax)/PI*180+α.
8. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 5, wherein the second right-turn process in step S10 is specifically: a process of turning right within the range of (90-α, 90+α) degrees on the right side of the body, with the human body facing the wall as reference;
in step S10, the rotation angle is calculated as:
angle=arccos(K2/Kmax)/PI*180+α.
9. The method for real-time human body rotation angle identification under different scenes based on two cameras according to claim 5, wherein the maximum distance Kmax between the left and right shoulder key points is initialized to 0.65; before each calculation of the rotation angle, Kmax is updated, the updating step being: judging whether the first steering identification parameter K1 is greater than the maximum distance Kmax; if so, setting the value of Kmax to the value of K1, otherwise leaving Kmax unchanged, thereby completing the update of Kmax.
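Read together, claims 1 and 5-8 amount to a four-way classification on the shoulder abscissae seen by the two cameras, followed by an arccos recovery of the angle. The following sketch is an illustration of the claims as written, not the patented code: the function and argument names are assumptions, and K1, K2, Kmax are assumed pre-computed per claim 4. It returns the recovered angle in degrees.

```python
import math

def rotation_angle(xl1, xr1, xl2, xr2, k1, k2, k_max, alpha):
    """Steps S4-S10: classify the turning regime from the left/right shoulder
    abscissae seen by the two cameras, then recover the rotation angle.

    xl1/xr1 -- left/right shoulder abscissa from camera 1
    xl2/xr2 -- left/right shoulder abscissa from camera 2
    alpha   -- camera mounting angle offset, in degrees
    """
    def deg_acos(k):
        # arccos(K/Kmax)/PI*180, clamping the ratio to the arccos domain
        return math.degrees(math.acos(min(k / k_max, 1.0)))

    if xl1 > xr1:                     # S4 true -> S5
        if xl2 > xr2:                 # S5 true -> S7: first turn, (0, 90-alpha)
            k = k2 if k1 > k2 else k1
            return deg_acos(k) - alpha
        # S5 false -> S8: second left turn, (90-alpha, 90+alpha)
        return deg_acos(k1) + alpha
    if xl2 > xr2:                     # S6 true -> S10: second right turn
        return deg_acos(k2) + alpha
    # S6 false -> S9: third turn, (90+alpha, 180); K1 > K2 means left per claim 7
    k = k1 if k1 > k2 else k2
    return 180.0 - deg_acos(k) + alpha
```

Note that in the first turn (S7) the angle is recovered from the *smaller* parameter (claim 5 uses K2 when K1 > K2), while in the third turn (S9) it is recovered from the *larger* one.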
CN202010816048.5A (filed 2020-08-14): Real-time human rotation angle recognition method in different scenarios based on dual cameras; granted as CN111914790B; status: Expired (fee related)

Priority Applications (1)

Application number: CN202010816048.5A | Priority/filing date: 2020-08-14 | Title: Real-time human rotation angle recognition method in different scenarios based on dual cameras (granted as CN111914790B)


Publications (2)

CN111914790A (application publication): 2020-11-10
CN111914790B (granted publication): 2022-08-02

Family

ID=73284641

Family Applications (1)

CN202010816048.5A | filed 2020-08-14 | Expired (fee related) | Real-time human rotation angle recognition method in different scenarios based on dual cameras

Country Status (1)

CN: CN111914790B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | priority date / publication date | assignee | title
CN112488005A* | 2020-12-04 / 2021-03-12 | 临沂市新商网络技术有限公司 | On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
CN113435364A* | 2021-06-30 / 2021-09-24 | Ping An Technology (Shenzhen) Co., Ltd. | Head rotation detection method, electronic device, and storage medium

Citations (6)

Publication number | priority date / publication date | assignee | title
JP2004294657A* | 2003-03-26 / 2004-10-21 | Canon Inc | Lens device
CN102855379A* | 2012-05-30 / 2013-01-02 | 无锡掌游天下科技有限公司 | Skeleton joint data based standardizing method
CN106296720A* | 2015-05-12 / 2017-01-04 | Ricoh Co., Ltd. | Human body orientation recognition method and system based on binocular camera
US20180335930A1* | 2017-05-16 / 2018-11-22 | Apple Inc. | Emoji recording and sending
CN110495889A* | 2019-07-04 / 2019-11-26 | Ping An Technology (Shenzhen) Co., Ltd. | Posture assessment method, electronic device, computer equipment and storage medium
CN110969114A* | 2019-11-28 / 2020-04-07 | Sichuan Orthopedic Hospital | Human body action function detection system, detection method and detector

Non-Patent Citations (2)

C. Weerasinghe, P. Ogunbona, Wanqing Li: "2D to pseudo-3D conversion of 'head and shoulder' images using feature based parametric disparity maps", Proceedings 2001 International Conference on Image Processing*
Liu Hao, Guo Li, et al.: "Action recognition based on 3D skeleton and MCRF model", Journal of University of Science and Technology of China*


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2022-08-02)

