Technical Field
The present invention relates to the field of computer vision, and in particular to a lidar target demonstration and extraction method in a Qt development environment.
Background Art
Qt is a complete cross-platform C++ graphical user interface (GUI) application development framework with a broad development base and a sound encapsulation mechanism. Its highly modular design, simplified memory management, and rich APIs provide users with a development environment that is portable, easy to use, and fast.
Lidar offers good directionality and high measurement accuracy: using active sensing, it generates a real-time, high-resolution 3D point cloud of the surrounding environment and is unaffected by ambient natural light.
How to combine the strengths of the two, so as to present point cloud data and recognize targets more intuitively and smoothly, has therefore become a new topic. At present, combining Qt with lidar faces the following problems:
First, a lidar can publish point cloud data through a ROS node. Traditionally, obtaining ROS node data in Qt requires installing the ROS Qt Creator plug-in, configuring environment variables, creating a workspace, modifying CMakeLists.txt, and so on; the steps are numerous, error-prone, and hard to follow.
Second, the most direct way to draw 3D point cloud images in Qt is the built-in Qt Data Visualization module; however, its high CPU usage makes the point cloud animation stutter, and it cannot represent reflectivity intensity as pseudo-color.
Third, current lidar target extraction methods are mainly voxel-based or based on the raw point cloud. Most voxel-based methods rely on abstraction by a 3D convolutional neural network; the computation is relatively complex and ill-suited to frame-level target extraction and tracking.
Summary of the Invention
To overcome the shortcomings of the above techniques, the present invention provides a lidar target demonstration and extraction method in a Qt development environment.
To solve the above technical problems, the technical solution adopted by the present invention is a lidar target demonstration and extraction method in a Qt development environment, comprising the following steps:
S1. Subscribe to lidar point cloud data in ROS using Qt;
S2. Dynamically display the colored 3D point cloud data using the OpenGL module in Qt;
S3. Extract multiple targets from single-frame data by the "voxel connection method";
S4. Track multiple targets through inter-frame correlation analysis.
Further, step S1 specifically comprises:
S11. Install Qt and ROS Melodic on the Ubuntu desktop operating system;
S12. Add the dynamic link libraries on which ROS depends, together with their paths, to the Qt project file;
S13. Create a subscription node in Qt for subscribing to the lidar point cloud data in ROS;
S14. After the subscription node is created, start the lidar publisher node and obtain the formatted data published by the lidar by implementing the subscription node's static callback function.
Further, step S2 specifically comprises:
S21. Point cloud data format conversion;
S22. Transferring the data out of the callback;
S23. Mapping the single-frame point cloud reflectivity grayscale data to color data using OpenCV;
S24. Rendering the point cloud data using OpenGL;
S25. Dynamic updating;
S26. Graphic transformation.
Further, the single-frame data in step S3 refers to the data obtained in one scan cycle of the lidar, and step S3 specifically comprises:
S31. Establishing voxels;
S32. Acquiring background data;
S33. Discriminating targets;
S34. Target confirmation.
Further, step S4 specifically comprises:
S41. Record the center point position of each target according to the bright-grid array of each target in the current frame;
S42. Obtain the bright-grid array of each target in the next frame and record each target's center point position; perform a correlation analysis on the bright-grid arrays of the two frames and, by traversal, find the next-frame array with the greatest correlation to a given target in the previous frame;
S43. Compute the spatial distance the same target moves between the two frames to obtain its speed;
S44. Set the later frame as the current frame; when the next frame arrives, iterate according to steps S41, S42, and S43, so that each target's speed is updated once per lidar scan cycle.
Further, the format conversion in step S21 refers to converting the point cloud data type with a function provided by the ROS library;
the data in step S22 refers to the point cloud data in the static callback function of step S1;
the single-frame point cloud reflectivity grayscale data in step S23 refers to the data obtained in one scan cycle of the lidar;
in step S24, any point p in the point cloud data contains position information (px, py, pz) and color information (pR, pG, pB); all of this information for a single frame is written into a vertex buffer object QOpenGLBuffer *VBO, and a vertex shader and a fragment shader are then written in GLSL to compute and display the position and color of each point;
S25. Set a display duration tP for a single frame of point cloud on the screen: if the interface receives the frame at time t1, the frame is displayed within [t1, t1+tP]; after t1+tP, the frame data is replaced and updated, thereby achieving dynamic display and releasing memory in time;
S26. Using the camera, view, and rotation functions of OpenGL, reimplement the mouse events in Qt so that the image can be rotated by dragging the mouse and zoomed with the mouse wheel, smoothly displaying point clouds at the million-point scale.
Further, the data transfer of step S22 specifically proceeds as follows: a signal-slot connection is established in the static callback function to pass the data to an ordinary slot function of the same class; in that ordinary slot function, a signal connected to an object of the external designer interface class is emitted, thereby completing the transfer of the static function's data to the external class object through signals and slots.
Further, mapping the single-frame point cloud reflectivity grayscale data to color data with OpenCV in step S23 comprises the following steps:
S231. Install OpenCV on the Ubuntu desktop operating system;
S232. Add the dynamic link libraries on which OpenCV depends to the Qt project file.
Further, step S31 is specifically as follows: set the background sampling time ts = 5 s; within [0, ts] only the background point cloud is present. First obtain the maximum absolute coordinate values of the background point cloud along the X, Y, and Z axes, denoted xm, ym, zm (in meters); a rectangular box completely enclosing the current point cloud can then be established in the spatial rectangular coordinate system, with ranges [-xm, xm], [-ym, ym], [-zm, zm]. Taking 0.1 m as the unit length, cubic voxels are established, dividing the point cloud space into 20xm × 20ym × 20zm voxels;
step S32 is specifically as follows: count the Ns scan points that fall into a given voxel within ts and take the maximum reflectivity rmax and minimum reflectivity rmin among them; the background reflectivity interval of that voxel is then [rmin, rmax]; proceeding in the same way, the reflectivity intervals of all voxels in the enclosing box are recorded and can be stored in memory as voxel attributes;
the conditions for discriminating a target in step S33 are: after background acquisition is complete, when a moving target appears, the laser strikes the target and produces an echo; a single-frame echo point is judged to belong to a target when it satisfies either of the following conditions:
(1) its position pi(xi, yi, zi) does not belong to any voxel unit; in this case, the enclosing box should be expanded according to the target position coordinates so as to fully contain the target point cloud;
(2) its position pi(xi, yi, zi) belongs to a voxel, but its reflectivity ri does not lie within the background reflectivity interval of that voxel;
step S34 is specifically as follows: the point cloud discriminated from the background may represent several targets and therefore needs to be segmented effectively; the segmentation criterion is whether the voxels containing target points are connected, and multiple targets are extracted on the basis of the "voxel connection method".
Further, extracting multiple targets by the "voxel connection method" comprises the following specific steps:
S341. For the enclosing box, mark all voxels containing target points as "bright grids"; save the center point coordinates of each bright grid in a QVector3D variable and add them to an object blist of type QList<QVector3D>; as the candidate pool, blist is the sequence of bright grids in which the entire target point cloud resides;
S342. Select any point m0(x0, y0, z0) in blist; it is the center of a voxel M0. Six voxels share a face with M0, and each of its 12 edges is shared with one further cube, so 18 other voxels adjoin M0; denote the adjacent voxels M0i (i = 0, 1, 2, ..., 17);
S343. From the relative position (ui, vi, wi) of M0i with respect to M0, compute the center coordinates m0i(x0+ui, y0+vi, z0+wi) of each adjacent voxel;
S344. Search for m0i in blist; if it exists, store it in the center point array blist_0 of target 0, whose data type is again QList<QVector3D>; to prevent repeated searches, m0i must be deleted from blist; in other words, m0i is moved from the candidate pool blist into the target pool blist_0;
S345. For the first element m01 of blist_0, find its 18 adjacent voxels and obtain their center coordinates, denoted m01i(x01+ui, y01+vi, z01+wi) (i = 0, 1, 2, ..., 17); if such a point exists in blist, store it in blist_0 and delete it from blist. Each element of blist_0 is traversed in this way; moreover, blist_0 keeps growing during the traversal, ensuring that bright grids belonging to the current target are continually added;
S346. When the traversal ends, i.e., when the size of blist_0 no longer increases, the layer-by-layer bright-grid selection centered on voxel M0 is finished; blist_0 then constitutes all the bright grids of target 0;
S347. Check the number of elements in blist: if it is 0, only one target exists and its bright grids are exactly those collected in blist_0; if it is greater than 0, further targets exist. In that case, following the approach of steps S342 to S347, blist is processed layer by layer to extract the multiple targets blist_1, blist_2, ..., blist_N, until the candidate pool blist is empty, indicating that all targets have been extracted.
The present invention discloses a lidar target demonstration and extraction method in a Qt development environment. The method obtains 3D point cloud data by subscribing, in ROS, to the messages published by a lidar sensor; draws and renders a colored 3D point cloud model with OpenGL; completes single-frame multi-target segmentation and extraction with the "voxel connection method"; and achieves target tracking and real-time speed measurement by comparing the correlation of target voxels between frames. The method is relatively simple, avoids the built-in data visualization module, optimizes the entire computation pipeline, and is well suited to frame-level target extraction and tracking.
Brief Description of the Drawings
Fig. 1 is the general flow chart of the present invention.
Fig. 2 is the flow chart of single-frame target extraction in the present invention.
Fig. 3 is the flow chart of target segmentation by the "voxel connection method" in the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the lidar target demonstration and extraction method in a Qt development environment of the present invention is implemented as follows:
S1. Subscribe to lidar point cloud data in ROS using Qt;
S2. Dynamically display the colored 3D point cloud data using the OpenGL module in Qt;
S3. Extract multiple targets from single-frame data by the "voxel connection method";
S4. Track multiple targets through inter-frame correlation analysis.
Step S1 specifically comprises:
S11. Install Qt 5.9.9 and ROS Melodic on Ubuntu 18.04;
S12. Add the following ROS-dependent dynamic link libraries and their paths to the Qt project file:
INCLUDEPATH += /opt/ros/melodic/include
DEPENDPATH += /opt/ros/melodic/lib
LIBS += -L$$DEPENDPATH -lrosbag \
    -lroscpp \
    -lroslib \
    -lroslz4 \
    -lrostime \
    -lroscpp_serialization \
    -lrospack \
    -lcpp_common \
    -lrosbag_storage \
    -lrosconsole \
    -lxmlrpcpp \
    -lrosconsole_backend_interface \
    -lrosconsole_log4cxx
S13. Create a subscription node class QNodeSub in Qt for subscribing to the lidar data in ROS; this class inherits from the Qt thread class QThread. The class's main program includes the header file #include <ros/ros.h>, creates a handle ros::NodeHandle node, and defines the variable ros::Subscriber chatter_subscriber = node.subscribe("/livox/lidar", 1000, QNodeSub::chatterCallback), which completes the creation of the subscriber node object chatter_subscriber;
S14. After the subscription node is created, start the lidar publisher node; by implementing the subscription node's static callback function void QNodeSub::chatterCallback(const sensor_msgs::PointCloud2& msg), the sensor_msgs::PointCloud2-format data published by the lidar sensor is obtained.
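By way of illustration, a minimal sketch of such a subscriber thread follows; the topic "/livox/lidar", the queue size 1000, and the class and callback names are taken from the text above, while the node name, the run-loop layout, and the placeholder callback body are assumptions of the sketch rather than the patent's exact code.

// qnodesub.h: sketch of the subscriber thread of steps S13/S14.
#include <QThread>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

class QNodeSub : public QThread
{
public:
    // Static callback: receives every PointCloud2 message published by the lidar.
    static void chatterCallback(const sensor_msgs::PointCloud2& msg)
    {
        ROS_INFO("received %u points", msg.width * msg.height);  // placeholder body
    }

protected:
    void run() override
    {
        int argc = 0;
        ros::init(argc, nullptr, "qt_lidar_listener");  // node name is an assumption
        ros::NodeHandle node;
        // Subscriber node object, created exactly as in step S13.
        ros::Subscriber chatter_subscriber =
            node.subscribe("/livox/lidar", 1000, &QNodeSub::chatterCallback);
        ros::spin();  // dispatch callbacks until ros::shutdown() is called
    }
};

Running the subscription in a QThread keeps ros::spin() off the GUI thread, which is why the class inherits from QThread in the text.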
Step S2 specifically comprises:
S21. Point cloud data format conversion;
S22. Transferring the data out of the callback;
S23. Mapping the single-frame point cloud reflectivity grayscale data to color data using OpenCV;
S24. Rendering the point cloud data using OpenGL;
S25. Dynamic updating;
S26. Graphic transformation.
The format conversion in step S21 means using the ROS library function sensor_msgs::convertPointCloud2ToPointCloud to convert the sensor_msgs::PointCloud2 point cloud data into sensor_msgs::PointCloud data.
The data in step S22 refers to the point cloud data in the static callback function of step S14, i.e., the PointCloud-class variable h;
the transfer in step S22 proceeds as follows: a signal-slot connection is established in the callback function to pass h to an ordinary slot function of the same class; in that ordinary slot function, a signal connected to an object of the external designer interface class is emitted with h as its parameter, thereby completing the transfer of the static function's data to the external class object through signals and slots.
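A minimal sketch of this two-hop hand-off is shown below. A static member function has no this pointer, so a stored instance pointer is used to leave the static context; the names s_instance, cloudArrived, relayCloud, and cloudReady are illustrative assumptions, while the static callback, the S21 format conversion, and the signal-slot relay itself follow the text.

#include <QThread>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud_conversion.h>

class QNodeSub : public QThread
{
    Q_OBJECT
public:
    QNodeSub()
    {
        s_instance = this;
        // First hop: signal emitted from the static context -> ordinary slot of this class.
        connect(this, &QNodeSub::cloudArrived, this, &QNodeSub::relayCloud);
    }

    static void chatterCallback(const sensor_msgs::PointCloud2& msg)
    {
        sensor_msgs::PointCloud h;
        sensor_msgs::convertPointCloud2ToPointCloud(msg, h);  // step S21
        if (s_instance)
            emit s_instance->cloudArrived(h);                 // leave the static context
    }

signals:
    void cloudArrived(const sensor_msgs::PointCloud& h);  // internal hop
    void cloudReady(const sensor_msgs::PointCloud& h);    // connected to the designer/GUI class

private slots:
    // Second hop: the ordinary slot re-emits toward the external interface object (step S22).
    void relayCloud(const sensor_msgs::PointCloud& h) { emit cloudReady(h); }

private:
    static QNodeSub* s_instance;
};

QNodeSub* QNodeSub::s_instance = nullptr;

Since the callback runs in the ROS spinner thread, these connections are queued; in that case the message type must be registered once, e.g. qRegisterMetaType<sensor_msgs::PointCloud>("sensor_msgs::PointCloud");.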
Step S23 can be completed with OpenCV as follows:
S231. Install OpenCV 4.5.4 on Ubuntu 18.04;
S232. Add the dynamic link libraries on which OpenCV depends to the Qt project file:
INCLUDEPATH += /usr/local/include \
    /usr/local/include/opencv4 \
    /usr/local/include/opencv4/opencv2
LIBS += /usr/local/lib/libopencv_calib3d.so.4.5.4 \
    /usr/local/lib/libopencv_core.so.4.5.4 \
    /usr/local/lib/libopencv_highgui.so.4.5.4 \
    /usr/local/lib/libopencv_imgcodecs.so.4.5.4 \
    /usr/local/lib/libopencv_imgproc.so.4.5.4 \
    /usr/local/lib/libopencv_dnn.so.4.5.4
In step S23, the single-frame point cloud reflectivity grayscale data is mapped to color data as follows: create an image container (cv::Mat) object mapt of format CV_8UC1 whose image matrix size is 1 × N, where N is the single-frame point cloud length, i.e., cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1); then inject the reflectivity grayscale data of the single-frame PointCloud array h into mapt.
Define a cv::Mat object mapc; calling cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET) maps the grayscale image mapt to the JET pseudo-color image mapc. For the i-th pixel of mapc, the R, G, and B values correspond to mapc.at<Vec3b>(0,i)[2], mapc.at<Vec3b>(0,i)[1], and mapc.at<Vec3b>(0,i)[0], respectively.
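A minimal sketch of this mapping, assuming the reflectivity values are carried in the first channel of the PointCloud message (typical for Livox drivers) and already lie in the 0 to 255 range:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <sensor_msgs/PointCloud.h>

cv::Mat reflectivityToJet(const sensor_msgs::PointCloud& h)
{
    const int N = static_cast<int>(h.points.size());
    cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1);       // 1 x N grayscale image
    for (int i = 0; i < N; ++i)                         // inject each point's reflectivity
        mapt.at<uchar>(0, i) = static_cast<uchar>(h.channels[0].values[i]);

    cv::Mat mapc;
    cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET);    // grayscale -> JET pseudo-color
    // Pixel i: B = mapc.at<cv::Vec3b>(0,i)[0], G = [1], R = [2] (OpenCV stores BGR).
    return mapc;
}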
The point cloud rendering of step S24 is specifically as follows: any point p in the point cloud contains position information (px, py, pz) and color information (pR, pG, pB); if the single-frame point cloud length is N, the array representing the frame has dimensions N × 6. This array is written into the vertex buffer object QOpenGLBuffer *VBO, and a vertex shader and a fragment shader are then written in GLSL to compute and display the position and color of each point.
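A minimal sketch of the buffer upload and shader side, assuming an interleaved (x, y, z, r, g, b) float array; the shader sources and attribute locations are illustrative, since the text fixes only the N × 6 layout, the QOpenGLBuffer, and the use of GLSL:

#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <QVector>

static const char* kVertexShader =
    "#version 330 core\n"
    "layout(location = 0) in vec3 aPos;\n"
    "layout(location = 1) in vec3 aColor;\n"
    "uniform mat4 mvp;\n"
    "out vec3 vColor;\n"
    "void main() { gl_Position = mvp * vec4(aPos, 1.0); vColor = aColor; }\n";

static const char* kFragmentShader =
    "#version 330 core\n"
    "in vec3 vColor;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(vColor, 1.0); }\n";

bool buildProgram(QOpenGLShaderProgram& program)
{
    return program.addShaderFromSourceCode(QOpenGLShader::Vertex, kVertexShader)
        && program.addShaderFromSourceCode(QOpenGLShader::Fragment, kFragmentShader)
        && program.link();
}

// Upload one frame; assumes VBO->create() was called with a current GL context.
void uploadFrame(QOpenGLBuffer* VBO, const QVector<float>& interleaved /* size N*6 */)
{
    VBO->bind();
    VBO->allocate(interleaved.constData(),
                  interleaved.size() * int(sizeof(float)));
    VBO->release();
}

// With the program and VBO bound, the two attributes are then described by:
//   program.setAttributeBuffer(0, GL_FLOAT, 0,                 3, 6 * sizeof(float));
//   program.setAttributeBuffer(1, GL_FLOAT, 3 * sizeof(float), 3, 6 * sizeof(float));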
Step S25 specifically comprises: set a display duration tP for a single frame of point cloud on the screen; if the interface receives the frame at time t1, the frame is displayed within [t1, t1+tP], and after t1+tP the frame data is replaced and updated, thereby achieving dynamic display and releasing memory in time.
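One possible reading of this replacement policy, sketched with an assumed display widget; the class and member names and the millisecond bookkeeping are illustrative, and only the [t1, t1+tP] behavior comes from the text:

#include <QElapsedTimer>
#include <QOpenGLWidget>
#include <QVector>

class PointCloudWidget : public QOpenGLWidget   // hypothetical display widget
{
public:
    explicit PointCloudWidget(int tP_ms = 100) : m_tP_ms(tP_ms) {}

    void onCloudReady(const QVector<float>& frame)  // fed by the cloudReady signal above
    {
        if (!m_clock.isValid()) {                   // very first frame: show immediately
            m_clock.start();
            m_frame = frame;
            update();
            return;
        }
        if (m_clock.elapsed() >= m_tP_ms) {         // previous frame shown for at least tP
            m_frame = frame;                        // old data overwritten, memory freed in time
            m_clock.restart();
            update();                               // schedule a repaint with the new frame
        }
    }

private:
    QElapsedTimer m_clock;
    QVector<float> m_frame;   // interleaved (x, y, z, r, g, b) data from step S24
    int m_tP_ms;              // tP in milliseconds
};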
The single-frame data of step S3 refers to the data obtained in one scan cycle of the lidar.
With reference to the single-frame target extraction flow chart of Fig. 2, the procedure is: set up a loop traversing all points of one frame of point cloud data and judge whether each point belongs to the background; if so, move on to the next point; if not, place the voxel containing the point into the target candidate pool; after all target voxels of the single frame have been gathered, the "voxel connection method" completes the segmentation. Step S3 specifically comprises:
S31. Establishing voxels;
S32. Acquiring background data;
S33. Discriminating targets;
S34. Target confirmation.
Step S31 is specifically as follows: set the background sampling time ts = 5 s; within [0, ts] only the background point cloud is present. First obtain the maximum absolute coordinate values of the background point cloud along the X, Y, and Z axes (rounded up if fractional), denoted xm, ym, zm (in meters); a rectangular box completely enclosing the current point cloud can then be established in the spatial rectangular coordinate system, with ranges [-xm, xm], [-ym, ym], [-zm, zm]. Taking 0.1 m (the precision is adjustable) as the unit length, cubic voxels are established, dividing the point cloud space into 20xm × 20ym × 20zm voxels.
Step S32 is specifically as follows: count the Ns scan points that fall into a given voxel within ts and take the maximum reflectivity rmax and minimum reflectivity rmin among them; the background reflectivity interval of that voxel is then [rmin, rmax]. Proceeding in the same way, the reflectivity intervals of all voxels in the enclosing box are recorded and can be stored in memory as voxel attributes.
The conditions for discriminating a target in step S33 are: after background acquisition is complete, when a moving target appears, the laser strikes the target and produces an echo; a single-frame echo point is judged to belong to a target when it satisfies either of the following conditions.
(1) Its position pi(xi, yi, zi) does not belong to any voxel unit; in this case, the enclosing box should be expanded according to the target position coordinates so as to fully contain the target point cloud;
(2) its position pi(xi, yi, zi) belongs to a voxel, but its reflectivity ri does not lie within the background reflectivity interval of that voxel.
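A minimal sketch of steps S31 to S33, keeping a hash map from integer voxel indices to background reflectivity intervals; the VoxelKey and RefInterval types and the spatial hash are assumptions of the sketch, a voxel never hit by the background is treated like condition (1), and the expansion of the enclosing box is omitted:

#include <QHash>
#include <algorithm>
#include <cmath>
#include <limits>

struct VoxelKey
{
    int i, j, k;
    bool operator==(const VoxelKey& o) const { return i == o.i && j == o.j && k == o.k; }
};

inline uint qHash(const VoxelKey& key, uint seed = 0)
{
    return ::qHash(quint64(quint32(key.i)) * 73856093ULL
                 ^ quint64(quint32(key.j)) * 19349663ULL
                 ^ quint64(quint32(key.k)) * 83492791ULL, seed);
}

struct RefInterval
{
    float rmin =  std::numeric_limits<float>::max();
    float rmax = -std::numeric_limits<float>::max();
};

static const float kVoxel = 0.1f;   // voxel edge length in meters (step S31)

inline VoxelKey voxelKey(float x, float y, float z)
{
    return { int(std::floor(x / kVoxel)),
             int(std::floor(y / kVoxel)),
             int(std::floor(z / kVoxel)) };
}

// Step S32: widen the background interval of the voxel hit by one background point.
void addBackgroundPoint(QHash<VoxelKey, RefInterval>& bg, float x, float y, float z, float r)
{
    RefInterval& iv = bg[voxelKey(x, y, z)];
    iv.rmin = std::min(iv.rmin, r);
    iv.rmax = std::max(iv.rmax, r);
}

// Step S33: condition (1) = unknown voxel; condition (2) = reflectivity outside [rmin, rmax].
bool isTargetPoint(const QHash<VoxelKey, RefInterval>& bg, float x, float y, float z, float r)
{
    const auto it = bg.constFind(voxelKey(x, y, z));
    if (it == bg.constEnd())
        return true;                          // condition (1)
    return r < it->rmin || r > it->rmax;      // condition (2)
}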
Step S34 is specifically as follows: the point cloud discriminated from the background may represent several targets and therefore needs to be segmented effectively; the segmentation criterion is whether the voxels containing target points are connected, and multiple targets are extracted below on the basis of the "voxel connection method". With reference to the target segmentation flow chart of Fig. 3: first, set up a loop and select a point in the single-frame candidate pool blist; judge whether a "bright grid" adjoins that point; if not, move to the next point; if so, move the bright grid from the candidate pool blist to the target pool blist_0; then traverse blist_0, storing every bright grid found in the sequence, until the number of elements in blist_0 no longer increases, at which point the voxel extraction for target 0 is complete; finally, check whether blist still contains elements: if it does, return to the initial position; if not, the target segmentation ends.
The specific steps are:
S341. For the enclosing box, mark all voxels containing target points as "bright grids"; save the center point coordinates of each bright grid in a QVector3D variable and add them to an object blist of type QList<QVector3D>; as the candidate pool, blist is the sequence of bright grids in which the entire target point cloud resides;
S342. Select any point m0(x0, y0, z0) in blist; it is the center of a voxel M0. Six voxels share a face with M0, and each of its 12 edges is shared with one further cube, so 18 other voxels adjoin M0; denote the adjacent voxels M0i (i = 0, 1, 2, ..., 17);
S343. From the relative position (ui, vi, wi) of M0i with respect to M0, compute the center coordinates m0i(x0+ui, y0+vi, z0+wi) of each adjacent voxel;
S344. Search for m0i in blist; if it exists, store it in the center point array blist_0 of target 0, whose data type is again QList<QVector3D>; to prevent repeated searches, m0i must be deleted from blist; in other words, m0i is moved from the candidate pool blist into the target pool blist_0;
S345. For the first element m01 of blist_0, find its 18 adjacent voxels and obtain their center coordinates, denoted m01i(x01+ui, y01+vi, z01+wi) (i = 0, 1, 2, ..., 17); if such a point exists in blist, store it in blist_0 and delete it from blist. Each element of blist_0 is traversed in this way; moreover, blist_0 keeps growing during the traversal, ensuring that bright grids belonging to the current target are continually added;
S346. When the traversal ends, i.e., when the size of blist_0 no longer increases, the layer-by-layer bright-grid selection centered on voxel M0 is finished; blist_0 then constitutes all the bright grids of target 0;
S347. Check the number of elements in blist: if it is 0, only one target exists and its bright grids are exactly those collected in blist_0; if it is greater than 0, further targets exist. In that case, following the approach of steps S342 to S347, blist is processed layer by layer to extract the multiple targets blist_1, blist_2, ..., blist_N, until the candidate pool blist is empty, indicating that all targets have been extracted.
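A minimal sketch of steps S341 to S347 follows. To keep the equality test in indexOf exact, the sketch stores integer voxel indices (i, j, k) in the QVector3D elements instead of metric centers (the metric center of a cell is recovered as 0.1·(i, j, k) + 0.05); the QList<QVector3D> containers follow the text, while the function name and the offset table are illustrative:

#include <QList>
#include <QVector3D>

// blist is the candidate pool of bright grids; the result is blist_0, blist_1, ..., blist_N.
QList<QList<QVector3D>> segmentTargets(QList<QVector3D> blist)
{
    // The 18 neighbor offsets of a cube: 6 face neighbors plus 12 edge neighbors.
    QList<QVector3D> offs;
    for (int u = -1; u <= 1; ++u)
        for (int v = -1; v <= 1; ++v)
            for (int w = -1; w <= 1; ++w) {
                const int n = qAbs(u) + qAbs(v) + qAbs(w);
                if (n == 1 || n == 2)               // excludes the cube itself and the 8 corners
                    offs.append(QVector3D(u, v, w));
            }

    QList<QList<QVector3D>> targets;
    while (!blist.isEmpty()) {                      // S347: repeat until the pool is empty
        QList<QVector3D> cur;                       // target pool for the current target
        cur.append(blist.takeFirst());              // S342: seed voxel m0
        for (int i = 0; i < cur.size(); ++i)        // S345: cur grows while being traversed
            for (const QVector3D& d : offs) {       // S343: the 18 neighbor cells
                const int idx = blist.indexOf(cur[i] + d);
                if (idx >= 0)                       // S344: move candidate pool -> target pool
                    cur.append(blist.takeAt(idx));  // removal prevents repeated look-ups
            }
        targets.append(cur);                        // S346: one target completed
    }
    return targets;
}

The linear indexOf scans keep the sketch close to the text; for large scenes, a hash-based candidate pool would avoid them.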
Step S4 specifically comprises:
S41. Record the center point position Targeti of each target according to its bright-grid array in the current frame;
S42. Obtain the bright-grid array of each target in the next frame and record each center point position Targetj. Perform a correlation analysis on the bright-grid arrays of the two frames: by traversal, find the next-frame array with the greatest correlation to a given previous-frame target; the two arrays can then be regarded as corresponding to the same target, which realizes target tracking. Specifically, taking the bright-grid sequence blist_0i of target 0 in the previous frame as the reference, compare it with each target's bright-grid sequence in the next frame; since the frame interval is extremely short (0.1 s), the next-frame sequence sharing the most elements with blist_0i is identified as the same target. In the same way, the inter-frame correlation analysis is completed for every target of the previous frame;
S43. Compute the spatial distance between the center points Targeti and Targetj of the same target in the two frames to obtain the target's speed;
S44. Set the later frame as the current frame; when the next frame arrives, iterate according to steps S41, S42, and S43, so that each target's speed is updated once per lidar scan cycle.
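A minimal sketch of steps S41 to S43, reusing the integer-index bright-grid lists of the segmentation sketch above; the overlap count stands in for the correlation measure, and the voxel edge of 0.1 m and the 0.1 s scan period follow the embodiment:

#include <QList>
#include <QVector3D>

static QVector3D centroid(const QList<QVector3D>& cells)   // target center point (S41)
{
    QVector3D c(0, 0, 0);
    for (const QVector3D& v : cells)
        c += v;
    return cells.isEmpty() ? c : c / float(cells.size());
}

// Correlation measure: number of bright grids two targets share (S42).
static int overlap(const QList<QVector3D>& a, const QList<QVector3D>& b)
{
    int n = 0;
    for (const QVector3D& v : a)
        if (b.contains(v))
            ++n;
    return n;
}

// Match one previous-frame target against all next-frame targets; returns its speed in m/s.
float trackAndMeasure(const QList<QVector3D>& prev,
                      const QList<QList<QVector3D>>& nextTargets,
                      float voxelEdge = 0.1f, float scanPeriod = 0.1f)
{
    int best = -1, bestOverlap = -1;
    for (int j = 0; j < nextTargets.size(); ++j) {          // S42: traversal search
        const int n = overlap(prev, nextTargets[j]);
        if (n > bestOverlap) { bestOverlap = n; best = j; }
    }
    if (best < 0)
        return 0.0f;                                        // no match found
    // S43: centroid displacement (voxel indices -> meters) over one scan period.
    const QVector3D d = (centroid(nextTargets[best]) - centroid(prev)) * voxelEdge;
    return d.length() / scanPeriod;
}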
In summary, the lidar target demonstration and extraction method in a Qt development environment comprises: creating a ROS subscription node in Qt to obtain point cloud data; dynamically displaying the colored point cloud with Qt's OpenGL module; building the voxel model and acquiring the background reflectivity intervals; confirming the voxels occupied by single-frame targets; segmenting the targets by the "voxel connection method"; and tracking targets through inter-frame correlation. The method obtains 3D point cloud data by subscribing, in ROS, to the messages published by a lidar sensor, draws and renders a colored 3D point cloud model with OpenGL, completes single-frame multi-target segmentation and extraction with the "voxel connection method", and achieves target tracking and real-time speed measurement by comparing the correlation of target voxels between frames.
The above embodiments do not limit the present invention, nor is the present invention limited to the above examples; changes, modifications, additions, or substitutions made by those skilled in the art within the scope of the technical solution of the present invention also fall within the protection scope of the present invention.