Technical Field

The present invention relates to the technical field of intelligent coal mining operations, and in particular to a coal mine underground environment perception method, apparatus, and storage medium.

Background

The scene, structure, lighting, and signal conditions of a coal mine working environment change frequently, which makes coal mining operations difficult and accident-prone. With the rapid development of computer, communication, and microelectronics technology, multi-sensor data fusion has received widespread attention and application; applied to coal mining for environment perception, it greatly reduces the difficulty of underground work.

Among these techniques, the fusion of RGB images and lidar data has attracted particular attention because of its practicality and high performance in depth perception. Current work mainly explores two fusion approaches: lidar with monocular images, and lidar with stereo images.

However, depth estimation based on the above fusion methods is usually formulated as per-pixel regression, which is inherently unreliable and ambiguous. Moreover, the information is not fused sufficiently and the complementary strengths of the sensors are not fully exploited, so the resulting underground environment perception is not accurate enough.
Summary of the Invention

The present invention aims to solve, at least to some extent, one of the technical problems in the related art.

To this end, a first object of the present invention is to provide a coal mine underground environment perception method that fully fuses RGB images with lidar data and improves the accuracy of underground environment perception results.

A second object of the present invention is to provide a coal mine underground environment perception apparatus.

A third object of the present invention is to provide another coal mine underground environment perception apparatus.

A fourth object of the present invention is to provide a non-transitory computer-readable storage medium.

A fifth object of the present invention is to provide a computer program product.
To achieve the above objects, an embodiment of the present invention provides a method, including:

acquiring a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

inputting the fused image into task programs for different scenes, so as to output environment perception results of the region to be perceived in the corresponding scenes.
In some possible implementations, fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived includes:

when the environment type of the region to be perceived is a target environment, using the lidar environment image as the reference image and the visual environment image as the supplementary image when fusing the two images;

when fusing the visual environment image and the lidar environment image, the pixel value weight of the reference image is greater than the pixel value weight of the supplementary image.

In some possible implementations, fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived includes:

when the environment type of the region to be perceived is a non-target environment, during fusion of the visual environment image and the lidar environment image: if the imaging distance of the region to be perceived is a first-level distance, using the visual environment image as the reference image and the lidar environment image as the supplementary image; if the imaging distance of the region to be perceived is a second-level distance, using the lidar environment image as the reference image; the first-level distance and the second-level distance are divided according to distance.
In some possible implementations, fusing the visual environment image and the lidar environment image includes:

when the visual environment image is captured, obtaining the camera-world three-dimensional coordinate information of the visual environment image based on the relationship between the spatial coordinate system and the world coordinate system established during camera calibration;

fusing the visual environment image and the lidar environment image according to the world three-dimensional information of the lidar environment image obtained by the lidar, with the camera-world three-dimensional coordinate information as a reference.

In some possible implementations, fusing the visual environment image and the lidar environment image includes:

extracting first environment feature information from the three-dimensional environment space information corresponding to the visual environment image and second environment feature information from the three-dimensional environment space information corresponding to the lidar environment image, and fusing the first environment feature information and the second environment feature information.
In some possible implementations, inputting the fused image into task programs for different scenes to output environment perception results of the region to be perceived in the corresponding scenes includes:

when the scene is a traffic environment scene, identifying moving objects and performing pedestrian recognition through object detection;

when the scene is a denoising scene, performing denoising and defogging through image enhancement;

when the scene is a track-line scene, identifying edge lines through field segmentation;

when the scene is an underground ground environment scene, extracting the ground environment through semantic segmentation.

In some possible implementations, the method further includes:

performing distortion compensation on the lidar based on its angular velocity and linear velocity, to obtain the lidar environment image.
To achieve the above objects, an embodiment of the second aspect of the present invention provides a coal mine underground environment perception apparatus, including:

an image acquisition module, configured to acquire a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

an image fusion module, configured to fuse the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

an environment perception module, configured to input the fused image into task programs for different scenes, so as to output environment perception results of the region to be perceived in the corresponding scenes.

To achieve the above objects, an embodiment of the third aspect of the present invention provides a coal mine underground environment perception apparatus, including a memory, a transceiver, and a processor:

the memory is configured to store a computer program; the transceiver is configured to transmit and receive data under control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:

acquiring a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

inputting the fused image into task programs for different scenes, so as to output environment perception results of the region to be perceived in the corresponding scenes.

To achieve the above objects, an embodiment of the fourth aspect of the present invention provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the coal mine underground environment perception method provided by the embodiment of the first aspect of the present invention.

To achieve the above objects, an embodiment of the fifth aspect of the present invention provides a computer program product; when the instructions in the computer program product are executed by a processor, the coal mine underground environment perception method provided by the embodiment of the first aspect of the present invention is performed.
The technical solutions provided by the embodiments of the present invention may have the following beneficial effects: a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine are acquired; the visual environment image and the lidar environment image are fused based on the environment type of the region to be perceived; and the fused image is input into a multi-task program covering different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes. Through adaptive fusion, the visual environment image and the lidar environment image are fully combined, the information of the region to be perceived is accurately reflected, and the accuracy of the underground environment perception results is improved.

Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or may be learned by practice of the invention.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:

Fig. 1 is a schematic flowchart of a coal mine underground environment perception method provided by an embodiment of the present invention;

Fig. 2 is a schematic flowchart of another coal mine underground environment perception method provided by an embodiment of the present invention;

Fig. 3 is a schematic flowchart of another coal mine underground environment perception method provided by an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a coal mine underground environment perception apparatus provided by an embodiment of the present invention.

Detailed Description

Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.

A coal mine underground environment perception method according to embodiments of the present invention is described below with reference to the accompanying drawings.

Fig. 1 is a schematic flowchart of a coal mine underground environment perception method provided by an embodiment of the present invention.
A camera offers rich semantics and fine detail for environment perception, but its position estimates are inaccurate, its range is short, and it is easily disturbed by the environment (for example, heavy dust or fog). A radar offers accurate position estimates, a long detection range, and robustness to environmental interference, but it lacks detail and classifies distant objects poorly. Based on the respective strengths of cameras and radars, environment perception is therefore performed by fusing RGB images with lidar data to obtain high-precision three-dimensional environment information. However, information fusion in current fusion techniques is insufficient, which leads to insufficient accuracy of underground environment perception results.

To address this problem, an embodiment of the present invention provides a coal mine underground environment perception method to improve the accuracy of underground environment perception results. As shown in Fig. 1, the method includes the following steps.
Step 101: acquire a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine.

The region to be perceived is the underground region of the coal mine in which environment perception is required.

Optionally, an RGB image of the region to be perceived is captured by a camera as the visual environment image, and a lidar environment image of the region to be perceived is captured by a lidar.

Step 102: fuse the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image.

According to the environment type of the region to be perceived, the visual environment image and the lidar environment image are fused accordingly to obtain a fused image; adaptive fusion allows the two images to be fully combined so that the information of the region to be perceived is accurately reflected.

Step 103: input the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.

With the fused image as input, multi-task programs corresponding to multiple scenes extract information about the region to be perceived, so as to obtain its environment perception results in the different scenes.

Optionally, an environment perception result may be the recognition result of an environment feature.

In this embodiment, a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine are acquired; the two images are fused based on the environment type of the region; and the fused image is input into a multi-task program covering different scenes to output the environment perception results of the region in the corresponding scenes. Adaptive fusion fully combines the visual environment image and the lidar environment image, accurately reflects the information of the region to be perceived, and improves the accuracy of the underground environment perception results.
To further illustrate the previous embodiment, this embodiment provides another coal mine underground environment perception method. Fig. 2 is a schematic flowchart of another coal mine underground environment perception method provided by an embodiment of the present invention.

As shown in Fig. 2, the method may include the following steps.

Step 201: acquire a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine.

Optionally, distortion compensation is performed on the lidar based on its angular velocity and linear velocity, to obtain the lidar environment image.

Because of the inherent characteristics of cameras and radars, calibration and synchronization errors occur, as does loosening during motion. In actual use, camera frames may be missing, and the radar may lose point cloud data or drop frames; in harsh environments, camera images may be blurred and condensation may form on the radar.

To address these problems, the camera is calibrated for intrinsic and extrinsic parameters, and offset compensation is performed.

It should be noted that camera intrinsic calibration refers to the transformation from the camera coordinate system to the image coordinate system, and camera extrinsic calibration refers to the transformation from 3D space coordinates to the camera coordinate system.

Radar attitude adjustment models and compensates the position information of the radar and the relative positional relationship between consecutive frames.

For synchronization errors, the camera and the lidar are synchronized to account for the time difference accumulated over one radar scan revolution.
It should be noted that, for synchronization errors on the hardware side, resolving the error means synchronizing the timestamps of the different sensors. By computing the radar's own angular velocity and linear velocity, the true laser returns are restored and distortion compensation is completed.
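As an illustration of this distortion-compensation step, the following sketch de-skews a single lidar sweep under a constant-velocity assumption using the sensor's own angular and linear velocity. The point format, the planar (yaw-only) motion model, and all function names are assumptions made for illustration and are not taken from the patent.

```python
import numpy as np

def deskew_sweep(points, rel_times, lin_vel, ang_vel_z):
    """De-skew one lidar sweep under a constant-velocity model.

    points    : (N, 3) x, y, z in the sensor frame at each point's capture time
    rel_times : (N,) seconds elapsed since the start of the sweep for each point
    lin_vel   : (3,) linear velocity of the sensor in its own frame, m/s
    ang_vel_z : yaw rate of the sensor, rad/s (planar motion assumed)
    Returns the points expressed in the sensor frame at the start of the sweep.
    """
    corrected = np.empty_like(points)
    for i, (p, dt) in enumerate(zip(points, rel_times)):
        yaw = ang_vel_z * dt                      # rotation accumulated since sweep start
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        t = lin_vel * dt                          # translation accumulated since sweep start
        corrected[i] = R @ p + t                  # move the point into the sweep-start frame
    return corrected

# Example: one 0.1 s sweep (10 Hz lidar) while turning at 0.2 rad/s and moving at 1 m/s.
pts = np.random.rand(5, 3)
times = np.linspace(0.0, 0.1, 5)
print(deskew_sweep(pts, times, np.array([1.0, 0.0, 0.0]), 0.2))
```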
Optionally, the GPS-timestamp synchronization method works as follows: first determine whether the sensor hardware supports it; if so, the data packets output by the sensors carry global timestamps referenced to GPS, so that all sensors use the same clock rather than their own individual clocks. However, different sensors output data at different rates, for example a lidar at 10 Hz and a camera at 25/30 Hz, so delays still exist between sensors. In this case, the nearest frame can be found by matching adjacent timestamps.
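To picture the nearest-frame matching between a 10 Hz lidar and a 25/30 Hz camera mentioned above, here is a minimal sketch; the function name and the rejection threshold are illustrative assumptions.

```python
import numpy as np

def match_nearest(lidar_ts, camera_ts, max_gap=0.05):
    """For each lidar timestamp, return the index of the closest camera timestamp.

    Both timestamp lists are assumed to be referenced to the same GPS clock.
    Pairs whose gap exceeds max_gap seconds are rejected and reported as -1.
    """
    camera_ts = np.asarray(camera_ts)
    matches = []
    for t in lidar_ts:
        idx = int(np.argmin(np.abs(camera_ts - t)))
        matches.append(idx if abs(camera_ts[idx] - t) <= max_gap else -1)
    return matches

# 10 Hz lidar against a 30 Hz camera, both stamped on the shared GPS clock.
lidar = np.arange(0.0, 1.0, 0.1)
camera = np.arange(0.0, 1.0, 1.0 / 30.0)
print(match_nearest(lidar, camera))
```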
Optionally, in the hard synchronization method, the lidar serves as the trigger source for the other sensors: only when the lidar rotates to a given angle is the camera at that angle triggered, which greatly reduces the time difference. This synchronization scheme can be implemented in hardware, alleviating the errors caused by timestamp lookup, greatly reducing the synchronization error, and improving data alignment.

Distortion compensation ensures that the point cloud at any moment and the camera at the corresponding position can be triggered simultaneously, so that the timestamps of the visual environment image and the lidar environment image are synchronized.

As for motion compensation, when an object moves the environment changes, and the lidar scans synchronously to provide supplementary information.

Optionally, a fault prompt is issued when the radar loses part or all of its point cloud and/or when condensation forms on the radar.

Step 202: fuse the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image.

It should be noted that, when obtaining the fused image, an early-fusion approach, a late-fusion approach, or a combination of the two can be adaptively selected according to the environment type of the region to be perceived in order to fuse the visual environment image and the lidar environment image.
The early-fusion process is as follows: when the visual environment image is captured, the camera-world three-dimensional coordinate information of the visual environment image is obtained based on the relationship between the spatial coordinate system and the world coordinate system established during camera calibration; the visual environment image and the lidar environment image are then fused according to the world three-dimensional information of the lidar environment image obtained by the lidar, with the camera-world three-dimensional coordinate information as a reference.
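A minimal sketch of this early-fusion idea is to project world-frame lidar points into the camera image through the calibrated extrinsics and intrinsics, so that each projected point attaches its depth to an RGB pixel. The matrices, the function name, and the example values below are assumptions for illustration only.

```python
import numpy as np

def project_lidar_to_image(points_world, T_world_to_cam, K):
    """Project world-frame lidar points into pixel coordinates.

    points_world   : (N, 3) lidar points in the world frame
    T_world_to_cam : (4, 4) extrinsic transform from the world to the camera frame
    K              : (3, 3) camera intrinsic matrix
    Returns pixel coordinates and depths for points in front of the camera.
    """
    homo = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    cam = (T_world_to_cam @ homo.T).T[:, :3]        # points in the camera frame
    in_front = cam[:, 2] > 0.0                      # keep only points with positive depth
    cam = cam[in_front]
    uv = (K @ cam.T).T
    pixels = uv[:, :2] / uv[:, 2:3]                 # perspective division
    return pixels, cam[:, 2]

# Illustrative intrinsics; identity extrinsics place the camera at the world origin.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.5, 0.2, 5.0], [-1.0, 0.0, 12.0]])
print(project_lidar_to_image(pts, np.eye(4), K))
```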
The late-fusion process is as follows: first environment feature information is extracted from the three-dimensional environment space information corresponding to the visual environment image, and second environment feature information is extracted from the three-dimensional environment space information corresponding to the lidar environment image; the first and second environment feature information are then fused.
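A rough sketch of this late-fusion step, under the assumption that each modality's 3D information has already been reduced to a feature vector; the toy histogram extractor below merely stands in for whatever feature networks or descriptors an actual implementation would use.

```python
import numpy as np

def extract_features(values, n_bins=16):
    """Toy per-modality feature extractor: a normalized histogram of the 3D data."""
    hist, _ = np.histogram(values, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def late_fuse(camera_space_info, lidar_space_info, w_cam=0.5):
    """Extract features from each modality separately, then fuse them."""
    f_cam = extract_features(camera_space_info)     # first environment feature information
    f_lidar = extract_features(lidar_space_info)    # second environment feature information
    weighted = w_cam * f_cam + (1.0 - w_cam) * f_lidar
    return np.concatenate([f_cam, f_lidar, weighted])

fused = late_fuse(np.random.rand(1000), np.random.rand(1000), w_cam=0.4)
print(fused.shape)   # (48,) = 16 camera bins + 16 lidar bins + 16 fused bins
```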
Optionally, when the environment type of the region to be perceived is a target environment, the lidar environment image is used as the reference image and the visual environment image as the supplementary image during fusion.

It should be noted that the target environment is a foggy, dusty, or low-light environment, determined by an environment recognition model; the environment recognition model is obtained by learning the image features of different environment types.

When the visual environment image and the lidar environment image are fused, the pixel value weight of the reference image is greater than that of the supplementary image.

Further, an environment recognition model is embedded in the camera; by learning the image features of different environment categories, it automatically recognizes and classifies environment photos to determine whether the region to be perceived is the target environment.
The pixel value weight of the reference image is greater than that of the supplementary image; since the two weights sum to 1, the weight of the reference image is greater than 0.5.
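This weighting rule can be written as a simple per-pixel blend; the 0.7/0.3 split below is only an illustrative choice satisfying the stated constraint (weights summing to 1, reference weight above 0.5), not a value specified in the text.

```python
import numpy as np

def weighted_blend(reference, supplement, w_ref=0.7):
    """Blend two aligned single-channel images; the reference image dominates."""
    assert 0.5 < w_ref <= 1.0, "the reference weight must exceed the supplementary weight"
    w_sup = 1.0 - w_ref                     # the two weights sum to 1
    return w_ref * reference + w_sup * supplement

ref = np.full((4, 4), 100.0)    # e.g. lidar-derived image used as the reference in fog or dust
sup = np.full((4, 4), 40.0)     # e.g. camera-derived image used only as a supplement
print(weighted_blend(ref, sup)[0, 0])   # 0.7 * 100 + 0.3 * 40 = 82.0
```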
Optionally, when the environment type of the region to be perceived is a non-target environment, during fusion of the visual environment image and the lidar environment image: if the imaging distance of the region to be perceived is a first-level distance, the visual environment image is used as the reference image and the lidar environment image as the supplementary image; if the imaging distance is a second-level distance, the lidar environment image is used as the reference image and the visual environment image as the supplementary image. The first-level and second-level distances are divided according to distance.

It should be noted that the criterion for dividing the first-level and second-level distances is determined by the hardware parameters. In general, cameras in underground coal mines are used only for close-range perception of environment details, typically within 10 meters. Radar, by contrast, operates at millimeter wavelengths, usually 4-12 mm, and has a longer effective working distance, while lidar uses laser wavelengths typically between 900 and 1500 nm and therefore resolves the scene more finely and more accurately.
Based on the above hardware parameters, as one possible implementation, an imaging distance of less than 10 meters is a first-level distance; otherwise it is a second-level distance.
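Together with the target-environment rule above, the distance rule can be condensed into a small selection helper; the 10-meter threshold follows the possible implementation just described, and the labels are placeholders.

```python
def choose_reference(environment, imaging_distance_m, first_level_max_m=10.0):
    """Decide which image serves as the reference and which as the supplement.

    environment        : 'target' for foggy/dusty/low-light scenes, anything else otherwise
    imaging_distance_m : imaging distance of the region to be perceived, in meters
    """
    if environment == 'target':
        return {'reference': 'lidar', 'supplement': 'camera'}
    if imaging_distance_m < first_level_max_m:             # first-level distance
        return {'reference': 'camera', 'supplement': 'lidar'}
    return {'reference': 'lidar', 'supplement': 'camera'}  # second-level distance

print(choose_reference('normal', 6.0))    # camera-based reference at close range
print(choose_reference('target', 6.0))    # lidar-based reference in fog, dust, or darkness
```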
It can be understood that, whether the visual environment image and the lidar environment image are fused by the early-fusion method or the late-fusion method, the reference image can be determined based on distance.

Through image fusion adapted to the environment type, the image that better reflects the accurate information of the region to be perceived is used as the reference image, and the supplementary image is then used to refine it. This yields an accurate fused image of the region to be perceived under different environment types, produces denser and more accurate disparity maps, improves the accuracy of environment perception, and provides an accurate depth estimate for the overall perception method, which in turn can provide reliable image information and technical support for applications such as coal mine road environment perception, visual measurement systems, unmanned mine truck navigation systems, and mine search-and-rescue robots.

Step 203: input the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.

For step 203, reference may be made to the description of the corresponding step in the preceding embodiments, which is not repeated here.

This embodiment provides another coal mine underground environment perception method. Fig. 3 is a schematic flowchart of another coal mine underground environment perception method provided by an embodiment of the present invention.

As shown in Fig. 3, the method may include the following steps.

Step 301: acquire a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine.

Step 302: fuse the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image.

For steps 301 and 302, reference may be made to the descriptions of the corresponding steps in the preceding embodiments, which are not repeated here.

Step 303: input the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.
It should be noted that the multi-task programs for the different scenes may run in parallel or in series, selected according to the available computing power and the optimal path. That is, as one possible implementation, the fused image can be fed into the different scene tasks in parallel; as another possible implementation, the corresponding scene can first be identified, the corresponding program selected, and the fused image then input into the program for that scene to obtain the output result.
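One way to picture this parallel-or-serial dispatch is a small task registry keyed by scene; the scene names and the stand-in task functions below are placeholders for the actual per-scene programs (detection, enhancement, segmentation, and so on).

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder task programs; real ones would run object detection, image enhancement,
# field segmentation, and semantic segmentation respectively.
TASKS = {
    'traffic': lambda img: f'detected moving objects and pedestrians in {img}',
    'denoise': lambda img: f'denoised and defogged {img}',
    'track':   lambda img: f'segmented edge lines in {img}',
    'ground':  lambda img: f'extracted ground environment from {img}',
}

def run_parallel(fused_image):
    """Feed the fused image to every scene task at once, compute power permitting."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(task, fused_image) for name, task in TASKS.items()}
        return {name: future.result() for name, future in futures.items()}

def run_serial(fused_image, scene):
    """Identify the scene first, then run only the matching task program."""
    return {scene: TASKS[scene](fused_image)}

print(run_serial('fused_frame_0001', 'ground'))
```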
As one possible implementation, when the scene is a traffic environment scene, moving objects are identified and pedestrian recognition is performed through object detection.

A traffic environment scene is a scene containing a traffic environment. A traffic environment includes many moving targets, such as moving people, vehicles, and objects, and changes constantly; since object detection can track the motion trajectories of objects, moving objects and pedestrians in such scenes are identified through object detection.

As one possible implementation, when the scene is an obstacle scene, obstacles are identified through object detection.

An obstacle scene is a scene containing static obstacles such as railings or rock piles; object detection can localize different objects and thus identify the obstacles that need to be detected in the obstacle scene.

As one possible implementation, when the scene is a denoising scene, denoising and defogging are performed through image enhancement.

A denoising scene is a scene containing an environment type that requires denoising, such as a foggy, dusty, or low-light environment, i.e., the target environment.

As one possible implementation, when the scene is a track-line scene, edge lines are identified through field segmentation.

A track-line scene is a scene containing track lines; underground rail transport vehicles are used in coal mines to transport coal, blasting materials, and the like, so some road surfaces may contain track lines.

As one possible implementation, when the scene is an underground ground environment scene, the ground environment is extracted through semantic segmentation.

A ground environment scene is a scene that contains no other obstacles or moving objects and consists only of the underground ground.

Step 304: apply the environment perception results of the region to be perceived in the different scenes to different devices, so as to realize the control and application of coal mine underground equipment.

Since different devices use the environment perception results for different purposes, the weight of the environment perception result of each scene is first determined according to the device to which the results are to be applied.

As one possible implementation, when the environment perception results need to be applied to continuously moving equipment such as a mobile robot, the weights of the environment perception results are ordered as: ground environment scene > track-line scene > obstacle scene > traffic environment scene > denoising scene.
As another possible implementation, for equipment performing fixed-point recognition, the weights of the environment perception results are ordered as: denoising scene > ground environment scene > traffic environment scene > track-line scene > obstacle scene.
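These two orderings can be kept as per-device priority lists and converted into normalized weights; the linear weighting below is an illustrative scheme, since the text gives only the ordering and not numeric values.

```python
SCENE_PRIORITY = {
    'mobile_robot': ['ground', 'track', 'obstacle', 'traffic', 'denoise'],
    'fixed_point':  ['denoise', 'ground', 'traffic', 'track', 'obstacle'],
}

def scene_weights(device):
    """Turn a priority order into weights that decrease with rank and sum to 1."""
    order = SCENE_PRIORITY[device]
    raw = [len(order) - rank for rank in range(len(order))]   # 5, 4, 3, 2, 1
    total = sum(raw)
    return {scene: value / total for scene, value in zip(order, raw)}

print(scene_weights('mobile_robot'))   # the ground environment scene gets the largest weight
print(scene_weights('fixed_point'))    # the denoising scene gets the largest weight
```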
After the weights of the environment perception results in the different scenes are obtained, decisions are completed by controlling the underground equipment.

As one possible implementation, for a mobile robot, the environment perception results, once obtained, are transmitted to the robot's own processor; according to an expert system or its own preset processing commands, the processor issues commands to the controller, mainly a motor controller, which realizes operations such as steering, stopping, acceleration, and emergency stopping by controlling the corresponding rotational speeds.

As another possible implementation, for a fixed-point environment, the environment perception results, once obtained, are uploaded to a local server and a cloud server through cloud-edge-device collaboration technology for corresponding data storage and processing.

When a preset dangerous violation occurs, for example falling rocks, a pedestrian straying into a danger zone, or a worker not wearing a safety helmet, the cloud-edge-device collaborative processing center controls the corresponding equipment according to the danger level.

For example, for a worker not wearing a safety helmet, the processing result can be transmitted to a mobile edge controller, which performs linked control of devices such as alarms; for falling rocks and the like, the local or cloud server can record the alarm and issue an alert to the central control center, where it is handled through manual coordination, including closing off the site.
This embodiment provides a coal mine underground environment perception apparatus. Fig. 4 is a schematic structural diagram of a coal mine underground environment perception apparatus provided by an embodiment of the present invention.

As shown in Fig. 4, the coal mine underground environment perception apparatus includes an image acquisition module 401, an image fusion module 402, and an environment perception module 403.

The image acquisition module 401 is configured to acquire a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine.

The image fusion module 402 is configured to fuse the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image.

The environment perception module 403 is configured to input the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.

As one possible implementation, when the environment type of the region to be perceived is a target environment, the image fusion module 402 uses the lidar environment image as the reference image and the visual environment image as the supplementary image during fusion; when fusing the visual environment image and the lidar environment image, the pixel value weight of the reference image is greater than that of the supplementary image.

As one possible implementation, when the environment type of the region to be perceived is a non-target environment, the image fusion module 402, during fusion of the visual environment image and the lidar environment image, uses the visual environment image as the reference image and the lidar environment image as the supplementary image if the imaging distance of the region to be perceived is a first-level distance, and uses the lidar environment image as the reference image if the imaging distance is a second-level distance; the first-level and second-level distances are divided according to distance.

As one possible implementation, the image fusion module 402 is further configured to:

when the visual environment image is captured, obtain the camera-world three-dimensional coordinate information of the visual environment image based on the relationship between the spatial coordinate system and the world coordinate system established during camera calibration;

fuse the visual environment image and the lidar environment image according to the world three-dimensional information of the lidar environment image obtained by the lidar, with the camera-world three-dimensional coordinate information as a reference.

As one possible implementation, the image fusion module 402 is further configured to:

extract first environment feature information from the three-dimensional environment space information corresponding to the visual environment image and second environment feature information from the three-dimensional environment space information corresponding to the lidar environment image, and fuse the first and second environment feature information.

As one possible implementation, the environment perception module 403 is further configured to:

when the scene is a traffic environment scene, identify moving objects and perform pedestrian recognition through object detection;

when the scene is a denoising scene, perform denoising and defogging through image enhancement;

when the scene is a track-line scene, identify edge lines through field segmentation;

when the scene is an underground ground environment scene, extract the ground environment through semantic segmentation.

It should be noted that the foregoing explanations of the embodiments of the coal mine underground environment perception method also apply to the coal mine underground environment perception apparatus of this embodiment and are not repeated here.
To realize the above embodiments, the present invention further provides another coal mine underground environment perception apparatus, including a processor and a memory for storing instructions executable by the processor.

The processor is configured to execute the instructions to implement a coal mine underground environment perception method:

acquiring a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

inputting the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.

To realize the above embodiments, the present invention further provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform a coal mine underground environment perception method, the method including:

acquiring a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

inputting the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.

To realize the above embodiments, the present invention further provides a computer program product; when the instructions in the computer program product are executed by a processor, a coal mine underground environment perception method is performed, the method including:

acquiring a visual environment image and a lidar environment image of a region to be perceived underground in a coal mine;

fusing the visual environment image and the lidar environment image based on the environment type of the region to be perceived, to obtain a fused image;

inputting the fused image into task programs for different scenes, so as to output the environment perception results of the region to be perceived in the corresponding scenes.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples and the features of the different embodiments or examples described in this specification, provided they do not contradict each other.

In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless otherwise specifically defined.

Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing custom logic functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.

The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be regarded as an ordered list of executable instructions for implementing logic functions and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.

Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.

The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.