Technical Field

The present invention relates to the technical field of image processing, and in particular to a video defogging method and device based on three-dimensional point cloud matching and real scene reconstruction.
Background Art

With the development of society, environmental pollution has gradually intensified, and smog appears frequently in more and more cities. This not only harms people's health but also adversely affects computer vision systems that rely on image information. Under the influence of smog, many practical applications such as video surveillance, remote sensing, and autonomous driving are easily compromised, and high-level computer vision tasks such as detection and recognition become difficult to complete. Dehazing has therefore become an increasingly important technology, and finding a simple and effective image dehazing method is crucial to subsequent computer vision research.
Summary of the Invention

Based on this, in order to solve the above problems, the present invention proposes a video defogging method and device based on three-dimensional point cloud matching and real scene reconstruction.
According to a first aspect of the present invention, a video defogging method based on three-dimensional point cloud matching and real scene reconstruction is provided, comprising the following steps:

a three-dimensional point cloud acquisition step: selecting two frames from different periods of the same scene in a video, the earlier frame serving as the haze image and the later frame as the reference image; extracting a three-dimensional point cloud from the haze image, and extracting a three-dimensional point cloud of the same region from the reference image as a matching reference, wherein extracting the three-dimensional point cloud from the haze image includes extracting depth information from the haze image to obtain the original three-dimensional point cloud;

a matching step: matching the two three-dimensional point clouds, finding the projected positions of the successfully matched three-dimensional points in their corresponding images, and obtaining the camera pose from the matched three-dimensional points and their projected positions;

a transmittance calculation step: projecting the reference image onto the depth image plane using the camera's intrinsic and extrinsic parameters to obtain a depth image, and calculating the transmittance of the scene from the depth image, the intrinsic and extrinsic parameters being computed from the camera pose; and

a defogging step: using the obtained transmittance to defog the image to obtain the defogged image.
In some embodiments, in the three-dimensional point cloud acquisition step, the three-dimensional point clouds of the haze image and the reference image are obtained by a camera.

In some embodiments, matching the two three-dimensional point clouds, finding the projected positions of the successfully matched three-dimensional points in their corresponding images, and obtaining the camera pose from the matched points and their projected positions includes:

constructing a Gaussian pyramid with the SIFT algorithm to scale-transform the haze image and the reference image; detecting feature points in the haze image and the reference image with the difference-of-Gaussians operator; computing the dominant orientation of each feature point and describing each feature point in the two images with the gradient information of its local image patch; matching the feature points of the reference image and the haze image by finding, for each feature point in the reference image, the nearest feature point in the haze image and judging according to a set threshold whether the match succeeds; obtaining the projected positions of the successfully matched feature points in their corresponding images; establishing the mapping between the camera coordinate system and the image coordinate system, and computing from the matched feature points and their projected positions the transformation matrix from the camera coordinate system to the image coordinate system; obtaining the camera pose from the transformation matrix; and adjusting the camera pose by minimizing the reprojection error, so that the projection error between matched feature points is minimized. The transformation matrix is computed with the PnP algorithm.
In some embodiments, projecting the reference image onto the depth image plane using the camera's intrinsic and extrinsic parameters to obtain a depth image and calculating the scene transmittance from the depth image includes:

mapping, using the camera's intrinsic and extrinsic parameters, each pixel of the reference image onto the depth image plane according to the projection matrix, and taking the depth value of each mapped pixel as the depth value of the corresponding pixel in the depth image, thereby obtaining the depth image; computing the beta coefficient with Koschmieder's atmospheric optics model; and computing the transmittance from the relationship between transmittance and distance.
In some embodiments, the calculation formula of Koschmieder's atmospheric optics model is:

beta = (1/d0) * ln(I0/I1)

where beta is the beta coefficient; d0 is a reference distance under standard atmospheric conditions; I0 is the luminance value obtained when the camera observes the scene without fog; and I1 is the luminance value obtained when the camera observes the same scene in the presence of fog.
In some embodiments, the formula for computing the transmittance from the relationship between transmittance and distance is:

t(x) = exp(-beta * d(x))

where t(x) is the transmittance, beta is the beta coefficient, and d(x) is the distance between the camera and the object.
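A minimal numeric sketch of these two formulas (the function names and the default d0 = 10 are illustrative; the detailed description below gives 10 km as the usual reference distance):

```python
import math

def beta_coefficient(I0, I1, d0=10.0):
    # Koschmieder's model: beta = (1/d0) * ln(I0/I1), where I0 and I1 are the
    # fog-free and foggy luminance values of the same scene point and d0 is the
    # reference distance under standard atmospheric conditions.
    return (1.0 / d0) * math.log(I0 / I1)

def transmittance(beta, d):
    # Beer-Lambert-style attenuation: t(x) = exp(-beta * d(x))
    return math.exp(-beta * d)
```

A quick consistency check on the pair of formulas: a point observed at the reference distance d0 has transmittance exactly I1/I0, since the logarithm and the exponential cancel.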
In some embodiments, using the obtained transmittance to defog the image to obtain the defogged image includes:

obtaining the dark channel of the foggy image; taking the brightest 0.1% of the pixels from the dark channel map and recording that pixel region; finding, at the corresponding positions of that region in the original foggy image, the value of the point with the highest luminance as the value A; and using the obtained transmittance to defog the image to obtain the defogged image.
In some embodiments, the formula of the dark channel of the foggy image is as follows:

Jdark(x) = min(y∈Ω(x)) min(c∈{r,g,b}) Jc(y)

where Jc denotes each channel of the color image, Ω(x) denotes a window centered on pixel x, and c denotes the image channel.
In some embodiments, the formula for defogging the image using the obtained transmittance is:

J(x) = (I(x) - A(1 - t(x))) / t(x)

where t(x) is the transmittance, A is the global atmospheric light value, I(x) is the foggy image, and J(x) is the defogged image.
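The defogging chain above (dark channel, atmospheric light A from the brightest 0.1% of the dark channel, then the recovery formula) can be sketched in plain NumPy. The 15-pixel window and the lower bound on t(x) are common practical choices rather than values fixed by this description:

```python
import numpy as np

def dark_channel(img, win=15):
    """Per-pixel minimum over the color channels, then a min filter over a win x win window."""
    mins = img.min(axis=2)
    pad = win // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    h, w = mins.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + win, x:x + win].min()
    return out

def atmospheric_light(img, dark):
    """A = highest-luminance pixel among the brightest 0.1% of the dark channel."""
    n = max(1, int(dark.size * 0.001))          # top 0.1% of the dark channel
    idx = np.argsort(dark.ravel())[-n:]
    region = img.reshape(-1, 3)[idx]
    return region[region.sum(axis=1).argmax()]  # brightest point in that region

def recover(img, t, A, t_min=0.1):
    """J(x) = (I(x) - A * (1 - t(x))) / t(x); t_min guards against division by ~0."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return (img - A * (1.0 - t)) / t
```

The loop-based minimum filter is written for clarity; a production implementation would use an erosion filter instead.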
The cameras are all the same stereo camera. A stereo camera acquires three-dimensional point clouds with high precision and accuracy, and can capture and process images at real-time or near-real-time speed, so that three-dimensional point cloud data can be generated quickly.
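The stereo camera's point cloud generation can be sketched as converting a disparity map into three-dimensional points through the pinhole relations Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. In practice the disparity map comes from a stereo matcher; the calibration values below are illustrative:

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Convert a stereo disparity map (in pixels) into an N x 3 point cloud."""
    v, u = np.indices(disp.shape)           # pixel coordinates
    valid = disp > 0                        # zero disparity = no depth information
    Z = np.zeros_like(disp, dtype=float)
    Z[valid] = f * baseline / disp[valid]   # depth from disparity
    X = (u - cx) * Z / f                    # back-project through the pinhole model
    Y = (v - cy) * Z / f
    return np.stack([X[valid], Y[valid], Z[valid]], axis=1)
```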
According to a second aspect of the present invention, a video defogging device based on three-dimensional point cloud matching and real scene reconstruction is provided, comprising:

a three-dimensional point cloud acquisition module, configured to select two frames from different periods of the same scene in a video, the earlier frame serving as the haze image and the later frame as the reference image, to extract a three-dimensional point cloud from the haze image, and to extract a three-dimensional point cloud of the same region from the reference image as a matching reference;

a matching module, configured to match the two three-dimensional point clouds, find the projected positions of the successfully matched three-dimensional points in their corresponding images, and obtain the camera pose from the matched three-dimensional points and their projected positions;

a transmittance calculation module, configured to project the reference image onto the depth image plane using the camera's intrinsic and extrinsic parameters to obtain a depth image, and to calculate the scene transmittance from the depth image, the intrinsic and extrinsic parameters being computed from the camera pose; and

a defogging module, configured to use the obtained transmittance to defog the image to obtain the defogged image.
According to a third aspect of the present invention, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any of the above embodiments.

According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method of any of the above embodiments.

The present invention provides a new video defogging method that can effectively remove haze from a video and restore images of the real scene.
Brief Description of the Drawings

Fig. 1 is a flowchart of some embodiments of the video defogging method based on three-dimensional point cloud matching and real scene reconstruction of the present invention;

Fig. 2 is a schematic structural diagram of some embodiments of the video defogging device based on three-dimensional point cloud matching and real scene reconstruction of the present invention;

Fig. 3 is an internal structure diagram of a computer device for implementing some embodiments of the present invention.

Detailed Description
Embodiments of the invention will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "comprising", when used herein, specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Unless otherwise defined, terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Terms used herein should be interpreted as having a meaning consistent with their meaning in the context of this specification and the relevant art, and not in an idealized or overly formal sense, unless expressly so defined herein.
Fig. 1 shows a flowchart of some embodiments of the video defogging method based on three-dimensional point cloud matching and real scene reconstruction of the present invention.

As shown in Fig. 1, the method includes:

Three-dimensional point cloud acquisition step S102: selecting two frames from different periods of the same scene in a video, the earlier frame serving as the haze image and the later frame as the reference image; extracting a three-dimensional point cloud from the haze image, and extracting a three-dimensional point cloud of the same region from the reference image as a matching reference.

Matching step S104: matching the two three-dimensional point clouds, finding the projected positions of the successfully matched three-dimensional points in their corresponding images, and obtaining the camera pose from the matched three-dimensional points and their projected positions.

Transmittance calculation step S106: projecting the reference image onto the depth image plane using the camera's intrinsic and extrinsic parameters to obtain a depth image, and calculating the scene transmittance from the depth image; the intrinsic and extrinsic parameters are computed from the camera pose.
The intrinsic and extrinsic parameters are computed as follows: after matching, find the corresponding feature point pairs and obtain the fundamental matrix; convert the fundamental matrix into the essential matrix and decompose it; then, from the corresponding projections of the known three-dimensional points in the two images, compute the camera's intrinsic and extrinsic parameter matrices. Errors that may arise in this process are reduced by nonlinear optimization or reprojection error minimization.
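The fundamental-to-essential-to-pose chain described here follows the standard SVD decomposition, sketched below. The decomposition yields four candidate (R, t) pairs; the physically valid one is normally selected by a cheirality (points-in-front-of-the-camera) check, which this sketch omits:

```python
import numpy as np

def essential_from_fundamental(F, K):
    # E = K^T F K for two views sharing the same intrinsic matrix K
    return K.T @ F @ K

def decompose_essential(E):
    """SVD decomposition of the essential matrix into four candidate (R, t) pairs."""
    U, _, Vt = np.linalg.svd(E)
    # force proper rotations (flipping a factor only changes the sign of E,
    # which is defined up to scale anyway)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction (unit vector, up to sign)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```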
Defogging step S108: using the obtained transmittance to defog the image to obtain the defogged image.

The three-dimensional point cloud acquisition step S102 specifically includes:

selecting two frames from different periods of the same scene in a video, the earlier frame serving as the haze image and the later frame as the reference image, and obtaining the three-dimensional point clouds of the haze image and the reference image with a stereo camera.

The matching step S104 specifically includes:

constructing a Gaussian pyramid to scale-transform the haze image (the image to be matched) and the reference image; detecting key points in the two images with the difference-of-Gaussians operator; computing the dominant orientation of each key point, and describing each key point in the two images with the gradient information of its local image patch; matching the feature points of the reference image and the image to be matched by finding, for each feature point in the reference image, the nearest feature point in the image to be matched and judging according to a certain threshold whether the match succeeds; selecting some successfully matched three-dimensional points and obtaining their projected positions in the images; establishing the mapping between the camera coordinate system and the image coordinate system, and computing from the three-dimensional points and their projected positions the transformation matrix from the camera coordinate system to the image coordinate system (computed with the PnP algorithm); obtaining the camera pose from the transformation matrix; and adjusting the camera pose by minimizing the reprojection error, so that the projection error between matched points is minimized.

The transmittance calculation step S106 specifically includes:
mapping, using the camera's intrinsic and extrinsic parameters, each pixel of the reference image onto the depth image plane according to the projection matrix, and taking the depth value of each mapped pixel as the depth value of the corresponding pixel in the depth image, thereby obtaining the depth image; computing the beta coefficient with Koschmieder's atmospheric optics model; and computing the transmittance from the relationship between transmittance and distance.
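The pixel-wise mapping of this step can be sketched as back-projecting a reference pixel with its depth value, transforming it with the extrinsics, and re-projecting it with the intrinsics (any K, R, t here are illustrative calibration values):

```python
import numpy as np

def warp_to_depth_plane(u, v, depth, K, R, t):
    """Map a reference-image pixel (u, v) with known depth into the depth-image
    plane via the projection matrix K [R | t]; returns the target pixel
    coordinates and the depth value carried over to it."""
    X_ref = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project
    X_dst = R @ X_ref + t                                       # extrinsic transform
    uvw = K @ X_dst                                             # intrinsic projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2], X_dst[2]
```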
Further, the formula of Koschmieder's atmospheric optics model is as follows:

beta = (1/d0) * ln(I0/I1)

where d0 is a reference distance under standard atmospheric conditions, usually taken as 10 km; I0 is the luminance value obtained when the camera observes the scene without fog; and I1 is the luminance value obtained when the camera observes the same scene in the presence of fog.
Further, the relationship between transmittance and distance is as follows:

t(x) = exp(-beta * d(x))

where beta is the attenuation coefficient, which can be calculated from the theory of atmospheric optics, and d(x) is the distance between the camera and the object.
The defogging step S108 specifically includes:

obtaining the dark channel of the foggy image; taking the brightest 0.1% of the pixels from the dark channel map; finding, at the corresponding positions in the original foggy image, the value of the point with the highest luminance as the value A; and defogging each frame of the video.
Further, the dark channel formula of the foggy image is as follows:

Jdark(x) = min(y∈Ω(x)) min(c∈{r,g,b}) Jc(y)

where Jc denotes each channel of the color image and Ω(x) denotes a window centered on pixel x.
The formula for the defogged image is:

J(x) = (I(x) - A(1 - t(x))) / t(x)

where t(x) is the transmittance, A is the global atmospheric light value, I(x) is the foggy image, and J(x) is the defogged image.
Fig. 2 shows a schematic structural diagram of some embodiments of the video defogging device based on three-dimensional point cloud matching and real scene reconstruction of the present invention.

As shown in Fig. 2, the video defogging device based on three-dimensional point cloud matching and real scene reconstruction in this embodiment includes:

a three-dimensional point cloud acquisition module 100, configured to select two frames from different periods of the same scene in a video, the earlier frame serving as the haze image and the later frame as the reference image, to extract a three-dimensional point cloud from the haze image, and to extract a three-dimensional point cloud of the same region from the reference image as a matching reference;

a matching module 200, configured to match the two three-dimensional point clouds, find the projected positions of the successfully matched three-dimensional points in their corresponding images, and obtain the camera pose from the matched three-dimensional points and their projected positions;

a transmittance calculation module 300, configured to project the reference image onto the depth image plane using the camera's intrinsic and extrinsic parameters to obtain a depth image, and to calculate the scene transmittance from the depth image, the intrinsic and extrinsic parameters being computed from the camera pose; and

a defogging module 400, configured to use the obtained transmittance to defog the image to obtain the defogged image.

For specific limitations of the video defogging device based on three-dimensional point cloud matching and real scene reconstruction, reference may be made to the limitations of the video defogging method based on three-dimensional point cloud matching and real scene reconstruction above, which are not repeated here. Each module of the above device may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
The present invention also provides a computer device, which may be a terminal, whose internal structure may be as shown in Fig. 3. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the above video defogging method based on three-dimensional point cloud matching and real scene reconstruction. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.

Those skilled in the art will understand that the structure shown in Fig. 3 is only a block diagram of part of the structure related to the solution of the present invention and does not limit the computer device to which the solution of the present invention is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
The present invention also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the above video defogging method based on three-dimensional point cloud matching and real scene reconstruction.

Those of ordinary skill in the art will understand that all or part of the processes in the above method embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus RAM, direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The embodiments of the present invention have thus been described in detail. Some details well known in the art have not been described in order to avoid obscuring the inventive concept. Based on the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.

Although some specific embodiments of the present invention have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present invention. Those skilled in the art should understand that the above embodiments may be modified, or some technical features may be equivalently replaced, without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310731535.5A | 2023-06-20 | 2023-06-20 | Video defogging method and device based on three-dimensional point cloud matching and real scene reconstruction |
| Publication Number | Publication Date |
|---|---|
| CN116703774A (en) | 2023-09-05 |