CN105374039B - Monocular image depth information method of estimation based on contour acuity - Google Patents

Monocular image depth information method of estimation based on contour acuity

Info

Publication number
CN105374039B
CN105374039B (application CN201510786727.1A)
Authority
CN
China
Prior art keywords
contour
image
depth
gradient
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510786727.1A
Other languages
Chinese (zh)
Other versions
CN105374039A (en)
Inventor
马利
景源
李鹏
张玉奇
胡彬彬
牛斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University
Original Assignee
Liaoning University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University
Priority to CN201510786727.1A
Publication of CN105374039A
Application granted
Publication of CN105374039B
Legal status: Active (current)
Anticipated expiration

Abstract


The present invention proposes a monocular image depth information estimation method based on contour sharpness. The method uses the contour sharpness of edges as the blur feature for depth estimation and extracts depth information from low-level image cues. First, edge detection is performed on the image. Next, the edge energy and contour sharpness are computed for the edges in the image and used as the external energy of the contour; combined with the contour's internal energy terms (the contour characteristic energy and the contour distance energy), a contour-tracking model is established, the minimum of the energy function is solved, and the image contours are searched. Then, using the depth-gradient assumption as the prior gradient model, depth values are filled in for the regions bounded by different contour lines, yielding the depth distribution. Finally, the resulting depth image is refined using the original image together with the estimated depth, producing the final disparity map. Experimental results show that the depth estimation algorithm of the present invention is simple and can quickly and accurately estimate the depth map of a monocular image.

Description

Depth Information Estimation Method of Monocular Image Based on Contour Sharpness

Technical Field

The present invention relates to a method for estimating depth information from a monocular image; the method obtains the depth information of the objects shown in a monocular image from low-level image cues.

Background Art

Depth perception is a prerequisite for stereoscopic vision, and applications based on depth information play an important role in scene reconstruction, 3D reconstruction, pattern recognition, object tracking, and related fields. In practice, depth information can be extracted with monocular cameras, multi-view cameras, or depth cameras. Among these, monocular depth estimation has clear advantages: it is simple to operate, has low hardware cost, and can extract depth directly from existing monocular image material.

Monocular depth estimation extracts 2-D and 3-D geometric information about a target, such as color, shape, and coplanarity, from a single image, thereby recovering the target's spatial 3-D information from a small number of known conditions. Most current algorithms rely on high-level or mid-level image cues. For example, semantic labels learned from reference images can be transferred to a target image, yielding relative depth information. Alternatively, image structure information (color, texture, shape, and so on) trained on real disparity images can be used to over-segment the target image, and a discriminatively trained Markov random field then infers the depth. In contrast, methods based on low-level cues require no analysis of image content: depth can be recovered directly from local information, so the algorithms are comparatively simple.

Blur can serve as an important feature for depth estimation from a monocular image. Blur mostly arises from inaccurate focus during imaging or from objects at different depths within the imaged area. Exploiting this property, monocular depth estimation analyzes the defocus in an image, separating foreground from background according to how blurred each region is, to estimate scene depth. For example, starting from blur values measured at edge positions, matting and a Markov random field can diffuse the blur over the whole image to extract relative depth. Alternatively, the defocus blur process can be modeled as heat diffusion, and a non-uniform inverse heat diffusion equation used to estimate the blur at edge positions and recover scene depth. However, blur-diffusion methods are complex and inefficient, which limits their practicality.

Therefore, the present invention exploits low-level image cues and a new blur feature to simplify monocular image depth information estimation.

Summary of the Invention

The present invention proposes a monocular image depth information estimation method based on contour sharpness. The method uses low-level image cues, taking the contour sharpness of edges as the blur feature for extracting object contours, and assigns depths to different objects according to the relation between object contours and object depth edges, thereby obtaining the depth information of the different objects in the image.

The object of the present invention is achieved through the following technical solution:

A monocular image depth information estimation method based on contour sharpness, characterized in that gradient magnitude and contour sharpness are used to reflect how object contours at different depths appear in the image. During depth estimation, not only the gradient magnitude of an edge is considered; the spatial information of the edge is also incorporated, which more fully captures the blur variation across object edges. The method comprises the following steps:

(1) Perform edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;

(2) Define the initial contour lines from the prior depth-gradient model: a set of mutually parallel, equally spaced contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;

The total energy of a contour line is defined as:

Etotal = w1·Eedge + w2·Esharp + w3·Es + w4·Ed

where Eedge is the contour edge energy function, Esharp is the contour sharpness energy function, Es is the contour characteristic energy function, Ed is the contour distance energy function, and w1, w2, w3, w4 are weight control parameters for each term, which can be assigned according to the characteristics of the image.

The contour edge energy function Eedge represents the gradient magnitude of the contour and is defined as the gradient magnitude of the image I(x, y) along the gradient direction, expressed as:

where ∇I(x, y) denotes the gradient magnitude of I(x, y);

The contour sharpness energy function Esharp represents the degree of blur of the contour and is obtained by solving for the gradient profile sharpness. It is defined as:

where the gradient profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient profile variable. Here the gradient profile is the gradient-magnitude curve formed along the one-dimensional path p(q0) obtained by starting from an edge pixel q0(x0, y0) and tracing along the gradient direction toward the edge boundary until the gradient magnitude of the edge no longer changes. The gradient profile sharpness is defined as

where dc(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, s is any point on the gradient profile, and the parameter b is a weight control parameter.

The contour characteristic energy function Es constrains the smoothness of the contour;

The contour distance energy function Ed keeps the contour-tracking curve from leaving the search region;

(3) For each initial contour line, starting from the left side of the image, update the position of each contour point from the bottom of the image toward the top, computing the contour-tracking energy of each contour point according to the total-energy definition in step (2);

(4) Solve for the minimum of the energy function. Search the pixels in the column adjacent to the current contour point for the edge point in P = {p1, p2, …, pn} with the smallest energy value, and take the position of that minimum-energy pixel as the new contour point. Repeating the minimum search from the left side of the image to the right yields the final contour search result, i.e., the target contour lines V';

(5) Using the depth-gradient assumption as the prior gradient model, fill in depth values for the regions bounded by different contour lines and compute the depth distribution;

For the target contour lines V' = {v'0, v'1, …, v'm} in the contour set, the corresponding assigned depth value Depth is:

(6) Optimize the resulting depth image using the grayscale information of the original image together with the obtained depth image:

where Depth(xi) is the input depth image, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a neighboring pixel of xi within Ω(xi), and W(xi) is the normalization factor of the filter. ||xi − xj|| is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) measures their luminance similarity; the spatial coordinates of pixels xi and xj are (xix, xiy) and (xjx, xjy), respectively. The spatial weight coefficient and the chrominance weight coefficient are defined as:

where σs is the variance of the spatial weight and σr is the variance of the chrominance weight.

(7) Obtain the disparity map resulting from depth information estimation of the input monocular image.

The advantage of the present invention is a monocular image depth information estimation method based on contour sharpness. Traditional monocular depth estimation methods rely on high- and mid-level image cues and require learning, training, and image-understanding steps, making the algorithms complex. The present method instead uses low-level image cues and is computationally simple. Unlike traditional methods that distinguish object depth through blur information, the present invention uses contour sharpness to effectively separate object contours when computing the blur feature, obtaining contours of objects at different depths while avoiding steps such as blur diffusion, which improves the practicality of the method. The method uses gradient magnitude and contour sharpness to reflect how object contours at different depths appear in the image; during depth estimation it considers not only the gradient magnitude of edges but also their spatial information, more fully capturing the blur variation across object edges.

Brief Description of the Drawings

Figure 1 is a flow chart of the method.

Figure 2 illustrates the definition of contour sharpness.

Figure 3 is a schematic diagram of the relative depth relationships assigned by the contour lines.

Detailed Description of the Embodiments

The implementation of the present invention is described in detail below with reference to the accompanying drawings and a specific example.

(1) Perform edge detection on the input image with the Canny edge detection algorithm to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
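The edge-detection step can be sketched as follows. Note the assumptions: the function name and the relative threshold are illustrative, and a plain gradient-magnitude threshold stands in for the full Canny detector (which adds Gaussian smoothing, non-maximum suppression, and hysteresis) named in the text.

```python
import numpy as np

def edge_points(img, thresh=0.2):
    """Simplified edge detector: gradient-magnitude thresholding.

    A stand-in for the Canny detector of step (1); returns the edge
    point set P = {p1, p2, ...} as (row, col) coordinates.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)          # central differences per axis
    mag = np.hypot(gx, gy)             # gradient magnitude
    mask = mag > thresh * mag.max()    # relative threshold
    return list(zip(*np.nonzero(mask)))

# Synthetic image with a vertical step edge at column 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
pts = edge_points(img)
print(len(pts) > 0, all(3 <= c <= 5 for _, c in pts))
```

In practice one would call an existing Canny implementation here; the sketch only shows where the point set P comes from.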

(2) Define the initial contour lines from the prior depth-gradient model: a set of mutually parallel, equally spaced contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;

The total energy of a contour line is defined as

Etotal = w1·Eedge + w2·Esharp + w3·Es + w4·Ed

where Eedge is the contour edge energy function, Esharp is the contour sharpness energy function, Es is the contour characteristic energy function, Ed is the contour distance energy function, and w1, w2, w3, w4 are weight control parameters for each term; for generic images they can be set to w1 = 0.25, w2 = 0.5, w3 = 0.125, w4 = 0.125.
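Combining the four energy terms is a weighted sum; a minimal sketch with the default weights stated above (the function name and calling convention are illustrative):

```python
def total_energy(e_edge, e_sharp, e_s, e_d,
                 w=(0.25, 0.5, 0.125, 0.125)):
    """Etotal = w1*Eedge + w2*Esharp + w3*Es + w4*Ed,
    with the default weights suggested for generic images."""
    w1, w2, w3, w4 = w
    return w1 * e_edge + w2 * e_sharp + w3 * e_s + w4 * e_d

# The default weights sum to 1, so equal unit terms give 1.0.
print(total_energy(1.0, 1.0, 1.0, 1.0))
```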

(3) The contour edge energy function Eedge is the gradient magnitude of the image I(x, y) along the gradient direction, defined as

where ∇I(x, y) denotes the gradient magnitude of I(x, y);

where the parameter a is a weight control parameter.

(4) In this method the gradient profile sharpness of an edge represents the degree of blur of the contour, so the contour sharpness energy function Esharp, obtained by solving for the gradient profile sharpness, serves as the measure of contour blur. As shown in Figure 2, starting from an edge pixel q0(x0, y0), trace along the gradient direction toward both sides of the edge until the gradient magnitude no longer changes, yielding the path p(q0). The gradient-magnitude curve formed along the one-dimensional path p(q0) is called the gradient profile. The contour sharpness is defined by the root mean square of the variance of the gradient profile variable, expressed as:

Here dc(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, and s is any point on the gradient profile.
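The sharpness of one gradient profile can be sketched as below. The patent's exact formula is not reproduced in the extracted text, so this is an assumed reconstruction using the standard gradient-profile form built from the quantities named above: the squared curve lengths dc(q, q0) weighted by the normalized magnitudes g(q)/G(q0).

```python
import numpy as np

def profile_sharpness(g, dist):
    """Assumed form: sigma(p(q0)) = sqrt( sum_q g(q)/G(q0) * dc(q,q0)^2 ).

    g    : gradient magnitudes sampled along the 1-D profile p(q0)
    dist : signed curve length dc(q, q0) of each sample from q0
    """
    g = np.asarray(g, float)
    d = np.asarray(dist, float)
    return np.sqrt(np.sum(g / g.sum() * d ** 2))

# A sharp edge concentrates gradient mass at q0; a blurred edge
# spreads it out, so its sigma is larger.
sharp = profile_sharpness([0.1, 1.0, 0.1], [-1, 0, 1])
blurred = profile_sharpness([0.8, 1.0, 0.8], [-1, 0, 1])
print(sharp < blurred)
```

A small sigma thus signals an in-focus (near) contour, a large sigma a blurred (defocused) one, which is what Esharp encodes.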

The sharpness energy function Esharp is defined as:

where the parameter b is a weight control parameter.

(5) The contour characteristic energy function Es acts as a smoothness constraint on contour tracking, ensuring that the tracked curve is smooth while preventing the solution from falling into local extrema.

Let the contour points of a contour line be N = {n0, n1, …, nn}, where n0 is the starting point of the contour; then the contour characteristic energy function is defined as

where ds(ni, ni−1) is the curve length between points ni and ni−1, and the parameter c is a weight control parameter.

(6) The contour distance energy function Ed is the elastic constraint term of contour tracking. It constrains the distances between the contour points of a contour line during tracking so that the tracking curve does not leave the search region and the contour lines do not cross one another.

where de(ni, n0) is the vertical distance between points ni and n0, and d is a weight control parameter.

(7) For each initial contour line, starting from the left side of the image, update the position of each contour point from the bottom of the image toward the top, computing the contour-tracking energy of each contour point according to the total-energy computation of steps (2)–(6);

(8) Solve for the minimum of the energy function. Search the pixels in the column adjacent to the current contour point for the edge point in P = {p1, p2, …, pn} with the smallest energy value, and take the position of that minimum-energy pixel as the new contour point. Repeating the minimum search from the left side of the image to the right yields the final contour search result, i.e., the target contour lines V'.
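A minimal sketch of the left-to-right minimum-energy search, assuming the per-pixel total energy Etotal has already been computed into a 2-D array. The greedy row-neighborhood search (`max_step`) is a simplification of the patent's adjacent-column edge-point search; the function name is illustrative.

```python
import numpy as np

def track_contour(energy, start_row, max_step=1):
    """Greedy left-to-right contour search (step (8), simplified).

    energy : 2-D array of per-pixel contour-tracking energy
    At each column the next contour point is the minimum-energy row
    within max_step of the current row.
    """
    h, w = energy.shape
    rows = [start_row]
    for col in range(1, w):
        r = rows[-1]
        lo, hi = max(0, r - max_step), min(h, r + max_step + 1)
        cand = np.arange(lo, hi)
        rows.append(cand[np.argmin(energy[lo:hi, col])])
    return rows

# An energy valley along row 2: the tracker locks onto it.
E = np.ones((5, 6))
E[2, :] = 0.0
print(track_contour(E, start_row=1))
```

A dynamic-programming pass over all columns would find the global minimum; the greedy variant is just the shortest illustration of "choose the minimum-energy neighbor, move right, repeat".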

(9) Using a depth gradient that deepens gradually from bottom to top as the prior gradient model, fill in depth values for the regions bounded by different contour lines according to Figure 3 and compute the depth distribution.

For the target contour lines V' = {v'0, v'1, …, v'm} in the contour set, the corresponding depth values are assigned.
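The bottom-to-top assignment can be sketched as a ramp over the contour indices. The assigned-depth formula itself is not reproduced in the extracted text, so the linear form below (and the 0–255 range) is an assumption consistent with the "gradually deepening" prior.

```python
def assign_depths(num_contours, d_max=255):
    """Assign each contour line v'_i a depth under the bottom-to-top
    depth-gradient prior. Assumed linear ramp: Depth(i) = d_max * i / m,
    i = 0..m, with i = 0 the bottom (nearest) contour line."""
    m = num_contours - 1
    return [d_max * i / m for i in range(num_contours)]

print(assign_depths(5))  # [0.0, 63.75, 127.5, 191.25, 255.0]
```

Every pixel between contour line v'_i and v'_{i+1} then receives the depth of its bounding contour, producing the piecewise depth distribution of Figure 3.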

(10) Optimize the resulting depth image using the original image information together with the obtained depth image:

where Depth(xi) is the input depth image, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a neighboring pixel of xi within Ω(xi), and W(xi) is the normalization factor of the filter. ‖xi − xj‖ is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) measures their luminance similarity; the spatial coordinates of pixels xi and xj are (xix, xiy) and (xjx, xjy), respectively. The spatial weight coefficient and the chrominance weight coefficient are defined as:

where σs is the variance of the spatial weight and σr is the variance of the chrominance weight.
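The refinement of step (10) is a joint (cross) bilateral filter: spatial weights come from ‖xi − xj‖ and range weights from the intensity similarity I(xi) − I(xj) of the original image, matching the quantities named above. A minimal sketch; the function and parameter names are illustrative, and the O(n·r²) loops favor clarity over speed.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Guided (joint) bilateral refinement of a depth map.

    depth : depth image to smooth
    guide : original-image intensity supplying the range weights
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        # spatial weight from pixel distance
                        ws = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        # range weight from guide-intensity similarity
                        wr = np.exp(-(guide[i, j] - guide[ni, nj]) ** 2
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * depth[ni, nj]
                        den += ws * wr
            out[i, j] = num / den  # den includes W(xi) normalization
    return out

# Depth is smoothed within uniform guide regions, while the sharp
# guide edge at column 2 keeps the two depth levels separate.
guide = np.zeros((4, 4)); guide[:, 2:] = 1.0
depth = np.zeros((4, 4)); depth[:, 2:] = 10.0
ref = joint_bilateral(depth, guide)
print(ref[0, 0] < 1.0, ref[0, 3] > 9.0)
```

This is why the step needs the original image: depth discontinuities are snapped to image edges instead of being blurred across them.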

(11) Obtain the disparity map resulting from depth information estimation of the input monocular image.

Claims (3)

1. A monocular image depth information estimation method based on contour sharpness, characterized in that it comprises the following steps:
(1) performing edge detection on the input image to obtain the edge points P = {p1, p2, …, pn} of the objects in the image;
(2) defining the initial contour lines from a prior depth-gradient model: a set of mutually parallel, equally spaced contour lines V = {v0, v1, …, vm}, where v(s) = [x(s), y(s)] and x(s), y(s) are the coordinates of point s on the initial contour line;
the total energy of a contour line being defined as
Etotal = w1·Eedge + w2·Esharp + w3·Es + w4·Ed
where Eedge is the contour edge energy function, representing the gradient magnitude of the contour; Esharp is the contour sharpness energy function, representing the degree of blur of the contour; Es is the contour characteristic energy function, constraining the smoothness of the contour; Ed is the contour distance energy function, keeping the contour-tracking curve from leaving the search region; and w1, w2, w3, w4 are weight control parameters for each term;
(3) for each initial contour line, starting from the left side of the image, updating the position of each contour point from the bottom of the image toward the top, computing the contour-tracking energy of each contour point according to the total-energy definition in step (2);
(4) solving for the minimum of the energy function: searching the pixels in the column adjacent to the current contour point for the edge point in P = {p1, p2, …, pn} with the smallest energy value, taking the position of that minimum-energy pixel as the new contour point, and repeating the minimum search from the left side of the image to the right to obtain the final contour search result, i.e., the target contour lines V';
(5) using the depth-gradient assumption as the prior gradient model, filling in depth values for the regions bounded by different contour lines and computing the depth distribution; for the target contour lines V' = {v'0, v'1, …, v'm} in the contour set, the depth value Depth assigned for i = 0 … m being:
(6) optimizing the resulting depth image using the grayscale information of the original image together with the obtained depth image:
where Depth(xi) is the input depth image, Ω(xi) is the neighborhood centered on pixel xi, I(xi) is the luminance (I) component of pixel xi, xj is a neighboring pixel of xi within Ω(xi), W(xi) is the normalization factor of the filter, ||xi − xj|| is the spatial Euclidean distance between the two pixels, and I(xi) − I(xj) measures their luminance similarity, the spatial coordinates of pixels xi and xj being (xix, xiy) and (xjx, xjy), respectively; the spatial weight coefficient and the chrominance weight coefficient being defined as:
where σs is the variance of the spatial weight and σr is the variance of the chrominance weight;
(7) obtaining the disparity map resulting from depth information estimation of the input monocular image.

2. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour edge energy function Eedge is the gradient magnitude of the image I(x, y) along the gradient direction, defined as:
where ∇I(x, y) denotes the gradient magnitude of I(x, y), and the parameter a is a weight control parameter.

3. The monocular image depth information estimation method based on contour sharpness according to claim 1, characterized in that the contour sharpness energy function Esharp is obtained by solving for the gradient profile sharpness and is defined as:
where the gradient profile sharpness σ(p(q0)) is the root mean square of the variance of the gradient profile variable; here the gradient profile is the gradient-magnitude curve formed along the one-dimensional path p(q0) obtained by starting from an edge pixel q0(x0, y0) and tracing along the gradient direction toward the edge boundary until the gradient magnitude of the edge no longer changes; the gradient profile sharpness being defined as
where dc(q, q0) is the curve length between points q and q0 on the gradient profile, g(q) is the gradient magnitude at point q, G(q0) is the sum of the gradient magnitudes of all points on the gradient profile, s is any point on the gradient profile, and the parameter b is a weight control parameter.
CN201510786727.1A | 2015-11-16 | 2015-11-16 | Monocular image depth information method of estimation based on contour acuity | Active | CN105374039B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510786727.1A (CN105374039B) | 2015-11-16 | 2015-11-16 | Monocular image depth information method of estimation based on contour acuity

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201510786727.1A (CN105374039B) | 2015-11-16 | 2015-11-16 | Monocular image depth information method of estimation based on contour acuity

Publications (2)

Publication Number | Publication Date
CN105374039A (en) | 2016-03-02
CN105374039B | 2018-09-21

Family

ID=55376211

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201510786727.1A (Active, CN105374039B (en)) | Monocular image depth information method of estimation based on contour acuity | 2015-11-16 | 2015-11-16

Country Status (1)

Country | Link
CN | CN105374039B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107204010B (en)* | 2017-04-28 | 2019-11-19 | 中国科学院计算技术研究所 | A monocular image depth estimation method and system
CN107582001B (en)* | 2017-10-20 | 2020-08-11 | 珠海格力电器股份有限公司 | Dish washing machine and control method, device and system thereof
CN108647713B (en)* | 2018-05-07 | 2021-04-02 | 宁波华仪宁创智能科技有限公司 | Embryo boundary identification and laser track fitting method
TWI678681B | 2018-05-15 | 2019-12-01 | 緯創資通股份有限公司 | Method, image processing device, and system for generating depth map
CN108932734B (en)* | 2018-05-23 | 2021-03-09 | 浙江商汤科技开发有限公司 | Monocular image depth recovery method and device and computer equipment
CN109087346B (en)* | 2018-09-21 | 2020-08-11 | 北京地平线机器人技术研发有限公司 | Monocular depth model training method and device and electronic equipment
CN112446946B (en)* | 2019-08-28 | 2024-07-09 | 深圳市光鉴科技有限公司 | Depth reconstruction method, system, equipment and medium based on sparse depth and boundary
CN112396645B (en)* | 2020-11-06 | 2022-05-31 | 华中科技大学 | Monocular image depth estimation method and system based on convolution residual learning
CN114022567A (en)* | 2021-11-09 | 2022-02-08 | 浙江商汤科技开发有限公司 | Pose tracking method and device, electronic equipment and storage medium
CN114841969B (en)* | 2022-05-07 | 2024-12-27 | 辽宁大学 | A forged face identification method based on color gradient texture representation
CN116503821B (en)* | 2023-06-19 | 2023-08-25 | 成都经开地理信息勘测设计院有限公司 | Road identification recognition method and system based on point cloud data and image recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101840574A (en)* | 2010-04-16 | 2010-09-22 | Xidian University | Depth estimation method based on edge pixel features
CN102883175A (en)* | 2012-10-23 | 2013-01-16 | Qingdao Hisense Xinxin Technology Co., Ltd. | Methods for extracting depth map, judging video scene change and optimizing edge of depth map
CN103793918A (en)* | 2014-03-07 | 2014-05-14 | Shenzhen Chenzhuo Technology Co., Ltd. | Image definition detecting method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8184196B2 (en)* | 2008-08-05 | 2012-05-22 | Qualcomm Incorporated | System and method to generate depth data using edge detection
US8248410B2 (en)* | 2008-12-09 | 2012-08-21 | Seiko Epson Corporation | Synthesizing detailed depth maps from images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yong Ju Jung et al., "A novel 2D-to-3D conversion technique based on relative height depth cue," Proceedings of the SPIE, 2009-02-18, pp. 1-8.*
Natalia Neverova et al., "Edge Based Method for Sharp Region Extraction from Low Depth of Field Images," Visual Communications and Image Processing (VCIP), 2012 IEEE, 2012-11-30, pp. 1-6.*

Also Published As

Publication number | Publication date
CN105374039A (en) | 2016-03-02

Similar Documents

Publication | Title
CN105374039B (en) | Monocular image depth information method of estimation based on contour acuity
CN111066065B (en) | Systems and methods for hybrid deep regularization
CN104574366B (en) | A kind of extracting method in the vision significance region based on monocular depth figure
CN105374019B (en) | A kind of more depth map fusion methods and device
Yang et al. | Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
CN103426182B (en) | The electronic image stabilization method of view-based access control model attention mechanism
CN104680496B (en) | A kind of Kinect depth map restorative procedures based on color images
Liu et al. | Guided inpainting and filtering for kinect depth maps
CN106651853B (en) | Establishment method of 3D saliency model based on prior knowledge and depth weight
CN106447725B (en) | Spatial target posture method of estimation based on the matching of profile point composite character
CN107025660B (en) | A method and device for determining image parallax of binocular dynamic vision sensor
CN106952222A (en) | A kind of interactive image weakening method and device
CN110853151A (en) | Three-dimensional point set recovery method based on video
CN103826032B (en) | Depth map post-processing method
CN104463870A (en) | Image salient region detection method
WO2018053952A1 (en) | Video image depth extraction method based on scene sample library
CN111476812A (en) | Map segmentation method and device, pose estimation method and equipment terminal
KR101125061B1 (en) | A Method For Transforming 2D Video To 3D Video By Using LDI Method
CN103646397B (en) | Real-time synthetic aperture perspective imaging method based on multi-source data fusion
CN104778673B (en) | A kind of improved gauss hybrid models depth image enhancement method
CN106447718A (en) | 2D-to-3D depth estimation method
CN113077504B (en) | Depth map generation method for large scenes based on multi-granularity feature matching
CN110148168A (en) | A kind of three mesh camera depth image processing methods based on the biradical line of size
Fan et al. | Collaborative three-dimensional completion of color and depth in a specified area with superpixels
CN106652044A (en) | Virtual scene modeling method and system

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
