CN101902657A - A method for generating virtual multi-viewpoint images based on depth map layering - Google Patents

A method for generating virtual multi-viewpoint images based on depth map layering

Info

Publication number: CN101902657A
Authority: CN (China)
Prior art keywords: depth, index, layer, image, dimensional image
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN 201010228696
Other languages: Chinese (zh)
Other versions: CN101902657B (en)
Inventors: 席明, 薛玖飞, 王梁昊, 李东晓, 张明
Current assignee: Wanwei Display Technology Shenzhen Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Zhejiang University (ZJU)
Application filed by Zhejiang University (ZJU)
Priority application: CN2010102286965A, granted as CN101902657B
Status: Expired - Fee Related; anticipated expiration recorded


Abstract

(Translated from Chinese)

The invention discloses a method for generating virtual multi-viewpoint images based on depth map layering, comprising the following steps: (1) preprocess the depth image to be processed; (2) layer the preprocessed depth map to obtain a layered depth image; (3) select the depth layer on which the camera array focuses, and determine the foreground and background layers; (4) layer the two-dimensional image to be processed to obtain a layered two-dimensional image; (5) calculate the disparity value of the two-dimensional image layer corresponding to each depth layer; (6) extend the layered two-dimensional image to obtain an extended layered two-dimensional image; (7) obtain the virtual two-dimensional image at each virtual viewpoint position through a weighted horizontal translation algorithm. The invention requires no parameters of a virtual multi-viewpoint camera array model, can quickly and effectively generate the virtual multi-viewpoint images required by a multi-viewpoint autostereoscopic display system, and tolerates errors in the input depth image well.

Description

(Translated from Chinese)
A Method for Generating Virtual Multi-Viewpoint Images Based on Depth Map Layering

Technical Field

The invention relates to methods for generating virtual multi-viewpoint images in multi-viewpoint autostereoscopic display systems, and in particular to a method for generating virtual multi-viewpoint images based on depth map layering.

Background Art

Multi-viewpoint autostereoscopic display technology simultaneously presents images of the same scene from multiple viewpoints to the audience, who can see stereoscopic images from multiple positions without wearing glasses. The technology requires multiple two-dimensional images of the same scene taken from different viewpoints.

To obtain multiple two-dimensional images of the same scene from different viewpoints, there are currently two main approaches.

One approach is to shoot with a camera array composed of multiple cameras and transmit the multiple two-dimensional image streams simultaneously to the display terminal for stereoscopic composite display. The camera array is expensive, and transmitting multiple two-dimensional image streams consumes so much bandwidth that it cannot be carried on existing networks. The other approach is to shoot with two cameras, compute a depth map by stereo matching, and transmit one two-dimensional image stream plus one depth image stream; the display terminal then reconstructs two-dimensional images for multiple virtual viewpoints and performs stereoscopic composite display. This approach has low shooting cost, adds little bandwidth to video transmission, can use existing networks directly, and can generate as many virtual-viewpoint two-dimensional images as the autostereoscopic display requires, making it more flexible. Research on generating multiple virtual-viewpoint images from a two-dimensional image and a depth image therefore has practical significance: it not only provides rich material for stereoscopic display but also greatly reduces the cost of content production.

The mainstream algorithm for generating multiple virtual-viewpoint images from a two-dimensional image and a depth image is DIBR (Depth-Image-Based Rendering). Its principle is to map each pixel of the input two-dimensional image to a point in three-dimensional space according to the input depth image and the camera parameters, and then map these spatial points onto the imaging planes of the virtual cameras according to the parameters of the virtual camera array model, yielding the two-dimensional images of the virtual viewpoints. This method requires complex mapping operations, is computationally expensive and slow, demands highly accurate depth data, and the parameters of the camera array model are not easy to obtain.
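The 3-D warp at the heart of DIBR, as described above, can be sketched for a single pixel with a generic pinhole-camera formulation. This is an illustration of the prior-art technique only; the intrinsics `K_ref`/`K_virt` and the pose `R`, `t` are hypothetical parameters, not values from the patent.

```python
import numpy as np

def dibr_warp(u, v, Z, K_ref, K_virt, R, t):
    """Back-project pixel (u, v) with depth Z using the reference camera's
    intrinsics, transform into the virtual camera's frame, and re-project."""
    p_ref = np.array([u, v, 1.0])
    P = Z * (np.linalg.inv(K_ref) @ p_ref)   # 3-D point in reference camera space
    P_virt = R @ P + t                       # into the virtual camera's space
    p = K_virt @ P_virt                      # homogeneous pixel in the virtual view
    return p[0] / p[2], p[1] / p[2]
```

With identical cameras (`R = I`, `t = 0`) the pixel maps to itself, which is a quick sanity check of the formulation.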

Summary of the Invention

The purpose of the present invention is to overcome the defects and deficiencies of the prior art by proposing a method for generating virtual multi-viewpoint images based on depth map layering.

The method for generating virtual multi-viewpoint images based on depth map layering comprises the following steps:

(1) Perform a median filtering operation on the reference-viewpoint depth image to be processed, and extend the contours of objects with small depth values in the horizontal direction, obtaining a filtered and extended depth image;

(2) According to the set number of layers, layer the filtered and extended depth image, obtaining a layered depth image;

(3) Select one depth layer of the layered depth image as the focus layer of the camera array; layers with depth values greater than the focus layer's are background layers, and layers with depth values smaller than the focus layer's are foreground layers;

(4) For the reference-viewpoint two-dimensional image to be processed, layer the two-dimensional image according to its corresponding layered depth image, obtaining a layered two-dimensional image;

(5) From the set maximum disparity between the reference viewpoint's foreground layer and that of its adjacent viewpoint, calculate the maximum disparity between the foreground and background layers of adjacent viewpoints of the virtual camera array, and, combining the distance from each depth layer to the focus layer, calculate the disparity value of the two-dimensional image layer corresponding to each depth layer;

(6) According to the relative distance from each virtual viewpoint position to the reference viewpoint position and the maximum disparity between the foreground and background layers of adjacent viewpoints of the camera array, extend the layered two-dimensional image, obtaining an extended layered two-dimensional image;

(7) According to the relative distance from each virtual viewpoint position to the reference viewpoint position and the disparity value of each two-dimensional image layer, obtain the virtual two-dimensional image at each virtual viewpoint position through a weighted horizontal translation algorithm.

The step of performing a median filtering operation on the reference-viewpoint depth image to be processed and horizontally extending objects with smaller depth values, obtaining a filtered and extended depth image, comprises:

(a) Apply a median filter with a 5×5 window to the reference-viewpoint depth image $I_{Depth}(x,y)$, obtaining a smoothed depth map $I'_{Depth}(x,y)$; $x$ is the horizontal coordinate of an image pixel and $y$ the vertical coordinate, with $x = 0, 1, 2, \ldots, H-1$ and $y = 0, 1, 2, \ldots, V-1$, where $H$ is the horizontal resolution of the image and $V$ its vertical resolution;

(b) Scan the smoothed depth map $I'_{Depth}(x,y)$ obtained in step (a) along the horizontal direction, compute the absolute depth difference between each pair of horizontally adjacent pixels, compare it with a preset threshold $th$, and determine the direction $\mathrm{dir}(x,y)$ in which the contour of the object with the smaller depth value is extended horizontally: $-1$ means extend horizontally to the left, $1$ means extend horizontally to the right, and $0$ means no extension, as follows:

$$
\mathrm{dir}(x,y)=\begin{cases}
-1 & |I'_{Depth}(x,y)-I'_{Depth}(x+1,y)|>th \ \text{and}\ I'_{Depth}(x,y)>I'_{Depth}(x+1,y)\\
1 & |I'_{Depth}(x,y)-I'_{Depth}(x+1,y)|>th \ \text{and}\ I'_{Depth}(x,y)<I'_{Depth}(x+1,y)\\
0 & \text{otherwise}
\end{cases}
$$

(c) Initialize the assignment flag $\mathrm{flag}(x,y)$ of every pixel to zero:

$$
\mathrm{flag}(x,y)=0,\quad \forall x=0,1,2,\ldots,H-1,\ \forall y=0,1,2,\ldots,V-1
$$

where $\forall$ denotes "for any given";

(d) According to the extension direction $\mathrm{dir}(x,y)$ of each pixel obtained in step (b), assign values to the pixels to be extended in the smoothed depth map $I'_{Depth}(x,y)$ obtained in step (a), obtaining the depth map $I''_{Depth}(x,y)$ and recording the corresponding assignment flags $\mathrm{flag}(x,y)$:

$$
I''_{Depth}(x+\mathrm{dir}(x,y)\cdot i,\ y)=\begin{cases}
I'_{Depth}(x+1,y) & \mathrm{dir}(x,y)=-1\\
I'_{Depth}(x,y) & \mathrm{dir}(x,y)=1
\end{cases},\quad i=0,1,2,\ldots,K-1
$$

$$
\mathrm{flag}(x+\mathrm{dir}(x,y)\cdot i,\ y)=1\quad \text{if } |\mathrm{dir}(x,y)|=1,\quad i=0,1,2,\ldots,K-1
$$

where $K$ is the number of extension pixels;

(e) Assign values to the pixels of the depth map $I''_{Depth}(x,y)$ obtained in step (d) whose assignment flag is $\mathrm{flag}(x,y)=0$, obtaining the extended depth map $I^{E}_{Depth}(x,y)$:

$$
I^{E}_{Depth}(x,y)=\begin{cases}
I'_{Depth}(x,y) & \mathrm{flag}(x,y)=0\\
I''_{Depth}(x,y) & \mathrm{flag}(x,y)=1
\end{cases}
$$
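Steps (a) through (e) can be sketched as follows. The threshold `th` and the extension width `K` are illustrative values (the patent leaves both as parameters), and the 5×5 median filter is written out in plain NumPy to keep the sketch self-contained:

```python
import numpy as np

def median_5x5(img):
    """Step (a): 5x5 median filter with edge padding."""
    pad = np.pad(img, 2, mode='edge')
    out = np.empty_like(img)
    V, H = img.shape
    for y in range(V):
        for x in range(H):
            out[y, x] = np.median(pad[y:y + 5, x:x + 5])
    return out

def preprocess_depth(depth, th=10, K=4):
    """Steps (a)-(e): smooth the depth map, then extend the contours of
    near (small-depth) objects horizontally by K pixels."""
    V, H = depth.shape
    smooth = median_5x5(depth)
    extended = smooth.copy()
    flag = np.zeros((V, H), dtype=bool)            # step (c)
    for y in range(V):                             # steps (b) and (d)
        for x in range(H - 1):
            if abs(int(smooth[y, x]) - int(smooth[y, x + 1])) > th:
                if smooth[y, x] > smooth[y, x + 1]:
                    dir_, val = -1, smooth[y, x + 1]   # near object on the right: extend left
                else:
                    dir_, val = 1, smooth[y, x]        # near object on the left: extend right
                for i in range(K):
                    xi = x + dir_ * i
                    if 0 <= xi < H:
                        extended[y, xi] = val
                        flag[y, xi] = True
    return np.where(flag, extended, smooth)        # step (e)
```

On a synthetic depth map with a far region (200) meeting a near region (50), the near region's contour grows `K` pixels into the far side, which is exactly the behavior the piecewise formulas above describe.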

The step of layering the filtered and extended depth image according to the set number of layers, obtaining a layered depth image, comprises:

(f) According to the set number of layers $N$, where $N$ is an integer greater than or equal to 1, compute for every pixel of the extended depth map $I^{E}_{Depth}(x,y)$ obtained in step (e) the index value $\mathrm{Index}(x,y)$ of its depth layer, given by the following formula:

[formula image in the source; $\lfloor\cdot\rfloor$ denotes taking the largest integer less than or equal to its argument]

(g) From the pixel depth-layer index values $\mathrm{Index}(x,y)$ obtained in step (f), obtain the layered depth map $I^{index}_{Depth}(x,y)$ of each layer. The smaller the index value, the closer the depth layer is to the viewer; the larger the index value, the farther the depth layer is from the viewer:

$$
I^{index}_{Depth}(x,y)=\begin{cases}
I^{E}_{Depth}(x,y) & \mathrm{Index}(x,y)=index\\
255 & \text{otherwise}
\end{cases},\quad index=0,1,2,\ldots,N-1.
$$
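Steps (f) and (g) can be sketched as follows. Since the patent's Index formula survives only as an image in the source, a uniform quantization $\mathrm{Index}(x,y)=\lfloor I^{E}_{Depth}(x,y)\cdot N/256\rfloor$ for 8-bit depth is assumed here; it is consistent with the floor operation the text describes but is an assumption, not the patent's confirmed formula:

```python
import numpy as np

def layer_depth(depth_ext, N):
    """Steps (f)-(g): quantize each pixel of the extended depth map into one
    of N layers and build the per-layer depth maps."""
    # Assumed uniform quantization (the patent's formula is an image).
    index = (depth_ext.astype(np.int64) * N) // 256
    # Step (g): 255 marks pixels that do not belong to the layer.
    layers = [np.where(index == k, depth_ext, 255) for k in range(N)]
    return index, layers
```

With small depth meaning "near", a small layer index corresponds to a layer close to the viewer, matching the convention stated above.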

The step of selecting one depth layer of the layered depth image as the focus layer of the camera array, with layers of greater depth value as background layers and layers of smaller depth value as foreground layers, comprises:

(h) According to the set camera-array focus layer $focus$, determine the foreground and background layers: depth layers with index $> focus$ are farther from the viewer than the focus layer and are background layers; depth layers with index $< focus$ are closer to the viewer than the focus layer and are foreground layers.

The step of layering the reference-viewpoint two-dimensional image to be processed according to its corresponding layered depth image, obtaining a layered two-dimensional image, comprises:

(i) Using the pixel depth-layer index values $\mathrm{Index}(x,y)$ obtained in step (f), layer the reference-viewpoint two-dimensional image $I_{Color}(x,y)$, obtaining the layered two-dimensional image $I^{index}_{Color}(x,y)$ of each layer while recording each layer's assignment flags $F^{index}_{Color}(x,y)$. The smaller the index value, the closer the two-dimensional image layer is to the viewer; the larger the index value, the farther it is from the viewer:

$$
I^{index}_{Color}(x,y)=\begin{cases}
I_{Color}(x,y) & \mathrm{Index}(x,y)=index\\
0 & \text{otherwise}
\end{cases},\quad index=0,1,2,\ldots,N-1
$$

$$
F^{index}_{Color}(x,y)=\begin{cases}
1 & \mathrm{Index}(x,y)=index\\
0 & \text{otherwise}
\end{cases},\quad index=0,1,2,\ldots,N-1.
$$
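Step (i) can be sketched directly from the two formulas above; the flag array F marks which pixels of each layer carry real image data (1) versus filler (0):

```python
import numpy as np

def layer_color(color, index, N):
    """Step (i): split the reference 2-D image into N layers using the
    depth-layer index, recording per-layer assignment flags."""
    layers, flags = [], []
    for k in range(N):
        mask = (index == k)
        layers.append(np.where(mask, color, 0))   # 0 outside the layer
        flags.append(mask.astype(np.uint8))       # 1 = real pixel of this layer
    return layers, flags
```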

The step of calculating, from the set maximum disparity between the reference viewpoint's foreground layer and that of its adjacent viewpoint, the maximum disparity between the foreground and background layers of adjacent viewpoints of the virtual camera array, and, combining the distance from each depth layer to the focus layer, calculating the disparity value of the two-dimensional image layer corresponding to each depth layer, comprises:

(j) From the set maximum disparity $\mathrm{MaxD}$ between the reference viewpoint's foreground layer and that of its adjacent viewpoint, where $\mathrm{MaxD}$ is an integer greater than or equal to 1, calculate the maximum disparity $\mathrm{MaxR}$ between the foreground and background layers of adjacent viewpoints of the camera array:

$$
\mathrm{MaxR}=(\mathrm{MaxD}/focus)\cdot(N-1);
$$

(k) From the maximum disparity $\mathrm{MaxR}$ obtained in step (j), combined with the distance from each depth layer to the focus layer, calculate the disparity value $\mathrm{Dis}_{index}$ of the two-dimensional image layer corresponding to each depth layer, given by the following formula:

[formula image in the source]
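Steps (j) and (k) can be sketched as follows. MaxR follows the patent's stated formula; the per-layer formula for Dis_index survives only as an image in the source, so a linear mapping proportional to each layer's signed distance from the focus layer is assumed here. This assumption is consistent with the surrounding text (zero disparity at the focus layer, magnitude MaxD at layer 0, total span MaxR), but it is not the patent's confirmed formula:

```python
def layer_disparities(N, focus, MaxD):
    """Steps (j)-(k): per-layer disparity values."""
    MaxR = (MaxD / focus) * (N - 1)                           # step (j), per the patent
    # Assumed step (k): disparity linear in the distance to the focus layer.
    return [MaxR * (k - focus) / (N - 1) for k in range(N)]
```

For example, with N = 5 layers, focus = 2 and MaxD = 4, MaxR is 8 and the layer disparities run from −4 (nearest foreground) through 0 (focus layer) to +4 (farthest background).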

The step of extending the layered two-dimensional image according to the relative distance from each virtual viewpoint position to the reference viewpoint position and the maximum disparity between the foreground and background layers of adjacent viewpoints of the camera array, obtaining an extended layered two-dimensional image, comprises:

(l) Set the number of virtual viewpoints in the virtual parallel multi-viewpoint array to $M$, where $M$ is an integer greater than or equal to 2, and take the reference viewpoint as the central viewpoint of the array, i.e. the viewpoint at its middlemost position. Let the index value of the reference viewpoint be zero; then, in right-to-left order along the horizontal direction, the index values $sub_i$ of the virtual viewpoints take the values:

[formula image in the source]

(m) From the index value $sub_i$ of each virtual viewpoint obtained in step (l), compute the relative distance from the virtual viewpoint position to the reference viewpoint position: $D_{sub\_i}=sub_i-0=sub_i$;

(n) Scan the assignment flags $F^{index}_{Color}(x,y)$ of each two-dimensional image layer obtained in step (i) along the horizontal direction, compute the absolute difference between each pair of adjacent flags, compare it with 0, and determine the horizontal extension direction $\mathrm{dir}^{index}_{Color}(x,y)$ of the two-dimensional image layer $I^{index}_{Color}(x,y)$: $-1$ means extend horizontally to the left, $1$ means extend horizontally to the right, and $0$ means no extension:

$$
\mathrm{dir}^{index}_{Color}(x,y)=\begin{cases}
-1 & |F^{index}_{Color}(x,y)-F^{index}_{Color}(x+1,y)|>0 \ \text{and}\ F^{index}_{Color}(x,y)<F^{index}_{Color}(x+1,y)\\
1 & |F^{index}_{Color}(x,y)-F^{index}_{Color}(x+1,y)|>0 \ \text{and}\ F^{index}_{Color}(x,y)>F^{index}_{Color}(x+1,y)\\
0 & \text{otherwise}
\end{cases},\quad index=0,1,2,\ldots,N-1;
$$

(o) Initialize the assignment flags $F^{index}_{EC}(x,y)$ of each extended two-dimensional image layer to zero:

$$
F^{index}_{EC}(x,y)=0,\quad \forall x=0,1,2,\ldots,H-1,\ \forall y=0,1,2,\ldots,V-1;
$$

(p) According to the horizontal extension directions $\mathrm{dir}^{index}_{Color}(x,y)$ obtained in step (n), assign values to the pixels to be extended of each two-dimensional image layer $I^{index}_{Color}(x,y)$ obtained in step (i), obtaining the two-dimensional image layer $I^{index}_{EC}(x,y)$ while recording the assignment flags $F^{index}_{EC}(x,y)$:

$$
I^{index}_{EC}(x+\mathrm{dir}^{index}_{Color}(x,y)\cdot j,\ y)=I_{Color}(x+\mathrm{dir}^{index}_{Color}(x,y)\cdot j,\ y)\quad \text{if } |\mathrm{dir}^{index}_{Color}(x,y)|=1,\quad j=0,1,2,\ldots,L-1
$$

$$
F^{index}_{EC}(x+\mathrm{dir}^{index}_{Color}(x,y)\cdot j,\ y)=1\quad \text{if } |\mathrm{dir}^{index}_{Color}(x,y)|=1,\quad j=0,1,2,\ldots,L-1
$$

where $L$ is the number of extension pixels of the two-dimensional image layer $I^{index}_{EC}(x,y)$, $L=D_{sub}\cdot \mathrm{MaxR}$;

(q) Assign values to the pixels of the two-dimensional image layer $I^{index}_{EC}(x,y)$ obtained in step (p) whose assignment flag is $F^{index}_{EC}(x,y)=0$, obtaining the final extended two-dimensional image layer $I^{index}_{ExColor}(x,y)$ and recording the final layer assignment flags $F^{index}_{ExColor}(x,y)$:

$$
I^{index}_{ExColor}(x,y)=\begin{cases}
I^{index}_{Color}(x,y) & F^{index}_{EC}(x,y)=0\\
I^{index}_{EC}(x,y) & F^{index}_{EC}(x,y)=1
\end{cases},\quad index=0,1,2,\ldots,N-1
$$

$$
F^{index}_{ExColor}(x,y)=\begin{cases}
0 & F^{index}_{EC}(x,y)=0 \ \text{and}\ F^{index}_{Color}(x,y)=0\\
1 & F^{index}_{Color}(x,y)=1\\
2 & F^{index}_{EC}(x,y)=1 \ \text{and}\ F^{index}_{Color}(x,y)=0
\end{cases},\quad index=0,1,2,\ldots,N-1.
$$
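Steps (n) through (q) can be sketched for a single layer as follows. As in the patent's step (p), the extension pixels are filled with the reference image's own values at the extended positions, and a 3-state flag records step (q)'s distinction between empty (0), original (1), and extended (2) pixels. One simplification is assumed: pixels already holding layer data are simply left in place rather than re-assigned and overridden as in the patent's flag arithmetic, which yields the same result:

```python
import numpy as np

def extend_layer(color, layer_flag, L):
    """Steps (n)-(q): widen a 2-D image layer horizontally by up to L pixels
    at each boundary, filling new pixels from the reference image."""
    V, H = layer_flag.shape
    ex_color = np.where(layer_flag == 1, color, 0)
    ex_flag = layer_flag.astype(np.int32).copy()
    for y in range(V):
        for x in range(H - 1):
            a, b = layer_flag[y, x], layer_flag[y, x + 1]
            if a != b:                          # layer boundary found (step (n))
                dir_ = -1 if a < b else 1       # extend outward from the layer
                for j in range(L):              # steps (p)-(q)
                    xj = x + dir_ * j
                    if 0 <= xj < H and ex_flag[y, xj] == 0:
                        ex_color[y, xj] = color[y, xj]
                        ex_flag[y, xj] = 2
    return ex_color, ex_flag
```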

The step of obtaining, through a weighted horizontal translation algorithm, the virtual two-dimensional image at each virtual viewpoint position according to the relative distance from the virtual viewpoint position to the reference viewpoint position and the disparity value of each two-dimensional image layer, comprises:

(r) To generate the image at the $i$-th of the $M$ virtual viewpoint positions, apply a weighted horizontal translation, using the relative distance $D_{sub\_i}$ from step (m) and the per-layer disparity values $\mathrm{Dis}_{index}$ from step (k), to the extended two-dimensional image layers $I^{index}_{ExColor}(x,y)$ and assignment flags $F^{index}_{ExColor}(x,y)$ obtained in step (q), obtaining the translated two-dimensional image layers $I^{index}_{HSColor}(x,y)$ and translated assignment flags $F^{index}_{HSColor}(x,y)$:

$$
I^{index}_{HSColor}(x,y)=\begin{cases}
0 & (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})<0 \ \text{or}\ (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})\ge H\\
I^{index}_{ExColor}(x+\mathrm{Dis}_{index}\cdot D_{sub\_i},\ y) & 0\le (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})<H
\end{cases},\quad index=0,1,2,\ldots,N-1
$$

$$
F^{index}_{HSColor}(x,y)=\begin{cases}
0 & (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})<0 \ \text{or}\ (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})\ge H\\
F^{index}_{ExColor}(x+\mathrm{Dis}_{index}\cdot D_{sub\_i},\ y) & 0\le (x+\mathrm{Dis}_{index}\cdot D_{sub\_i})<H
\end{cases},\quad index=0,1,2,\ldots,N-1
$$

where $H$ is the horizontal resolution of the image;

(s) Initialize the assignment flags $F^{i}_{vir}(x,y)$ of the two-dimensional image at virtual viewpoint position $i$ to zero:

$$
F^{i}_{vir}(x,y)=0,\quad \forall x=0,1,2,\ldots,H-1,\ \forall y=0,1,2,\ldots,V-1;
$$

(t) Stack the translated two-dimensional image layers $I^{index}_{HSColor}(x,y)$ obtained in step (r) one by one, background before foreground, from far to near, i.e. in order of decreasing index. Each time a layer is stacked, update the assignment flags $F^{i}_{vir}(x,y)$ of the two-dimensional image at virtual viewpoint position $i$; after the stacking is complete, obtain the two-dimensional image $I^{i}_{vir}(x,y)$ at virtual viewpoint position $i$ from the final assignment flags:

$$
F^{i}_{vir}(x,y)=\begin{cases}
0 & F^{i}_{vir}(x,y)=0 \ \text{and}\ F^{index}_{HSColor}(x,y)=0\\
index+1 & F^{index}_{HSColor}(x,y)=1\\
index+1+N & F^{i}_{vir}(x,y)=0 \ \text{and}\ F^{index}_{HSColor}(x,y)=2
\end{cases},\quad index=0,1,2,\ldots,N-1
$$

$$
I^{i}_{vir}(x,y)=\begin{cases}
0 & F^{i}_{vir}(x,y)=0\\
I^{F^{i}_{vir}(x,y)-1}_{HSColor}(x,y) & F^{i}_{vir}(x,y)>0 \ \text{and}\ F^{i}_{vir}(x,y)-N\le 0\\
I^{F^{i}_{vir}(x,y)-1-N}_{HSColor}(x,y) & F^{i}_{vir}(x,y)-N>0
\end{cases},\quad index=0,1,2,\ldots,N-1;
$$

(u) Perform steps (r) through (t) for each of the $M$ virtual viewpoint positions in turn to obtain the two-dimensional images of the $M$ virtual viewpoint positions.
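Steps (r) through (t) can be sketched for one viewpoint as follows. One simplification is assumed: the patent's index+1 / index+1+N flag arithmetic is condensed into direct back-to-front compositing, which preserves its effect — real pixels (flag 1) always overwrite, while extended pixels (flag 2) only fill holes:

```python
import numpy as np

def render_view(ex_layers, ex_flags, dis, D_sub, N, H, V):
    """Steps (r)-(t): shift every extended layer by Dis_index * D_sub_i,
    then composite the layers from far (large index) to near (index 0)."""
    vir = np.zeros((V, H), dtype=ex_layers[0].dtype)
    f_vir = np.zeros((V, H), dtype=np.uint8)       # 0 = still a hole
    for k in range(N - 1, -1, -1):                 # background before foreground
        shift = int(round(dis[k] * D_sub))
        for y in range(V):
            for x in range(H):
                xs = x + shift                     # step (r): weighted horizontal shift
                if 0 <= xs < H:
                    f = ex_flags[k][y, xs]
                    if f == 1 or (f == 2 and f_vir[y, x] == 0):
                        vir[y, x] = ex_layers[k][y, xs]
                        f_vir[y, x] = f
    return vir
```

Running this once per viewpoint index, as step (u) prescribes, yields the M virtual-view images.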

The present invention is applicable to generating multiple virtual viewpoints from the data of an existing two-dimensional image sequence and depth image sequence. The previously used DIBR algorithm is computationally expensive and slow, places high demands on depth image quality, and the parameters of the camera array model are not easy to obtain. The present method needs neither known nor calibrated parameters of a virtual multi-viewpoint camera array model and can quickly and effectively generate the virtual multi-viewpoint images required by a multi-viewpoint autostereoscopic display system. In addition, to improve tolerance to errors in the input depth data, the depth map is processed in layers, which lowers the quality requirements on the input depth map and broadens the method's range of application.

Brief Description of the Drawings

Figure 1 is a flowchart of generating virtual multi-viewpoint images based on depth layering;

Figure 2(a) is a video frame from the Racket two-dimensional image test sequence;

Figure 2(b) is a video frame from the Racket depth image test sequence;

Figure 3 is the preprocessed depth image corresponding to the two-dimensional video frame of Figure 2(a);

Figure 4(a) is the layer-0 depth image obtained by layering the preprocessed depth image of Figure 3;

Figure 4(b) is the layer-0 two-dimensional image obtained after the layering corresponding to the depth-image video frame of Figure 2(b);

Figure 5 is the extended layer-0 two-dimensional image corresponding to the layer-0 two-dimensional image of Figure 4(b);

Figure 6 is the finally generated virtual nine-viewpoint two-dimensional image.

Detailed Description

The method for generating virtual multi-viewpoint images based on depth map layering comprises the following steps (the overall flowchart is shown in Figure 1):

(1) Perform a median filtering operation on the reference-viewpoint depth image to be processed, and extend the contours of objects with small depth values in the horizontal direction, obtaining a filtered and extended depth image;

(2) According to the set number of layers, layer the filtered and extended depth image, obtaining a layered depth image;

(3) Select one depth layer of the layered depth image as the focus layer of the camera array; layers with depth values greater than the focus layer's are background layers, and layers with depth values smaller than the focus layer's are foreground layers;

(4)对待处理的参考视点二维图像,根据二维图像相对应的分层深度图像,对二维图像进行分层操作,得到分层二维图像;(4) For the reference viewpoint 2D image to be processed, perform layering operations on the 2D image according to the layered depth image corresponding to the 2D image to obtain a layered 2D image;

(5)根据设定的参考视点前景层和其相邻视点前景层之间的最大视差值,计算虚拟相机阵列相邻视点前景层与背景层之间的最大视差值,并结合各个深度层到聚焦层的距离计算各个深度层对应二维图像层对应的视差值;(5) According to the maximum disparity value between the set reference viewpoint foreground layer and its adjacent viewpoint foreground layer, calculate the maximum disparity value between the virtual camera array adjacent viewpoint foreground layer and background layer, and combine each depth Calculate the disparity value corresponding to the two-dimensional image layer corresponding to each depth layer from the distance from the layer to the focus layer;

(6)根据虚拟视点位置到参考视点位置的相对距离和相机阵列相邻视点前景层与背景层之间的最大视差值,对分层二维图像进行扩展操作,得到扩展后的分层二维图像;(6) According to the relative distance from the virtual viewpoint position to the reference viewpoint position and the maximum disparity value between the foreground layer and the background layer of adjacent viewpoints of the camera array, the layered two-dimensional image is extended to obtain the extended layered two-dimensional image. dimensional image;

(7)根据虚拟视点位置到参考视点位置的相对距离和各个二维图像层对应的视差值,通过加权水平平移算法得到各个虚拟视点位置的虚拟二维图像。(7) According to the relative distance from the virtual viewpoint position to the reference viewpoint position and the disparity value corresponding to each two-dimensional image layer, the virtual two-dimensional image of each virtual viewpoint position is obtained through a weighted horizontal translation algorithm.

The steps of performing a median filtering operation on the reference-viewpoint depth image to be processed and performing a horizontal extension operation on objects with small depth values, obtaining the filtered and extended depth image, are:

(a) Apply a median filter with a 5×5 window to the reference-viewpoint depth image I_Depth(x, y) to obtain a smoothed depth map I′_Depth(x, y), where x denotes the horizontal coordinate of an image pixel, with x = 0, 1, 2, ..., H−1, y denotes the vertical coordinate, with y = 0, 1, 2, ..., V−1, H is the horizontal resolution of the image, and V is the vertical resolution;

Median filtering serves two purposes. First, it removes isolated pixels from the reference-viewpoint depth image; most of these isolated points carry erroneous depth values, so part of the error is eliminated. Second, it makes the input depth map smoother, which helps improve the quality of the finally generated virtual-viewpoint two-dimensional images. Median filtering of an image generally uses a sliding window containing an odd number of points and replaces the gray value at the window center with the median of the gray values of all points in the window; this patent uses a 5×5 window.
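As a concrete illustration, the 5×5 median filter of step (a) can be sketched as follows; the function name and the edge-replication border policy are assumptions made for the example, not details fixed by the patent.

```python
import numpy as np

def median_filter_5x5(depth):
    # Median-filter a depth image with a 5x5 sliding window.
    # Borders are handled by edge replication (an assumption; the
    # patent does not specify a border policy).
    pad = 2
    padded = np.pad(depth, pad, mode="edge")
    h, w = depth.shape
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 5, x:x + 5])
    return out

# A lone outlier (a wrong depth value) in a flat region is removed.
d = np.full((7, 7), 100, dtype=np.uint8)
d[3, 3] = 255
smoothed = median_filter_5x5(d)
```

The 25-sample window makes the single outlier vanish while the flat region is left untouched.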

(b) Scan the smoothed depth map I′_Depth(x, y) obtained in step (a) along the horizontal direction, compute the absolute depth difference between each pair of horizontally adjacent pixels, and compare this absolute value with a preset threshold th to determine the direction dir(x, y) in which the contour of the object with the smaller depth value is extended horizontally; −1 means extend horizontally to the left, 1 means extend horizontally to the right, and 0 means no extension. The formula is:

dir(x, y) = −1, if |I′_Depth(x, y) − I′_Depth(x+1, y)| > th and I′_Depth(x, y) > I′_Depth(x+1, y)
dir(x, y) = 1, if |I′_Depth(x, y) − I′_Depth(x+1, y)| > th and I′_Depth(x, y) < I′_Depth(x+1, y)
dir(x, y) = 0, otherwise;

(c) Initialize the assignment flag bit flag(x, y) of every pixel to zero:

flag(x, y) = 0, ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1

where ∀ means "for any given";

(d) According to the extension direction dir(x, y) of each pixel obtained in step (b), assign values to the pixels to be extended in the smoothed depth map I′_Depth(x, y) obtained in step (a), obtaining the depth map I″_Depth(x, y), and record the corresponding assignment flag bits flag(x, y). The formulas are:

I″_Depth(x + dir(x, y)·i, y) = I′_Depth(x+1, y), if dir(x, y) = −1
I″_Depth(x + dir(x, y)·i, y) = I′_Depth(x, y), if dir(x, y) = 1,    i = 0, 1, 2, ..., K−1

flag(x + dir(x, y)·i, y) = 1, if |dir(x, y)| = 1,    i = 0, 1, 2, ..., K−1

where K is the number of extended pixels;

(e) Assign values to the pixels of the depth map I″_Depth(x, y) obtained in step (d) whose assignment flag bit is flag(x, y) = 0, obtaining the extended depth map I_Depth^E(x, y):

I_Depth^E(x, y) = I′_Depth(x, y), if flag(x, y) = 0
I_Depth^E(x, y) = I″_Depth(x, y), if flag(x, y) = 1.
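Steps (b) through (e) can be sketched together as one routine; the function name `extend_foreground` and the demo values of the threshold `th` and extension width `K` are illustrative assumptions, since the patent leaves both as tunable parameters.

```python
import numpy as np

def extend_foreground(depth, th=10, K=3):
    # Steps (b)-(e): extend the contour of near (small-depth) objects
    # horizontally by K pixels; pixels with flag == 0 keep their
    # smoothed value, as in step (e).
    h, w = depth.shape
    out = depth.astype(np.int32).copy()
    flag = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w - 1):
            a, b = int(depth[y, x]), int(depth[y, x + 1])
            if abs(a - b) <= th:
                continue
            if a > b:                 # nearer object starts at x+1: extend left
                for i in range(K):
                    xx = x - i
                    if 0 <= xx < w:
                        out[y, xx] = b
                        flag[y, xx] = 1
            else:                     # nearer object ends at x: extend right
                for i in range(K):
                    xx = x + i
                    if 0 <= xx < w:
                        out[y, xx] = a
                        flag[y, xx] = 1
    return out.astype(depth.dtype), flag

# Foreground (depth 50) on the left, background (depth 200) on the right:
d = np.array([[50, 50, 50, 200, 200, 200]], dtype=np.uint8)
ext, flag = extend_foreground(d, th=30, K=2)
# ext -> [[50, 50, 50, 50, 200, 200]]: the near contour grew rightward
```

With K = 2 the foreground boundary pixel and one background pixel beyond it take the foreground depth, exactly the widening that later prevents the object from being split across a hole.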

Because object contours in the depth image are not sufficiently accurate, the traditional DIBR (Depth-Image-Based Rendering) algorithm often splits a foreground object across both sides of a hole region when generating virtual-viewpoint images; extending the contours of foreground objects in the depth image effectively avoids this phenomenon.

The steps of performing, according to the set number of layers, a layering operation on the filtered and extended depth image, obtaining the layered depth image, are:

(f) According to the set number of layers N, where N is an integer greater than or equal to 1, compute for every pixel of the extended depth map I_Depth^E(x, y) obtained in step (e) the index value Index(x, y) of its depth layer:

Index(x, y) = ⌊I_Depth^E(x, y) · N / 256⌋

where ⌊·⌋ denotes the operation of taking the largest integer less than or equal to its argument;

(g) According to the pixel depth-layer index values Index(x, y) obtained in step (f), obtain the layered depth map I_Depth^index(x, y) of each layer. The smaller the index value of a layered depth map I_Depth^index(x, y), the closer that depth layer is to the viewer; the larger the index value, the farther it is from the viewer:

I_Depth^index(x, y) = I_Depth^E(x, y), if Index(x, y) = index
I_Depth^index(x, y) = 255, otherwise,    index = 0, 1, 2, ..., N−1.
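Steps (f) and (g) admit a compact sketch; the uniform quantization `Index = floor(depth * N / 256)` is assumed from the floor operation described in the text (the original formula appears only as an image), and the helper name is illustrative.

```python
import numpy as np

def layer_depth(depth_ext, N):
    # Step (f): quantize the 8-bit depth range into N layer indices.
    # Step (g): per-layer depth maps; pixels outside a layer are 255.
    index = (depth_ext.astype(np.int32) * N) // 256
    layers = [np.where(index == k, depth_ext, 255).astype(np.uint8)
              for k in range(N)]
    return index, layers

d = np.array([[0, 64, 128, 255]], dtype=np.uint8)
index, layers = layer_depth(d, 4)
# index -> [[0, 1, 2, 3]]; layer 0 keeps only the nearest pixel
```

Depth 255 maps to index N−1, so smaller indices are the layers nearer the viewer, matching the convention in step (g).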

The step of selecting one depth layer in the layered depth image as the focus layer of the camera array, so that layers with depth values larger than that of the focus layer are background layers and layers with smaller depth values are foreground layers, is:

(h) According to the set camera-array focus layer focus, determine the foreground and background layers: depth layers with index value index > focus are farther from the viewer than the focus layer and are background layers; depth layers with index value index < focus are closer to the viewer than the focus layer and are foreground layers.

The steps of layering the reference-viewpoint two-dimensional image to be processed according to its corresponding layered depth image, obtaining the layered two-dimensional image, are:

(i) According to the pixel depth-layer index values Index(x, y) obtained in step (f), perform a layering operation on the reference-viewpoint two-dimensional image I_Color(x, y), obtaining the layered two-dimensional image I_Color^index(x, y) of each layer while recording the assignment flag bits F_Color^index(x, y) of each layer. The smaller the index value of a layered two-dimensional image I_Color^index(x, y), the closer that layer is to the viewer; the larger the index value, the farther it is from the viewer:

I_Color^index(x, y) = I_Color(x, y), if Index(x, y) = index
I_Color^index(x, y) = 0, otherwise,    index = 0, 1, 2, ..., N−1

F_Color^index(x, y) = 1, if Index(x, y) = index
F_Color^index(x, y) = 0, otherwise,    index = 0, 1, 2, ..., N−1.
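A minimal sketch of step (i), assuming a single-channel image for brevity; the helper name `layer_color` is illustrative.

```python
import numpy as np

def layer_color(color, index, N):
    # Step (i): split the 2-D image into N layers following the
    # depth-layer index map; F marks which pixels belong to each layer.
    I = [np.where(index == k, color, 0) for k in range(N)]
    F = [(index == k).astype(np.uint8) for k in range(N)]
    return I, F

color = np.array([[10, 20, 30, 40]], dtype=np.uint8)
index = np.array([[0, 1, 2, 3]])
I, F = layer_color(color, index, 4)
# I[1] -> [[0, 20, 0, 0]], F[1] -> [[0, 1, 0, 0]]
```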

The steps of computing, from the set maximum disparity between the foreground layer of the reference viewpoint and the foreground layer of its adjacent viewpoint, the maximum disparity between the foreground and background layers of adjacent viewpoints of the virtual camera array, and of computing, combined with the distance from each depth layer to the focus layer, the disparity value of the two-dimensional image layer corresponding to each depth layer, are:

(j) According to the set maximum disparity MaxD between the foreground layer of the reference viewpoint and the foreground layer of its adjacent viewpoint, where MaxD is an integer greater than or equal to 1, compute the maximum disparity MaxR between the foreground and background layers of adjacent viewpoints of the camera array:

MaxR = (MaxD / focus) · (N − 1);

(k) From the maximum disparity MaxR obtained in step (j), combined with the distance from each depth layer to the focus layer, compute the disparity value Dis_index of the two-dimensional image layer corresponding to each depth layer:

Dis_index = (MaxD / focus) · (focus − index),    index = 0, 1, 2, ..., N−1;
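Steps (j) and (k) can be checked numerically with the embodiment's parameters (N = 17, focus = 12, MaxD = 12); the per-layer formula used here is a reconstruction (the original appears only as an image) chosen so that the nearest layer carries disparity MaxD, the focus layer carries zero, and the foreground-to-background span is exactly MaxR.

```python
def layer_disparities(N, focus, MaxD):
    # Step (j): maximum adjacent-view disparity between foreground
    # and background.  Step (k): per-layer disparity, proportional to
    # the signed distance from the focus layer (a reconstruction).
    MaxR = (MaxD / focus) * (N - 1)
    dis = [(MaxD / focus) * (focus - k) for k in range(N)]
    return MaxR, dis

MaxR, dis = layer_disparities(N=17, focus=12, MaxD=12)
# dis[0] == 12.0 (nearest layer), dis[12] == 0.0 (focus layer)
```

The span check `dis[0] - dis[16] == MaxR` confirms the reconstruction is consistent with the MaxR formula of step (j).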

The steps of performing, according to the relative distance from each virtual viewpoint position to the reference viewpoint position and the maximum disparity between the foreground and background layers of adjacent viewpoints of the camera array, an extension operation on the layered two-dimensional image, obtaining the extended layered two-dimensional image, are:

(l) Set the number of virtual viewpoints in the virtual parallel multi-viewpoint array to M, where M is an integer greater than or equal to 2, and take the reference viewpoint as the central viewpoint of the array, i.e., the viewpoint at the middle position; let the index value of the reference viewpoint be zero. Then, in right-to-left order in the horizontal direction, the index values sub_i of the virtual viewpoints take the values

sub_i = (M − 1)/2 − i,    i = 0, 1, 2, ..., M−1;

(m) From the index value sub_i of each virtual viewpoint obtained in step (l), compute the relative distance from the virtual viewpoint position to the reference viewpoint position: D_sub_i = sub_i − 0 = sub_i;
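A small sketch of steps (l) and (m); the symmetric index list `sub_i = (M−1)/2 − i` is an assumed reconstruction of the value list shown as an image in the original.

```python
def viewpoint_offsets(M):
    # Steps (l)-(m): viewpoint indices in right-to-left order,
    # symmetric about the reference viewpoint (index 0) at the centre
    # of the array; D_sub_i = sub_i - 0 = sub_i.
    return [(M - 1) / 2 - i for i in range(M)]

subs = viewpoint_offsets(9)
# subs -> [4.0, 3.0, 2.0, 1.0, 0.0, -1.0, -2.0, -3.0, -4.0]
```

For the nine-viewpoint embodiment this yields offsets from +4 to −4 around the reference view.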

(n) For each layer of the two-dimensional image I_Color^index(x, y) obtained in step (i), scan its assignment flag bits F_Color^index(x, y) along the horizontal direction, compute the absolute value of the difference between each pair of adjacent assignment flag bits, and compare this absolute value with 0 to determine the direction dir_Color^index(x, y) in which the two-dimensional image layer I_Color^index(x, y) is extended horizontally; −1 means extend horizontally to the left, 1 means extend horizontally to the right, and 0 means no extension. The formula is:

dir_Color^index(x, y) = −1, if |F_Color^index(x, y) − F_Color^index(x+1, y)| > 0 and F_Color^index(x, y) < F_Color^index(x+1, y)
dir_Color^index(x, y) = 1, if |F_Color^index(x, y) − F_Color^index(x+1, y)| > 0 and F_Color^index(x, y) > F_Color^index(x+1, y)
dir_Color^index(x, y) = 0, otherwise;
(o) Initialize the assignment flag bits F_EC^index(x, y) of every extended two-dimensional image layer to zero:

F_EC^index(x, y) = 0,    ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1;

(p) According to the horizontal extension directions dir_Color^index(x, y) obtained in step (n), assign values to the pixels to be extended of each two-dimensional image layer I_Color^index(x, y) obtained in step (i), obtaining the two-dimensional image layer I_EC^index(x, y) while recording the assignment flag bits F_EC^index(x, y):

I_EC^index(x + dir_Color^index(x, y)·j, y) = I_Color(x + dir_Color^index(x, y)·j, y), if |dir_Color^index(x, y)| = 1,    j = 0, 1, 2, ..., L−1

F_EC^index(x + dir_Color^index(x, y)·j, y) = 1, if |dir_Color^index(x, y)| = 1,    j = 0, 1, 2, ..., L−1

where L is the number of extended pixels of the two-dimensional image layer I_Color^index(x, y), with L = D_sub·MaxR;

Each two-dimensional image layer corresponds to a different disparity value; when the weighted horizontal-shift algorithm is used to render a virtual-viewpoint two-dimensional image, hole regions formed by pixels with no value appear between layers, which greatly degrades the visual effect. The purpose of extending the layered two-dimensional image is to use the extended pixels to partially fill the hole regions produced by the shift, improving the quality of the generated virtual-viewpoint images.
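Steps (n) and (p) for a single layer can be sketched as follows; the boundary test on the flag bits and the helper name are assumptions consistent with the formulas above, and the extension copies pixels of the original image I_Color (which belong to neighboring layers) into the layer's margin.

```python
import numpy as np

def extend_layer(color, F_layer, L):
    # Steps (n) and (p): extend one 2-D image layer horizontally by
    # L pixels at each boundary of its support, copying pixels of the
    # full original image into the extension.
    h, w = color.shape
    I_EC = np.zeros_like(color)
    F_EC = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w - 1):
            a, b = int(F_layer[y, x]), int(F_layer[y, x + 1])
            if a == b:
                continue
            d = -1 if a < b else 1     # left edge -> extend left, right edge -> extend right
            for j in range(L):
                xx = x + d * j
                if 0 <= xx < w:
                    I_EC[y, xx] = color[y, xx]
                    F_EC[y, xx] = 1
    return I_EC, F_EC

color = np.array([[1, 2, 3, 4, 5, 6]], dtype=np.uint8)   # full image row
F = np.array([[0, 0, 1, 1, 0, 0]], dtype=np.uint8)       # layer support
I_EC, F_EC = extend_layer(color, F, L=2)
# F_EC -> [[1, 1, 0, 1, 1, 0]]: margins filled on both sides
```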

(q) Assign values to the pixels of the two-dimensional image layer I_EC^index(x, y) obtained in step (p) according to the assignment flag bits F_EC^index(x, y), obtaining the final extended two-dimensional image layer I_ExColor^index(x, y), and record the final two-dimensional image layer assignment flag bits F_ExColor^index(x, y):

I_ExColor^index(x, y) = I_Color^index(x, y), if F_EC^index(x, y) = 0
I_ExColor^index(x, y) = I_EC^index(x, y), if F_EC^index(x, y) = 1,    index = 0, 1, 2, ..., N−1

F_ExColor^index(x, y) = 0, if F_EC^index(x, y) = 0 and F_Color^index(x, y) = 0
F_ExColor^index(x, y) = 1, if F_Color^index(x, y) = 1
F_ExColor^index(x, y) = 2, if F_EC^index(x, y) = 1 and F_Color^index(x, y) = 0,    index = 0, 1, 2, ..., N−1.
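A minimal sketch of the merge in step (q), producing the three-valued flag (0 = empty, 1 = layer pixel, 2 = extension pixel); the helper name is illustrative.

```python
import numpy as np

def merge_extension(I_layer, F_layer, I_EC, F_EC):
    # Step (q): combine the original layer with its extension and
    # build the three-valued flag used later during compositing.
    I_Ex = np.where(F_EC == 1, I_EC, I_layer)
    F_Ex = np.where(F_layer == 1, 1, np.where(F_EC == 1, 2, 0))
    return I_Ex, F_Ex

I_layer = np.array([[0, 0, 30, 40]], dtype=np.uint8)
F_layer = np.array([[0, 0, 1, 1]], dtype=np.uint8)
I_EC = np.array([[0, 20, 0, 0]], dtype=np.uint8)
F_EC = np.array([[0, 1, 0, 0]], dtype=np.uint8)
I_Ex, F_Ex = merge_extension(I_layer, F_layer, I_EC, F_EC)
# I_Ex -> [[0, 20, 30, 40]], F_Ex -> [[0, 2, 1, 1]]
```

The flag value 2 marks hole-filling pixels that, in step (t), may only land on positions still empty.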

The steps of obtaining, according to the relative distance from each virtual viewpoint position to the reference viewpoint position and the disparity value of each two-dimensional image layer, the virtual two-dimensional image at each virtual viewpoint position through the weighted horizontal-shift algorithm, are:

(r) Suppose the image at the i-th of the M virtual viewpoint positions is to be generated. According to the relative distance D_sub_i from the virtual viewpoint position to the reference viewpoint position obtained in step (m) and the disparity value Dis_index of each two-dimensional image layer obtained in step (k), perform a weighted horizontal-shift operation on the extended two-dimensional image layers I_ExColor^index(x, y) and assignment flag bits F_ExColor^index(x, y) obtained in step (q), obtaining the shifted two-dimensional image layers I_HSColor^index(x, y) and shifted assignment flag bits F_HSColor^index(x, y):

I_HSColor^index(x, y) = 0, if (x + Dis_index·D_sub_i) < 0 or (x + Dis_index·D_sub_i) ≥ H
I_HSColor^index(x, y) = I_ExColor^index(x + Dis_index·D_sub_i, y), if 0 ≤ (x + Dis_index·D_sub_i) < H,    index = 0, 1, 2, ..., N−1

F_HSColor^index(x, y) = 0, if (x + Dis_index·D_sub_i) < 0 or (x + Dis_index·D_sub_i) ≥ H
F_HSColor^index(x, y) = F_ExColor^index(x + Dis_index·D_sub_i, y), if 0 ≤ (x + Dis_index·D_sub_i) < H,    index = 0, 1, 2, ..., N−1

where H is the horizontal resolution of the image;

(s) Initialize the assignment flag bits F_vir^i(x, y) of the two-dimensional image at virtual viewpoint position i to zero:

F_vir^i(x, y) = 0,    ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1;

(t) Superimpose the shifted two-dimensional image layers I_HSColor^index(x, y) obtained in step (r) layer by layer, placing the background first and the foreground last, in far-to-near order, i.e., in order of index from large to small; each time a two-dimensional image layer is superimposed, update the assignment flag bits F_vir^i(x, y) of the two-dimensional image at virtual viewpoint position i. After the superposition is finished, obtain the two-dimensional image I_vir^i(x, y) at virtual viewpoint position i from the final assignment flag bits F_vir^i(x, y):

F_vir^i(x, y) = 0, if F_vir^i(x, y) = 0 and F_HSColor^index(x, y) = 0
F_vir^i(x, y) = index + 1, if F_HSColor^index(x, y) = 1
F_vir^i(x, y) = index + 1 + N, if F_vir^i(x, y) = 0 and F_HSColor^index(x, y) = 2,    index = 0, 1, 2, ..., N−1

I_vir^i(x, y) = 0, if F_vir^i(x, y) = 0
I_vir^i(x, y) = I_HSColor^(F_vir^i(x, y) − 1)(x, y), if F_vir^i(x, y) > 0 and F_vir^i(x, y) − N ≤ 0
I_vir^i(x, y) = I_HSColor^(F_vir^i(x, y) − 1 − N)(x, y), if F_vir^i(x, y) − N > 0;

Superimposing the shifted two-dimensional image layers with the background first and the foreground last, in far-to-near order, effectively guarantees the correct occlusion relationship between foreground and background objects in the scene.
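Steps (r) through (t) for one virtual viewpoint can be sketched as follows; rounding the shift `Dis_index · D_sub_i` to an integer pixel offset is a simplifying assumption of this sketch, and the helper name is illustrative.

```python
import numpy as np

def render_view(layers, flags, dis, D_sub):
    # Steps (r)-(t): shift each extended layer by Dis_index * D_sub_i
    # and composite far-to-near.  Real layer pixels (flag 1) always
    # overwrite; extension pixels (flag 2) only fill empty positions.
    N = len(layers)
    h, w = layers[0].shape
    out = np.zeros((h, w), dtype=layers[0].dtype)
    F_vir = np.zeros((h, w), dtype=np.int32)
    for index in range(N - 1, -1, -1):            # far to near
        off = int(round(dis[index] * D_sub))
        for y in range(h):
            for x in range(w):
                src = x + off
                if not (0 <= src < w):
                    continue
                f = flags[index][y, src]
                if f == 1:                         # real pixel of a nearer layer wins
                    out[y, x] = layers[index][y, src]
                    F_vir[y, x] = index + 1
                elif f == 2 and F_vir[y, x] == 0:  # extension pixel fills holes only
                    out[y, x] = layers[index][y, src]
                    F_vir[y, x] = index + 1 + N
    return out, F_vir

# Two layers: a one-pixel foreground over a uniform background.
layers = [np.array([[0, 0, 200, 0, 0, 0]], dtype=np.uint8),
          np.array([[100, 100, 100, 100, 100, 100]], dtype=np.uint8)]
flags = [np.array([[0, 0, 1, 0, 0, 0]], dtype=np.uint8),
         np.array([[1, 1, 1, 1, 1, 1]], dtype=np.uint8)]
out, F_vir = render_view(layers, flags, dis=[2.0, -1.0], D_sub=1.0)
# out -> [[200, 100, 100, 100, 100, 100]]
```

The foreground pixel shifts by +2 and lands at x = 0, correctly occluding the background shifted by −1.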

(u) Perform the operations of step (r) through step (t) for each of the M virtual viewpoint positions in turn, obtaining the two-dimensional images at the M virtual viewpoint positions.

Embodiment:

(1) The Racket two-dimensional image test stream with a resolution of 640×360 and the Racket depth-image test stream with a resolution of 640×360 are used as the video files from which the virtual multi-viewpoint images are to be generated. Figure 2(a) is a screenshot of the Racket two-dimensional image test stream, and Figure 2(b) is a screenshot of the Racket depth-image test stream.

(2) Median filtering and edge extension are applied to the input depth image, obtaining the preprocessed depth image. Figure 3 is the preprocessed Racket depth image.

(3) With the number of layers set to N = 17, the preprocessed depth image is layered, obtaining the layered depth image. Figure 4(a) is the layer-0 layered depth image.

(4) Layer 12 is selected as the focus depth layer, so layers 0-11 are foreground layers and layers 13-16 are background layers.

(5) According to the layered depth image, the Racket two-dimensional image is layered, obtaining the layered two-dimensional image. Figure 4(b) is the layer-0 layered two-dimensional image.

(6) The maximum disparity between the foreground layer of the reference viewpoint and the foreground layer of its adjacent viewpoint is set to 12; the maximum disparity between the foreground and background layers of adjacent viewpoints of the virtual camera array is computed, and, combined with the distance from each depth layer to the focus layer, the disparity value of the two-dimensional image layer corresponding to each depth layer is computed.

(7) According to the relative distance from each virtual viewpoint position to the reference viewpoint position and the maximum disparity between the foreground and background layers of adjacent viewpoints of the camera array, the layered two-dimensional image is extended, obtaining the extended layered two-dimensional image. Figure 5 is the layer-0 extended two-dimensional image.

(8) According to the relative distance from each virtual viewpoint position to the reference viewpoint position and the disparity value of each two-dimensional image layer, the virtual two-dimensional image at each virtual viewpoint position is obtained through the weighted horizontal-shift algorithm. Figure 6 is the finally generated virtual nine-viewpoint two-dimensional image.

Claims (8)

1.一种基于深度图分层的虚拟多视点图像的生成方法,其特征在于包括如下步骤:1. A method for generating virtual multi-viewpoint images based on depth map layering, characterized in that it comprises the steps:(1)对待处理的参考视点深度图像进行中值滤波操作,并在水平方向上对深度值小的物体进行轮廓扩展操作,得到滤波扩展后的深度图像;(1) Carry out a median filter operation on the depth image of the reference viewpoint to be processed, and perform a contour extension operation on objects with small depth values in the horizontal direction to obtain a depth image after filtering;(2)根据设定的分层层数,对滤波扩展后的深度图像进行分层操作,得到分层深度图像;(2) According to the set number of layered layers, perform a layered operation on the depth image after filtering and expansion to obtain a layered depth image;(3)在分层深度图像中选定一个深度层作为相机阵列的聚焦层,则深度值比聚焦层深度值大的为背景层,深度值比聚焦层深度值小的为前景层;(3) Select a depth layer in the layered depth image as the focus layer of the camera array, then the background layer whose depth value is larger than the focus layer depth value, and the foreground layer whose depth value is smaller than the focus layer depth value;(4)对待处理的参考视点二维图像,根据二维图像相对应的分层深度图像,对二维图像进行分层操作,得到分层二维图像;(4) For the reference viewpoint 2D image to be processed, perform layering operations on the 2D image according to the layered depth image corresponding to the 2D image to obtain a layered 2D image;(5)根据设定的参考视点前景层和其相邻视点前景层之间的最大视差值,计算虚拟相机阵列相邻视点前景层与背景层之间的最大视差值,并结合各个深度层到聚焦层的距离计算各个深度层对应二维图像层对应的视差值;(5) According to the maximum disparity value between the set reference viewpoint foreground layer and its adjacent viewpoint foreground layer, calculate the maximum disparity value between the virtual camera array adjacent viewpoint foreground layer and background layer, and combine each depth Calculate the disparity value corresponding to the two-dimensional image layer corresponding to each depth layer from the distance from the layer to the focus layer;(6)根据虚拟视点位置到参考视点位置的相对距离和相机阵列相邻视点前景层与背景层之间的最大视差值,对分层二维图像进行扩展操作,得到扩展后的分层二维图像;(6) According to the relative distance from the virtual viewpoint position to the reference viewpoint 
position and the maximum disparity value between the foreground layer and the background layer of adjacent viewpoints of the camera array, the layered two-dimensional image is extended to obtain the extended layered two-dimensional image. dimensional image;(7)根据虚拟视点位置到参考视点位置的相对距离和各个二维图像层对应的视差值,通过加权水平平移算法得到各个虚拟视点位置的虚拟二维图像。(7) According to the relative distance from the virtual viewpoint position to the reference viewpoint position and the disparity value corresponding to each two-dimensional image layer, the virtual two-dimensional image of each virtual viewpoint position is obtained through a weighted horizontal translation algorithm.2.根据权利要求1所述的一种基于深度图分层的虚拟多视点图像的生成方法,其特征在于所述的对待处理的参考视点深度图像进行中值滤波操作,并在水平方向上对深度值小的物体进行扩展操作,得到滤波扩展后的深度图像步骤为:2. A method for generating a virtual multi-viewpoint image based on depth map layering according to claim 1, wherein the reference viewpoint depth image to be processed is subjected to a median filtering operation, and horizontally The object with a small depth value is expanded, and the steps to obtain the depth image after filtering and expansion are as follows:(a)对于参考视点深度图像IDepth(x,y)进行5×5窗口大小的中值滤波,得到平滑的深度图I′Depth(x,y),x表示图像像素点水平方向上的坐标,y表示图像像素点垂直方向上的坐标,x的取值为x=0,1,2,...,H-1,y的取值为y=0,1,2,...,V-1,H表示图像的水平分辨率,V表示图像的垂直分辨率;(a) For the reference viewpoint depth image IDepth (x, y), perform a median filter with a window size of 5×5 to obtain a smooth depth map I'Depth (x, y), where x represents the coordinates of the image pixel in the horizontal direction , y represents the coordinates in the vertical direction of the image pixel point, the value of x is x=0, 1, 2, ..., H-1, the value of y is y = 0, 1, 2, ..., V-1, H represents the horizontal resolution of the image, and V represents the vertical resolution of the image;(b)对于步骤(a)得到的平滑深度图I′Depth(x,y)沿水平方向进行扫描,计算水平方向上相邻两个像素点之间深度差值的绝对值,并将该绝对值与一个预先设定的阈值th进行比较,确定深度值较小的物体水平方向轮廓扩展的方向dir(x,y),-1表示水平向左扩展,1表示水平向右扩展,0表示不做扩展,表示公式如下:(b) Scan the smooth depth map I′Depth (x, y) obtained in step (a) 
along the horizontal direction, compute the absolute value of the depth difference between each pair of horizontally adjacent pixels and compare it with a preset threshold th, to determine the direction dir(x, y) in which the contour of the object with the smaller depth value is extended horizontally: −1 means extend horizontally to the left, 1 means extend horizontally to the right, and 0 means no extension. The formula is:

dir(x, y) =
  −1, if |I′_Depth(x, y) − I′_Depth(x+1, y)| > th and I′_Depth(x, y) > I′_Depth(x+1, y)
  1, if |I′_Depth(x, y) − I′_Depth(x+1, y)| > th and I′_Depth(x, y) < I′_Depth(x+1, y)
  0, otherwise;

(c) Initialize the assignment flag flag(x, y) of every pixel to zero:

flag(x, y) = 0, ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1

where ∀ means "for all";

(d) According to the extension direction dir(x, y) of each pixel obtained in step (b), assign values to the pixels to be extended in the smoothed depth map I′_Depth(x, y) obtained in step (a), giving the depth map I″_Depth(x, y), and record the corresponding assignment flags flag(x, y):

I″_Depth(x + dir(x, y)·i, y) =
  I′_Depth(x+1, y), if dir(x, y) = −1
  I′_Depth(x, y), if dir(x, y) = 1
  for i = 0, 1, 2, ..., K−1
flag(x + dir(x, y)·i, y) = 1, if |dir(x, y)| = 1, for i = 0, 1, 2, ..., K−1

where K is the number of extended pixels;

(e) Assign values to the pixels of the depth map I″_Depth(x, y) obtained in step (d) whose assignment flag is flag(x, y) = 0, giving the extended depth map I^E_Depth(x, y):

I^E_Depth(x, y) =
  I′_Depth(x, y), if flag(x, y) = 0
  I″_Depth(x, y), if flag(x, y) = 1.

3. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein layering the filtered and extended depth image according to the set number of layers, to obtain the layered depth image, comprises:

(f) According to the set number of layers N, where N is an integer greater than or equal to 1, compute for every pixel of the extended depth map I^E_Depth(x, y) obtained in step (e) the index Index(x, y) of its depth layer (the quantization formula appears only as an image in the source and is not reproduced), where ⌊·⌋ denotes the operation of taking the largest integer less than or equal to its argument;

(g) From the pixel depth-layer indices Index(x, y) obtained in step (f), obtain the layered depth map I^index_Depth(x, y) of each layer. The smaller the index value index of a layered depth map, the closer that depth layer is to the viewer; the larger the index, the farther it is from the viewer:

I^index_Depth(x, y) =
  I^E_Depth(x, y), if Index(x, y) = index
  255, otherwise
  index = 0, 1, 2, ..., N−1.
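Steps (b) through (g) can be sketched as below. This is a minimal illustration under stated assumptions, not the patented implementation: the parameter names th, K and N follow the claim text, the depth map is assumed to be 8-bit (0 to 255), and the quantizer in layer_index is an assumption, since the claim's own Index(x, y) formula survives only as an image reference in the source.

```python
import numpy as np

def extend_depth(depth, th=10, K=4):
    """Steps (b)-(e): at sharp horizontal depth edges, extend the nearer
    (smaller-depth) object by K pixels across the boundary."""
    rows, cols = depth.shape
    out = depth.copy()
    flag = np.zeros_like(depth, dtype=bool)
    for y in range(rows):
        for x in range(cols - 1):
            a, b = int(depth[y, x]), int(depth[y, x + 1])
            if abs(a - b) <= th:
                continue
            if a > b:
                # right pixel is nearer: copy its value K pixels to the left
                for i in range(K):
                    if x - i >= 0:
                        out[y, x - i] = b
                        flag[y, x - i] = True
            else:
                # left pixel is nearer: copy its value K pixels to the right
                for i in range(K):
                    if x + i < cols:
                        out[y, x + i] = a
                        flag[y, x + i] = True
    return out, flag

def layer_index(ext_depth, N=8):
    """Steps (f)-(g): map each 8-bit depth value to one of N layer indices.
    The uniform quantizer below is an assumed reading of the claim."""
    return (ext_depth.astype(np.int32) * N) // 256
```

Extending the near object before layering is what gives the method its tolerance to imperfect input depth maps: slightly misaligned depth edges end up inside the widened near layer instead of leaking background depth into the foreground.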
4. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein selecting one depth layer of the layered depth image as the focus layer of the camera array, with layers whose depth value is larger than that of the focus layer being background layers and layers whose depth value is smaller being foreground layers, comprises:

(h) According to the set camera-array focus layer focus, determine the foreground and background layers: a depth layer with index > focus is farther from the viewer than the focus layer and is a background layer; a depth layer with index < focus is closer to the viewer than the focus layer and is a foreground layer.

5. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein layering the reference-viewpoint two-dimensional image to be processed, according to its corresponding layered depth image, to obtain the layered two-dimensional image, comprises:

(i) According to the pixel depth-layer indices Index(x, y) obtained in step (f), split the reference-viewpoint two-dimensional image I_Color(x, y) into layers, obtaining the layered two-dimensional image I^index_Color(x, y) of each layer, and at the same time record the assignment flag F^index_Color(x, y) of each layer. The smaller the index value index of a layered two-dimensional image, the closer that image layer is to the viewer; the larger the index, the farther it is from the viewer:

I^index_Color(x, y) =
  I_Color(x, y), if Index(x, y) = index
  0, otherwise
  index = 0, 1, 2, ..., N−1
F^index_Color(x, y) =
  1, if Index(x, y) = index
  0, otherwise
  index = 0, 1, 2, ..., N−1.
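Step (i) is a straightforward per-layer masking of the color image; a minimal sketch follows. The function name split_color_layers and the assumption of an 8-bit RGB image are illustrative, not from the patent.

```python
import numpy as np

def split_color_layers(color, index_map, N):
    """Step (i): one color layer per depth index, each paired with an
    occupancy flag (1 where the layer holds a real pixel, 0 elsewhere)."""
    layers, flags = [], []
    for idx in range(N):
        mask = (index_map == idx)          # pixels belonging to this layer
        layer = np.zeros_like(color)
        layer[mask] = color[mask]          # copy RGB only inside the mask
        layers.append(layer)
        flags.append(mask.astype(np.uint8))
    return layers, flags
```

The explicit flag array matters later: a zero in a layer is ambiguous (black pixel or hole?), and the flags resolve that ambiguity during extension and compositing.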
6. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein computing, from the set maximum disparity between the reference-viewpoint foreground layer and the foreground layer of its adjacent viewpoint, the maximum disparity between the foreground and background layers of adjacent viewpoints of the virtual camera array, and then computing the disparity of the two-dimensional image layer corresponding to each depth layer from that layer's distance to the focus layer, comprises:

(j) According to the set maximum disparity MaxD between the reference-viewpoint foreground layer and the foreground layer of its adjacent viewpoint, where MaxD is an integer greater than or equal to 1, compute the maximum disparity MaxR between the foreground and background layers of adjacent viewpoints of the camera array:

MaxR = (MaxD / focus) · (N − 1);

(k) From the maximum disparity MaxR obtained in step (j), combined with the distance from each depth layer to the focus layer, compute the disparity Dis_index of the two-dimensional image layer corresponding to each depth layer (the formula appears only as an image in the source and is not reproduced).
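Steps (j) and (k) can be sketched as follows. The MaxR expression is taken directly from the claim; the per-layer formula for Dis_index is an assumption (the original formula survives only as an image), here read as a disparity proportional to the signed distance from each layer to the focus layer, so that the focus plane has zero parallax.

```python
def layer_disparities(MaxD, focus, N):
    """Step (j): maximum adjacent-view disparity, as in the claim.
    Step (k): assumed linear mapping of layer index to disparity,
    zero at the focus layer, negative in front of it, positive behind."""
    MaxR = (MaxD / focus) * (N - 1)
    dis = [MaxR * (index - focus) / (N - 1) for index in range(N)]
    return MaxR, dis
```

With this reading, foreground layers (index < focus) shift opposite to background layers, which is what makes objects appear to pop out of or recede behind the display plane on an autostereoscopic screen.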
7. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein extending the layered two-dimensional images, according to the relative distance from the virtual viewpoint position to the reference viewpoint position and the maximum disparity between the foreground and background layers of adjacent viewpoints of the camera array, to obtain the extended layered two-dimensional images, comprises:

(l) Set the number of virtual viewpoints in the virtual parallel multi-viewpoint array to M, where M is an integer greater than or equal to 2, and take the reference viewpoint as the central viewpoint of the array, i.e. the viewpoint at its middle position. Let the index of the reference viewpoint be zero; then, from right to left in the horizontal direction, the index values sub_i of the virtual viewpoints take in turn the values given by the formula (which appears only as an image in the source and is not reproduced);

(m) From the virtual-viewpoint indices sub_i obtained in step (l), compute the relative distance from each virtual viewpoint position to the reference viewpoint position: D_sub_i = sub_i − 0 = sub_i;

(n) Scan the assignment flags F^index_Color(x, y) of each two-dimensional image layer I^index_Color(x, y) obtained in step (i) along the horizontal direction, compute the absolute value of the difference between each pair of adjacent flags, and compare it with 0 to determine the direction dir^index_Color(x, y) in which the two-dimensional image layer is extended horizontally: −1 means extend horizontally to the left, 1 means extend horizontally to the right, and 0 means no extension (the formula appears only as an image in the source and is not reproduced);

(o) Initialize the assignment flag F^index_EC(x, y) of every extended two-dimensional image layer to zero:

F^index_EC(x, y) = 0, ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1;

(p) According to the horizontal extension directions dir^index_Color(x, y) obtained in step (n), assign values to the pixels to be extended in each two-dimensional image layer I^index_Color(x, y) obtained in step (i), giving the two-dimensional image layer I^index_EC(x, y), and record the assignment flags F^index_EC(x, y):

I^index_EC(x + dir^index_Color(x, y)·j, y) = I_Color(x + dir^index_Color(x, y)·j, y), if |dir^index_Color(x, y)| = 1, for j = 0, 1, 2, ..., L−1
F^index_EC(x + dir^index_Color(x, y)·j, y) = 1, if |dir^index_Color(x, y)| = 1, for j = 0, 1, 2, ..., L−1

where L is the number of pixels by which the two-dimensional image layer is extended, L = D_sub · MaxR;

(q) Assign values to the pixels of the two-dimensional image layer I^index_EC(x, y) obtained in step (p) according to its assignment flags F^index_EC(x, y), giving the final extended two-dimensional image layer I^index_ExColor(x, y), and record the final layer assignment flags F^index_ExColor(x, y):

I^index_ExColor(x, y) =
  I^index_Color(x, y), if F^index_EC(x, y) = 0
  I^index_EC(x, y), if F^index_EC(x, y) = 1
  index = 0, 1, 2, ..., N−1
F^index_ExColor(x, y) =
  0, if F^index_EC(x, y) = 0 and F^index_Color(x, y) = 0
  1, if F^index_Color(x, y) = 1
  2, if F^index_EC(x, y) = 1 and F^index_Color(x, y) = 0
  index = 0, 1, 2, ..., N−1.
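Steps (n) through (q) can be condensed into one routine, sketched below under stated assumptions: the layer flag is a 0/1 array from step (i), the extension length follows the claim's L = D_sub · MaxR, and the boundary-detection details are a simplified reading since the claim's direction formula is only an image in the source. As in the claim, extension pixels are copied from the full reference image and marked with flag value 2 so they can be distinguished from original layer pixels (flag 1).

```python
import numpy as np

def extend_layer(color, layer_flag, D_sub, MaxR):
    """Steps (n)-(q): at each horizontal boundary of a layer's support,
    pull in L = D_sub * MaxR extra pixels of texture from the reference
    image, so disocclusions exposed by the later shift stay covered."""
    rows, cols = layer_flag.shape
    L = int(abs(D_sub) * MaxR)
    ext_color = color.copy()
    ext_color[layer_flag == 0] = 0          # start from the bare layer
    ext_flag = layer_flag.astype(np.uint8).copy()   # 1 = original, 2 = extension
    for y in range(rows):
        for x in range(cols - 1):
            if layer_flag[y, x] == 0 and layer_flag[y, x + 1] == 1:
                start, step = x, -1         # layer starts at x+1: grow left
            elif layer_flag[y, x] == 1 and layer_flag[y, x + 1] == 0:
                start, step = x + 1, 1      # layer ends at x: grow right
            else:
                continue
            for j in range(L):
                xx = start + step * j
                if 0 <= xx < cols and ext_flag[y, xx] == 0:
                    ext_color[y, xx] = color[y, xx]  # texture from reference image
                    ext_flag[y, xx] = 2
    return ext_color, ext_flag
```

Filling holes before shifting, rather than inpainting afterwards, is the design choice that lets the method run without any camera-model parameters: each layer already carries plausible texture for whatever its shift will uncover.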
8. The method for generating virtual multi-viewpoint images based on depth map layering according to claim 1, wherein obtaining the virtual two-dimensional image at each virtual viewpoint position by the weighted horizontal shift algorithm, according to the relative distance from the virtual viewpoint position to the reference viewpoint position and the disparity of each two-dimensional image layer, comprises:

(r) Suppose the image at the i-th of the M virtual viewpoint positions is to be generated. Using the relative distance D_sub_i from the virtual viewpoint position to the reference viewpoint position obtained in step (m) and the per-layer disparities Dis_index obtained in step (k), apply a weighted horizontal shift to the extended two-dimensional image layers I^index_ExColor(x, y) and assignment flags F^index_ExColor(x, y) obtained in step (q), giving the shifted two-dimensional image layers I^index_HSColor(x, y) and shifted assignment flags F^index_HSColor(x, y):

I^index_HSColor(x, y) =
  0, if (x + Dis_index·D_sub_i) < 0 or (x + Dis_index·D_sub_i) ≥ H
  I^index_ExColor(x + Dis_index·D_sub_i, y), if 0 ≤ (x + Dis_index·D_sub_i) < H
  index = 0, 1, 2, ..., N−1
F^index_HSColor(x, y) =
  0, if (x + Dis_index·D_sub_i) < 0 or (x + Dis_index·D_sub_i) ≥ H
  F^index_ExColor(x + Dis_index·D_sub_i, y), if 0 ≤ (x + Dis_index·D_sub_i) < H
  index = 0, 1, 2, ..., N−1

where H is the horizontal resolution of the image;

(s) Initialize the assignment flag F^i_vir(x, y) of the two-dimensional image at virtual viewpoint position i to zero:

F^i_vir(x, y) = 0, ∀x = 0, 1, 2, ..., H−1, ∀y = 0, 1, 2, ..., V−1;

(t) Stack the shifted two-dimensional image layers I^index_HSColor(x, y) obtained in step (r) one layer at a time, background first and foreground last, from far to near, i.e. in order of decreasing index. After each layer is stacked, update the assignment flags F^i_vir(x, y) of the two-dimensional image at virtual viewpoint position i once; when the stacking is finished, obtain the two-dimensional image I^i_vir(x, y) at virtual viewpoint position i from the final assignment flags F^i_vir(x, y):

F^i_vir(x, y) =
  0, if F^i_vir(x, y) = 0 and F^index_HSColor(x, y) = 0
  index + 1, if F^index_HSColor(x, y) = 1
  index + 1 + N, if F^i_vir(x, y) = 0 and F^index_HSColor(x, y) = 2
  index = 0, 1, 2, ..., N−1
I^i_vir(x, y) =
  0, if F^i_vir(x, y) = 0
  I^{F^i_vir(x, y) − 1}_HSColor(x, y), if F^i_vir(x, y) > 0 and F^i_vir(x, y) − N ≤ 0
  I^{F^i_vir(x, y) − 1 − N}_HSColor(x, y), if F^i_vir(x, y) − N > 0
  index = 0, 1, 2, ..., N−1;

(u) Perform steps (r) to (t) for each of the M virtual viewpoint positions in turn to obtain the M two-dimensional images at the virtual viewpoint positions.
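Steps (r) through (t) amount to a shift-and-composite pass per virtual view; a condensed sketch follows. It is a simplified reading, not the claimed implementation: the claim's F^i_vir index bookkeeping is folded into a small priority map, keeping its two rules that original layer pixels (flag 1) always win and extension pixels (flag 2) only fill still-empty positions. Grayscale layers and integer-rounded shifts are assumptions for brevity.

```python
import numpy as np

def render_view(layers, flags, disparities, D_sub_i):
    """Steps (r)-(t): shift each layer by Dis_index * D_sub_i pixels and
    composite far-to-near (large index = far = drawn first)."""
    rows, cols = layers[0].shape[:2]
    view = np.zeros_like(layers[0])
    best = np.zeros((rows, cols), dtype=np.uint8)  # 0 empty, 1 extension, 2 original
    for index in range(len(layers) - 1, -1, -1):   # background first
        shift = int(round(disparities[index] * D_sub_i))
        for y in range(rows):
            for x in range(cols):
                src = x + shift                    # read shifted coordinate
                if not (0 <= src < cols):
                    continue
                f = flags[index][y, src]
                if f == 1:                         # original pixel: always wins
                    view[y, x] = layers[index][y, src]
                    best[y, x] = 2
                elif f == 2 and best[y, x] == 0:   # extension: holes only
                    view[y, x] = layers[index][y, src]
                    best[y, x] = 1
    return view
```

Repeating this for each D_sub_i in the viewpoint array (step (u)) yields all M views; only the per-layer shift changes between views, so the expensive layering work is done once.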
CN2010102286965A | 2010-07-16 | 2010-07-16 | Method for generating virtual multi-viewpoint images based on depth image layering | Expired - Fee Related | CN101902657B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN2010102286965A, CN101902657B (en) | 2010-07-16 | 2010-07-16 | Method for generating virtual multi-viewpoint images based on depth image layering

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN2010102286965A, CN101902657B (en) | 2010-07-16 | 2010-07-16 | Method for generating virtual multi-viewpoint images based on depth image layering

Publications (2)

Publication Number | Publication Date
CN101902657A | 2010-12-01
CN101902657B (en) | 2011-12-21

Family

ID=43227788

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN2010102286965A (Expired - Fee Related), CN101902657B (en) | Method for generating virtual multi-viewpoint images based on depth image layering | 2010-07-16 | 2010-07-16

Country Status (1)

Country | Link
CN (1) | CN101902657B (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102316354A (en)* | 2011-09-22 | 2012-01-11 | 冠捷显示科技(厦门)有限公司 | Parallelly processable multi-view image synthesis method in imaging technology
CN102457756A (en)* | 2011-12-29 | 2012-05-16 | 广西大学 | Structure and method of 3D video monitoring system capable of watching videos by naked eyes
CN102595170A (en)* | 2011-01-06 | 2012-07-18 | 索尼公司 | Image pickup apparatus and image processing method
CN102609974A (en)* | 2012-03-14 | 2012-07-25 | 浙江理工大学 | Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102903143A (en)* | 2011-07-27 | 2013-01-30 | 国际商业机器公司 | Method and system for converting two-dimensional image into three-dimensional image
CN103269435A (en)* | 2013-04-19 | 2013-08-28 | 四川长虹电器股份有限公司 | Binocular to multi-view virtual viewpoint synthetic method
CN103313075A (en)* | 2012-03-09 | 2013-09-18 | 株式会社东芝 | Image processing device, image processing method and non-transitory computer readable recording medium for recording image processing program
CN103828359A (en)* | 2011-09-29 | 2014-05-28 | 杜比实验室特许公司 | Representation and coding of multi-view images using tapestry encoding
CN104219516A (en)* | 2014-09-01 | 2014-12-17 | 北京邮电大学 | Method and device for three-dimensional display of figure lamination
CN104270625A (en)* | 2014-10-09 | 2015-01-07 | 成都斯斐德科技有限公司 | Composite image generating method capable of weakening auto-stereoscopic display counterfeit stereoscopic image
CN104504671A (en)* | 2014-12-12 | 2015-04-08 | 浙江大学 | Method for generating virtual-real fusion image for stereo display
EP2611182A3 (en)* | 2011-12-26 | 2015-04-22 | Samsung Electronics Co., Ltd | Image processing method and apparatus using multi-layer representation
CN104935910A (en)* | 2015-06-03 | 2015-09-23 | 青岛海信电器股份有限公司 | Method and device for correcting three-dimensional image
CN105453136A (en)* | 2013-08-16 | 2016-03-30 | 高通股份有限公司 | Stereo yaw correction using autofocus feedback
CN103339651B (en)* | 2011-10-11 | 2016-12-07 | 松下知识产权经营株式会社 | Image processing device, imaging device, and image processing method
CN106464853A (en)* | 2014-05-21 | 2017-02-22 | 索尼公司 | Image processing apparatus and method
CN106851247A (en)* | 2017-02-13 | 2017-06-13 | 浙江工商大学 | Complex scene layered approach based on depth information
WO2017156905A1 (en)* | 2016-03-16 | 2017-09-21 | 深圳创维-Rgb电子有限公司 | Display method and system for converting two-dimensional image into multi-viewpoint image
US9819863B2 (en) | 2014-06-20 | 2017-11-14 | Qualcomm Incorporated | Wide field of view array camera for hemispheric and spherical imaging
US9832381B2 (en) | 2014-10-31 | 2017-11-28 | Qualcomm Incorporated | Optical image stabilization for thin cameras
US9838601B2 (en) | 2012-10-19 | 2017-12-05 | Qualcomm Incorporated | Multi-camera system using folded optics
US9854182B2 (en) | 2014-06-20 | 2017-12-26 | Qualcomm Incorporated | Folded optic array camera using refractive prisms
US9860434B2 (en) | 2014-04-04 | 2018-01-02 | Qualcomm Incorporated | Auto-focus in low-profile folded optics multi-camera system
US9973680B2 (en) | 2014-04-04 | 2018-05-15 | Qualcomm Incorporated | Auto-focus in low-profile folded optics multi-camera system
US10013764B2 (en) | 2014-06-19 | 2018-07-03 | Qualcomm Incorporated | Local adaptive histogram equalization
US10084958B2 (en) | 2014-06-20 | 2018-09-25 | Qualcomm Incorporated | Multi-camera system using folded optics free from parallax and tilt artifacts
WO2019085022A1 (en)* | 2017-10-31 | 2019-05-09 | 武汉华星光电技术有限公司 | Generation method and device for optical field 3D display unit image
CN109793999A (en)* | 2019-01-25 | 2019-05-24 | 无锡海鹰医疗科技股份有限公司 | The construction method of the static three-dimensional profile body image of HIFU Treatment system
CN111405265A (en)* | 2020-03-24 | 2020-07-10 | 杭州电子科技大学 | A New Image Rendering Technology
CN111405262A (en)* | 2019-01-02 | 2020-07-10 | 中国移动通信有限公司研究院 | A method, apparatus, system, device and medium for generating viewpoint information
CN111738061A (en)* | 2020-05-08 | 2020-10-02 | 诡谷子人工智能科技(深圳)有限公司 | Binocular Vision Stereo Matching Method and Storage Medium Based on Region Feature Extraction
CN112291549A (en)* | 2020-09-23 | 2021-01-29 | 广西壮族自治区地图院 | Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN115393213A (en)* | 2022-08-25 | 2022-11-25 | 广州虎牙信息科技有限公司 | Image visual angle transformation method and device, electronic equipment and readable storage medium
CN115442580A (en)* | 2022-08-17 | 2022-12-06 | 深圳市纳晶云实业有限公司 | Naked eye 3D picture effect processing method for portable intelligent device
CN120111200A (en)* | 2025-01-15 | 2025-06-06 | 北京天马辉电子技术有限责任公司 | 3D display image generation method and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1395231A (en)* | 2001-07-04 | 2003-02-05 | 松下电器产业株式会社 | Image signal coding method, equipment and storage medium
CN101533529A (en)* | 2009-01-23 | 2009-09-16 | 北京建筑工程学院 | Range image-based 3D spatial data processing method and device
EP2180449A1 (en)* | 2008-10-21 | 2010-04-28 | Koninklijke Philips Electronics N.V. | Method and device for providing a layered depth model of a scene
TW201023619A (en)* | 2008-11-04 | 2010-06-16 | Koninkl Philips Electronics NV | Method and system for encoding a 3D image signal, encoded 3D image signal, method and system for decoding a 3D image signal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1395231A (en)* | 2001-07-04 | 2003-02-05 | 松下电器产业株式会社 | Image signal coding method, equipment and storage medium
EP2180449A1 (en)* | 2008-10-21 | 2010-04-28 | Koninklijke Philips Electronics N.V. | Method and device for providing a layered depth model of a scene
TW201023619A (en)* | 2008-11-04 | 2010-06-16 | Koninkl Philips Electronics NV | Method and system for encoding a 3D image signal, encoded 3D image signal, method and system for decoding a 3D image signal
CN101533529A (en)* | 2009-01-23 | 2009-09-16 | 北京建筑工程学院 | Range image-based 3D spatial data processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chia-Ming Cheng et al., "Improved novel view synthesis from depth image with large baseline", ICPR 2008, 19th International Conference on Pattern Recognition, 2008-12-11, pp. 1-8 *

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102595170A (en)* | 2011-01-06 | 2012-07-18 | 索尼公司 | Image pickup apparatus and image processing method
US9432656B2 (en) | 2011-01-06 | 2016-08-30 | Sony Corporation | Image capturing device including lens array and processing
CN102595170B (en)* | 2011-01-06 | 2016-07-06 | 索尼公司 | Image-pickup device and image processing method
CN102903143A (en)* | 2011-07-27 | 2013-01-30 | 国际商业机器公司 | Method and system for converting two-dimensional image into three-dimensional image
CN102316354A (en)* | 2011-09-22 | 2012-01-11 | 冠捷显示科技(厦门)有限公司 | Parallelly processable multi-view image synthesis method in imaging technology
CN103828359B (en)* | 2011-09-29 | 2016-06-22 | 杜比实验室特许公司 | For producing the method for the view of scene, coding system and solving code system
US9451232B2 (en) | 2011-09-29 | 2016-09-20 | Dolby Laboratories Licensing Corporation | Representation and coding of multi-view images using tapestry encoding
CN103828359A (en)* | 2011-09-29 | 2014-05-28 | 杜比实验室特许公司 | Representation and coding of multi-view images using tapestry encoding
CN103339651B (en)* | 2011-10-11 | 2016-12-07 | 松下知识产权经营株式会社 | Image processing device, imaging device, and image processing method
EP2611182A3 (en)* | 2011-12-26 | 2015-04-22 | Samsung Electronics Co., Ltd | Image processing method and apparatus using multi-layer representation
CN102457756A (en)* | 2011-12-29 | 2012-05-16 | 广西大学 | Structure and method of 3D video monitoring system capable of watching videos by naked eyes
CN103313075A (en)* | 2012-03-09 | 2013-09-18 | 株式会社东芝 | Image processing device, image processing method and non-transitory computer readable recording medium for recording image processing program
CN102609974A (en)* | 2012-03-14 | 2012-07-25 | 浙江理工大学 | Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102609974B (en)* | 2012-03-14 | 2014-04-09 | 浙江理工大学 | Virtual viewpoint image generation process on basis of depth map segmentation and rendering
US10165183B2 (en) | 2012-10-19 | 2018-12-25 | Qualcomm Incorporated | Multi-camera system using folded optics
US9838601B2 (en) | 2012-10-19 | 2017-12-05 | Qualcomm Incorporated | Multi-camera system using folded optics
CN103269435A (en)* | 2013-04-19 | 2013-08-28 | 四川长虹电器股份有限公司 | Binocular to multi-view virtual viewpoint synthetic method
CN105453136A (en)* | 2013-08-16 | 2016-03-30 | 高通股份有限公司 | Stereo yaw correction using autofocus feedback
CN105453136B (en)* | 2013-08-16 | 2019-08-09 | 高通股份有限公司 | System, method and apparatus for stereo roll correction using autofocus feedback
US10178373B2 (en) | 2013-08-16 | 2019-01-08 | Qualcomm Incorporated | Stereo yaw correction using autofocus feedback
US9973680B2 (en) | 2014-04-04 | 2018-05-15 | Qualcomm Incorporated | Auto-focus in low-profile folded optics multi-camera system
US9860434B2 (en) | 2014-04-04 | 2018-01-02 | Qualcomm Incorporated | Auto-focus in low-profile folded optics multi-camera system
US10547822B2 (en) | 2014-05-21 | 2020-01-28 | Sony Corporation | Image processing apparatus and method to generate high-definition viewpoint interpolation image
CN106464853A (en)* | 2014-05-21 | 2017-02-22 | 索尼公司 | Image processing apparatus and method
US10013764B2 (en) | 2014-06-19 | 2018-07-03 | Qualcomm Incorporated | Local adaptive histogram equalization
US9819863B2 (en) | 2014-06-20 | 2017-11-14 | Qualcomm Incorporated | Wide field of view array camera for hemispheric and spherical imaging
US9843723B2 (en) | 2014-06-20 | 2017-12-12 | Qualcomm Incorporated | Parallax free multi-camera system capable of capturing full spherical images
US9854182B2 (en) | 2014-06-20 | 2017-12-26 | Qualcomm Incorporated | Folded optic array camera using refractive prisms
US10084958B2 (en) | 2014-06-20 | 2018-09-25 | Qualcomm Incorporated | Multi-camera system using folded optics free from parallax and tilt artifacts
CN104219516A (en)* | 2014-09-01 | 2014-12-17 | 北京邮电大学 | Method and device for three-dimensional display of figure lamination
CN104270625A (en)* | 2014-10-09 | 2015-01-07 | 成都斯斐德科技有限公司 | Composite image generating method capable of weakening auto-stereoscopic display counterfeit stereoscopic image
US9832381B2 (en) | 2014-10-31 | 2017-11-28 | Qualcomm Incorporated | Optical image stabilization for thin cameras
CN104504671B (en)* | 2014-12-12 | 2017-04-19 | 浙江大学 | Method for generating virtual-real fusion image for stereo display
CN104504671A (en)* | 2014-12-12 | 2015-04-08 | 浙江大学 | Method for generating virtual-real fusion image for stereo display
CN104935910A (en)* | 2015-06-03 | 2015-09-23 | 青岛海信电器股份有限公司 | Method and device for correcting three-dimensional image
WO2017156905A1 (en)* | 2016-03-16 | 2017-09-21 | 深圳创维-Rgb电子有限公司 | Display method and system for converting two-dimensional image into multi-viewpoint image
US10334231B2 (en) | 2016-03-16 | 2019-06-25 | Shenzhen Skyworth-Rgb Electronic Co., Ltd | Display method and system for converting two-dimensional image into multi-viewpoint image
CN106851247A (en)* | 2017-02-13 | 2017-06-13 | 浙江工商大学 | Complex scene layered approach based on depth information
WO2019085022A1 (en)* | 2017-10-31 | 2019-05-09 | 武汉华星光电技术有限公司 | Generation method and device for optical field 3D display unit image
CN111405262A (en)* | 2019-01-02 | 2020-07-10 | 中国移动通信有限公司研究院 | A method, apparatus, system, device and medium for generating viewpoint information
CN109793999A (en)* | 2019-01-25 | 2019-05-24 | 无锡海鹰医疗科技股份有限公司 | The construction method of the static three-dimensional profile body image of HIFU Treatment system
CN111405265A (en)* | 2020-03-24 | 2020-07-10 | 杭州电子科技大学 | A New Image Rendering Technology
CN111738061A (en)* | 2020-05-08 | 2020-10-02 | 诡谷子人工智能科技(深圳)有限公司 | Binocular Vision Stereo Matching Method and Storage Medium Based on Region Feature Extraction
CN112291549A (en)* | 2020-09-23 | 2021-01-29 | 广西壮族自治区地图院 | Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN112291549B (en)* | 2020-09-23 | 2021-07-09 | 广西壮族自治区地图院 | Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN115442580A (en)* | 2022-08-17 | 2022-12-06 | 深圳市纳晶云实业有限公司 | Naked eye 3D picture effect processing method for portable intelligent device
CN115442580B (en)* | 2022-08-17 | 2024-03-26 | 深圳市纳晶云实业有限公司 | Naked eye 3D picture effect processing method for portable intelligent equipment
CN115393213A (en)* | 2022-08-25 | 2022-11-25 | 广州虎牙信息科技有限公司 | Image visual angle transformation method and device, electronic equipment and readable storage medium
CN120111200A (en)* | 2025-01-15 | 2025-06-06 | 北京天马辉电子技术有限责任公司 | 3D display image generation method and related equipment

Also Published As

Publication number | Publication date
CN101902657B (en) | 2011-12-21

Similar Documents

Publication | Publication Date | Title
CN101902657B (en) | | Method for generating virtual multi-viewpoint images based on depth image layering
CN102625127B (en) | | Optimization method suitable for virtual viewpoint generation of 3D television
CN101720047B (en) | | Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
CN102164298B (en) | | Stereo Matching-based Element Image Acquisition Method in Panoramic Imaging System
CN101277454A (en) | | A real-time stereoscopic video generation method based on binocular cameras
CN104756489B (en) | | A kind of virtual visual point synthesizing method and system
CN102254348B (en) | | Virtual viewpoint mapping method based on adaptive disparity estimation
CN101771893A (en) | | Video frequency sequence background modeling based virtual viewpoint rendering method
CN106228605A (en) | | A kind of Stereo matching three-dimensional rebuilding method based on dynamic programming
CN111325693A (en) | | Large-scale panoramic viewpoint synthesis method based on single-viewpoint RGB-D image
CN101808251A (en) | | Method for extracting blocking information in stereo image pair
CN111047709B (en) | | Binocular vision naked eye 3D image generation method
CN104065947B (en) | | The depth map acquisition methods of a kind of integration imaging system
CN101840574B (en) | | Depth estimation method based on edge pixel characteristics
CN103248909B (en) | | Method and system of converting monocular video into stereoscopic video
CN103702103B (en) | | Based on the grating stereo printing images synthetic method of binocular camera
CN101848397A (en) | | Improved high-resolution reconstruction method for calculating integrated image
CN102034265A (en) | | Three-dimensional view acquisition method
CN103024421A (en) | | Method for synthesizing virtual viewpoints in free viewpoint television
CN105704476B (en) | | A kind of virtual visual point image frequency domain fast acquiring method based on edge reparation
CN106791774A (en) | | Virtual visual point image generating method based on depth map
CN102750694B (en) | | Local optimum belief propagation algorithm-based binocular video depth map solution method
CN106412560B (en) | | A kind of stereoscopic image generation method based on depth map
CN103679739A (en) | | Virtual view generating method based on shielding region detection
CN107809630A (en) | | Based on the multi-view point video super-resolution rebuilding algorithm for improving virtual view synthesis

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
C41 | Transfer of patent application or patent right or utility model
TR01 | Transfer of patent right

Effective date of registration:20160614

Address after:518000 new energy building, Nanhai Road, Shenzhen, Guangdong, Nanshan District A838

Patentee after:Meng Qi media (Shenzhen) Co.,Ltd.

Address before:310027 Hangzhou, Zhejiang Province, Xihu District, Zhejiang Road, No. 38, No.

Patentee before:Zhejiang University

C41 | Transfer of patent application or patent right or utility model
TR01 | Transfer of patent right

Effective date of registration:20160901

Address after:518000, 101, 2, Fengyun technology building, Fifth Industrial Zone, North Ring Road, Shenzhen, Guangdong, Nanshan District

Patentee after:World wide technology (Shenzhen) Ltd.

Address before:518000 new energy building, Nanhai Road, Shenzhen, Guangdong, Nanshan District A838

Patentee before:Meng Qi media (Shenzhen) Co.,Ltd.

EE01 | Entry into force of recordation of patent licensing contract

Application publication date:20101201

Assignee:MCLOUD (SHANGHAI) DIGITAL TECHNOLOGY CO.,LTD.

Assignor:World wide technology (Shenzhen) Ltd.

Contract record no.:2018440020049

Denomination of invention:Method for generating virtual multi-viewpoint images based on depth image layering

Granted publication date:20111221

License type:Exclusive License

Record date:20180428

EE01 | Entry into force of recordation of patent licensing contract
TR01 | Transfer of patent right

Effective date of registration:20180904

Address after:518000 B unit 101, Fengyun mansion 5, Xili street, Nanshan District, Shenzhen, Guangdong.

Patentee after:WANWEI DISPLAY TECHNOLOGY (SHENZHEN) Co.,Ltd.

Address before:518000 2 of Fengyun tower, Fifth Industrial Zone, Nanshan District North Ring Road, Shenzhen, Guangdong, 101

Patentee before:World wide technology (Shenzhen) Ltd.

TR01 | Transfer of patent right
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date:20111221

CF01 | Termination of patent right due to non-payment of annual fee
