Technical Field
The present invention relates to the technical field of image processing, and in particular to an image synthesis method, system, and device.
Background Art
With advances in electronic technology, imaging systems have been integrated into ever more portable electronic products, such as mobile phones and dash cameras, for which photo and video capture are primary functions. At the same time, the quality demanded of the captured images keeps rising, and the dynamic range of an image is one of the principal indicators of an imaging system.
The dynamic range of a typical CMOS camera is relatively narrow: it cannot capture the full dynamic range from scenes as dark as starlight to scenes as bright as direct sunlight. Increasing the dynamic range of the captured image therefore preserves more image detail. For products such as dash cameras and autonomous driving systems, it avoids the loss of detail caused by overexposure and underexposure in scenes such as backlighting.
High dynamic range images are generally synthesized by exposure bracketing, in which multiple images are combined into a single high dynamic range image. In the course of developing the present invention, the inventors found at least the following problems with this approach: because multiple different images must be captured, the imaging delay is long and the computational load of synthesis is high, making it difficult to meet the requirements of continuous real-time processing.
Summary of the Invention
In view of the above, it is necessary to provide an image synthesis method, system, and device that address the problem that traditional dynamic-range synthesis methods are computationally expensive, introduce imaging delay, and struggle to support continuous real-time processing.
An image synthesis method comprises the following steps:
acquiring a first exposure image and a second exposure image of a target subject, wherein the exposure time of the first exposure image is longer than that of the second exposure image;
obtaining a weight value for image fusion from the pixel data at the same positions in the first exposure image and the second exposure image; and
fusing the first exposure image and the second exposure image according to the weight value to obtain a composite image.
In the image synthesis method above, two exposure images of the target subject with different exposure times are acquired, the pixel data at the same positions in the two images is analyzed to obtain a fusion weight value, and the two images are fused using that weight value to obtain a composite image. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
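The fusion step can be sketched as a per-pixel blend of the two exposures. This is a minimal illustration only: the function name and the assumption that the two normalized weights sum to one are hypothetical, since the text does not fix the exact blend formula.

```python
import numpy as np

def fuse_exposures(long_img, short_img, weight):
    """Blend a long- and a short-exposure image with a per-pixel weight map.

    `weight` is the normalized weight of the long exposure in [0, 1];
    the short exposure receives the complementary weight.
    """
    long_img = long_img.astype(np.float64)
    short_img = short_img.astype(np.float64)
    return weight * long_img + (1.0 - weight) * short_img

# Toy example: a fully saturated long-exposure pixel gets weight 0,
# so the fused value comes entirely from the (gain-compensated) short exposure.
long_exp = np.array([[255.0, 100.0]])
short_exp = np.array([[300.0, 98.0]])
w = np.array([[0.0, 0.5]])
fused = fuse_exposures(long_exp, short_exp, w)
print(fused)  # [[300.  99.]]
```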
In one embodiment, acquiring the first exposure image and the second exposure image of the target subject comprises the following steps:
shooting the target subject with the configured, differing exposure times, acquiring first exposure row data and second exposure row data of the target subject, and applying a linearizing mapping to the first and second exposure row data to obtain first mapping data and second mapping data; and
applying gain compensation to the first mapping data and the second mapping data, and obtaining the first exposure image and the second exposure image from the gain-compensated data.
In one embodiment, before obtaining the weight value for image fusion from the pixel data at the same positions in the first and second exposure images, the method further comprises the following steps:
buffering the first exposure row data and the second exposure row data, performing an image matching computation on them, and obtaining, from the result of the image matching computation, the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image; and
aligning the data of the second exposure image with the data of the first exposure image according to the motion vector.
In one embodiment, performing the image matching computation on the first and second exposure row data comprises the following steps:
taking a first data block from the first exposure row data and a second data block from within a preset search window of the second exposure row data, the first and second data blocks being the same size;
computing the pixel-value difference of each pair of corresponding pixels in the first and second data blocks, taking the absolute value of each difference, and summing the absolute values to obtain an image matching sum; and
traversing different second data blocks within the preset search window and selecting the target data block with the smallest image matching sum.
Obtaining the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image from the result of the image matching computation comprises the following step:
determining the motion vector from the relative positions of the first data block and the target data block.
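The block-matching steps above amount to an exhaustive sum-of-absolute-differences (SAD) search. The sketch below assumes a square block and a square search window of a given radius; the block size, window size, and scan order are illustrative choices, not values fixed by the text.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum()

def match_block(ref, search, top, left, bsize, radius):
    """Return the (dy, dx) motion vector of the bsize x bsize reference
    block at (top, left) within a +/-radius search window of `search`."""
    ref_block = ref[top:top + bsize, left:left + bsize]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > search.shape[0] or x + bsize > search.shape[1]:
                continue  # candidate block falls outside the image
            cost = sad(ref_block, search[y:y + bsize, x:x + bsize])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# A block shifted by (1, 2) between the two exposures is recovered exactly.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (16, 16))
img2 = np.roll(np.roll(img1, 1, axis=0), 2, axis=1)
print(match_block(img1, img2, 4, 4, 4, 3))  # (1, 2)
```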
In one embodiment, the image synthesis method further comprises the following steps:
computing the texture complexity of the first data block; and
setting the motion vector to zero if the texture complexity of the first data block is below a texture complexity threshold;
or,
computing the texture complexity of the target data block; and
setting the motion vector to zero if the texture complexity of the target data block is below the texture complexity threshold.
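The texture gate above can be sketched as follows. The text does not fix a texture measure; mean absolute deviation from the block mean is used here purely as an illustrative stand-in, the idea being that a nearly flat block cannot be matched reliably, so its motion vector is discarded.

```python
import numpy as np

def texture_complexity(block):
    """Illustrative texture measure: mean absolute deviation from the mean.
    (Hypothetical choice; the text does not specify a formula.)"""
    block = block.astype(np.float64)
    return np.abs(block - block.mean()).mean()

def gate_motion_vector(mv, block, threshold):
    """Zero the motion vector when the block is too flat to match reliably."""
    if texture_complexity(block) < threshold:
        return (0, 0)
    return mv

flat = np.full((4, 4), 128)
print(gate_motion_vector((1, 2), flat, threshold=2.0))  # (0, 0)
```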
In one embodiment, obtaining the weight value for image fusion from the pixel data at the same positions in the first and second exposure images comprises the following steps:
obtaining a first brightness weight, a first contrast weight, and a first texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the first exposure image, and obtaining a first weight value for the first exposure image from the first brightness weight, the first contrast weight, and the first texture complexity weight;
obtaining a second brightness weight, a second contrast weight, and a second texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the second exposure image, and obtaining a second weight value for the second exposure image from the second brightness weight, the second contrast weight, and the second texture complexity weight; and
normalizing and noise-filtering the first and second weight values to obtain the weight value for image fusion.
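One hedged reading of this weighting scheme: each exposure gets a score built from brightness, contrast, and texture cues, and the pair of scores is normalized per pixel. The hat-shaped brightness term and the multiplicative combination below are assumptions for illustration only, and the noise-filtering step is omitted.

```python
import numpy as np

def exposure_weight(lum, contrast, texture, max_val=255.0):
    """Combine brightness, contrast, and texture cues into one score.

    Hypothetical scoring: well-exposed mid-tones score high (hat-shaped
    brightness term), and higher contrast or richer texture raises the
    score. The multiplicative combination is an assumption, not the
    formula of the invention.
    """
    brightness_w = 1.0 - np.abs(lum / max_val - 0.5) * 2.0
    return brightness_w * (1.0 + contrast) * (1.0 + texture)

def normalize_weights(w1, w2, eps=1e-9):
    """Normalize the two per-pixel weights so they sum to (nearly) one."""
    total = w1 + w2 + eps
    return w1 / total, w2 / total

w_long = exposure_weight(lum=250.0, contrast=0.1, texture=0.2)   # near-saturated
w_short = exposure_weight(lum=130.0, contrast=0.3, texture=0.2)  # mid-tone
n_long, n_short = normalize_weights(w_long, w_short)
print(n_long < n_short)  # True: the saturated long exposure is down-weighted
```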
In one embodiment, the image synthesis method further comprises the following step:
obtaining the mean brightness of the composite image, adjusting the exposure parameters of the first exposure image and of the second exposure image according to the mean brightness, and computing the gain compensation parameters from the exposure parameters of the first and second exposure images respectively.
In one embodiment, the image synthesis method further comprises the following step:
obtaining histogram information of the composite image, and performing dynamic range compression on the composite image according to the histogram information.
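Histogram-driven dynamic range compression can be illustrated with plain histogram equalization, which remaps pixel values through the normalized cumulative histogram. The text does not specify the exact compression curve, so this is only one possible realization, mapping a wider-range fused image down to 8 bits.

```python
import numpy as np

def compress_dynamic_range(hdr, out_max=255):
    """Histogram-based tone mapping: build the image histogram, turn its
    cumulative distribution into a lookup curve, and remap pixel values.
    (Plain histogram equalization, used here as an illustrative curve.)"""
    hdr = hdr.astype(np.int64)
    hist = np.bincount(hdr.ravel(), minlength=hdr.max() + 1)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalize curve to [0, 1]
    return np.round(cdf[hdr] * out_max).astype(np.uint8)

# A 10-bit fused image is compressed to the 8-bit output range.
hdr = np.array([[0, 100, 500, 1023]])
ldr = compress_dynamic_range(hdr)
print(ldr.min(), ldr.max())  # 0 255
```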
An image synthesis system comprises:
an image acquisition unit configured to acquire a first exposure image and a second exposure image of a target subject, wherein the exposure time of the first exposure image is longer than that of the second exposure image;
a weight analysis unit configured to obtain a weight value for image fusion from the pixel data at the same positions in the first and second exposure images; and
an image synthesis unit configured to fuse the first exposure image and the second exposure image according to the weight value to obtain a composite image.
In the image synthesis system above, the image acquisition unit acquires two exposure images of the target subject with different exposure times, the weight analysis unit analyzes the pixel data at the same positions in the two images to obtain a fusion weight value, and the image synthesis unit fuses the two images using that weight value to obtain a composite image. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
In one embodiment, the image acquisition unit shoots the target subject with the configured, differing exposure times, acquires first exposure row data and second exposure row data of the target subject, applies a linearizing mapping to the first and second exposure row data to obtain first mapping data and second mapping data, applies gain compensation to the first and second mapping data, and obtains the first exposure image and the second exposure image from the gain-compensated data.
In one embodiment, the image synthesis system further comprises a motion estimation unit configured to buffer the first and second exposure row data, perform an image matching computation on them, obtain from the result of the image matching computation the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image, and align the data of the second exposure image with the data of the first exposure image according to the motion vector.
In one embodiment, the motion estimation unit takes a first data block from the first exposure row data and a second data block from within a preset search window of the second exposure row data, the two blocks being the same size; computes the pixel-value difference of each pair of corresponding pixels in the two blocks, takes the absolute value of each difference, and sums the absolute values to obtain an image matching sum; traverses different second data blocks within the preset search window to select the target data block with the smallest image matching sum; and determines the motion vector from the relative positions of the first data block and the target data block.
In one embodiment, the motion estimation unit is further configured to compute the texture complexity of the first data block and set the motion vector to zero if that texture complexity is below a texture complexity threshold; or to compute the texture complexity of the target data block and set the motion vector to zero if that texture complexity is below the texture complexity threshold.
In one embodiment, the weight analysis unit obtains a first brightness weight, a first contrast weight, and a first texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the first exposure image, and obtains a first weight value for the first exposure image from the first brightness weight, the first contrast weight, and the first texture complexity weight;
the weight analysis unit obtains a second brightness weight, a second contrast weight, and a second texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the second exposure image, and obtains a second weight value for the second exposure image from the second brightness weight, the second contrast weight, and the second texture complexity weight; and
the weight analysis unit normalizes and noise-filters the first and second weight values to obtain the weight value for image fusion.
In one embodiment, the image synthesis system further comprises an exposure control unit configured to obtain the mean brightness of the composite image, adjust the exposure parameters of the first and second exposure images according to the mean brightness, and compute the gain compensation parameters from the exposure parameters of the first and second exposure images respectively.
In one embodiment, the image synthesis system further comprises a dynamic compression unit configured to obtain histogram information of the composite image and perform dynamic range compression on the composite image according to the histogram information.
An image synthesis device comprises a camera and an image processor connected to each other;
the camera is configured to shoot the target subject; and
the image processor is configured to execute the image synthesis method described above.
The image synthesis device above can be used to synthesize images. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
A readable storage medium stores an executable program which, when executed by a processor, implements the image synthesis method described above.
Through its stored executable program, the readable storage medium above realizes image synthesis. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
A computer device comprises a memory, a processor, and an executable program stored in the memory and runnable on the processor, the processor implementing the image synthesis method described above when executing the program.
Through the executable program running on its processor, the computer device above realizes image synthesis. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
Brief Description of the Drawings
FIG. 1 is a diagram of an application environment of an image synthesis method according to an embodiment;
FIG. 2 is a schematic flowchart of an image synthesis method according to an embodiment;
FIG. 3 is a schematic flowchart of an image synthesis method according to another embodiment;
FIG. 4 is a schematic flowchart of an image synthesis method according to yet another embodiment;
FIG. 5 is a schematic flowchart of step S140 of an image synthesis method according to an embodiment;
FIG. 6 is a schematic flowchart of step S140 of an image synthesis method according to another embodiment;
FIG. 7 is a schematic flowchart of step S120 of an image synthesis method according to an embodiment;
FIG. 8 is a schematic flowchart of an image synthesis method according to an embodiment;
FIG. 9 is a schematic flowchart of an image synthesis method according to another embodiment;
FIG. 10 is a schematic structural diagram of an image synthesis system according to an embodiment;
FIG. 11 is a schematic structural diagram of an image synthesis system according to another embodiment;
FIG. 12 is a schematic structural diagram of an image synthesis system according to yet another embodiment;
FIG. 13 is a structural flowchart of an image synthesis system according to a further embodiment;
FIG. 14 is a schematic structural diagram of an image synthesis device according to an embodiment;
FIG. 15 is a flowchart of a specific application of an image synthesis method according to an embodiment;
FIG. 16 is an example diagram of the Bayer data format of images in an image synthesis method according to an embodiment;
FIG. 17 is an example diagram of the SAD computation for motion estimation in an image synthesis method according to an embodiment;
FIG. 18 is a schematic flowchart of motion estimation in an image synthesis method according to an embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit its scope of protection.
It should be noted that the terms "first/second/third" in the embodiments of the present invention merely distinguish similar objects and do not imply any particular ordering of those objects.
FIG. 1 is a schematic diagram of the hardware entities that perform image synthesis in one embodiment of the present invention. FIG. 1 shows a terminal device equipped with a camera; the terminal device shoots the target subject through the camera and processes the image data to obtain a composite image. The terminal device may be any of various types of electronic devices, such as a mobile phone, desktop computer, PC, all-in-one machine, tablet computer, action camera, or dash camera. The image synthesis method of the present invention can be implemented in such a terminal device.
Referring to FIG. 2, a schematic flowchart of the image synthesis method of one embodiment, the image synthesis method in this embodiment comprises the following steps:
Step S110: acquire a first exposure image and a second exposure image of the target subject, wherein the exposure time of the first exposure image is longer than that of the second exposure image.
In this step, the first and second exposure images are obtained by shooting the same target subject with different exposure times. To capture more image detail, the difference between the two exposure times may be greater than a preset time span.
Step S120: obtain a weight value for image fusion from the pixel data at the same positions in the first and second exposure images.
In this step, the first and second exposure images are the same size; the pixel data at the same position reflects how the true image information differs under the two exposure times, and analyzing that difference yields the corresponding weight value.
Step S130: fuse the first exposure image and the second exposure image according to the weight value to obtain a composite image.
In this step, image fusion means fusing each pair of corresponding pixel data in the two images.
In the image synthesis method above, two exposure images of the target subject with different exposure times are acquired, the pixel data at the same positions in the two images is analyzed to obtain a fusion weight value, and the two images are fused using that weight value to obtain a composite image. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which raises the dynamic range of the composite image. At the same time, only two images with different exposure times are needed, so compared with the traditional multi-image synthesis process the computational load is reduced and real-time image processing requirements can be met.
In one embodiment, as shown in FIG. 3, acquiring the first and second exposure images of the target subject comprises the following steps:
Step S111: shoot the target subject with the configured, differing exposure times; acquire first exposure row data and second exposure row data of the target subject; and apply a linearizing mapping to the first and second exposure row data to obtain first mapping data and second mapping data.
Step S112: apply gain compensation to the first and second mapping data, and obtain the first and second exposure images from the gain-compensated data.
In this embodiment, a captured image may consist of successive rows of exposure row data. A sensor's response to light intensity is generally non-linear, and using the raw captured data directly would give the composite image artifacts such as uneven brightness or brightness inversion. The captured data can therefore be corrected by applying a linearizing mapping to the exposure row data, so that the mapped data is linearly related to the incident light intensity at capture. Because image data captured with different exposure times sits at different levels, applying gain compensation to the first and second mapping data aligns the two to the same level, allowing the pixel data to be analyzed accurately and facilitating image synthesis.
Specifically, the linearizing mapping can be implemented with a lookup table (LUT): the input value Iin indexes the table, and the table entry is the linearized value. The LUT describes a curve that maps non-linear data to linear data. In the formulas below, Iin0 denotes the pixel data of the first exposure row and Ilinear0 the linearized first-row data; Iin1 denotes the pixel data of the second exposure row and Ilinear1 the linearized second-row data:
Ilinear0 = LUT0[Iin0]
Ilinear1 = LUT1[Iin1]
Gain compensation is then applied to the image data Ilinear0 and Ilinear1 using the formulas below, where g0 and g1 are the gain compensation parameters and Ig0 and Ig1 are the compensated output values; g0 is the gain compensation parameter for the first mapping data and g1 for the second:
Ig0 = Ilinear0 * g0
Ig1 = Ilinear1 * g1
Gain compensation aligns data from different exposure times to the same level. For example, if a pixel in the first exposure row reads 100 and the same position in the second exposure row reads 49, and the second exposure time is half the first, then 100 and 49 are not on the same level; the exposure-time ratio can be converted into a gain, multiplying the second row's 49 by 2 to get 98, after which 100 and 98 are blended according to the obtained weights. As another example, if the first-exposure pixel reads 255 and the second-exposure pixel reads 150, the values 255 and 150 * 2 = 300 are blended according to the obtained weights; here the 255 is presumably a value clipped by overexposure, so its weight should be very small, the blended pixel value will be around 300, and the overexposed pixel's value is thus recovered.
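The LUT mapping and gain compensation above, with the worked numbers from this paragraph, can be sketched as follows. The identity LUTs are placeholders only, since a real table would invert the sensor's non-linear response curve.

```python
import numpy as np

# Hypothetical linearization LUTs: identity here for simplicity; a real
# sensor LUT would map the non-linear raw values onto a linear scale.
LUT0 = np.arange(1024)
LUT1 = np.arange(1024)

def linearize_and_compensate(row0, row1, g0=1.0, g1=2.0):
    """Apply each row's LUT, then scale so both exposures sit at the same
    level (here g1 = 2, the long/short exposure-time ratio from the text)."""
    Ig0 = LUT0[row0] * g0
    Ig1 = LUT1[row1] * g1
    return Ig0, Ig1

# Worked example from the text: the short exposure's 49 is doubled to 98
# to match the long exposure's 100, and a clipped long-exposure 255 is
# outweighed by the short exposure's 150 * 2 = 300.
long_row = np.array([100, 255])
short_row = np.array([49, 150])
Ig0, Ig1 = linearize_and_compensate(long_row, short_row)
print(Ig0, Ig1)  # → [100. 255.] [ 98. 300.]
```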
In one of the embodiments, as shown in FIG. 4, the following steps are further performed before the step of obtaining the weight values for image fusion from the pixel data at identical positions in the first exposure image and the second exposure image:
Step S140: buffer the first exposure row data and the second exposure row data, perform an image matching calculation on the first exposure row data and the second exposure row data, and obtain, from the result of the image matching calculation, the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image;
Step S150: align the data of the second exposure image with the data of the first exposure image according to the motion vector.
In this embodiment, if the target subject moves quickly, it undergoes a displacement within the interval between the exposure of the first exposure image and the exposure of the second exposure image. Directly fusing the first exposure image data and the second exposure image data point by point in the presence of such displacement produces obvious smearing or ghosting, blurring image edges and degrading the quality of the fused image. Therefore, an image matching calculation is performed on the original first exposure row data and second exposure row data, the positional relationship of similar pixels is determined from the matching result, and the motion vector of the target subject between the exposure images is obtained. This motion vector is used to align the data of the second exposure image with the data of the first exposure image, so that the pixel data are aligned during image fusion, which resolves the smearing or ghosting problem.
It should be noted that, since the displacement of the target subject falls within a certain range, multiple rows of exposure row data need to be processed when obtaining the motion vector. The first exposure row data and the second exposure row data can therefore be buffered in advance so that they can be accessed directly during image matching, which speeds up the motion vector calculation.
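The alignment operation of step S150 can be sketched as follows. This is a minimal illustration assuming a single motion vector (dy, dx) applies to the region being aligned, with `np.roll` standing in for the shift (border handling is ignored here); the names are illustrative.

```python
import numpy as np

def align_to_reference(second_img, mv):
    """Shift the second-exposure data by the motion vector (dy, dx)
    so that it lines up with the first (reference) exposure data."""
    dy, dx = mv
    return np.roll(second_img, shift=(-dy, -dx), axis=(0, 1))

img = np.arange(16.0).reshape(4, 4)
aligned = align_to_reference(img, (1, 0))  # subject moved down by one row
```

After this shift, pixel (x, y) in the aligned second image corresponds to pixel (x, y) in the first image, so the subsequent fusion can proceed point by point.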
Specifically, the image synthesis method includes the following steps:

shooting the target subject with the different configured exposure times, obtaining the first exposure row data and the second exposure row data of the target subject respectively, and performing linearization mapping on the first exposure row data and the second exposure row data respectively to obtain first mapping data and second mapping data;

performing gain compensation on the first mapping data and the second mapping data respectively, and obtaining a first exposure image and a second exposure image respectively from the gain-compensated data;

buffering the first exposure row data and the second exposure row data, performing an image matching calculation on the first exposure row data and the second exposure row data, obtaining, from the result of the image matching calculation, the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image, and aligning the data of the second exposure image with the data of the first exposure image according to the motion vector;

obtaining the weight values for image fusion from the pixel data at identical positions in the aligned first exposure image and second exposure image, and fusing the first exposure image and the second exposure image according to the weight values to obtain a composite image.
In one of the embodiments, as shown in FIG. 5, the step of performing the image matching calculation on the data of the first exposure image and the data of the second exposure image includes the following steps:
Step S141: obtain a first data block from the first exposure row data, and obtain a second data block within a preset search window of the second exposure row data, the first data block and the second data block being of the same size;

Step S142: obtain the pixel value difference of each pair of corresponding pixels in the first data block and the second data block, take the absolute value of each difference and sum them to obtain an image matching sum;

Step S143: traverse different second data blocks within the preset search window, and determine the target data block with the smallest image matching sum;

the step of obtaining, from the result of the image matching calculation, the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image includes the following step:

Step S144: determine the motion vector according to the relative positions of the first data block and the target data block.
In this embodiment, a first data block is selected from the first exposure row data as the reference block, and a second data block of the same size is selected from the second exposure row data; the pixels in the first data block correspond one-to-one to the pixels in the second data block. The pixel value difference of each pair of corresponding pixels in the two blocks is calculated, and the absolute values of all the differences are summed to obtain the image matching sum. If a data block in the second exposure row data matches the first data block, the image matching sum of that block and the first data block should be close to zero; allowing for image error, the matching sum at the true match should simply be as small as possible. Different second data blocks are therefore traversed within the preset search window of the second exposure row data, the target data block is determined by the condition that the image matching sum is minimal, and the motion vector can then be determined from the positional relationship between the target data block and the first data block.
Optionally, the motion vector may be obtained as the vector difference between the center coordinates of the first data block and the center coordinates of the target data block.
It should be noted that, since the interval between the exposure time of the first exposure image and that of the second exposure image is generally not very long, the motion of the target subject is limited, so it is unnecessary to obtain second data blocks from the entire second exposure row data. A search window can be set and the second data blocks traversed only within it; the center of the search window may coincide with the center of the first data block, and the search window covers a larger area than the first data block.
Specifically, a block-based SAD (Sum of Absolute Differences) algorithm, an image matching algorithm, can be used. The specific process of applying the SAD algorithm is as follows:
Buffer the first exposure row data and the second exposure row data respectively, and obtain 5×5 data blocks from each (blocks of other sizes may also be used). The SAD of corresponding data blocks is calculated as follows:
SADij = Σm Σn |L(x+m, y+n) − S(x+i+m, y+j+n)|, m ∈ [−2, 2], n ∈ [−2, 2]

mv = argmin(SADij), i ∈ (−N, N), j ∈ (−N, N)
Here L(x+m, y+n) is the pixel value at offset (m, n) within the 5×5 data block around coordinate (x, y) in the first exposure row data, and S(x+i+m, y+j+n) is the pixel value at coordinate (x+i+m, y+j+n) in the second exposure row data. The SAD algorithm subtracts the pixel values at corresponding positions and sums the absolute values; the obtained motion displacement is the motion vector of the offset with the smallest SAD within the search window. For example, if the SAD between the 5×5 block around (x, y) in the first exposure row data and the 5×5 block around (x+i, y+j) in the second exposure row data is minimal, then the motion vector mv is (i, j). Since the interval between the exposure times of the first and second exposure images is not very long and the motion of the target subject is limited, the search window generally does not need to be large; N is typically set to 4 or 8. If N is 4, the search window ranges over [(−4, −4), (4, 4)], i.e. a 9×9 window.
Further, the motion vector is obtained from the original exposure row data, which are generally in the Bayer format. Calculating the SAD requires phase alignment of the Bayer data; for example, a data block centered on an R sample cannot be directly subtracted element by element from a data block centered on a G sample. Therefore, when selecting second data blocks within the search window, the block is generally moved with a step size of 2, which guarantees that the SAD is computed between data blocks of the same phase; the precision of the obtained motion vector is thus 2.
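The SAD search described above can be sketched as follows. This is a minimal illustration on a plain grayscale array with names chosen for illustration; for Bayer data, `step` would be set to 2 to keep the compared blocks on the same color phase, as noted in the text.

```python
import numpy as np

def sad_motion_vector(first, second, x, y, n=4, half=2, step=1):
    """Find the motion vector (i, j) minimizing the SAD between the
    (2*half+1)x(2*half+1) block around (x, y) in `first` and the block
    around (x+i, y+j) in `second`, searching i, j in [-n, n].
    For Bayer-format data, step=2 preserves the color phase."""
    ref = first[x - half:x + half + 1, y - half:y + half + 1]
    best, mv = np.inf, (0, 0)
    for i in range(-n, n + 1, step):
        for j in range(-n, n + 1, step):
            cand = second[x + i - half:x + i + half + 1,
                          y + j - half:y + j + half + 1]
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, mv = sad, (i, j)
    return mv

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(2, 1), axis=(0, 1))  # subject moved by (2, 1)
mv = sad_motion_vector(img, shifted, 16, 16)
```

At the true offset the SAD is zero, so the search recovers the displacement (2, 1) exactly; with N = 4 the search covers a 9×9 window, matching the example above.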
In one of the embodiments, as shown in FIG. 6, the image synthesis method further includes the following steps:
Step S145: calculate the texture complexity of the first data block; if the texture complexity of the first data block is less than a texture complexity threshold, set the motion vector to zero;

or,

Step S146: calculate the texture complexity of the target data block; if the texture complexity of the target data block is less than the texture complexity threshold, set the motion vector to zero.
In this embodiment, after the motion vector is obtained, the texture complexity of the first data block or the target data block needs to be calculated. If the texture of a data block is too flat, i.e. its texture complexity is too low, the validity of the obtained motion vector is greatly reduced; therefore, when the texture complexity is less than the texture complexity threshold, the motion vector is simply set to zero, which prevents the fused image quality from degrading due to erroneous estimates. In addition, overexposed and underexposed regions generally contain no texture information, so zeroing the motion vector also avoids the problem that the motion vector cannot be evaluated after overexposure or underexposure. The texture complexity of a data block can be calculated as the variance of its luminance.
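The texture gate can be sketched as follows, using the luminance variance mentioned above as the complexity measure; the threshold value and function name are illustrative assumptions.

```python
import numpy as np

def gated_motion_vector(block, mv, var_threshold=1.0):
    """Zero the motion vector when the block's luminance variance
    (used here as the texture complexity measure) is below threshold,
    since a flat or clipped block yields an unreliable estimate."""
    return mv if np.var(block) >= var_threshold else (0, 0)

flat = np.full((5, 5), 128.0)             # flat (or clipped) block: no usable texture
textured = np.arange(25.0).reshape(5, 5)  # strong gradient: keep the vector
mv_flat = gated_motion_vector(flat, (2, 1))
mv_tex = gated_motion_vector(textured, (2, 1))
```

A saturated block (all 255, for instance) also has zero variance, so the same gate handles the overexposure case noted above.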
Specifically, the image synthesis method includes the following steps:

obtaining a first data block from the first exposure row data and a second data block within a preset search window of the second exposure row data, the first data block and the second data block being of the same size; obtaining the pixel value difference of each pair of corresponding pixels in the first data block and the second data block, taking the absolute value of each difference and summing them to obtain an image matching sum; traversing different second data blocks within the preset search window and determining the target data block with the smallest image matching sum;

determining the motion vector according to the relative positions of the first data block and the target data block; calculating the texture complexity of the first data block and, if it is less than the texture complexity threshold, setting the motion vector to zero; or calculating the texture complexity of the target data block and, if it is less than the texture complexity threshold, setting the motion vector to zero.
In one of the embodiments, as shown in FIG. 7, the step of obtaining the weight values for image fusion from the pixel data at identical positions in the first exposure image and the second exposure image includes the following steps:
Step S121: obtain a first luminance weight, a first contrast weight and a first texture complexity weight from the luminance, contrast and texture complexity, respectively, of the pixel at the target position in the first exposure image; obtain a first weight value of the first exposure image from the first luminance weight, the first contrast weight and the first texture complexity weight;

Step S122: obtain a second luminance weight, a second contrast weight and a second texture complexity weight from the luminance, contrast and texture complexity, respectively, of the pixel at the target position in the second exposure image; obtain a second weight value of the second exposure image from the second luminance weight, the second contrast weight and the second texture complexity weight;

Step S123: normalize and noise-filter the first weight value and the second weight value to obtain the weight values for image fusion.
In this embodiment, corresponding weights are obtained from the luminance, contrast and texture complexity of the pixel at the target position in each exposure image, and combining these weights yields the weight value of that exposure image; the target position in the first exposure image and the target position in the second exposure image are the same position. The first weight value and the second weight value also need to be normalized to simplify the fusion calculation. The normalized weight values generally contain considerable noise, and using them directly in fusion would carry excessive noise into the composite image and degrade its quality; noise filtering is therefore applied to the weight values to reduce noise interference.
Optionally, the weights include a luminance weight, a contrast weight and a texture complexity weight; the weight value may be computed either as the sum or as the product of the three weights. Median filtering, mean filtering, cross bilateral filtering or the like may be used for the noise filtering.
It should be noted that the luminance weight depends on the luminance value and the central luminance value: the relationship between luminance and luminance weight is a bell-shaped curve, so the closer a pixel's luminance is to the central luminance, the better exposed the pixel is and the larger its luminance weight. The larger the contrast, the larger the contrast weight; the larger the texture complexity, the larger the texture complexity weight. The texture complexity weight can be obtained by extracting pixel edge information with the Sobel operator or the Laplacian operator; the larger the edge response, the larger the texture complexity weight.
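The weight computation of steps S121 to S123 can be sketched as follows. This is a minimal illustration using a Gaussian as the bell-shaped luminance curve and the product form of the three weights; the `center` and `sigma` values are illustrative assumptions, and the noise-filtering step is omitted for brevity.

```python
import numpy as np

def fusion_weights(lum0, lum1, contrast0, contrast1, edge0, edge1,
                   center=0.5, sigma=0.2):
    """Per-pixel fusion weights: a bell-shaped luminance weight around
    `center` multiplied by the contrast and edge (texture) weights,
    then normalized so the two exposures' weights sum to 1."""
    def bell(lum):
        return np.exp(-((lum - center) ** 2) / (2 * sigma ** 2))
    w0 = bell(lum0) * contrast0 * edge0
    w1 = bell(lum1) * contrast1 * edge1
    total = w0 + w1 + 1e-12  # guard against a zero denominator
    return w0 / total, w1 / total

# A well-exposed pixel in the second exposure outweighs a nearly
# clipped one in the first exposure.
w0, w1 = fusion_weights(lum0=0.99, lum1=0.5, contrast0=1.0, contrast1=1.0,
                        edge0=1.0, edge1=1.0)
```

Because the near-clipped pixel sits far out on the bell curve, its weight is small, which is exactly the behavior that restores overexposed pixels in the fusion example given earlier.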
In one of the embodiments, as shown in FIG. 8, the image synthesis method further includes the following step:
Step S160: obtain the mean luminance of the composite image, adjust the exposure parameters of the first exposure image and the second exposure image according to the mean luminance, and calculate the gain compensation parameters from the exposure parameters of the first exposure image and the second exposure image, respectively.
In this embodiment, the exposure parameters of the first exposure image and the second exposure image can be adjusted according to the mean luminance of the composite image. Through this feedback adjustment mechanism, the luminance of the final composite image can be made to meet the actual requirements, avoiding overexposure or underexposure. At the same time, the exposure parameters are closely related to the gain compensation, so the specific gain compensation parameter values can be calculated while adjusting the exposure parameters; adjusting the exposure parameters and the gain compensation parameters together achieves the image adjustment.
Specifically, the image synthesis method includes the following steps:

shooting the target subject with the different configured exposure times, obtaining the first exposure row data and the second exposure row data of the target subject respectively, performing linearization mapping on the first exposure row data and the second exposure row data respectively to obtain first mapping data and second mapping data, performing gain compensation on the first mapping data and the second mapping data respectively, and obtaining a first exposure image and a second exposure image respectively from the gain-compensated data;

obtaining the weight values for image fusion from the pixel data at identical positions in the first exposure image and the second exposure image, and fusing the first exposure image and the second exposure image according to the weight values to obtain a composite image;

obtaining the mean luminance of the composite image, adjusting the exposure parameters of the first exposure image and the second exposure image according to the mean luminance, and calculating the gain compensation parameters from the exposure parameters of the first exposure image and the second exposure image, respectively.
Optionally, the exposure parameter may be the product of the exposure time and the exposure gain. An estimated exposure parameter can be determined from the mean luminance; a first exposure coefficient and a second exposure coefficient are set, the product of the estimated exposure parameter and the first exposure coefficient is used as the exposure parameter of the first exposure image, and the product of the estimated exposure parameter and the second exposure coefficient is used as the exposure parameter of the second exposure image. Since the role of gain compensation is to align data of different exposure times to the same level, the gain compensation parameters can be determined from the first exposure coefficient and the second exposure coefficient.
For example, if the exposure parameter estimated from the mean luminance of the composite image is ET, then the exposure parameter of the first exposure image is ETl = ET*get0 and that of the second exposure image is ETs = ET*get1, where get0 is greater than 1, making the first exposure time longer so that the captured image contains more detail in dark regions (some bright regions may of course be overexposed), and get1 is less than 1, making the second exposure time shorter so that the captured image contains more detail in bright regions (some dark regions may of course be underexposed). Fusing the first exposure image and the second exposure image in this way better extends the dynamic range of the image and captures more image detail. The specific values of get0 and get1 can be set according to the actual mean luminance: if the mean luminance is low relative to the standard luminance range, get0 and get1 can be increased appropriately; if it is high relative to the standard luminance range, they can be decreased appropriately. The gain compensation parameter g0 of the first exposure image and the gain compensation parameter g1 of the second exposure image can be calculated from get0 and get1, generally as follows:
g0 = 1

g1 = ETl / ETs = get0 / get1
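The feedback computation above can be sketched as follows. This is a minimal illustration assuming g1 is the ratio of the two exposure parameters (consistent with gain compensation aligning the short exposure to the long one); the get0 and get1 values are illustrative, not prescribed by the text.

```python
def exposure_and_gains(et, get0=2.0, get1=0.5):
    """Derive the two exposure parameters and the gain-compensation
    parameters from an estimated exposure ET: get0 > 1 lengthens the
    first exposure, get1 < 1 shortens the second, and g1 = ETl / ETs
    aligns the short exposure to the long one."""
    et_long = et * get0       # ETl: long exposure, more dark detail
    et_short = et * get1      # ETs: short exposure, more bright detail
    g0 = 1.0
    g1 = et_long / et_short   # equals get0 / get1
    return et_long, et_short, g0, g1

et_l, et_s, g0, g1 = exposure_and_gains(10.0)
```

With these illustrative coefficients the short exposure is one quarter of the long one, so its data are multiplied by g1 = 4 before fusion.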
In one of the embodiments, as shown in FIG. 9, the image synthesis method further includes the following step:
Step S170: obtain the histogram information of the composite image, and perform dynamic range compression on the composite image according to the histogram information.
In this embodiment, the histogram information of the composite image is used to compress its dynamic range. Since the composite image data generally have a large dynamic range, the bit width representing the image data is correspondingly large, meaning that the composite image has a larger bit width than the original images; appropriately compressing the dynamic range of the image facilitates the processing and storage of the image data.
Optionally, many different compression methods can be used for dynamic range compression, such as a global curve mapping algorithm, which is relatively simple to implement: a mapping curve is computed dynamically from the statistical histogram of the image, and the high dynamic range image is compressed into a lower dynamic range image. Local mapping algorithms, such as histogram equalization, regional histogram equalization or homomorphic filtering, can also be used and give better compression results. It should be noted that dynamic range compression is performed only to adjust the bit width for image data processing and storage, and does not greatly affect the dynamic range of the composite image.
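The global curve mapping can be sketched as follows. This is a minimal illustration that builds the curve from the image histogram via its cumulative distribution (i.e. a histogram-equalization-style curve, one of several possible choices); a 16-bit composite and 8-bit output are assumed for illustration.

```python
import numpy as np

def global_curve_compress(img16, out_bits=8):
    """Global curve mapping: derive a mapping curve from the image
    histogram (here via the normalized cumulative distribution) and
    remap a wide 16-bit image down to `out_bits` of dynamic range."""
    hist, _ = np.histogram(img16, bins=65536, range=(0, 65536))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    curve = (cdf * (2 ** out_bits - 1)).astype(np.uint8)
    return curve[img16]                          # apply curve as a LUT

rng = np.random.default_rng(1)
hdr = rng.integers(0, 65536, size=(8, 8), dtype=np.uint16)
ldr = global_curve_compress(hdr)
```

Applying the curve as a lookup table mirrors the LUT-based linearization used earlier in the pipeline, just in the opposite direction.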
Based on the above image synthesis method, the present invention further provides an image synthesis system; embodiments of the image synthesis system of the present invention are described in detail below.
Referring to FIG. 10, which is a schematic structural diagram of an image synthesis system according to an embodiment of the present invention, the image synthesis system in this embodiment includes:
an image acquisition unit 210, configured to respectively acquire a first exposure image and a second exposure image of the target subject, where the exposure time of the first exposure image is greater than the exposure time of the second exposure image;

a weight analysis unit 220, configured to obtain weight values for image fusion from the pixel data at identical positions in the first exposure image and the second exposure image;

an image synthesis unit 230, configured to fuse the first exposure image and the second exposure image according to the weight values to obtain a composite image.
According to the above image synthesis system, the image acquisition unit 210 acquires two exposure images of the target subject with different exposure times, the weight analysis unit 220 analyzes the pixel data at identical positions in the two exposure images to obtain the weight values for image fusion, and the image synthesis unit 230 fuses the two exposure images using these weight values to obtain a composite image. Because the composite image is synthesized from images with different exposure times, each pixel position can capture more image detail, which increases the dynamic range of the composite image. At the same time, only two images with different exposure times are needed in the synthesis process; compared with the traditional technique of synthesizing from many images, this reduces the computational load and can meet the real-time requirements of image processing.
In one of the embodiments, the image acquisition unit 210 shoots the target subject with the different configured exposure times, obtains the first exposure row data and the second exposure row data of the target subject respectively, performs linearization mapping on the first exposure row data and the second exposure row data respectively to obtain first mapping data and second mapping data, performs gain compensation on the first mapping data and the second mapping data respectively, and obtains the first exposure image and the second exposure image respectively from the gain-compensated data.
In one of the embodiments, as shown in FIG. 11, the image synthesis system further includes a motion estimation unit 240, configured to buffer the first exposure row data and the second exposure row data, perform an image matching calculation on the first exposure row data and the second exposure row data, obtain, from the result of the image matching calculation, the motion vector of the target subject in the first exposure image relative to the target subject in the second exposure image, and align the data of the second exposure image with the data of the first exposure image according to the motion vector.
In one of the embodiments, the motion estimation unit 240 obtains a first data block from the first exposure row data and a second data block within a preset search window of the second exposure row data, the first data block and the second data block being of the same size; obtains the pixel value difference of each pair of corresponding pixels in the first data block and the second data block, takes the absolute value of each difference and sums them to obtain an image matching sum; traverses different second data blocks within the preset search window and determines the target data block with the smallest image matching sum; and determines the motion vector according to the relative positions of the first data block and the target data block.
In one of the embodiments, the motion estimation unit 240 is further configured to calculate the texture complexity of the first data block and, if it is less than the texture complexity threshold, set the motion vector to zero; or to calculate the texture complexity of the target data block and, if it is less than the texture complexity threshold, set the motion vector to zero.
In one embodiment, the weight analysis unit 220 obtains a first brightness weight, a first contrast weight, and a first texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the first exposure image, and combines them into a first weight value for the first exposure image;

the weight analysis unit 220 likewise obtains a second brightness weight, a second contrast weight, and a second texture complexity weight from the brightness, contrast, and texture complexity of the pixel at the target position in the second exposure image, and combines them into a second weight value for the second exposure image;

the weight analysis unit 220 then normalizes and noise-filters the first weight value and the second weight value to obtain the weight values used for image fusion.
In one embodiment, as shown in FIG. 12, the image synthesis system further includes an exposure control unit 250, configured to obtain the mean brightness of the synthesized image, adjust the exposure parameters of the first exposure image and of the second exposure image according to that mean, and calculate the gain compensation parameters from the exposure parameters of the first exposure image and of the second exposure image, respectively.
In one embodiment, as shown in FIG. 13, the image synthesis system further includes a dynamic compression unit 260, configured to obtain histogram information of the synthesized image and perform dynamic range compression on the synthesized image according to that histogram information.
The image synthesis system of the present invention corresponds one-to-one with the image synthesis method of the present invention; the technical features and their benefits described in the embodiments of the image synthesis method above apply equally to the embodiments of the image synthesis system.
Based on the above image synthesis method, embodiments of the present invention further provide a readable storage medium and a computer device.
The readable storage medium stores an executable program which, when executed by a processor, implements the steps of the above image synthesis method. Through its stored executable program, the storage medium realizes image synthesis in which the synthesized image is composed from images with different exposure times, so that a pixel at a given position can capture more image detail, raising the dynamic range of the result; and since only two images with different exposure times are needed, the synthesis requires less computation than the traditional multi-image approach and can meet the real-time requirements of image processing.
The computer device includes a memory, a processor, and an executable program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the above image synthesis method are implemented. Through the executable program running on its processor, the computer device realizes image synthesis in which the synthesized image is composed from images with different exposure times, so that a pixel at a given position can capture more image detail, raising the dynamic range of the result; and since only two images with different exposure times are needed, the synthesis requires less computation than the traditional multi-image approach and can meet the real-time requirements of image processing.
A person of ordinary skill in the art will understand that all or part of the flow of the methods in the above embodiments can be accomplished by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium; as in the embodiments, it may be stored in the storage medium of a computer system and executed by at least one processor of that system to realize the flow of the embodiments of the above image synthesis method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Based on the above image synthesis method, an embodiment of the present invention further provides an image synthesis device.
FIG. 14 is a schematic structural diagram of an image synthesis device according to one embodiment of the present invention. The image synthesis device in this embodiment includes a camera 310 and an image processor 320 connected to each other;

the camera 310 is configured to photograph the target subject;

the image processor 320 is configured to execute the image synthesis method described above.
The above image synthesis device can be used to synthesize images. The synthesized image is composed from images with different exposure times, so that a pixel at a given position can capture more image detail, raising the dynamic range of the result; and since only two images with different exposure times are needed, the synthesis requires less computation than the traditional multi-image approach and can meet the real-time requirements of image processing.
In one embodiment, as shown in FIG. 15, for each row of the image the camera collects a long-exposure-time line and a short-exposure-time line according to the different configured exposure times. The data is generally in Bayer format, the raw data output by the camera; one example of a Bayer layout is shown in FIG. 16. The long-exposure and short-exposure line data can be processed in the image processor.
In FIG. 15, the first linearization mapping unit and the second linearization mapping unit apply a linearization mapping to the long-exposure line data (the first exposure line data in the figure) and the short-exposure line data (the second exposure line data in the figure), respectively. A camera's response to luminance is generally non-linear; the linearization mapping unit corrects this non-linearity so that the output image data is linear in the incident luminance. The linearization mapping is performed with a lookup table, as follows:
Ilinear0 = LUT0[Iin0]

Ilinear1 = LUT1[Iin1]
Here LUT is a lookup table indexed by Iin, yielding the linearized value; the lookup table describes a curve that maps non-linear data to linear data. In the formulas above, Iin0 is the long-exposure line data and Ilinear0 its value after linearization mapping; Iin1 is the short-exposure line data and Ilinear1 its value after linearization mapping.
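As a concrete illustration, the lookup-table mapping above can be sketched as follows. The curve used here, the inverse of an assumed gamma-2.2 response on 12-bit data, is only an illustrative assumption; a real unit would use a curve calibrated for the specific sensor.

```python
# Sketch of the linearization step Ilinear = LUT[Iin].
# The LUT shape (inverse gamma-2.2, 12-bit) is an assumption for illustration.
BITS = 12
MAX_VAL = (1 << BITS) - 1

def build_linearization_lut(gamma=2.2):
    """LUT mapping non-linear sensor codes to values linear in luminance."""
    return [round(((code / MAX_VAL) ** gamma) * MAX_VAL)
            for code in range(MAX_VAL + 1)]

def linearize(line, lut):
    """Apply Ilinear = LUT[Iin] element-wise to one exposure line."""
    return [lut[v] for v in line]

lut0 = build_linearization_lut()          # LUT0 for the long-exposure line
long_line = [0, 1024, 2048, MAX_VAL]
linear0 = linearize(long_line, lut0)      # Ilinear0
```

The same construction with a second table LUT1 handles the short-exposure line.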
In FIG. 15, the first digital gain unit and the second digital gain unit apply gain compensation to the image data Ilinear0 and Ilinear1 above, using the formulas:
Ig0 = Ilinear0 * g0

Ig1 = Ilinear1 * g1
where g0 and g1 are the gain compensation parameters and Ig0, Ig1 are the outputs after gain compensation; g0 is defined as the parameter of the first digital gain unit, and g1 as the parameter of the second digital gain unit.
In FIG. 15, the deghosting motion estimation unit estimates the displacement of moving objects between the long-exposure image and the short-exposure image. For a fast-moving object there may be motion displacement within the interval between the long and short exposures; if long- and short-exposure line data with such displacement were fused point-to-point directly, obvious smearing or ghosting would appear, blurring edges and degrading the fused image quality. To avoid this, the displacement of moving objects between the long-exposure data and the short-exposure data is estimated; motion estimation can use a block-based algorithm. Motion estimation thus serves mainly to align the data positions of the long- and short-exposure lines.
FIG. 17 shows an example of the SAD (sum of absolute differences) calculation used in motion estimation. The calculation proceeds as follows: the long-exposure and short-exposure data are each buffered in RAM; then a 5x5 data block is taken from each, and the SAD of the corresponding blocks is calculated as:

SADij = Σ(m = -2..2) Σ(n = -2..2) |L(x+m, y+n) - S(x+i+m, y+j+n)|

mv = argmin(SADij), i ∈ (-N, N), j ∈ (-N, N)
Here L(x+m, y+n) is the pixel value at offset (m, n) within the 5x5 data block around coordinate (x, y) in the long-exposure image, and S(x+i+m, y+j+n) is the pixel value at coordinate (x+i+m, y+j+n) in the short-exposure image; the SAD subtracts corresponding pixel values and sums their absolute values. The motion displacement is the motion vector of the coordinate point with the smallest SAD within the search window: if the 5x5 block around (x, y) in the long-exposure image and the 5x5 block around (x+i, y+j) in the short-exposure line yield the smallest SAD, the motion vector is (i, j). Since the interval between the long and short exposures is not long, the subject does not move far, so the search window need not be large; N is typically 4 or 8. With N = 4, the search window spans [(-4, -4), (4, 4)], a 9x9 window.
FIG. 18 is a flowchart of motion estimation: the window i ∈ (-N, N), j ∈ (-N, N) is searched for the motion vector with the smallest SAD. After a motion vector is found, the texture complexity of the original 5x5 data block corresponding to the minimum SAD must be computed. If that block's texture is too flat, that is, its texture complexity is below the texture complexity threshold, the estimated motion vector may be unreliable, so it is set directly to (0, 0) to avoid degrading image quality in the subsequent fusion through a motion estimation error. Moreover, overexposed and underexposed images generally carry no texture information, so this also avoids the problem of being unable to evaluate motion vectors in overexposed regions. The texture complexity of an image block can be evaluated as the variance of the block's brightness. In addition, because the input to motion estimation is raw Bayer data, the SAD calculation requires the Bayer phases to be aligned: a data block centered on an R sample cannot be subtracted point-by-point from a block centered on a G sample. The motion search therefore moves in steps of 2, which guarantees that blocks of the same phase are compared in the SAD calculation; the estimated motion vector accordingly has a precision of 2.
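The search procedure above can be sketched as follows: a 5x5 SAD search over a +/-N window in steps of 2, with the flat-texture fallback that zeroes an unreliable vector. The variance measure follows the text, but the texture threshold value is an illustrative assumption, and the sketch ignores image borders.

```python
# Block-matching motion estimation sketch: 5x5 SAD, step-2 search window,
# variance-based texture gate. texture_threshold is an assumed constant.
def block(img, cx, cy, half=2):
    """Flatten the (2*half+1)^2 block centered at (cx, cy)."""
    return [img[cy + dy][cx + dx] for dy in range(-half, half + 1)
                                  for dx in range(-half, half + 1)]

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(p - q) for p, q in zip(a, b))

def variance(vals):
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def estimate_motion(long_img, short_img, cx, cy, n=4, texture_threshold=4.0):
    ref = block(long_img, cx, cy)
    best_sad, mv = None, (0, 0)
    for j in range(-n, n + 1, 2):          # step 2 keeps the Bayer phase aligned
        for i in range(-n, n + 1, 2):
            s = sad(ref, block(short_img, cx + i, cy + j))
            if best_sad is None or s < best_sad:
                best_sad, mv = s, (i, j)
    target = block(short_img, cx + mv[0], cy + mv[1])
    if variance(ref) < texture_threshold or variance(target) < texture_threshold:
        return (0, 0)                      # flat block: vector unreliable
    return mv

# Toy example: a 3x3 bright patch shifted right by 2 pixels.
long_img = [[0] * 16 for _ in range(16)]
short_img = [[0] * 16 for _ in range(16)]
for y in range(6, 9):
    for x in range(6, 9):
        long_img[y][x] = 100
        short_img[y][x + 2] = 100
mv = estimate_motion(long_img, short_img, 7, 7)
```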
In FIG. 15, the weight evaluation unit evaluates the fusion weight from the pixels at the same position in the long- and short-exposure images and the pixel values of their neighborhoods, based on the brightness, contrast, and texture complexity at the relevant positions. The closer a pixel's brightness is to the mid-level, the larger its weight; the higher the contrast, the larger the weight; and the more complex the texture, the larger the weight. The weights of the corresponding pixels in the long-exposure image and in the short-exposure image are computed separately; each weight may be the sum of the brightness weight, the contrast weight, and the texture complexity weight, or the product of the three. The computed long-exposure and short-exposure weights are then normalized to give the final weight. One such formulation is:
Wl = Wluma + Wsat + Wedge (evaluated on the long-exposure image)

Ws = Wluma + Wsat + Wedge (evaluated on the short-exposure image)

w = Wl / (Wl + Ws)

Wluma is the brightness weight, Wsat the contrast weight, and Wedge the texture complexity weight obtained from the image's edge information; Wl is the weight of a pixel in the long-exposure image, Ws the weight of the pixel in the short-exposure image, and w the final normalized weight. Wluma can be computed from the pixel's brightness; the relation between brightness and Wluma is generally a bell-shaped curve: the closer the brightness is to the mid-level, the better exposed the pixel and the larger the weight, while brightness near 0 indicates likely underexposure and brightness near the maximum indicates likely overexposure, and both reduce the weight. Wsat can be computed from the differences between the pixel's R, G, and B values, for example as |R-G| + |B-G|. Wedge can be obtained by extracting the edge information around the pixel with a Sobel or Laplacian operator: the larger the edge response, the higher the weight, and the smaller the response, the lower the weight.
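One possible reading of these weight terms in code is sketched below, with assumed constants: 8-bit values, a Gaussian bell for Wluma, and a 3x3 Laplacian magnitude for Wedge. The bell's center and width and the scaling factors are illustrative assumptions, not values from the specification.

```python
import math

MAX_VAL = 255  # assume 8-bit values for this sketch

def w_luma(y):
    """Bell-shaped brightness weight: highest near mid-gray."""
    return math.exp(-((y - MAX_VAL / 2) ** 2) / (2 * (MAX_VAL / 4) ** 2))

def w_sat(r, g, b):
    """Contrast weight |R-G| + |B-G|, scaled to [0, 1]."""
    return (abs(r - g) + abs(b - g)) / (2 * MAX_VAL)

def w_edge(neigh):
    """Texture weight from a 3x3 Laplacian magnitude, scaled to [0, 1]."""
    lap = 9 * neigh[1][1] - sum(sum(row) for row in neigh)
    return min(abs(lap) / (8 * MAX_VAL), 1.0)

def pixel_weight(y, r, g, b, neigh):
    """W = Wluma + Wsat + Wedge (the summed variant from the text)."""
    return w_luma(y) + w_sat(r, g, b) + w_edge(neigh)

def fuse_weight(wl, ws, eps=1e-6):
    """Normalize the pair so the final fusion weight is w = Wl / (Wl + Ws)."""
    return wl / (wl + ws + eps)

mid_w = pixel_weight(127, 128, 128, 128, [[128] * 3 for _ in range(3)])
dark_w = pixel_weight(0, 0, 0, 0, [[0] * 3 for _ in range(3)])
```

A well-exposed mid-gray pixel receives a larger weight than an underexposed one, matching the bell-curve behavior described above.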
In FIG. 15, the weight filtering unit filters the weights evaluated in the previous step. Those weights generally contain considerable noise, and using them directly would leave the fused image noisy and of poor quality. The weight filtering unit therefore applies a filter to the evaluated weights to reduce the noise; a median filter, mean filter, cross-bilateral filter, or the like may be used.
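A minimal sketch of this filtering step, using a one-dimensional median filter (one of the options named above) over a row of weights; the radius is an assumed parameter:

```python
import statistics

def median_filter_weights(weights, radius=1):
    """Median-filter a row of fusion weights to suppress isolated noise."""
    out = []
    for i in range(len(weights)):
        lo, hi = max(0, i - radius), min(len(weights), i + radius + 1)
        out.append(statistics.median(weights[lo:hi]))
    return out
```

An isolated spike in the weight row is replaced by the local median, so the fused image avoids single-pixel weight noise.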
In FIG. 15, the image synthesis unit fuses the pixels at the same position in the long-exposure and short-exposure images according to the filtered weight, producing one HDR (High Dynamic Range) image. The fusion uses the formula:
Ihdr = Il * w + Is * (1 - w)
where Ihdr is the output HDR image, Il is the long-exposure image data after the digital gain processing and alignment described above, Is is the short-exposure image data after the same processing, and w is the filtered weight. The larger w is, the more the long-exposure pixel contributes; conversely, the smaller w is, the more the short-exposure pixel contributes. Weighting the two by w and (1 - w) and summing gives the fused pixel value.
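The fusion rule itself is a per-pixel weighted sum; applied to two aligned lines with one filtered weight per pixel, it can be sketched as:

```python
def fuse_lines(long_line, short_line, weights):
    """Per-pixel fusion Ihdr = Il * w + Is * (1 - w) over two aligned lines."""
    return [l * w + s * (1.0 - w)
            for l, s, w in zip(long_line, short_line, weights)]
```

With w = 1 the long-exposure value passes through unchanged; with w = 0 the short-exposure value does; intermediate weights blend the two.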
In FIG. 15, the image statistics unit collects statistics on the synthesized HDR image, such as its mean brightness and its histogram. The subsequent exposure control unit and dynamic range compression unit use these statistics to control the camera's exposure and to compute the dynamic compression parameters in real time.
The exposure control unit computes the exposure parameters of the first exposure line and the second exposure line in real time from the statistics of the synthesized image, and from those exposure parameters generates the gain coefficient of the first digital gain unit and the gain coefficient of the second digital gain unit.
Specifically, the exposure control unit estimates the brightness of the current scene from the brightness statistics of the previous frame, and then controls the camera's exposure time and gain. If the exposure estimated for the current scene is ET, the product of exposure time and exposure gain, then the long exposure is ETl = ET * get0 and the short exposure is ETs = ET * get1. Here get0 is greater than 1, so the long exposure is larger and the captured image contains more detail in dark regions, though some bright regions may be overexposed; get1 is less than 1, so the short exposure is smaller and the captured image contains more detail in bright regions, though some dark regions may be underexposed. Only by fusing the long-exposure and short-exposure images in this way can the dynamic range be extended and more image detail captured. The specific values of get0 and get1 can be set according to the actual mean brightness: if the mean brightness is low relative to the standard brightness range, get0 and get1 can be increased appropriately; if it is high, they can be decreased appropriately.

The parameters g0 and g1 of the first and second digital gain units are then calculated from get0 and get1. One general way of calculating them is:

g0 = 1

g1 = get0 / get1

so that the short-exposure data is amplified to the same brightness scale as the long-exposure data before fusion.
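The exposure-parameter logic can be sketched as follows. The default get0 and get1 values are illustrative, and g1 = get0 / get1 reflects the assumption that, with g0 = 1, the short-exposure data is scaled up to the long exposure's brightness level:

```python
def exposure_params(et, get0=2.0, get1=0.5):
    """Derive long/short exposures and digital gains from the scene exposure ET.

    get0 > 1 stretches the long exposure; 0 < get1 < 1 shortens the short one.
    g1 = get0 / get1 is an assumed completion consistent with g0 = 1.
    """
    assert get0 > 1 and 0 < get1 < 1
    et_long = et * get0           # ETl = ET * get0
    et_short = et * get1          # ETs = ET * get1
    g0 = 1.0                      # long-exposure digital gain
    g1 = get0 / get1              # short-exposure digital gain (assumed ETl/ETs)
    return et_long, et_short, g0, g1
```

With this choice, ETs * g1 equals ETl, so the two gain-compensated signals sit on a common brightness scale before the weighted fusion.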
In FIG. 15, the dynamic range compression unit compresses the fused wide-dynamic-range image Ihdr into a low-dynamic-range image. The synthesized image data generally has a large dynamic range and therefore a large bit width: for example, if the camera's raw input data is 12-bit, the synthesized data may be 14-bit. To ease processing and storage, the image's dynamic range must be compressed, for example from 14-bit down to 12-bit or 8-bit. Various compression methods can be used: a global curve mapping algorithm, which dynamically computes a mapping curve from the statistical image histogram and compresses the high-dynamic-range image into a low-dynamic-range one; or local mapping algorithms such as histogram equalization, regional histogram equalization, or homomorphic filtering.
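As one concrete instance of global curve mapping, plain histogram equalization from 14-bit to 8-bit (one of the methods named above) can be sketched as:

```python
# Global curve mapping via histogram equalization: build a mapping curve
# from the image histogram and compress 14-bit HDR values to 8-bit.
IN_BITS, OUT_BITS = 14, 8
IN_MAX, OUT_MAX = (1 << IN_BITS) - 1, (1 << OUT_BITS) - 1

def compress_dynamic_range(pixels):
    """Map 14-bit pixel values to 8-bit using the cumulative histogram."""
    hist = [0] * (IN_MAX + 1)
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution -> monotone mapping curve into the 8-bit range.
    curve, cum = [0] * (IN_MAX + 1), 0
    for v in range(IN_MAX + 1):
        cum += hist[v]
        curve[v] = round(cum / len(pixels) * OUT_MAX)
    return [curve[p] for p in pixels]
```

Because the curve is built from the cumulative histogram it is monotone, so the ordering of pixel values is preserved while the bit width shrinks.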
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of the technical features in the above embodiments has been described; however, any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. The protection scope of this patent shall therefore be defined by the appended claims.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810097959.XA | 2018-01-31 | 2018-01-31 | Image synthesis method, system and device |
| Publication Number | Publication Date |
|---|---|
| CN108259774A | 2018-07-06 |
| CN108259774B | 2021-04-16 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112655015B (en)* | 2018-10-24 | 2024-09-10 | 三星电子株式会社 | Electronic device and method for controlling the same |
| US11928797B2 (en) | 2018-10-24 | 2024-03-12 | Samsung Electronics Co., Ltd. | Electronic device and method for acquiring a synthesized image |
| CN112655015A (en)* | 2018-10-24 | 2021-04-13 | 三星电子株式会社 | Electronic device and method for controlling the same |
| US11933599B2 (en) | 2018-11-08 | 2024-03-19 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling same |
| CN109688322B (en)* | 2018-11-26 | 2021-04-02 | 维沃移动通信(杭州)有限公司 | Method and device for generating high dynamic range image and mobile terminal |
| CN109688322A (en)* | 2018-11-26 | 2019-04-26 | 维沃移动通信(杭州)有限公司 | A kind of method, device and mobile terminal generating high dynamic range images |
| WO2020192113A1 (en)* | 2019-03-25 | 2020-10-01 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
| CN110049240A (en)* | 2019-04-03 | 2019-07-23 | Oppo广东移动通信有限公司 | Camera control method and device, electronic equipment and computer readable storage medium |
| CN110049240B (en)* | 2019-04-03 | 2021-01-26 | Oppo广东移动通信有限公司 | Camera control method, apparatus, electronic device and computer-readable storage medium |
| US11503223B2 (en) | 2019-04-09 | 2022-11-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for image-processing and electronic device |
| WO2020211334A1 (en)* | 2019-04-15 | 2020-10-22 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for image combination |
| US11887284B2 (en) | 2019-04-15 | 2024-01-30 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for image combination |
| WO2020237931A1 (en)* | 2019-05-24 | 2020-12-03 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
| US12056848B2 (en) | 2019-05-24 | 2024-08-06 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
| CN110166709A (en)* | 2019-06-13 | 2019-08-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Night scene image processing method and device, electronic equipment and storage medium |
| CN112712485B (en)* | 2019-10-24 | 2024-06-04 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image fusion method and device |
| CN112712485A (en)* | 2019-10-24 | 2021-04-27 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image fusion method and device |
| WO2021195896A1 (en)* | 2020-03-30 | 2021-10-07 | Huawei Technologies Co., Ltd. | Target recognition method and device |
| CN115516522A (en)* | 2020-03-30 | 2022-12-23 | Huawei Technologies Co., Ltd. | Target recognition method and device |
| CN112053295B (en)* | 2020-08-21 | 2024-04-05 | Zhuhai Jieli Technology Co., Ltd. | Image noise reduction method, device, computer equipment and storage medium |
| CN112053295A (en)* | 2020-08-21 | 2020-12-08 | Zhuhai Jieli Technology Co., Ltd. | Image noise reduction method and device, computer equipment and storage medium |
| CN112637515A (en)* | 2020-12-22 | 2021-04-09 | Vivo Software Technology Co., Ltd. | Shooting method and device and electronic equipment |
| CN114693723A (en)* | 2020-12-30 | 2022-07-01 | ZTE Corporation | Image fusion method, terminal and storage medium |
| CN115314628A (en)* | 2021-05-08 | 2022-11-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Imaging method, system and camera |
| CN115314628B (en)* | 2021-05-08 | 2024-03-01 | Hangzhou Hikvision Digital Technology Co., Ltd. | Imaging method, imaging system and camera |
| CN113313661A (en)* | 2021-05-26 | 2021-08-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image fusion method and device, electronic equipment and computer readable storage medium |
| WO2023030139A1 (en)* | 2021-09-03 | 2023-03-09 | NextVPU (Shanghai) Technology Co., Ltd. | Image fusion method, electronic device, and storage medium |
| CN114189634B (en)* | 2022-01-26 | 2022-06-14 | Alibaba Damo Academy (Hangzhou) Technology Co., Ltd. | Image acquisition method, electronic device and computer storage medium |
| CN114189634A (en)* | 2022-01-26 | 2022-03-15 | Alibaba Damo Academy (Hangzhou) Technology Co., Ltd. | Image acquisition method, electronic device and computer storage medium |
| CN114862734A (en)* | 2022-05-23 | 2022-08-05 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic equipment and computer readable storage medium |
| CN115496696A (en)* | 2022-09-20 | 2022-12-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic equipment and computer readable storage medium |
| CN115496696B (en)* | 2022-09-20 | 2025-09-26 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device and computer-readable storage medium |
| CN115861143A (en)* | 2022-12-26 | 2023-03-28 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Image fusion method and device, intelligent chip and storage medium |
| CN116311196A (en)* | 2022-12-29 | 2023-06-23 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image processing method and device, electronic equipment and storage medium |
| WO2024188171A1 (en)* | 2023-03-15 | 2024-09-19 | Huawei Technologies Co., Ltd. | Image processing method and related device thereof |
| CN116095502A (en)* | 2023-04-13 | 2023-05-09 | Xindong Microelectronics Technology (Zhuhai) Co., Ltd. | Method and device for fusing multiple exposure images |
| Publication number | Publication date |
|---|---|
| CN108259774B (en) | 2021-04-16 |
| Publication | Publication Date | Title |
|---|---|---|
| CN108259774B (en) | 2021-04-16 | Image synthesis method, system and device |
| US11218630B2 (en) | | Global tone mapping |
| EP1924966B1 (en) | | Adaptive exposure control |
| US9432579B1 (en) | | Digital image processing |
| KR101026577B1 (en) | | High dynamic range video generation systems, computer implemented methods, and computer readable recording media |
| US9613408B2 (en) | | High dynamic range image composition using multiple images |
| JP5213670B2 (en) | | Imaging apparatus and blur correction method |
| US8319843B2 (en) | | Image processing apparatus and method for blur correction |
| US8933985B1 (en) | | Method, apparatus, and manufacture for on-camera HDR panorama |
| WO2019183813A1 (en) | | Image capture method and device |
| US10055872B2 (en) | | System and method of fast adaptive blending for high dynamic range imaging |
| CN111915505B (en) | | Image processing method, device, electronic equipment and storage medium |
| CN103973990B (en) | | Wide dynamic fusion method and device |
| US20080239094A1 (en) | | Method of and apparatus for image denoising |
| CN110349163B (en) | | Image processing method and apparatus, electronic device, computer-readable storage medium |
| US20140307129A1 (en) | | System and method for lens shading compensation |
| CN1985274A (en) | | Method, system and program module for restoring color components in an image model |
| US9554058B2 (en) | | Method, apparatus, and system for generating high dynamic range image |
| CN104796583B (en) | | Camera noise model generation and application method, and device using the method |
| Ko et al. | | Artifact-free low-light video enhancement using temporal similarity and guide map |
| CN114584700B (en) | | Focus marking method, marking device and electronic equipment |
| JP2015144475A (en) | | Imaging apparatus, control method of the same, program and storage medium |
| WO2021127972A1 (en) | | Image processing method and apparatus, imaging device, and movable carrier |
| US11102421B1 (en) | | Low-light and/or long exposure image capture mode |
| JP5713643B2 (en) | | Imaging device, imaging device control method, program, and storage medium |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CP02 | Change in the address of a patent holder | Address after: 519000, No. 333, Kexing Road, Xiangzhou District, Zhuhai City, Guangdong Province. Patentee after: ZHUHAI JIELI TECHNOLOGY Co., Ltd. Address before: Floor 1-107, Building 904, Shijihua Road, Zhuhai City, Guangdong Province. Patentee before: ZHUHAI JIELI TECHNOLOGY Co., Ltd. |