Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and 4, an embodiment of the present application provides an image processing method, including:
01: the first step is as follows: acquiring an initial image P0, wherein the initial image P0 comprises a plurality of blocks with the same size, and each block comprises at least one pixel point;
02: the second step is as follows: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0;
03: the third step: performing noise reduction processing on the target block according to the plurality of weight values; and
04: all blocks in the initial image P0 are traversed, each block is taken as a target block, and the second step and the third step are repeatedly executed to obtain a noise-reduced image P1 (shown in fig. 6).
Referring to fig. 2, the present application further provides an image processing apparatus 10. The image processing apparatus 10 includes a noise reduction module 11, and the noise reduction module 11 includes a first obtaining unit 111, a second obtaining unit 113, a noise reduction unit 115, and a third obtaining unit 117. The first obtaining unit 111 is configured to perform the method in 01, i.e., to perform the first step: acquiring an initial image P0, where the initial image P0 includes a plurality of blocks of the same size, and each block includes at least one pixel point. The second obtaining unit 113 is configured to perform the method in 02, i.e., to perform the second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0. The noise reduction unit 115 is configured to perform the method in 03, i.e., to perform the third step: performing noise reduction processing on the target block according to the plurality of weight values. The third obtaining unit 117 is configured to perform the method in 04, that is, to traverse all blocks in the initial image P0, take each block as the target block, and repeatedly execute the second step and the third step to obtain a noise-reduced image P1.
Referring to fig. 3, the present application also provides a terminal 100. The terminal 100 includes one or more processors 30, a memory 50, and one or more programs, where the one or more programs are stored in the memory 50 and executed by the one or more processors 30, and the programs include instructions for performing the image processing methods of the embodiments of the present application. That is, when the one or more processors 30 execute the programs, the processors 30 implement the methods in 01, 02, 03, and 04. In other words, the one or more processors 30 are configured to perform the first step: acquiring an initial image P0, where the initial image P0 includes a plurality of blocks of the same size, and each block includes at least one pixel point; the second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0; the third step: performing noise reduction processing on the target block according to the plurality of weight values; and traversing all blocks in the initial image P0, taking each block as the target block, and repeatedly executing the second step and the third step to obtain a noise-reduced image P1.
Specifically, the terminal 100 may include, but is not limited to, a mobile phone, a notebook computer, a smart television, a tablet computer, a smart watch, or a computer. The image processing apparatus 10 may be an integration of functional modules in the terminal 100. The present application is described by taking the terminal 100 as a mobile phone only as an example; the case where the terminal 100 is another type of device is similar and will not be described in detail.
In an Image Signal Processing (ISP) system, because the light-gathering capability of the central area of a convex lens is much greater than that of its edge area, the light reaching the central area of the sensor is stronger than the light reaching the edge area. An ISP system therefore includes a Lens Shading Correction (LSC) module that raises the brightness of the corners of an image so that the brightness of the whole image is nearly uniform. After the corner areas are brightened by the LSC module, however, the noise there becomes more obvious. A current optimization method for corner noise regulates the noise reduction strength of a pixel point according to the distance between the pixel point and the center point of the image. This method can effectively suppress corner noise, but it has the following problems: the noise reduction/sharpening strength at different distances must be tuned to suppress corner noise, and the ideal threshold value differs across scenes and shooting conditions, so the method requires debugging over many scenes to obtain a relatively ideal threshold value. Moreover, special scenes encountered during data collection, such as scenes with strong brightness flicker, exhibit extremely obvious corner noise. Parameters obtained by debugging a few specific scenes can hardly adapt to all scenes, so in some special scenes the noise reduction/sharpening becomes too strong or too weak.
In the image processing method, the image processing apparatus 10, the terminal 100, and the non-volatile computer-readable storage medium 200 of the present application, noise reduction processing is performed on the target block according to the gain data corresponding to the target block in the expansion map A, the noise level of the target block, and the weight values of a plurality of adjacent blocks of the target block, and all blocks in the initial image P0 are traversed so that each block in the initial image P0 is subjected to noise reduction processing, yielding the noise-reduced image P1. The gain data in the expansion map A corresponds to the blocks in the initial image P0, that is, each block in the initial image P0 can find corresponding gain data in the expansion map A. The noise reduction intensity of different blocks in the initial image P0 can thus be effectively regulated, so that noise in the brightened dark regions is suppressed in a more targeted manner, the details in the initial image P0 are better protected, and uneven noise reduction between the center region and the corner regions of the noise-reduced image P1 is avoided.
In the method in 01, the first obtaining unit 111 or the processor 30 acquires an initial image P0, where the initial image P0 includes a plurality of blocks of the same size, and each block includes at least one pixel point. After the initial image P0 is acquired by the first obtaining unit 111 or the processor 30, the initial image P0 may be divided into a plurality of blocks of the same size; for example, the initial image P0 may be divided into 3 × 3 blocks, 4 × 4 blocks, 5 × 5 blocks, or 6 × 6 blocks, which is not limited herein. Specifically, the number of blocks may be set according to the distribution of the bright areas and the dark areas in the initial image P0 and the size of the dark areas, so that the plurality of weight values obtained by the second obtaining unit 113 or the processor 30 suppress the noise in the dark areas in a more targeted manner. For example, as shown in fig. 4, the initial image P0 includes 16 blocks: P0(0, 0), P0(0, 1), P0(0, 2), P0(0, 3), P0(1, 0), P0(1, 1), P0(1, 2), P0(1, 3), P0(2, 0), P0(2, 1), P0(2, 2), P0(2, 3), P0(3, 0), P0(3, 1), P0(3, 2), and P0(3, 3), and each block may include one pixel point, two pixel points, three pixel points, four pixel points, or more than four pixel points.
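For illustration only, the block division may be sketched as follows. This snippet is not part of the claimed method; the function name, the NumPy dependency, and the 4 × 4 grid are assumptions made for the example.

```python
import numpy as np

def split_into_blocks(image, rows, cols):
    """Split a single-channel image into rows x cols blocks of the same size.

    Assumes the image height and width are divisible by rows and cols.
    """
    h, w = image.shape
    bh, bw = h // rows, w // cols
    return [[image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
             for c in range(cols)]
            for r in range(rows)]

# Example: a 4 x 4 grid of blocks, one entry per P0(x, y) in fig. 4.
img = np.arange(64, dtype=np.float64).reshape(8, 8)
blocks = split_into_blocks(img, 4, 4)  # blocks[0][0] plays the role of P0(0, 0)
```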
In the method in 02, an arbitrary block in the initial image P0 is used as the target block, and the weight values of a plurality of adjacent blocks of the target block are obtained according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block. An adjacent block is a block surrounding and connected to the target block. As shown in fig. 4, when the target block is the block P0(0, 0) in the initial image P0, the adjacent blocks of the target block P0(0, 0) are the block P0(0, 1), the block P0(1, 1), and the block P0(1, 0). When the target block is the block P0(1, 1) in the initial image P0, the plurality of adjacent blocks of the target block P0(1, 1) are the block P0(0, 0), the block P0(0, 1), the block P0(0, 2), the block P0(1, 2), the block P0(2, 2), the block P0(2, 1), the block P0(2, 0), and the block P0(1, 0). When the target block is the block P0(2, 0), the plurality of adjacent blocks of the target block P0(2, 0) are the block P0(1, 0), the block P0(1, 1), the block P0(2, 1), the block P0(3, 1), and the block P0(3, 0). As shown in fig. 4, A(0, 0), A(0, 1), … , and A(3, 3) in the expansion map A are gain data: A(0, 0) is the gain data corresponding to the block P0(0, 0) in the initial image P0, A(0, 1) is the gain data corresponding to the block P0(0, 1) in the initial image P0, … , and A(3, 3) is the gain data corresponding to the block P0(3, 3) in the initial image P0. When the target block is the block P0(0, 0), the second obtaining unit 113 or the processor 30 obtains the weight value of the adjacent block P0(0, 1) from the gain data A(0, 1) in the expansion map A and the noise level of the target block P0(0, 0), obtains the weight value of the adjacent block P0(1, 1) from the gain data A(1, 1) in the expansion map A and the noise level of the target block P0(0, 0), and obtains the weight value of the adjacent block P0(1, 0) from the gain data A(1, 0) in the expansion map A and the noise level of the target block P0(0, 0).
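The neighbor relationship just described can be made concrete with the following illustrative snippet (not part of the patent; the function name is hypothetical): each block has 3, 5, or 8 adjacent blocks depending on whether it lies at a corner, an edge, or the interior of the block grid.

```python
def neighbor_indices(r, c, rows, cols):
    """Indices of the blocks surrounding and connected to block (r, c)."""
    return [(i, j)
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))
            if (i, j) != (r, c)]

# In the 4 x 4 grid of fig. 4: the corner block P0(0, 0) has 3 neighbors,
# the edge block P0(2, 0) has 5, and the interior block P0(1, 1) has 8.
assert len(neighbor_indices(0, 0, 4, 4)) == 3
assert len(neighbor_indices(2, 0, 4, 4)) == 5
assert len(neighbor_indices(1, 1, 4, 4)) == 8
```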
In the method in 03, the noise reduction unit 115 or the processor 30 performs noise reduction processing on the target block according to the weight values of the plurality of adjacent blocks of the target block. For example, as shown in fig. 4, when the target block is the block P0(0, 0), the noise reduction unit 115 or the processor 30 performs noise reduction processing on the target block P0(0, 0) according to the weight value of the adjacent block P0(0, 1), the weight value of the adjacent block P0(1, 1), and the weight value of the adjacent block P0(1, 0).
In the method in 04, all blocks in the initial image P0 are traversed so that each block is subjected to noise reduction processing as the target block; the specific noise reduction process is the same as the implementation of the foregoing methods in 02 and 03 and is not repeated here. Since every block in the initial image P0 serves as the target block once, each target block is denoised according to the weight values of the plurality of adjacent blocks corresponding to it, and the weight value of each adjacent block is related to the noise level of the target block and the gain data corresponding to the target block, the noise reduction unit 115 or the processor 30 effectively regulates the noise reduction intensity of different blocks in the initial image, thereby suppressing the noise in the brightened dark regions in a more targeted manner, better protecting the details in the initial image, and avoiding uneven noise reduction between the center region and the corner regions of the noise-reduced image.
Referring to fig. 4 and 5, in some embodiments, 02: obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block includes:
021: acquiring a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block;
023: acquiring a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and
025: acquiring the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
Referring to fig. 2, the second obtaining unit 113 is further configured to perform the methods in 021, 023, and 025. That is, the second obtaining unit 113 is further configured to: acquire a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block; acquire a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and acquire the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 021, 023, and 025. That is, the processor 30 is further configured to: acquire a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block; acquire a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and acquire the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
In the method in 021, when calculating the weight value of an adjacent block of the target block, the second obtaining unit 113 or the processor 30 acquires a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and in the adjacent block. For example, when each block in the initial image P0 includes one pixel point, the second obtaining unit 113 or the processor 30 calculates the first ratio of the adjacent block according to the pixel point in the target block and the pixel point in the adjacent block. As another example, referring to fig. 4 and fig. 6, when each block in the initial image P0 includes four pixel points and the target block is the block P0(0, 0), the second obtaining unit 113 or the processor 30 acquires the first ratio of the weight value of the adjacent block P0(0, 1) as follows: the pixel values of the four pixel points in the target block P0(0, 0) are pa, pb, pc, and pd, respectively, and the pixel values of the four pixel points in the adjacent block P0(0, 1) are pe, pf, pg, and ph, respectively; the second obtaining unit 113 or the processor 30 obtains the first ratio of the adjacent block P0(0, 1) from the pixel values pa, pb, pc, and pd in the target block P0(0, 0) and the pixel values pe, pf, pg, and ph in the adjacent block P0(0, 1), making full use of the pixel values of the pixel points in the adjacent block so that the weight value derived from the first ratio can filter the noise in the target block P0(0, 0) in a more targeted manner. The first ratio may be obtained, for example, from the difference between the total pixel value of all the pixel points in the target block and the total pixel value of all the pixel points in the adjacent block, or from the sum of the differences between the pixel values of the pixel points in the target block and those of the corresponding pixel points in the adjacent block. In the method in 023, the second obtaining unit 113 or the processor 30 acquires the second ratio of the adjacent block according to the first ratio obtained in 021, the noise level of the target block, and the gain data corresponding to the target block in the expansion map A. Finally, the method in 025 is performed: the weight value of the adjacent block is obtained according to the first ratio and the second ratio. When the second obtaining unit 113 or the processor 30 obtains the weight values of the other adjacent blocks of the target block, the methods in 021, 023, and 025 are executed in sequence in the same way.
In summary, when obtaining the weight values of the adjacent blocks of the target block, the second obtaining unit 113 or the processor 30 calculates the weight values by combining the pixel values of all the pixel points in the target block, the pixel values of all the pixel points in the adjacent blocks, the noise level of the target block, and the gain data corresponding to the target block in the expansion map A, and performs targeted noise reduction on the target block using the weight values corresponding to the adjacent blocks, thereby effectively protecting the detail parts in the target block.
Referring to fig. 7, in some embodiments, 021: acquiring a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block includes:
0211: acquiring a difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and
0213: accumulating the absolute values of the difference values respectively corresponding to all the pixel points to obtain the first ratio of the adjacent block.
Referring to fig. 2, the second obtaining unit 113 is further configured to perform the methods in 0211 and 0213. That is, the second obtaining unit 113 is further configured to: acquire the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and accumulate the absolute values of the difference values corresponding to all the pixel points to obtain the first ratio of the adjacent block.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 0211 and 0213. That is, the processor 30 is further configured to: acquire the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and accumulate the absolute values of the difference values corresponding to all the pixel points to obtain the first ratio of the adjacent block.
Referring to fig. 6, in an embodiment in which the target block includes four pixel points, the second obtaining unit 113 or the processor 30 acquires the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block. The pixel point at the upper left corner of the target block corresponds to the pixel point at the upper left corner of the adjacent block, the pixel point at the upper right corner of the target block corresponds to the pixel point at the upper right corner of the adjacent block, the pixel point at the lower left corner of the target block corresponds to the pixel point at the lower left corner of the adjacent block, and the pixel point at the lower right corner of the target block corresponds to the pixel point at the lower right corner of the adjacent block. For example, the pixel point with the pixel value pa in the target block P0(0, 0) corresponds to the pixel point with the pixel value pe in the adjacent block P0(0, 1). When the target block is P0(0, 0) and the second obtaining unit 113 or the processor 30 acquires the first ratio of the weight value of the adjacent block P0(0, 1), the second obtaining unit 113 or the processor 30 sequentially calculates the difference value of each pair of pixel values in the two blocks (the target block and the adjacent block); the four difference values for the adjacent block P0(0, 1) are pa-pe, pb-pf, pc-pg, and pd-ph, respectively, and the absolute values of the four difference values are then accumulated to obtain the first ratio of the weight value of the adjacent block P0(0, 1), namely |pa-pe| + |pb-pf| + |pc-pg| + |pd-ph|.
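As a quick numerical illustration of 0211 and 0213 (the pixel values below are invented for the example and are not from the patent):

```python
import numpy as np

# Target block P0(0, 0) and adjacent block P0(0, 1), each with four pixels.
target = np.array([[10.0, 12.0],     # pa, pb
                   [11.0, 13.0]])    # pc, pd
neighbor = np.array([[9.0, 14.0],    # pe, pf
                     [12.0, 10.0]])  # pg, ph

# First ratio: |pa-pe| + |pb-pf| + |pc-pg| + |pd-ph|
diff = float(np.abs(target - neighbor).sum())  # 1 + 2 + 1 + 3 = 7.0
```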
In summary, the second obtaining unit 113 or the processor 30 obtains the weight value of an adjacent block of the target block according to the methods in 021, 023, 025, 0211, and 0213 by the following formula (1):

Weight = (noiselevel × lut(x, y)) / (diff + noiselevel × lut(x, y))    (1)

where Weight denotes the weight value of the adjacent block, diff denotes the first ratio of the weight value of the adjacent block, diff + noiselevel × lut(x, y) denotes the second ratio of the weight value of the adjacent block, noiselevel denotes the noise level of the target block, a known positive numerical value determined by other modules in the image processing apparatus 10, and lut(x, y) denotes the gain data A(x, y) in the expansion map A corresponding to the target block P0(x, y). It can be seen from formula (1) that the larger the gain data corresponding to the target block in the expansion map A, the larger the weight values of the adjacent blocks of the target block. Referring to fig. 4 and 6, for example, when the target block is P0(0, 0), the first ratio of the weight value of the adjacent block P0(0, 1) is diff = |pa-pe| + |pb-pf| + |pc-pg| + |pd-ph|, the second ratio is diff + noiselevel × A(0, 0), and the weight value of the adjacent block P0(0, 1) is determined accordingly.
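A compact sketch of the weight computation in 021-025, using formula (1) in the form given above (illustrative only; the function name and argument layout are assumptions):

```python
import numpy as np

def neighbor_weight(target, neighbor, noise_level, lut):
    """Weight of one adjacent block per formula (1) as given above.

    target, neighbor: same-shaped arrays of block pixel values;
    noise_level: positive noise level of the target block;
    lut: gain data A(x, y) of the target block in the expansion map A.
    """
    diff = float(np.abs(target - neighbor).sum())  # first ratio (0211/0213)
    second = diff + noise_level * lut              # second ratio (023)
    return (noise_level * lut) / second            # formula (1): larger lut -> larger weight

w = neighbor_weight(np.array([[10.0, 12.0], [11.0, 13.0]]),
                    np.array([[9.0, 14.0], [12.0, 10.0]]),
                    noise_level=2.0, lut=1.5)      # 3.0 / (7.0 + 3.0) = 0.3
```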
Referring to fig. 8, in some embodiments, 03: performing noise reduction processing on the target block according to the plurality of weight values includes:
031: acquiring a third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks;
033: accumulating the plurality of weight values respectively corresponding to the adjacent blocks to obtain a fourth ratio; and
035: acquiring the pixel value of the target block according to the third ratio and the fourth ratio, as the pixel value of the block in the noise-reduced image corresponding to the target block.
Referring to fig. 2, the noise reduction unit 115 is further configured to perform the methods in 031, 033, and 035. That is, the noise reduction unit 115 is further configured to: obtain a third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks; accumulate the plurality of weight values respectively corresponding to the adjacent blocks to obtain a fourth ratio; and acquire the pixel value of the target block according to the third ratio and the fourth ratio as the pixel value of the block in the noise-reduced image corresponding to the target block.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 031, 033, and 035. That is, the processor 30 is further configured to: obtain a third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks; accumulate the plurality of weight values respectively corresponding to the adjacent blocks to obtain a fourth ratio; and acquire the pixel value of the target block according to the third ratio and the fourth ratio as the pixel value of the block in the noise-reduced image corresponding to the target block.
After the second obtaining unit 113 or the processor 30 obtains the plurality of weight values corresponding to the adjacent blocks of the target block, the noise reduction unit 115 or the processor 30 obtains a third ratio according to the plurality of weight values and the pixel values of the adjacent blocks, and accumulates the plurality of weight values corresponding to the adjacent blocks to obtain a fourth ratio; the noise reduction unit 115 then combines the third ratio and the fourth ratio to obtain the pixel value of the target block after noise reduction, which is used as the pixel value of the block in the noise-reduced image corresponding to the target block. Here, the pixel value of a block, the pixel value of an adjacent block, and the pixel value of the target block all denote the sum of the pixel values of all the pixel points in the block. Specifically, the noise reduction unit 115 or the processor 30 obtains the pixel value of the block in the noise-reduced image corresponding to the target block according to formula (2).
Patch_de-noise = Σ_i (Patch_i × Weight_i) / Σ_i Weight_i, i = 1, 2, … , n    (2)

where Patch_de-noise denotes the pixel value of the target block after noise reduction, Σ_i (Patch_i × Weight_i) denotes the third ratio, Σ_i Weight_i denotes the fourth ratio, and n is the number of adjacent blocks of the target block. For example, as shown in fig. 4, when the block P0(0, 0) is the target block, the number of corresponding adjacent blocks is 3, and n = 3; when the block P0(1, 0) is the target block, the number of corresponding adjacent blocks is 5, and n = 5; when the block P0(1, 1) is the target block, the number of corresponding adjacent blocks is 8, and n = 8. Patch_i denotes the sum of the pixel values of all the pixel points in the i-th adjacent block, and Weight_i denotes the weight value of the i-th adjacent block. Referring to fig. 4 and 9, when the target block is the block P0(0, 0), the sum of the pixel values of all the pixel points in the target block before noise reduction is denoted Patch_noise; the pixel value of the adjacent block P0(0, 1) is Patch1 and its weight value is Weight1; the pixel value of the adjacent block P0(1, 1) is Patch2 and its weight value is Weight2; the pixel value of the adjacent block P0(1, 0) is Patch3 and its weight value is Weight3. The noise reduction unit 115 or the processor 30 obtains the pixel value of the target block after noise reduction according to formula (2) as Patch_de-noise = (Patch1 × Weight1 + Patch2 × Weight2 + Patch3 × Weight3) / (Weight1 + Weight2 + Weight3). That is, the pixel value of the block of the noise-reduced image P1 corresponding to the block P0(0, 0) of the initial image P0 (i.e., P1(0, 0)) is Patch_de-noise.
If the target block P0(0, 0) includes four pixel points whose pixel values are distributed as shown in fig. 6, then Patch_noise = pa + pb + pc + pd. In one embodiment, the pixel value of each pixel point in the block P1(0, 0) of the noise-reduced image P1 is related to the pixel value of the pixel point at the corresponding position in the initial image P0 and to the pixel value of the noise-reduced target block (i.e., Patch_de-noise): the pixel value of the pixel point at the upper left corner of the block P1(0, 0) in the noise-reduced image P1 is pa × Patch_de-noise / Patch_noise, the pixel value of the pixel point at the upper right corner is pb × Patch_de-noise / Patch_noise, the pixel value of the pixel point at the lower left corner is pc × Patch_de-noise / Patch_noise, and the pixel value of the pixel point at the lower right corner is pd × Patch_de-noise / Patch_noise.
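The following sketch puts 031-035 together with the per-pixel rescaling just described (illustrative only; the helper name is hypothetical, and the per-pixel scaling follows the proportional form given above):

```python
import numpy as np

def denoise_target(target, neighbors, weights):
    """Denoise one target block per formula (2), then rescale per pixel.

    neighbors: list of same-shaped arrays (the adjacent blocks);
    weights: their weight values from formula (1).
    """
    third = sum(float(n.sum()) * w for n, w in zip(neighbors, weights))  # 031
    fourth = sum(weights)                                                # 033
    patch_de_noise = third / fourth                                      # 035, formula (2)
    patch_noise = float(target.sum())            # pa + pb + pc + pd
    # Each pixel keeps its share of the block sum: p' = p * Patch_de-noise / Patch_noise.
    return target * (patch_de_noise / patch_noise)

denoised = denoise_target(np.full((2, 2), 10.0),
                          [np.full((2, 2), 8.0), np.full((2, 2), 12.0)],
                          [0.3, 0.2])            # each output pixel becomes 9.6
```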
Referring to fig. 10, in some embodiments, 03: performing noise reduction processing on the target block according to the plurality of weight values includes:
037: performing noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.
Referring to fig. 2, the noise reduction unit 115 is further configured to perform the method in 037. That is, the noise reduction unit 115 is further configured to: perform noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.
Referring to fig. 3, the processor 30 is further configured to perform the method in 037. That is, the processor 30 is further configured to: perform noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.
In some embodiments, when the noise reduction unit 115 or the processor 30 performs noise reduction on the target block according to the plurality of weight values, the noise reduction processing may be performed on the three RGB channels in the target block. Alternatively, the noise reduction unit 115 or the processor 30 performs noise reduction processing on the Y channel in the target block to suppress the luminance noise in the target block; or performs noise reduction processing on the U and V channels in the target block to suppress the color noise in the target block; or performs joint noise reduction on the three YUV channels. The specific noise reduction process is the same as that in the methods in 031, 033, and 035 and is not repeated here.
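A minimal sketch of the channel-wise application (illustrative only; a 3 × 3 box average stands in for the block-weighted denoising of 031-035, and the H × W × (Y, U, V) array layout is an assumption):

```python
import numpy as np

def denoise_plane(plane):
    """Stand-in single-channel denoiser (3 x 3 box average with edge padding)."""
    p = np.pad(plane, 1, mode="edge")
    h, w = plane.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

yuv = np.random.rand(8, 8, 3)              # hypothetical H x W x (Y, U, V) image

luma_only = yuv.copy()
luma_only[..., 0] = denoise_plane(yuv[..., 0])    # suppress luminance noise (Y)

chroma_only = yuv.copy()
for ch in (1, 2):                                 # suppress color noise (U, V)
    chroma_only[..., ch] = denoise_plane(yuv[..., ch])
```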
Referring to fig. 9 and 11, in some embodiments, the image processing method further includes:
05: performing sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.
Referring to fig. 2, the image processing apparatus 10 further includes a sharpening module 13. The sharpening module 13 is configured to perform the method in 05, that is, to perform sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.
Referring to fig. 3, the processor 30 is further configured to perform the method in 05, that is, to perform sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.
Specifically, after the third obtaining unit 117 or the processor 30 obtains the noise-reduced image P1, the sharpening module 13 performs sharpening on the noise-reduced image P1 according to the gain data, so as to further improve the sharpness of partial regions in the noise-reduced image P1 and improve the image quality.
Referring to fig. 12 and 13, in some embodiments, 05: performing sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data includes:
051: acquiring a detail layer D of the noise-reduced image P1; and
053: sharpening the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.
Referring to fig. 2, the sharpening module 13 is further configured to perform the methods in 051 and 053. That is, the sharpening module 13 is further configured to: acquire the detail layer D of the noise-reduced image P1, and sharpen the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 051 and 053. That is, the processor 30 is further configured to: acquire the detail layer D of the noise-reduced image P1, and sharpen the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.
The noise-reduced image P1 can be decomposed into a base layer containing the low-frequency information of the noise-reduced image P1 and a detail layer containing the high-frequency information of the noise-reduced image P1. To obtain the base layer of the noise-reduced image P1, the sharpening module 13 may filter the noise-reduced image P1 with a low-pass filter (such as a mean filter), a Gaussian filter, or a guided filter to extract the base layer; after the base layer is extracted, the sharpening module 13 subtracts the base layer from the noise-reduced image P1 to obtain the detail layer of the noise-reduced image P1.
As shown in fig. 13, in one example, if the detail layer D of the noise-reduced image P1 is indicated by the hatched portion of the noise-reduced image P1, the gain data corresponding to the detail layer D in the expansion map A is A(2, 0) + A(3, 0). The sharpening module 13 or the processor 30 sharpens the detail layer D to obtain a sharpened detail layer D' = D × (A(2, 0) + A(3, 0)), where the multiplication of the detail layer D by the gain data (A(2, 0) + A(3, 0)) refers to sharpening the pixel values in the detail layer D by the gain data (A(2, 0) + A(3, 0)). Since the base layer of the noise-reduced image P1 was extracted in the method in 051, the sharpening module 13 or the processor 30 adds the base layer and the sharpened detail layer D' to obtain the sharpened image P2.
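A short sketch of 051 and 053 (illustrative only; the mean filter stands in for the low-pass step, and applying the gain per pixel generalizes the region-wise multiplication in the example above):

```python
import numpy as np

def sharpen(denoised, gain, k=5):
    """Base/detail decomposition followed by gain-weighted detail boosting.

    denoised: the noise-reduced image P1 (single channel);
    gain: expansion-map gain data, same size as the image;
    k: mean-filter window (Gaussian or guided filtering would also work).
    """
    pad = k // 2
    p = np.pad(denoised, pad, mode="edge")
    h, w = denoised.shape
    base = sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)
    detail = denoised - base        # detail layer D (high-frequency information)
    return base + detail * gain     # D' = D x gain, recombined with the base layer

p2 = sharpen(np.random.rand(16, 16), gain=np.full((16, 16), 1.5))
```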
Referring to fig. 4 and 14, in some embodiments, the image processing method further includes:
06: performing lens shading correction processing on the initial image P0 to acquire an intermediate image;
07: acquiring data of a GR channel and data of a GB channel of an intermediate image (not shown) in a RAW domain;
08: acquiring the expansion map A according to the data of the GR channel and the data of the GB channel.
Referring to fig. 2, the image processing apparatus 10 further includes an obtaining module 17, and the obtaining module 17 is configured to perform the methods in 06, 07, and 08. That is, the obtaining module 17 is configured to: perform lens shading correction processing on the initial image P0 to acquire an intermediate image; acquire the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain; and acquire the expansion map A according to the data of the GR channel and the data of the GB channel.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 06, 07, and 08. That is, the processor 30 is configured to: perform lens shading correction processing on the initial image P0 to acquire an intermediate image; acquire the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain; and acquire the expansion map A according to the data of the GR channel and the data of the GB channel.
Specifically, after the first obtaining unit 111 acquires the initial image P0, owing to the manufacturing process and the optical characteristics of the camera module, the color and brightness of the initial image are often inconsistent from the center to the periphery; the obtaining module 17 therefore performs lens shading correction processing on the initial image P0 so that the color and brightness of the intermediate image are consistent from the center to the periphery. Referring to fig. 15, the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain are then acquired, where the data of the GR channel represents the gain data of the GR-channel pixels (i.e., the factor by which a pixel is brightened) and the data of the GB channel represents the gain data of the GB-channel pixels. The initial image P0 has a size of 3072 × 4096, and the GR channel and the GB channel acquired by the obtaining module 17 are both matrices of size 13 × 17. Finally, the obtaining module 17 acquires the expansion map A according to the data of the GR channel and the data of the GB channel.
Referring to fig. 15 and 16, in some embodiments, 08: acquiring the expansion map A according to the data of the GR channel and the data of the GB channel includes:
081: averaging the data of the GR channel and the data of the GB channel at the same position to obtain the data of the corresponding position in the G channel;
083: performing interpolation processing on the G channel to obtain the expansion map A.
Referring to fig. 2, the obtaining module 17 is further configured to perform the methods in 081 and 083. That is, the obtaining module 17 is further configured to: average the data of the GR channel and the data of the GB channel at the same position to obtain the data at the corresponding position in the G channel, and perform interpolation processing on the G channel to obtain the expansion map A.
Referring to fig. 3, the processor 30 is further configured to perform the methods in 081 and 083. That is, the processor 30 is further configured to: average the data of the GR channel and the data of the GB channel at the same position to obtain the data at the corresponding position in the G channel, and perform interpolation processing on the G channel to obtain the expansion map A.
Specifically, when acquiring the data in the G channel, the obtaining module 17 averages the data of the GR channel and the data of the GB channel, that is, the data G(x, y) in the G channel is (GR(x, y) + GB(x, y)) × 0.5, where x ∈ [0, 12] and y ∈ [0, 16]. For example, G(0, 0) = (GR(0, 0) + GB(0, 0)) × 0.5, G(0, 4) = (GR(0, 4) + GB(0, 4)) × 0.5, … , and G(12, 16) = (GR(12, 16) + GB(12, 16)) × 0.5. The data in the G channel is then interpolated to obtain the expansion map A, which has the same size as the initial image P0.
Specifically, in the method in 083, the obtaining module 17 may perform interpolation processing on the G channel based on bilinear interpolation or bicubic interpolation. In one example, the obtaining module 17 performs interpolation processing on the G channel based on bilinear interpolation. As shown in fig. 15, 255 values are inserted in sequence between every two adjacent data in the horizontal direction of the G channel, and 255 values are inserted in sequence between every two adjacent data in the vertical direction of the G channel, to generate an expansion map A of the same size as the initial image P0 (3072 × 4096). For example, if the difference between the first two data in a row is denoted C1, C1 = G(0, 1) - G(0, 0), then the first value inserted between G(0, 0) and G(0, 1) is G(0, 0) + C1 × 1/256, the second value inserted is G(0, 0) + C1 × 2/256, … , and the 255th value inserted is G(0, 0) + C1 × 255/256. Next, 255 values are inserted between the second data G(0, 1) and the third data G(0, 2); if the difference between these two data is denoted C2, C2 = G(0, 2) - G(0, 1), then the first value inserted between G(0, 1) and G(0, 2) is G(0, 1) + C2 × 1/256, the second value inserted is G(0, 1) + C2 × 2/256, … , and the 255th value inserted is G(0, 1) + C2 × 255/256. By analogy, 255 values are inserted between every two adjacent data in each row of the G channel, resulting in a matrix of size 13 × 4096. Similarly, 255 values are inserted between every two adjacent data in each column; for example, when 255 values are inserted between the first data G(0, 0) and the second data G(1, 0) of the first column, the first value inserted is G(0, 0) + (G(1, 0) - G(0, 0)) × 1/256. Finally, the obtaining module 17 obtains an expansion map A of size 3072 × 4096.
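A sketch of 081 and 083 together (illustrative only; separable linear interpolation with np.interp stands in for the 1/256-step insertion described above, and the function name is hypothetical):

```python
import numpy as np

def expansion_map(gr, gb, out_h, out_w):
    """Average the GR/GB gain grids into a G channel, then upsample bilinearly."""
    g = 0.5 * (gr + gb)                        # method 081: G(x, y)
    rows, cols = g.shape
    # method 083: linear interpolation along each axis in turn
    x_dst = np.linspace(0, cols - 1, out_w)
    tmp = np.stack([np.interp(x_dst, np.arange(cols), row) for row in g])
    y_dst = np.linspace(0, rows - 1, out_h)
    return np.stack([np.interp(y_dst, np.arange(rows), col)
                     for col in tmp.T], axis=1)

A = expansion_map(np.ones((13, 17)), np.ones((13, 17)), 3072, 4096)
assert A.shape == (3072, 4096)
```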
It should be noted that the gain data A(0, 0) in the expansion map A in fig. 15 includes at least one G-channel data value; that is, when a block in the initial image P0 includes one pixel point, A(0, 0) in the expansion map A is equal to G(0, 0), and when a block in the initial image P0 includes two or more pixel points, A(0, 0) in the expansion map A includes a plurality of interpolated G-channel data values.
Referring to fig. 17, fig. 17 shows an expansion map A obtained by the obtaining module 17 from a certain initial image. The central area of the expansion map A is dark, and the expansion map A becomes brighter toward the corner areas, indicating that the pixels in the corresponding corner areas are brightened by a larger gain factor.
In summary, in the image processing method, the image processing apparatus 10, the terminal 100, and the non-volatile computer-readable storage medium 200 of the present application, noise reduction processing is performed on the target block according to the gain data corresponding to the target block in the expansion map A, the noise level of the target block, and the weight values of a plurality of adjacent blocks of the target block, and all blocks in the initial image P0 are traversed so that each block in the initial image P0 is subjected to noise reduction processing, yielding the noise-reduced image P1. The gain data in the expansion map A corresponds to the blocks in the initial image P0, that is, each block in the initial image P0 can find corresponding gain data in the expansion map A. The noise reduction intensity of different blocks in the initial image P0 can thus be effectively regulated, so that noise in the brightened dark regions is suppressed in a more targeted manner, the details in the initial image P0 are better protected, and uneven noise reduction between the center region and the corner regions of the noise-reduced image P1 is avoided.
Referring to fig. 18, the present embodiment further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by the one or more processors 30, causes the processors 30 to perform the image processing methods in 01, 02, 03, 04, 05, 06, 07, 08, 021, 023, 025, 0211, 0213, 031, 033, 035, 037, 051, 053, 081, and 083.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.