CN113890961A - Image processing method and device, terminal and readable storage medium - Google Patents

Image processing method and device, terminal and readable storage medium

Info

Publication number
CN113890961A
Authority
CN
China
Prior art keywords
target block
block
image
noise
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111193917.4A
Other languages
Chinese (zh)
Other versions
CN113890961B (en)
Inventor
林泉佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111193917.4A
Publication of CN113890961A
Application granted
Publication of CN113890961B
Legal status: Active (current)
Anticipated expiration

Abstract

The application provides an image processing method, an image processing apparatus, a terminal, and a readable storage medium. The image processing method comprises the following steps. The first step: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point. The second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image. The third step: performing noise reduction processing on the target block according to the plurality of weight values. Finally, all blocks in the initial image are traversed, each block is taken as the target block, and the second step and the third step are repeated to obtain the noise-reduced image.

Description

Image processing method and device, terminal and readable storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
In an Image Signal Processing (ISP) system, the light-gathering capability of the central area of a convex lens is much greater than that of its edge area, so the light reaching the center of the sensor is stronger than at the edges. An ISP system therefore includes a Lens Shading Correction (LSC) module that brightens the corners of an image so that the brightness of the whole image is nearly uniform. However, after the corner areas are brightened by the LSC module, their noise becomes more obvious.
The current optimization method for corner noise regulates the noise reduction strength of each pixel according to its distance from the image center. This can effectively suppress corner noise, but it has the following problems: the noise reduction/sharpening strength must be set separately for different distances, and the ideal threshold differs between scenes and shooting conditions, so many scenes must be debugged to obtain a reasonably ideal threshold. Moreover, special scenes encountered during data collection, such as scenes with strong brightness flicker, exhibit extremely obvious corner noise. Parameters obtained by debugging a few specific scenes are difficult to adapt to all scenes, so the noise reduction/sharpening may be too strong or too weak in some special scenes.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a terminal and a non-volatile computer readable storage medium.
The image processing method of the embodiments of the application comprises the following steps. The first step: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point. The second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image. The third step: performing noise reduction processing on the target block according to the plurality of weight values. Finally, all the blocks in the initial image are traversed, each block is taken as the target block once, and the second step and the third step are repeated to obtain a noise-reduced image.
The image processing apparatus of the embodiments of the present application includes a noise reduction module. The noise reduction module comprises a first acquisition unit, a second acquisition unit, a noise reduction unit, and a third acquisition unit. The first acquisition unit is configured to perform the first step: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point. The second acquisition unit is configured to perform the second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image. The noise reduction unit is configured to perform the third step: performing noise reduction processing on the target block according to the plurality of weight values. The third acquisition unit is configured to traverse all the blocks in the initial image, take each of the blocks as the target block once, and repeat the second step and the third step to obtain a noise-reduced image.
The terminal of the embodiments of the present application includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and executed by the one or more processors, and the programs include instructions for performing the image processing method of the embodiments of the present application. The image processing method comprises the following steps. The first step: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point. The second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image. The third step: performing noise reduction processing on the target block according to the plurality of weight values. Finally, all the blocks in the initial image are traversed, each block is taken as the target block once, and the second step and the third step are repeated to obtain a noise-reduced image.
A non-volatile computer-readable storage medium of an embodiment of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform the following image processing method. The first step: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point. The second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image. The third step: performing noise reduction processing on the target block according to the plurality of weight values. Finally, all the blocks in the initial image are traversed, each block is taken as the target block once, and the second step and the third step are repeated to obtain a noise-reduced image.
In the image processing method, the image processing apparatus, the terminal, and the non-volatile computer-readable storage medium, noise reduction is performed on the target block according to the gain data corresponding to the target block in the expansion map, the noise level of the target block, and the weight values of a plurality of adjacent blocks of the target block, and all blocks in the initial image are traversed so that each block in the initial image is denoised, obtaining the noise-reduced image. The gain data in the expansion map correspond to the blocks in the initial image, that is, each block in the initial image can find its corresponding gain data in the expansion map. The noise reduction intensity of different blocks in the initial image can therefore be effectively regulated, so that noise in brightened dark areas is suppressed in a more targeted way, details in the initial image are better protected, and uneven noise reduction between the central area and the corner areas of the noise-reduced image is avoided.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of a terminal according to some embodiments of the present application;
FIG. 4 is a schematic illustration of an initial image and an augmented graph of an image processing method of certain embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 6 is a schematic diagram illustrating a first ratio of weight values of neighboring blocks obtained in an image processing method according to some embodiments of the present disclosure;
FIGS. 7-8 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 9 is a schematic diagram illustrating an image processing method according to some embodiments of the present application for denoising a target block in an initial image to obtain a denoised image;
FIGS. 10-12 are schematic flow charts of image processing methods according to certain embodiments of the present application;
FIG. 13 is a schematic diagram illustrating sharpening of detail layers in a noise-reduced image by an image processing method according to some embodiments of the present disclosure;
FIG. 14 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 15 is a schematic diagram illustrating an expanded graph obtained according to GR channel data and GB channel data in an image processing method according to some embodiments of the present application;
FIG. 16 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 17 is a schematic diagram of an expansion map in the image processing method of some embodiments of the present application;
FIG. 18 is a schematic diagram of a connection between a non-volatile computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and 4, an embodiment of the present application provides an image processing method, including:
01: the first step is as follows: acquiring an initial image P0, wherein the initial image P0 comprises a plurality of blocks with the same size, and each block comprises at least one pixel point;
02: the second step is as follows: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0;
03: the third step: performing noise reduction processing on the target block according to the plurality of weight values; and
04: all blocks in the initial image P0 are traversed, each block is taken as a target block, and the second step and the third step are repeatedly executed to obtain a noise-reduced image P1 (shown in fig. 6).
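The four numbered steps can be sketched as a single block-traversal loop. The patent does not give the exact weight formula at this point, so the sketch below assumes a sum-of-absolute-differences comparison normalised by the target block's gain data and noise level with an exponential decay; the names `denoise_blocks`, `gain_map`, and `noise_level`, and both formulas, are illustrative assumptions rather than the patent's definitions:

```python
import numpy as np

def denoise_blocks(image, n, gain_map, noise_level):
    """Traverse an n x n grid of equal-size blocks (step 04); for each target
    block (step 02) weight its adjacent blocks and blend them in (step 03)."""
    h, w = image.shape
    bh, bw = h // n, w // n                      # block height / width
    src = image.astype(np.float64)
    out = src.copy()
    for bi in range(n):
        for bj in range(n):
            target = src[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw]
            acc, total_w = target.copy(), 1.0    # target keeps weight 1
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = bi + di, bj + dj
                    if (di, dj) == (0, 0) or not (0 <= ni < n and 0 <= nj < n):
                        continue                 # skip self and out-of-grid
                    neigh = src[ni*bh:(ni+1)*bh, nj*bw:(nj+1)*bw]
                    sad = np.abs(target - neigh).sum()   # block dissimilarity
                    # assumed weight: decays with dissimilarity, relaxed by
                    # the target block's gain data and noise level
                    wgt = np.exp(-sad / (noise_level * gain_map[bi, bj] + 1e-9))
                    acc += wgt * neigh
                    total_w += wgt
            out[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw] = acc / total_w
    return out
```

On a perfectly flat image every neighbour matches its target block exactly, so the weighted average reproduces the input unchanged.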
Referring to fig. 2, the present application further provides an image processing apparatus 10. The image processing apparatus 10 includes a noise reduction module 11, and the noise reduction module 11 includes a first obtaining unit 111, a second obtaining unit 113, a noise reduction unit 115, and a third obtaining unit 117. The first obtaining unit 111 is configured to perform the method in 01, i.e., the first step: acquiring an initial image P0, wherein the initial image P0 includes a plurality of blocks with the same size, and each block includes at least one pixel point. The second obtaining unit 113 is configured to perform the method in 02, i.e., the second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0. The noise reduction unit 115 is configured to perform the method in 03, i.e., the third step: performing noise reduction processing on the target block according to the plurality of weight values. The third obtaining unit 117 is configured to perform the method in 04, i.e., to traverse all blocks in the initial image P0, take each block as a target block once, and repeat the second step and the third step to obtain a noise-reduced image P1.
Referring to fig. 3, the present application also provides a terminal 100. The terminal 100 includes one or more processors 30, a memory 50, and one or more programs. The one or more programs are stored in the memory 50 and executed by the one or more processors 30, and the programs include instructions for performing the image processing methods of the embodiments of the present application. That is, when the one or more processors 30 execute a program, the processors 30 may implement the methods in 01, 02, 03, and 04. That is, the one or more processors 30 are operable to perform the first step: acquiring an initial image P0, wherein the initial image P0 includes a plurality of blocks with the same size, and each block includes at least one pixel point; the second step: taking any one of the blocks as a target block, and obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map A is the same as that of the initial image P0; the third step: performing noise reduction processing on the target block according to the plurality of weight values; and finally, traversing all blocks in the initial image P0, taking each block as a target block once, and repeating the second step and the third step to obtain a noise-reduced image P1.
Specifically, the terminal 100 may include, but is not limited to, a mobile phone, a notebook computer, a smart television, a tablet computer, a smart watch, or a computer. The image processing apparatus 10 may be an integration of functional modules integrated in the terminal 100. The present application is described only by taking the terminal 100 as a mobile phone as an example; the cases where the terminal 100 is another type of device are similar to the mobile phone and will not be described in detail.
In the image processing method, the image processing apparatus 10, the terminal 100, and the non-volatile computer-readable storage medium 200 of the present application, noise reduction is performed on the target block according to the gain data corresponding to the target block in the expansion map A, the noise level of the target block, and the weight values of a plurality of adjacent blocks of the target block, and all blocks in the initial image P0 are traversed so that each block in the initial image P0 is denoised, obtaining the noise-reduced image P1. The gain data in the expansion map A correspond to the blocks in the initial image P0, that is, each block in the initial image P0 can find its corresponding gain data in the expansion map A. The noise reduction intensity of different blocks in the initial image P0 can therefore be effectively regulated, so that noise in brightened dark areas is suppressed in a more targeted way, details in the initial image P0 are better protected, and uneven noise reduction between the central area and the corner areas of the noise-reduced image P1 is avoided.
In the method 01, the first obtaining unit 111 or the processor 30 acquires an initial image P0, wherein the initial image P0 includes a plurality of blocks with the same size, and each block includes at least one pixel point. After the initial image P0 is acquired by the first obtaining unit 111 or the processor 30, the initial image P0 may be divided into a plurality of blocks of the same size; for example, the initial image P0 may be divided into 3 × 3 blocks, 4 × 4 blocks, 5 × 5 blocks, or 6 × 6 blocks, which is not limited here. Specifically, the number of blocks may be set according to the distribution of the bright areas and the dark areas in the initial image P0 and the size of the dark areas, so that the plurality of weight values obtained by the second obtaining unit 113 or the processor 30 suppress the noise in the dark areas in a more targeted way. For example, as shown in fig. 4, the initial image P0 includes 16 blocks: P0(0, 0), P0(0, 1), P0(0, 2), P0(0, 3), P0(1, 0), P0(1, 1), P0(1, 2), P0(1, 3), P0(2, 0), P0(2, 1), P0(2, 2), P0(2, 3), P0(3, 0), P0(3, 1), P0(3, 2), and P0(3, 3), and each block may include one, two, three, four, or more than four pixel points.
In the method 02, an arbitrary block in the initial image P0 is used as the target block, and the weight values of a plurality of adjacent blocks of the target block are obtained according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block. An adjacent block is a block surrounding and connected with the target block. As shown in fig. 4, when the target block is the block P0(0, 0) in the initial image P0, the adjacent blocks of the target block P0(0, 0) are the blocks P0(0, 1), P0(1, 1), and P0(1, 0). When the target block is the block P0(1, 1), the adjacent blocks of the target block P0(1, 1) are the blocks P0(0, 0), P0(0, 1), P0(0, 2), P0(1, 2), P0(2, 2), P0(2, 1), P0(2, 0), and P0(1, 0). When the target block is the block P0(2, 0), the adjacent blocks of the target block P0(2, 0) are the blocks P0(1, 0), P0(1, 1), P0(2, 1), P0(3, 1), and P0(3, 0). As shown in fig. 4, A(0, 0), A(0, 1), …, and A(3, 3) in the expansion map A are each gain data: A(0, 0) is the gain data corresponding to the block P0(0, 0) in the initial image P0, A(0, 1) is the gain data corresponding to the block P0(0, 1), …, and A(3, 3) is the gain data corresponding to the block P0(3, 3).
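The neighbour relationships enumerated above (three adjacent blocks for a corner block, five for an edge block, eight for an interior block) can be expressed as a small helper; the function name `neighbour_blocks` is an illustrative choice, not from the patent:

```python
def neighbour_blocks(n, i, j):
    """Coordinates of the blocks surrounding and connected with block (i, j)
    in an n x n grid of blocks (3, 5, or 8 neighbours)."""
    return [(i + di, j + dj)
            for di in (-1, 0, 1)
            for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)            # exclude the target block itself
            and 0 <= i + di < n              # stay inside the grid rows
            and 0 <= j + dj < n]             # stay inside the grid columns
```

For the 4 × 4 grid of fig. 4, `neighbour_blocks(4, 0, 0)` yields the three blocks (0, 1), (1, 0), and (1, 1), matching the adjacent blocks of P0(0, 0) listed above.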
When the target block is the block P0(0, 0), the second obtaining unit 113 or the processor 30 obtains the weight value of the adjacent block P0(0, 1) from the gain data A(0, 1) in the expansion map A and the noise level of the target block P0(0, 0), obtains the weight value of the adjacent block P0(1, 1) from the gain data A(1, 1) in the expansion map A and the noise level of the target block P0(0, 0), and obtains the weight value of the adjacent block P0(1, 0) from the gain data A(1, 0) in the expansion map A and the noise level of the target block P0(0, 0).
In the method 03, the noise reduction unit 115 or the processor 30 performs noise reduction processing on the target block according to the weight values of a plurality of adjacent blocks of the target block. For example, as shown in fig. 4, when the target block is the block P0(0, 0), the noise reduction unit 115 or the processor 30 performs noise reduction processing on the target block P0(0, 0) according to the weight values of the adjacent blocks P0(0, 1), P0(1, 1), and P0(1, 0).
In the method 04, all blocks in the initial image P0 are traversed so that noise reduction processing is performed on each block as the target block; the specific noise reduction process is as in the foregoing method 02 and method 03 and is not repeated here. Since every block in the initial image P0 serves as the target block once, each target block is denoised according to the weight values of its adjacent blocks, and the weight value of each adjacent block depends on the noise level of the target block and the gain data corresponding to the target block, the noise reduction unit 115 or the processor 30 effectively regulates the noise reduction intensity of different blocks in the initial image. Noise in brightened dark areas is thus suppressed in a more targeted way, details in the initial image are better protected, and uneven noise reduction between the central area and the corner areas of the noise-reduced image is avoided.
Referring to fig. 4 and 5, in some embodiments, 02: obtaining the weight values of a plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map A and the noise level of the target block includes:
021: acquiring a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block;
023: acquiring a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and
025: acquiring the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
Referring to FIG. 2, the second obtaining unit 113 is further configured to perform the methods of 021, 023, and 025. That is, the second obtaining unit 113 is further configured to: acquire a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block; acquire a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and acquire the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
Referring to FIG. 3, the processor 30 is further configured to execute the methods of 021, 023, and 025. That is, the processor 30 is further configured to: acquire a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block; acquire a second ratio according to the first ratio, the noise level, and the gain data corresponding to the target block in the expansion map A; and acquire the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
In the method 021, when the second obtaining unit 113 or the processor 30 calculates the weight value of an adjacent block of the target block, a first ratio for the adjacent block is obtained from the pixel values of all the pixel points in the target block and in the adjacent block. For example, when each block in the initial image P0 includes one pixel point, the second obtaining unit 113 or the processor 30 calculates the first ratio of the adjacent block from the pixel point in the target block and the pixel point in the adjacent block. Referring to fig. 4 and fig. 6, when each block in the initial image P0 includes four pixel points, the second obtaining unit 113 or the processor 30 calculates the first ratio of the adjacent block from the pixel values of all the pixel points in the target block and the pixel values of all the pixel points in the adjacent block. For example, when the target block is the block P0(0, 0) and the first ratio for the adjacent block P0(0, 1) is obtained, the pixel values of the four pixel points in the target block P0(0, 0) are pa, pb, pc, and pd, and the pixel values of the four pixel points in the adjacent block P0(0, 1) are pe, pf, pg, and ph. The second obtaining unit 113 or the processor 30 obtains the first ratio of the adjacent block P0(0, 1) from the pixel values pa, pb, pc, and pd in the target block P0(0, 0) and the pixel values pe, pf, pg, and ph in the adjacent block P0(0, 1), so that the pixel values of the pixel points in the adjacent block are fully utilized and the weight value derived from the first ratio can filter the noise in the target block P0(0, 0) in a more targeted way.
In the method 023, the second obtaining unit 113 or the processor 30 obtains the second ratio of the adjacent block according to the first ratio obtained in the method 021, the noise level of the target block, and the gain data corresponding to the target block in the expansion map A. For example, the first ratio may be obtained from the difference between the total pixel value of all the pixel points in the target block and the total pixel value of all the pixel points in the adjacent block, or from the sum of the differences between the pixel values of the pixel points in the target block and those of the corresponding pixel points in the adjacent block. Finally, the method in 025 is performed: the weight value of the adjacent block is obtained according to the first ratio and the second ratio. When the second obtaining unit 113 or the processor 30 obtains the weight values of the other adjacent blocks of the target block, the method 021, the method 023, and the method 025 are executed in sequence.
In summary, when the second obtaining unit 113 or the processor 30 obtains the weight values of the adjacent blocks of the target block, the weight value of each adjacent block is calculated by combining the pixel values of all the pixel points in the target block, the pixel values of all the pixel points in the adjacent block, the noise level of the target block, and the gain data corresponding to the target block in the expansion map A, and the target block is subjected to targeted noise reduction using the weight values corresponding to the adjacent blocks, so as to effectively protect the detail parts in the target block.
Referring to fig. 7, in some embodiments, 021: acquiring the first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block, includes:
0211: acquiring a difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and
0213: and accumulating the absolute values of the difference values respectively corresponding to all the pixel points to obtain a first ratio of the adjacent blocks.
Referring to FIG. 2, the second obtaining unit 113 is further configured to execute the methods in 0211 and 0213. That is, the second obtaining unit 113 is further configured to: acquire the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and accumulate the absolute values of the difference values respectively corresponding to all the pixel points to obtain the first ratio of the adjacent block.

Referring to FIG. 3, the processor 30 is also configured to execute the methods in 0211 and 0213. That is, the processor 30 is further configured to: acquire the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and accumulate the absolute values of the difference values respectively corresponding to all the pixel points to obtain the first ratio of the adjacent block.
Referring to fig. 6, in an embodiment, if the target block includes four pixel points, the second obtaining unit 113 or the processor 30 obtains the difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block. The pixel point at the upper left corner in the target block corresponds to the pixel point at the upper left corner in the adjacent block, the pixel point at the upper right corner in the target block corresponds to the pixel point at the upper right corner in the adjacent block, the pixel point at the lower left corner in the target block corresponds to the pixel point at the lower left corner in the adjacent block, and the pixel point at the lower right corner in the target block corresponds to the pixel point at the lower right corner in the adjacent block. For example, the pixel point with the pixel value pa in the target block P0(0, 0) corresponds to the pixel point with the pixel value pe in the adjacent block P0(0, 1). When the target block is P0(0, 0) and the second obtaining unit 113 or the processor 30 obtains the first ratio of the weight value of the adjacent block P0(0, 1), the second obtaining unit 113 or the processor 30 sequentially calculates the difference value of each pair of pixel values in the two blocks (the target block and the adjacent block); the four difference values for the adjacent block P0(0, 1) are pa − pe, pb − pf, pc − pg and pd − ph, respectively, and the absolute values of the four difference values are then accumulated to obtain the first ratio of the weight value of the adjacent block P0(0, 1), which is |pa − pe| + |pb − pf| + |pc − pg| + |pd − ph|.
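The first-ratio computation in the methods 0211 and 0213 (per-pixel differences, then accumulated absolute values) can be sketched as follows; the block values reuse the pa–ph naming from the example above, and the concrete numbers are illustrative only:

```python
def first_ratio(target_block, neighbor_block):
    """Sum of absolute differences between pixels at corresponding positions
    of two same-sized blocks (methods 0211 and 0213)."""
    assert len(target_block) == len(neighbor_block)
    return sum(abs(t - n) for t, n in zip(target_block, neighbor_block))

# Target block P0(0, 0) pixels pa, pb, pc, pd; adjacent block P0(0, 1)
# pixels pe, pf, pg, ph (illustrative values).
pa, pb, pc, pd = 10, 12, 11, 13
pe, pf, pg, ph = 9, 15, 11, 10
diff = first_ratio([pa, pb, pc, pd], [pe, pf, pg, ph])
# diff = |10-9| + |12-15| + |11-11| + |13-10| = 7
```

The same helper applies unchanged to one-pixel blocks, where the first ratio degenerates to a single absolute difference.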
In summary, the second obtaining unit 113 or the processor 30 obtains the weight values of the adjacent blocks of the target block through the methods 021, 023, 025, 0211 and 0213 according to the following formula (1):
Weight = (noiselvl × lut(x, y)) / (diff + noiselvl × lut(x, y))   (1)

Wherein Weight in formula (1) represents the weight value of the adjacent block, diff represents the first ratio of the weight value of the adjacent block, and diff + noiselvl × lut(x, y) represents the second ratio of the weight value of the adjacent block. noiselvl represents the noise level of the target block, a known positive numerical value determined by other modules in the image processing apparatus 10. lut(x, y) represents the gain data A(x, y) in the expansion map A corresponding to the target block P0(x, y). It can be seen from formula (1) that the larger the gain data corresponding to the target block in the expansion map A, the larger the weight values of the adjacent blocks of the target block. Referring to fig. 4 and 6, for example, when the target block is P0(0, 0), the first ratio of the weight value of the adjacent block P0(0, 1) is diff = |pa − pe| + |pb − pf| + |pc − pg| + |pd − ph|, and the second ratio is diff + noiselvl × A(0, 0), so the weight value of the adjacent block P0(0, 1) is

Weight = (noiselvl × A(0, 0)) / (diff + noiselvl × A(0, 0)).
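The weighting of methods 021/023/025 can be sketched as below. The exact closed form is an assumption: the sketch uses a form consistent with the surrounding definitions (diff is the first ratio, diff + noiselvl × lut(x, y) is the second ratio, and a larger gain value yields a larger weight); all numeric inputs are illustrative:

```python
def neighbor_weight(diff, noise_lvl, lut_xy):
    """Weight of an adjacent block; noise_lvl is a known positive value and
    lut_xy is the gain data of the target block in the expansion map.
    Assumed form: (noise_lvl * lut_xy) / second_ratio."""
    second_ratio = diff + noise_lvl * lut_xy   # method 023
    return (noise_lvl * lut_xy) / second_ratio  # method 025; lies in (0, 1]

# A larger gain value (e.g. a corner region of the expansion map) gives a
# larger weight, i.e. stronger smoothing, matching the text's description.
w_center = neighbor_weight(diff=7.0, noise_lvl=2.0, lut_xy=1.0)
w_corner = neighbor_weight(diff=7.0, noise_lvl=2.0, lut_xy=3.0)
assert w_corner > w_center
```

Note that when diff is large (the adjacent block differs strongly from the target block, e.g. across an edge), the weight shrinks, which is what protects detail during the weighted averaging.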
Referring to fig. 8, in some embodiments, 03: performing noise reduction processing on the target block according to the plurality of weight values includes:

031: acquiring a third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks;

033: accumulating the plurality of weight values respectively corresponding to the adjacent blocks to obtain a fourth ratio; and

035: acquiring the pixel value of the target block according to the third ratio and the fourth ratio as the pixel value of the block, corresponding to the target block, of the noise-reduced image.
Referring to fig. 2, the noise reduction unit 115 is further configured to perform the methods in 031, 033 and 035. That is, the noise reduction unit 115 is further configured to: obtain the third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks; accumulate the plurality of weight values respectively corresponding to the adjacent blocks to obtain the fourth ratio; and acquire the pixel value of the target block according to the third ratio and the fourth ratio as the pixel value of the block, corresponding to the target block, of the noise-reduced image.

Referring to fig. 3, the processor 30 is also configured to execute the methods in 031, 033 and 035. That is, the processor 30 is further configured to: obtain the third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks; accumulate the plurality of weight values respectively corresponding to the adjacent blocks to obtain the fourth ratio; and acquire the pixel value of the target block according to the third ratio and the fourth ratio as the pixel value of the block, corresponding to the target block, of the noise-reduced image.
After the second obtaining unit 113 or the processor 30 obtains the plurality of weight values corresponding to the adjacent blocks of the target block, the noise reduction unit 115 or the processor 30 obtains the third ratio according to the plurality of weight values and the pixel values of the adjacent blocks, and accumulates the plurality of weight values corresponding to the adjacent blocks to obtain the fourth ratio. The noise reduction unit 115 then obtains the pixel value of the target block after noise reduction by combining the third ratio and the fourth ratio, and uses it as the pixel value of the block, corresponding to the target block, of the noise-reduced image. The pixel value of a block, the pixel value of an adjacent block and the pixel value of the target block all represent the sum of the pixel values of all the pixel points in that block. Specifically, the noise reduction unit 115 or the processor 30 obtains the pixel value of the block of the noise-reduced image corresponding to the target block according to formula (2):
Patch_de-noise = Σ_i(Patch_i × Weight_i) / Σ_i(Weight_i), i = 1, 2, …, n   (2)

Wherein Patch_de-noise represents the pixel value of the target block after noise reduction, Σ_i(Patch_i × Weight_i) represents the third ratio, Σ_i(Weight_i) represents the fourth ratio, and n represents the number of adjacent blocks of the target block.

For example, as shown in fig. 4, when the block P0(0, 0) is the target block, the number of corresponding adjacent blocks is 3, and n = 3. When the block P0(1, 0) is the target block, the number of corresponding adjacent blocks is 5, and n = 5. When the block P0(1, 1) is the target block, the number of corresponding adjacent blocks is 8, and n = 8. Patch_i represents the sum of the pixel values of all the pixel points in the i-th adjacent block, and Weight_i represents the weight value of the i-th adjacent block.

Referring to fig. 4 and 9, when the target block is the block P0(0, 0), the sum of the pixel values of all the pixel points in the target block before noise reduction is denoted as Patch_noise; the pixel value of the adjacent block P0(0, 1) is Patch1 and its weight value is Weight1; the pixel value of the adjacent block P0(1, 1) is Patch2 and its weight value is Weight2; the pixel value of the adjacent block P0(1, 0) is Patch3 and its weight value is Weight3. The noise reduction unit 115 or the processor 30 obtains the pixel value of the target block after noise reduction according to formula (2) as

Patch_de-noise = (Patch1 × Weight1 + Patch2 × Weight2 + Patch3 × Weight3) / (Weight1 + Weight2 + Weight3).

That is, the pixel value of the block (i.e., P1(0, 0)) of the noise-reduced image P1 corresponding to the block P0(0, 0) of the initial image P0 is Patch_de-noise.
If the target block P0(0, 0) includes four pixel points and the pixel value distribution of each pixel point is as shown in fig. 6, then Patch_noise = pa + pb + pc + pd. In one embodiment, the pixel value of each pixel point in the block P1(0, 0) in the noise-reduced image P1 is correlated with the pixel value of the pixel point at the corresponding position in the initial image P0 and the pixel value of the target block after noise reduction (i.e., Patch_de-noise). The pixel value of the pixel point at the upper left corner in the block P1(0, 0) in the noise-reduced image P1 is

pa × Patch_de-noise / Patch_noise;

the pixel value of the pixel point at the upper right corner in the block P1(0, 0) in the noise-reduced image P1 is

pb × Patch_de-noise / Patch_noise;

the pixel value of the pixel point at the lower left corner in the block P1(0, 0) in the noise-reduced image P1 is

pc × Patch_de-noise / Patch_noise;

and the pixel value of the pixel point at the lower right corner in the block P1(0, 0) in the noise-reduced image P1 is

pd × Patch_de-noise / Patch_noise.
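The noise-reduction steps 031/033/035, together with the per-pixel redistribution (each pixel scaled by the ratio of the denoised block value to the original block value), can be sketched as follows; the per-pixel scaling rule and all numeric inputs are illustrative assumptions consistent with the example above:

```python
def denoise_block(target_pixels, neighbor_sums, weights):
    """Denoise one target block: weighted average of adjacent-block sums
    (third ratio / fourth ratio), then redistribute over the pixels."""
    patch_noise = sum(target_pixels)                        # sum before denoise
    third_ratio = sum(p * w for p, w in zip(neighbor_sums, weights))  # 031
    fourth_ratio = sum(weights)                                       # 033
    patch_de_noise = third_ratio / fourth_ratio             # formula (2), 035
    scale = patch_de_noise / patch_noise
    return [p * scale for p in target_pixels]               # per-pixel values

# Target block P0(0, 0) with adjacent blocks P0(0, 1), P0(1, 1), P0(1, 0);
# block sums and weights are made-up example values.
out = denoise_block([10, 12, 11, 13],
                    neighbor_sums=[44, 48, 40],
                    weights=[0.5, 0.25, 0.25])
# sum(out) equals the weighted average of the adjacent-block sums.
```

The redistribution preserves the relative pixel pattern inside the block while forcing the block sum to the denoised value, which is why the four corner formulas above sum exactly to Patch_de-noise.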
Referring to fig. 10, in some embodiments, 03: performing noise reduction processing on the target block according to the plurality of weight values includes:

037: performing noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.

Referring to fig. 2, the noise reduction unit 115 is further configured to perform the method in 037. That is, the noise reduction unit 115 is further configured to: perform noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.

Referring to fig. 3, the processor 30 is also configured to execute the method in 037. That is, the processor 30 is further configured to: perform noise reduction processing on the three YUV channels or the three RGB channels in the target block according to the plurality of weight values.

In some embodiments, when the noise reduction unit 115 or the processor 30 performs noise reduction on the target block according to the plurality of weight values, the noise reduction processing may be performed on the three RGB channels in the target block. Alternatively, the noise reduction unit 115 or the processor 30 performs noise reduction processing on the luminance noise on the Y channel in the target block to suppress the luminance noise in the target block. Alternatively, the noise reduction unit 115 or the processor 30 performs noise reduction processing on the color noise on the U and V channels in the target block to suppress the color noise in the target block. Alternatively, the noise reduction unit 115 or the processor 30 performs joint noise reduction on the three YUV channels. The specific noise reduction process is the same as that in the methods 031, 033 and 035, and is not described herein again.
Referring to fig. 9 and 11, in some embodiments, the image processing method further includes:
05: the sharpening process is performed on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.
Referring to fig. 2, the image processing apparatus 10 further includes a sharpening module 13. The sharpening module 13 is configured to execute the method in 05. That is, the sharpening module 13 is configured to: perform sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.

Referring to fig. 3, the processor 30 is further configured to execute the method in 05. That is, the processor 30 is further configured to: perform sharpening processing on the noise-reduced image P1 based on the noise-reduced image P1 and the gain data.

Specifically, after the third obtaining unit 117 or the processor 30 obtains the noise-reduced image P1, the sharpening module 13 performs sharpening processing on the noise-reduced image P1 according to the gain data, so as to further improve the sharpness of some regions in the noise-reduced image P1 and improve the image quality.
Referring to fig. 12 and 13, in some embodiments, 05: sharpening the noise-reduced image P1 based on the noise-reduced image P1 and the gain data includes:

051: acquiring a detail layer D of the noise-reduced image P1; and

053: sharpening the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.

Referring to fig. 2, the sharpening module 13 is further configured to perform the methods in 051 and 053. That is, the sharpening module 13 is further configured to: acquire the detail layer D of the noise-reduced image P1; and sharpen the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.

Referring to fig. 3, the processor 30 is also configured to execute the methods in 051 and 053. That is, the processor 30 is further configured to: acquire the detail layer D of the noise-reduced image P1; and sharpen the noise-reduced image P1 based on the detail layer D and the gain data corresponding to the detail layer D in the expansion map A.

The noise-reduced image P1 can be decomposed into a bottom layer containing the low-frequency information of the noise-reduced image P1 and a detail layer containing the high-frequency information of the noise-reduced image P1. To obtain the bottom layer of the noise-reduced image P1, the sharpening module 13 may filter the noise-reduced image P1 with a low-pass filter (such as a mean filter), a Gaussian filter or a guided filter; after the bottom layer is extracted, the sharpening module 13 subtracts the bottom layer from the noise-reduced image P1 to obtain the detail layer of the noise-reduced image P1.
As shown in fig. 13, in one example, if the detail layer D in the noise-reduced image P1 is indicated by the hatched portion in the noise-reduced image P1, the gain data corresponding to the detail layer D in the expansion map A is A(2, 0) + A(3, 0). The sharpening module 13 or the processor 30 sharpens the detail layer D to obtain a sharpened detail layer D' = D × (A(2, 0) + A(3, 0)), where the multiplication of the detail layer D by the gain data (A(2, 0) + A(3, 0)) refers to the process of sharpening the pixel values in the detail layer D by the gain data (A(2, 0) + A(3, 0)). Since the bottom layer of the noise-reduced image P1 was extracted in the method 051, the sharpening module 13 or the processor 30 adds the bottom layer and the sharpened detail layer D' to obtain the sharpened image P2.
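The base/detail sharpening of methods 051/053 can be sketched for a one-dimensional signal as follows; the mean-filter radius and the gain value are illustrative assumptions, and a real implementation would use the gain data from the expansion map region covering the detail layer:

```python
def sharpen(signal, gain, radius=1):
    """Base/detail sharpening: base = mean (low-pass) filter of the signal,
    detail = signal - base, result = base + detail * gain."""
    n = len(signal)
    base = []
    for i in range(n):                       # mean filter with edge clamping
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        base.append(sum(signal[lo:hi]) / (hi - lo))
    detail = [s - b for s, b in zip(signal, base)]       # high-frequency layer
    detail_sharp = [d * gain for d in detail]            # D' = D * gain
    return [b + d for b, d in zip(base, detail_sharp)]   # bottom layer + D'

flat = sharpen([5.0, 5.0, 5.0, 5.0], gain=2.0)   # flat region: unchanged
edge = sharpen([0.0, 0.0, 10.0, 10.0], gain=2.0)  # edge: contrast amplified
```

A flat region has an empty detail layer and passes through untouched, while an edge gains overshoot proportional to the gain data, which is the intended region-dependent sharpening.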
Referring to fig. 4 and 14, in some embodiments, the image processing method further includes:
06: performing lens shading correction processing on the initial image P0 to acquire an intermediate image;
07: acquiring data of a GR channel and data of a GB channel of an intermediate image (not shown) in a RAW domain;
08: and acquiring the expansion diagram A according to the data of the GR channel and the data of the GB channel.
Referring to fig. 2, the image processing apparatus 10 further includes an obtaining module 17, and the obtaining module 17 is configured to execute the methods in 06, 07 and 08. That is, the obtaining module 17 is configured to: perform lens shading correction processing on the initial image P0 to acquire an intermediate image; acquire the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain; and acquire the expansion map A according to the data of the GR channel and the data of the GB channel.

Referring to fig. 3, the processor 30 is also configured to execute the methods in 06, 07 and 08. That is, the processor 30 is configured to: perform lens shading correction processing on the initial image P0 to acquire an intermediate image; acquire the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain; and acquire the expansion map A according to the data of the GR channel and the data of the GB channel.

Specifically, after the first obtaining unit 111 obtains the initial image P0, due to the influence of the manufacturing process and the optical characteristics of the camera module, the color and brightness of the initial image are often inconsistent from the center to the periphery, so the obtaining module 17 performs lens shading correction processing on the initial image P0 to make the color and brightness of the intermediate image consistent from the center to the periphery. Referring to fig. 15, the data of the GR channel and the data of the GB channel of the intermediate image in the RAW domain are then acquired. The data of the GR channel represents the gain data of the GR-channel pixels (i.e., the brightness gain multiple of the pixels), and the data of the GB channel represents the gain data of the GB-channel pixels. The initial image P0 has a size of 3072 × 4096, and the GR channel and the GB channel acquired by the obtaining module 17 are both matrices of size 13 × 17. Finally, the obtaining module 17 obtains the expansion map A according to the data of the GR channel and the data of the GB channel.
Referring to fig. 15 and 16, in some embodiments, 08: obtaining the expansion map A according to the data of the GR channel and the data of the GB channel includes:

081: averaging the data of the GR channel and the data of the GB channel at the same position to obtain the data at the corresponding position in the G channel; and

083: performing interpolation processing on the G channel to obtain the expansion map A.

Referring to fig. 2, the obtaining module 17 is also configured to perform the methods in 081 and 083. That is, the obtaining module 17 is further configured to: average the data of the GR channel and the data of the GB channel at the same position to obtain the data at the corresponding position in the G channel; and perform interpolation processing on the G channel to obtain the expansion map A.

Referring to fig. 3, the processor 30 is also configured to perform the methods in 081 and 083. That is, the processor 30 is further configured to: average the data of the GR channel and the data of the GB channel at the same position to obtain the data at the corresponding position in the G channel; and perform interpolation processing on the G channel to obtain the expansion map A.
Specifically, when the obtaining module 17 obtains the data in the G channel, it takes the average of the data in the GR channel and the GB channel at the same position, that is, the data G(x, y) in the G channel is (GR(x, y) + GB(x, y)) × 0.5, where x ∈ [0, 12] and y ∈ [0, 16]. For example, G(0, 0) = (GR(0, 0) + GB(0, 0)) × 0.5, G(0, 4) = (GR(0, 4) + GB(0, 4)) × 0.5, …, G(12, 16) = (GR(12, 16) + GB(12, 16)) × 0.5. Then, the data in the G channel is interpolated to obtain an expansion map A having the same size as the initial image P0.
Specifically, in the method 083, the obtaining module 17 may perform interpolation processing on the G channel based on a bilinear interpolation method or a bicubic interpolation method. In one example, the obtaining module 17 performs interpolation processing on the G channel based on a bilinear interpolation method. As shown in fig. 15, 255 pixels are sequentially inserted between two adjacent data in the G channel horizontal direction, and 255 pixels are sequentially inserted between two adjacent data in the G channel vertical direction, to generate an expansion map A of the same size as the initial image P0 (3072 × 4096). For example, if the difference between the first two data is denoted as C1, C1 = G(0, 1) − G(0, 0), then the first pixel inserted between G(0, 0) and G(0, 1) is G(0, 0) + C1 × 1/256, the second pixel inserted between G(0, 0) and G(0, 1) is G(0, 0) + C1 × 2/256, …, and the 255th pixel is G(0, 0) + C1 × 255/256. Then, 255 pixels are inserted between the second data G(0, 1) and the third data G(0, 2); if the difference between these two data is denoted as C2, C2 = G(0, 2) − G(0, 1), then the first pixel inserted between G(0, 1) and G(0, 2) is G(0, 1) + C2 × 1/256, the second pixel inserted is G(0, 1) + C2 × 2/256, …, and the 255th pixel is G(0, 1) + C2 × 255/256. By analogy, 255 pixels are inserted between two adjacent data in each row of data in the G channel, resulting in a matrix of size 13 × 4096. Similarly, 255 pixels are inserted between two adjacent data in each column of data; for example, when 255 pixels are inserted between the first data G(0, 0) in the first column and the second data G(1, 0) in the first column, the first pixel to be inserted is G(0, 0) + (G(1, 0) − G(0, 0)) × 1/256. Finally, the obtaining module 17 obtains an expansion map A with a size of 3072 × 4096.
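The G-channel averaging (method 081) and one row of the linear insertion step (a 1-D slice of the bilinear interpolation in method 083) can be sketched as follows; the gain values are illustrative, and the sketch keeps each original entry followed by its 255 interpolated values:

```python
def g_channel(gr, gb):
    """Method 081: average GR and GB gain data at the same position."""
    return [(a + b) * 0.5 for a, b in zip(gr, gb)]

def expand_row(row, steps=256):
    """Emit each original entry followed by steps-1 linearly interpolated
    values toward the next entry (1-D slice of the bilinear expansion)."""
    out = []
    for a, b in zip(row, row[1:]):
        c = b - a                         # difference C between adjacent data
        out.extend(a + c * k / steps for k in range(steps))
    out.append(row[-1])
    return out

g = g_channel([1.0, 1.4], [1.2, 1.6])     # two adjacent G-channel entries
row = expand_row(g)                        # 257 values: 2 originals + 255 inserted
```

Applying the same insertion along rows and then along columns of the 13 × 17 G channel yields the full-resolution expansion map described above.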
It should be noted that the gain data A(0, 0) in the expansion map A in fig. 15 includes at least one G-channel data value. That is, when a block in the initial image P0 includes one pixel point, A(0, 0) in the expansion map A is equal to G(0, 0); when a block in the initial image P0 includes two or more pixel points, A(0, 0) in the expansion map A includes a plurality of interpolated G-channel data values.
Referring to fig. 17, fig. 17 is an expansion map A obtained by the obtaining module 17 from a certain initial image. The central area of the expansion map A is dark, and the brightness increases toward the corner areas, which indicates that the brightness gain multiple of the pixels in the corner areas is larger.
In summary, in the image processing method, the image processing apparatus 10, the terminal 100, and the non-volatile computer readable storage medium 200 of the present application, the noise reduction processing is performed on the target block according to the gain data corresponding to the target block in the expansion map A, the noise level of the target block, and the weight values of the plurality of adjacent blocks of the target block, and all the blocks in the initial image P0 are traversed so that each block in the initial image P0 is subjected to noise reduction processing, thereby obtaining the noise-reduced image P1. The gain data in the expansion map A corresponds to the blocks in the initial image P0, that is, each block in the initial image P0 can find corresponding gain data in the expansion map A, so that the noise reduction intensity of different blocks in the initial image P0 can be effectively regulated. Noise in bright and dark regions is thus suppressed in a more targeted manner, the details in the initial image P0 are better protected, and the phenomenon of uneven noise reduction between the center region and the corner regions of the noise-reduced image P1 is avoided.
Referring to fig. 18, the present embodiment further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by the one or more processors 30, causes the processors 30 to perform the image processing methods in 01, 02, 03, 04, 05, 06, 07, 08, 021, 023, 025, 0211, 0213, 031, 033, 035, 037, 051, 053, 081 and 083.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. An image processing method, comprising:
the first step is as follows: acquiring an initial image, wherein the initial image comprises a plurality of blocks with the same size, and each block comprises at least one pixel point;
the second step is as follows: taking any one of the blocks as a target block, and obtaining weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and the noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the size of the expansion map is the same as that of the initial image;
the third step: performing noise reduction processing on the target block according to the plurality of weight values; and
and traversing all the blocks in the initial image, taking each block as the target block once, and repeatedly executing the second step and the third step to obtain a noise-reduced image.
2. The method of claim 1, wherein the obtaining the weight values of the plurality of adjacent blocks around the target block according to the gain data corresponding to the target block in the expansion map and the noise level of the target block comprises:
acquiring a first ratio of the adjacent block according to the pixel values of all the pixel points in the target block and the pixel values of the corresponding pixel points in the adjacent block;
acquiring a second ratio according to the first ratio, the noise level and the gain data corresponding to the target block in the expansion map; and
and acquiring the weight value corresponding to the adjacent block according to the first ratio and the second ratio.
3. The image processing method of claim 2, wherein the obtaining the first ratio of the adjacent block according to the pixel values of all the pixels in the target block and the pixel values of the corresponding pixels in the adjacent block comprises:
acquiring a difference value of each pixel point according to the pixel value of each pixel point in the target block and the pixel value of the pixel point at the corresponding position in the adjacent block; and
and accumulating the absolute values of the difference values respectively corresponding to all the pixel points to obtain the first ratio of the adjacent block.
4. The image processing method according to claim 1, wherein the performing noise reduction processing on the target block according to the plurality of weight values comprises:
acquiring a third ratio according to the pixel values of the adjacent blocks and the weight values respectively corresponding to the adjacent blocks;
accumulating the weighted values respectively corresponding to the adjacent blocks to obtain a fourth ratio; and
and acquiring the pixel value of the target block according to the third ratio and the fourth ratio to be used as the pixel value of the noise-reduced image and the block corresponding to the target block.
5. The image processing method according to claim 1, wherein the performing noise reduction processing on the target block according to the plurality of weight values comprises:
and performing noise reduction processing on YUV three channels or RGB three channels in the target block according to the plurality of weight values.
6. The image processing method according to claim 1, characterized in that the image processing method further comprises:
and carrying out sharpening processing on the noise-reduced image according to the noise-reduced image and the gain data.
7. The image processing method according to claim 6, wherein the sharpening the noise-reduced image according to the noise-reduced image and the gain data comprises:
acquiring a detail layer of the noise-reduced image; and
and sharpening the noise-reduced image according to the detail layer and the gain data corresponding to the detail layer in the expansion map.
8. The image processing method according to claim 1, characterized in that the image processing method further comprises:
performing lens shading correction processing on the initial image to obtain an intermediate image;
acquiring data of a GR channel and data of a GB channel of the intermediate image in a RAW domain; and
and acquiring the expansion diagram according to the data of the GR channel and the data of the GB channel.
9. The image processing method according to claim 8, wherein the obtaining the expansion map according to the data of the GR channel and the data of the GB channel includes:
averaging the data of the GR channel and the data of the GB channel at the same position to obtain the data of the corresponding position in the G channel; and
and carrying out interpolation processing on the G channel to obtain the expansion map.
10. An image processing apparatus, comprising a noise reduction module, wherein the noise reduction module comprises:
a first obtaining unit, configured to perform a first step of acquiring an initial image, wherein the initial image comprises a plurality of blocks of the same size, and each block comprises at least one pixel point;
a second obtaining unit, configured to perform a second step of taking any one of the blocks as a target block and obtaining weight values of a plurality of adjacent blocks around the target block according to gain data corresponding to the target block in an expansion map and a noise level of the target block, wherein the adjacent blocks are blocks surrounding and connected with the target block, and the expansion map has the same size as the initial image;
a noise reduction unit, configured to perform a third step of performing noise reduction processing on the target block according to the plurality of weight values; and
a third obtaining unit, configured to traverse all the blocks in the initial image, take each block as the target block once, and repeatedly perform the second step and the third step to obtain a noise-reduced image.
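Tied together, the traversal amounts to a single pass over the block grid, computing weights for each target block and writing its denoised version into the output. Here `weight_fn` and `denoise_fn` stand in for the second and third steps; they are placeholder names, not from the patent:

```python
import numpy as np

def denoise_image(image, expansion_map, block_size, weight_fn, denoise_fn):
    """Traverse every block of the initial image exactly once, treating
    it as the target block: derive neighbor weights from the gain data
    and noise level (weight_fn), then denoise the block (denoise_fn)."""
    out = image.copy()
    h, w = image.shape[:2]
    for bi in range(h // block_size):
        for bj in range(w // block_size):
            weights = weight_fn(expansion_map, image, bi, bj)
            i0, j0 = bi * block_size, bj * block_size
            out[i0:i0 + block_size, j0:j0 + block_size] = \
                denoise_fn(image, block_size, bi, bj, weights)
    return out
```

Writing into a copy rather than in place keeps each block's weights computed against the original (not partially denoised) image, matching the step order described in the claims.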
11. A terminal, wherein the terminal comprises:
one or more processors and a memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, and the programs comprise instructions for performing the image processing method of any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the image processing method of any one of claims 1 to 9.
CN202111193917.4A | Priority date: 2021-10-13 | Filing date: 2021-10-13 | Image processing method and device, terminal and readable storage medium | Active | Granted as CN113890961B (en)

Priority Applications (1)

Application Number: CN202111193917.4A | Priority date: 2021-10-13 | Filing date: 2021-10-13 | Title: Image processing method and device, terminal and readable storage medium

Publications (2)

Publication Number | Publication Date
CN113890961A | 2022-01-04
CN113890961B | 2025-02-18

Family

ID: 79002735

Family Applications (1)
CN202111193917.4A (Active) | Priority date: 2021-10-13 | Filing date: 2021-10-13 | Title: Image processing method and device, terminal and readable storage medium

Country Status (1)
CN: CN113890961B (en)

Cited By (2)

* Cited by examiner, † Cited by third party

CN115713471A (en)* | Priority date: 2022-11-23 | Publication date: 2023-02-24 | 珠海视熙科技有限公司 | Image noise reduction method and device, storage medium and computer equipment
CN115713471B (en)* | Priority date: 2022-11-23 | Publication date: 2023-08-29 | 珠海视熙科技有限公司 | Image noise reduction method and device, storage medium and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party

CN104168405A (en)* | Priority date: 2013-05-20 | Publication date: 2014-11-26 | 聚晶半导体股份有限公司 | Noise suppression method and image processing apparatus
US20160086317A1 (en)* | Priority date: 2014-09-23 | Publication date: 2016-03-24 | Intel Corporation | Non-local means image denoising with detail preservation using self-similarity driven blending
CN109493281A (en)* | Priority date: 2018-11-05 | Publication date: 2019-03-19 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium
CN109785246A (en)* | Priority date: 2018-12-11 | Publication date: 2019-05-21 | 深圳奥比中光科技有限公司 | Noise reduction method, device and equipment based on non-local mean filtering
CN111402135A (en)* | Priority date: 2020-03-17 | Publication date: 2020-07-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium

Also Published As

CN113890961B (en) | Publication date: 2025-02-18

Similar Documents

US8730341B2: Image processing apparatus, image pickup apparatus, control method for image processing apparatus, and storage medium storing control program therefor
KR101313911B1: Method and apparatus for processing an image
US8385678B2: Image restoration apparatus and method
US8144984B2: Image processing apparatus, image processing method, and program for color fringing estimation and compensation
US8208039B2: Image processing apparatus and computer-readable medium
EP1766569A1: Methods, system, program modules and computer program product for restoration of color components in an image model
US11995809B2: Method and apparatus for combining low-dynamic range images to a single image
JP5020615B2: Image processing apparatus, imaging apparatus, image processing method, and program
JP4963598B2: Image processing apparatus, imaging apparatus, image processing method, and program
US20120188418A1: Texture detection in image processing
US20140028880A1: Image processing device, image processing method, program, and imaging apparatus
WO2011000392A1: Method and camera system for improving the contrast of a camera image
JP5907590B2: Image processing apparatus, image processing method, and program
US8957999B2: Method of selective aperture sharpening and halo suppression using chroma zones in CMOS imagers
CN108513043A: Image denoising method and terminal
CN113890961B: Image processing method and device, terminal and readable storage medium
KR101089902B1: Apparatus and method for determining edge region of digital image
JP5439210B2: Image processing device
JP6315239B2: Imaging apparatus, imaging method, image processing apparatus, imaging program
CN113344822B: Image denoising method, device, terminal and storage medium
CN108307121B: Local image mapping method and vehicle camera
CN113808038B: Image processing method, medium and electronic device
CN116385370A: Fisheye image processing method, device, electronic equipment and storage medium
JP5535443B2: Image processing device
US20250173829A1: Method and system for processing an image

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
