Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, such that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first," "second," and the like are generally used in a generic sense and do not limit the number of terms; e.g., a first term can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
Currently, noise reduction algorithms can be classified into many categories, such as linear/nonlinear and spatial/transform domain, where the transform domain includes the wavelet transform domain, the Fourier transform domain, or other transform domains. Noise reduction algorithms often need to balance speed against effect, and a noise reduction method that is both fast and good is difficult to realize in a pure software mode. Another idea is to combine self-similarity with transform domains. Each block is searched within the image to find a series of blocks similar thereto. The classical Non-Local Means (NLM) noise reduction algorithm computes a weighted average of these similar blocks in the spatial domain. If, further, these similar blocks are transformed to the frequency domain and then transformed back to the spatial domain after some filtering and thresholding, the result is a method combining self-similarity and the transform domain; this is the principle utilized by the classical Block-Matching and 3D filtering (BM3D) noise reduction algorithm. Similarly, combining self-similarity with sparse coding, combining self-similarity with low-rank approximation, and the like can achieve good noise reduction effects.
However, one of the bottlenecks of noise reduction algorithms is that there must be some trade-off between effect and performance: algorithms with outstanding effect excel in temporal stability and noise reduction quality, but are limited by their complexity and are difficult to bring to the industrial field; whereas fast noise reduction algorithms, such as bilateral filtering and median filtering, cannot separate noise from detail signals well and bring obvious detail loss while reducing noise.
Therefore, the embodiments of the present invention provide a noise reduction processing method, an apparatus, and an electronic device, which may obtain a target noise reduction result with both good speed and good effect by using the motion estimation result: a stationary or small-motion region tends to be sampled from the result of time-domain filtering, a region with large motion tends to be sampled from the result of spatial-domain filtering of the current frame, and the time-domain and spatial-domain results are then weighted and fused.
Specifically, as shown in fig. 1 to fig. 3, an embodiment of the present invention provides a noise reduction processing method, which is mainly applied to a server preprocessing system. The method specifically comprises the following steps:
step 101, obtaining a first image block of a first frame image in a video to be processed.
In the above step 101, an application scenario of the noise reduction processing method is shown in fig. 2, which illustrates where the noise reduction processing method is located and the necessity of reducing noise in a video. First, a first frame image in a video to be processed is obtained, where the first frame image may be the first frame image in the video to be processed, may also be the last frame image, and may also be any intermediate frame image, and is not specifically limited herein. The first frame image may be composed of a plurality of image blocks, and the first image block is one of the image blocks in the first frame image.
The first frame image may be a video frame image in YUV space. YUV is a picture format composed of three parts, Y, U, and V: Y represents luminance, that is, the gray-scale value; U and V respectively represent chrominance, and serve to describe the image color and saturation and to specify the color of a pixel.
As shown in fig. 2, in step A1, noise intensity estimation is performed on the first frame image to obtain the noise intensity of the video to be processed, and whether to subsequently perform noise reduction restoration may be determined according to the noise intensity.
Step A2, noise reduction processing is carried out on the first frame image; specifically, each image block in the first frame image is subjected to noise reduction processing to obtain a target noise reduction result of each image block, so that a final noise reduction result of the first frame image is obtained.
Step A3, image enhancement processing; specifically, the noise-reduced video is subjected to image enhancement processing to obtain a processed enhanced video.
Step A4, multi-tier transcoding; specifically, the enhanced video is transcoded into multiple tiers to obtain multiple types of video, such as high-definition video, standard-definition video, etc. After multi-tier transcoding is carried out, the multiple types of video are delivered to the user side so that the user can select which version to watch.
It should be noted that, for a sequence without noise, the noise reduction processing of step A2 may be directly omitted and the subsequent image enhancement processing performed, to improve computational efficiency.
Step 102, performing motion estimation on the first image block to obtain a motion vector related to the first image block.
In the above step 102, as shown in fig. 3, motion estimation is performed on the first image block in the first frame image in the video to be processed, and the result of the motion estimation, i.e., the relative displacement between the first image block and the reference block (i.e., the matching block), is obtained, so as to obtain a motion vector for the first image block. The number of motion vectors may be one or more, depending on the number of reference blocks.
The basic idea of motion estimation is: dividing each frame image in an image sequence (namely, a video to be processed) into a plurality of non-overlapping image blocks, considering that the displacement amounts of all pixels in the image blocks are the same, then finding out a block which is the most similar to the current image block, namely a reference block, from each image block to a reference frame image in a given specific search range according to a matching criterion, wherein the relative displacement between the reference block and the current image block is a motion vector. When the video is compressed, the current image block can be completely restored only by storing the motion vector and the residual data.
Step 103, determining a noise reduction strategy of the first image block according to the motion vector, wherein the noise reduction strategy comprises: temporal noise reduction and/or spatial noise reduction.
In the above step 103, as shown in fig. 3, according to the result of the motion estimation, that is, the motion vector, the noise reduction strategy used for the first image block may be determined. The noise reduction strategy may be to perform temporal noise reduction, that is, enter step B1; or to perform spatial noise reduction, that is, enter step B2; or both temporal noise reduction and spatial noise reduction may be adopted, that is, step B3 is performed. Thus, the noise reduction strategy applied to the first image block may be determined by the result of the motion estimation.
Step 104, respectively performing time domain noise reduction processing and spatial domain noise reduction processing on the first image block under the condition that the noise reduction strategy is time domain noise reduction and spatial domain noise reduction, to obtain a time domain noise reduction result and a spatial domain noise reduction result.
In step 104, if it is determined that the noise reduction strategy is time-domain noise reduction, time-domain noise reduction may be performed only on the first image block to obtain a time-domain noise reduction result, which is the final target noise reduction result of the first image block. If the noise reduction strategy is determined to be spatial-domain noise reduction, spatial-domain noise reduction may be performed only on the first image block to obtain a spatial-domain noise reduction result, namely the final target noise reduction result; similarly, target noise reduction results corresponding to the other image blocks in the first frame image are obtained, so that the noise-reduced first frame image is obtained. If the noise reduction strategy is determined to be both time-domain and spatial-domain noise reduction, the first image block is subjected to time-domain noise reduction to obtain a time-domain noise reduction result and also to spatial-domain noise reduction to obtain a spatial-domain noise reduction result; that is, a stationary or small-motion region tends to be sampled from the time-domain filtering result, and a region with large motion tends to be sampled from the spatial-domain filtering of the current frame.
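The strategy selection described here can be sketched as a small dispatch function. The thresholds below are illustrative assumptions, not values specified by the method:

```python
import numpy as np

def choose_strategy(motion_vectors, low_thresh=1.0, high_thresh=8.0):
    """Pick a noise reduction strategy from the motion estimation result.

    low_thresh and high_thresh are assumed example values; the method itself
    only fixes the qualitative rule (small motion -> temporal, large -> spatial).
    """
    # Mean motion magnitude over all reference blocks of this image block.
    mags = [np.hypot(dx, dy) for dx, dy in motion_vectors]
    mean_mag = sum(mags) / len(mags)
    if mean_mag < low_thresh:      # near-static block: trust temporal filtering
        return "temporal"
    if mean_mag > high_thresh:     # large motion: fall back to spatial filtering
        return "spatial"
    return "temporal+spatial"      # in between: compute and fuse both results
```

A block with two near-zero motion vectors would thus be routed to pure temporal noise reduction, while a fast-moving block would go to pure spatial noise reduction.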
Step 105, performing weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result.
In the above step 105, as shown in fig. 3, if the noise reduction strategy is determined to be time domain noise reduction and spatial domain noise reduction, the time domain noise reduction result and the spatial domain noise reduction result are weighted and fused, that is, different weights are selected for combining the time domain noise reduction result and the spatial domain noise reduction result to obtain the final target noise reduction result, so as to achieve the optimal balance of noise reduction effect, detail retention, restoration effect, and processing speed.
In the above embodiment of the present invention, motion estimation is performed on an acquired first image block to obtain a motion vector related to the first image block, and a noise reduction strategy of the first image block is determined according to the motion vector. Under the condition that the noise reduction strategy is time domain noise reduction and spatial domain noise reduction, time domain noise reduction and spatial domain noise reduction are performed on the first image block respectively to obtain a time domain noise reduction result and a spatial domain noise reduction result; that is, according to the result of motion estimation, a region with large motion can be sampled from the result of spatial domain filtering with a smaller noise reduction strength, and a region with small motion can be sampled from the result of time domain filtering with a larger noise reduction strength. Then, weighted fusion processing is performed on the time domain noise reduction result and the spatial domain noise reduction result to obtain the target noise reduction result; that is, the time domain and spatial domain results are weighted and combined to obtain a target noise reduction result with both good speed and good effect.
Optionally, the step 101 of acquiring a first image block of a first frame image in a video to be processed may specifically include the following steps:
acquiring a first frame image in a video to be processed;
blurring the first frame image to obtain a blurred image;
extracting an edge feature map of the blurred image;
carrying out blocking processing on the edge feature map to obtain a first image block set subjected to blocking processing;
wherein the first image block is one of the image blocks in the first set of image blocks.
In the above embodiment, the video to be processed is first acquired, the video to be processed may be decoded to obtain a plurality of frame images, the first frame image may be acquired in a frame extraction manner, and the first frame image may be extracted at equal intervals or randomly, which is not limited specifically herein.
In the above embodiment, as shown in fig. 3, in step B0, after the first frame image is acquired, the first frame image is blurred to obtain a blurred image. For a clean image sequence, registration in the time domain is generally accurate, but as noise increases, temporal registration becomes difficult; therefore, the noisy picture (i.e., the first frame image) is blurred before motion estimation to reduce the interference of noise on registration during the motion estimation process.
In the above-described embodiment, the edge feature map, i.e., the gradient map of the edge features, is extracted on the basis of the blurred image. For example, the first frame image may be regarded as a continuous function; because the pixel values of an edge region differ obviously from the pixel values beside it, the edge information of the whole first frame image can be obtained by locally solving for extreme values of the first frame image. Since the first frame image is actually a two-dimensional discrete function, the derivative becomes a difference, which is referred to as the gradient of the first frame image. The blurred image may be obtained by performing low-pass-filtering blur processing only on the Y channel, the Y channel of the blurred image being a blurred Y channel, and the edge feature map, namely the gradient map, is extracted using the Canny operator on the basis of the blurred Y channel.
In the above embodiment, after the edge feature map is obtained, the edge feature map is subjected to blocking processing, that is, the edge feature map of the first frame image is subjected to blocking and is divided into a plurality of image blocks, each image block is an M × N image block, and M and N may be the same or different; the image blocks are not overlapped, a plurality of image blocks are combined into a first image block set, and the first image block is one of the image blocks in the first image block set.
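The blur, edge-map extraction, and blocking steps can be sketched in NumPy as follows. The box blur and first-difference gradient below are simple stand-ins for the low-pass filter and Canny operator mentioned in the text; block edges that do not divide evenly are cropped for simplicity:

```python
import numpy as np

def box_blur(y, k=3):
    """Cheap low-pass blur of the Y channel (a stand-in for the blur step)."""
    pad = k // 2
    padded = np.pad(y.astype(np.float64), pad, mode="edge")
    out = np.zeros(y.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + y.shape[0], dx:dx + y.shape[1]]
    return out / (k * k)

def gradient_magnitude(y):
    """First-difference gradient map of a 2-D image (the edge feature map)."""
    gx = np.zeros(y.shape, dtype=np.float64)
    gy = np.zeros(y.shape, dtype=np.float64)
    gx[:, :-1] = np.abs(np.diff(y, axis=1))   # horizontal differences
    gy[:-1, :] = np.abs(np.diff(y, axis=0))   # vertical differences
    return gx + gy

def split_blocks(img, m, n):
    """Split the map into non-overlapping m x n image blocks."""
    h, w = img.shape
    return [img[i:i + m, j:j + n]
            for i in range(0, h - m + 1, m)
            for j in range(0, w - n + 1, n)]
```

An 8 x 8 map split into 4 x 4 blocks yields the four non-overlapping blocks that form the first image block set.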
Optionally, after the step 101 acquires a first image block of a first frame image in the video to be processed, the method may further include the following steps:
acquiring the texture complexity of the first image block;
determining the type of the first image block according to the texture complexity of the first image block;
and determining a mode of performing time domain noise reduction processing and/or a mode of performing spatial domain noise reduction processing on the first image block according to the type of the first image block.
In the above embodiment, first, a first image block of a first frame image in a video to be processed is obtained, and texture detection is performed on the first image block, that is, the texture complexity of the first image block is analyzed to obtain the texture complexity of the first image block. Then the type of the first image block is determined according to the texture complexity, namely whether the first image block belongs to the weak texture image block type or the strong texture image block type. If the first image block is determined to belong to the weak texture image block type, noise reduction is performed on the first image block in a pixel-fusion noise reduction mode, that is, the first image block is rapidly repaired by an algorithm with low complexity and high cost-effectiveness; if the first image block is determined to belong to the strong texture image block type, a noise reduction mode that is good at retaining texture details is adopted, that is, focused restoration is performed by an algorithm with an obvious noise reduction effect that is good at retaining texture details but has relatively high complexity.
In the above embodiment, if the noise reduction strategy is time domain noise reduction, determining a mode of performing time domain noise reduction processing on the first image block according to the type of the first image block; if the noise reduction strategy is spatial domain noise reduction, determining a mode for performing spatial domain noise reduction processing on the first image block according to the type of the first image block; if the noise reduction strategies are time domain noise reduction and space domain noise reduction, determining a mode of performing time domain noise reduction processing on the first image block and determining a mode of performing space domain noise reduction processing on the first image block according to the type of the first image block; the method for adaptively adjusting the local noise reduction algorithm according to the texture complexity in the noise reduction process can not only keep the details of the texture region, but also recover the characteristic of smoothness of the non-texture region, and reduce unnecessary operations while obtaining a better noise reduction effect.
As an alternative embodiment, in the step of performing texture detection on the first image block, for the sake of runtime performance, the texture complexity analysis of the first image block may be performed only on the Y channel.
Optionally, the step of obtaining the texture complexity of the first image block may specifically include the following steps:
acquiring a first number of non-0 pixel values in the first image block;
and determining the texture complexity of the first image block according to the size relation between the first quantity and a first threshold value.
In the above embodiment, for a first image block in the first frame image, a first number of non-0 pixel values may be counted on the edge feature map of the first image block, and the texture complexity of the first image block may be determined according to a size relationship between the first number and a first threshold.
As an alternative embodiment, the first number is compared with a first threshold. If the first number is greater than or equal to the first threshold, the first image block is determined to be a complex texture image block, that is, a strong texture image block, and a noise reduction processing mode that excels at detail restoration can be adopted for it; detected regions such as a person's braided hair or facial features are intense texture regions with clear outlines or details, and a noise reduction algorithm that excels at detail retention and restoration is adopted for such regions. If the first number is smaller than the first threshold, the first image block is determined to be a simple texture image block, namely a weak texture (or non-texture) image block, and a noise reduction mode that is good at erasing flat-region noise can be adopted for it; regions such as a background, a floor, or black clothes are weak texture regions, for which noise erasure is the main goal, and a fast noise reduction algorithm that is good at erasing isolated noise is adopted.
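The threshold comparison above can be sketched as follows; the threshold value itself is application-specific and assumed here:

```python
import numpy as np

def classify_texture(edge_block, threshold):
    """Classify an image block by counting non-zero pixels in its edge
    feature map (the 'first number') against a first threshold."""
    first_number = int(np.count_nonzero(edge_block))
    return "strong" if first_number >= threshold else "weak"
```

A block whose edge map has only one non-zero pixel falls below a threshold of 2 and is classified as weak texture; adding a second edge pixel tips it to strong texture.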
Optionally, the step 102 performs motion estimation on the first image block to obtain a motion vector related to the first image block, which may specifically include the following steps:
determining a reference block set related to the first image block in the M second frame images according to the matching degree of the first image block and each image block in each second frame image in the M second frame images; the M second frame images are M frame images adjacent to the first frame image in the video to be processed, and M is a positive integer;
and performing motion estimation on the first image block according to each reference block in the reference block set to obtain a motion vector related to the first image block.
In the above embodiment, because there is a relatively strong time-domain continuity between consecutive frame images, M frame images adjacent to the first frame image may be obtained, which may be the M adjacent frame images before the first frame image, the M adjacent frame images after it, or M adjacent frame images before and after it; this is not particularly limited. By a Block-Matching (BM) method, the image block with the highest matching degree (i.e., the most similar) to the first image block of the first frame image is searched for in each second frame image of the M second frame images, to serve as a reference block of the first image block; each second frame image has one reference block of the first image block, i.e., the M second frame images contain M reference blocks, and the M reference blocks form the reference block set.
As an optional embodiment, in each second frame image, the matching degree is represented by the distance between the first image block and each image block; the smaller the distance, the higher the matching degree. The distance between the first image block and each image block is calculated according to the formula:

distance(B_current, B_j) = ||B_current − B_j||_2 / (N × M)

wherein B_current represents the pixel values of the first image block;
B_j represents the pixel values of the jth image block in the second frame image;
||B_current − B_j||_2 represents the residual value between the first image block and the jth image block in the second frame image;
N × M represents the size (length × width) of the first image block;
distance(B_current, B_j) represents the distance between the first image block and the jth image block in the second frame image.
According to the formula, the image block with the minimum distance from the first image block in each second frame image is obtained as the reference block, so that the reference block in each second frame image is obtained, and the reference block set is further obtained.
In the above embodiment, after obtaining the reference block set, for each reference block in the reference block set, calculating a relative displacement, i.e., a motion vector, between the reference block and the first image block, M motion vectors may be obtained.
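A minimal block-matching sketch of the search described above, using the size-normalized L2 distance as the matching criterion; the search window radius is an assumed parameter:

```python
import numpy as np

def block_distance(b_cur, b_j):
    """distance(B_current, B_j): L2 residual normalized by the N x M block size."""
    n, m = b_cur.shape
    return np.linalg.norm(b_cur - b_j) / (n * m)

def find_reference_block(block, ref_frame, top_left, search=4):
    """Search ref_frame around top_left for the best-matching block; return
    the offset (the motion vector) and the matched reference block."""
    h, w = block.shape
    y0, x0 = top_left
    best_d, best_mv, best_blk = np.inf, (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= ref_frame.shape[0] - h and 0 <= x <= ref_frame.shape[1] - w:
                cand = ref_frame[y:y + h, x:x + w]
                d = block_distance(block, cand)
                if d < best_d:
                    best_d, best_mv, best_blk = d, (dy, dx), cand
    return best_mv, best_blk
```

Running this over each of the M second frame images yields one reference block and one motion vector per frame, i.e., the reference block set and the M motion vectors.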
Optionally, when determining that the denoising strategy is time-domain denoising and spatial denoising, the step of determining a time-domain denoising method and/or a spatial denoising method for the first image block according to the type of the first image block may specifically include the following steps:
determining a mode of performing time domain noise reduction processing and a mode of performing spatial domain noise reduction processing on the first image block according to the type of the first image block;
the step 104 of performing time domain noise reduction processing and spatial domain noise reduction processing on the first image block respectively to obtain a time domain noise reduction result and a spatial domain noise reduction result includes:
performing time domain noise reduction processing on the first image block according to the time domain noise reduction processing mode to obtain a time domain noise reduction result;
and performing spatial domain noise reduction processing on the first image block according to the spatial domain noise reduction processing mode to obtain a spatial domain noise reduction result.
In the above embodiment, when the noise reduction strategy is time-domain noise reduction and spatial-domain noise reduction, a time-domain noise reduction processing mode and a spatial-domain noise reduction processing mode may be determined for the first image block according to its type; a time-domain noise reduction result is obtained by performing time-domain noise reduction processing on the first image block according to the time-domain noise reduction processing mode, and a spatial-domain noise reduction result is obtained by performing spatial-domain noise reduction processing according to the spatial-domain noise reduction processing mode. The order of the time-domain and spatial-domain noise reduction processing is not limited: the spatial-domain noise reduction processing may be performed first, the time-domain noise reduction processing may be performed first, or the two may be performed simultaneously, which is not specifically limited herein.
As an optional embodiment, when the noise reduction strategy is time-domain noise reduction and spatial-domain noise reduction, if the type of the first image block is the weak texture image block type, the time-domain noise reduction result may be obtained by using the following time-domain noise reduction processing mode:

pixel_temporal = β · B_current + (1 − β) · ( Σ_i α_i · B_reference_i ) / ( Σ_i α_i )

wherein pixel_temporal represents the time domain noise reduction result;
B_current represents the pixel values of the first image block;
β represents a first coefficient, whose value is between 0 and 1, with 0.5 as an optional optimal value;
B_reference represents the set of reference blocks of the first image block;
B_reference_i represents the ith reference block in the reference block set;
α_i represents a second coefficient corresponding to the ith reference block in the reference block set, whose value is between 0 and 1; the second coefficient is in direct proportion to the magnitude of the residual value between the first image block and the ith reference block, and in inverse proportion to the magnitude of the motion vector between the first image block and the ith reference block;
Σ denotes the summation sign.
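The weighted combination defined by the variables above can be sketched as follows. Normalizing the reference-block sum by Σ α_i is an assumption made here to keep the result brightness-preserving:

```python
import numpy as np

def temporal_denoise(b_current, reference_blocks, alphas, beta=0.5):
    """pixel_temporal: the current block weighted by beta, plus the
    alpha-weighted, normalized average of the reference blocks
    (normalization by sum(alphas) is an assumption)."""
    refs = np.stack([np.asarray(r, dtype=np.float64) for r in reference_blocks])
    a = np.asarray(alphas, dtype=np.float64).reshape(-1, 1, 1)
    ref_avg = (a * refs).sum(axis=0) / a.sum()
    return beta * np.asarray(b_current, dtype=np.float64) + (1.0 - beta) * ref_avg
```

With β = 0.5, half of the output comes from the current block and half from the weighted reference-block average.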
If the type of the first image block is the weak texture image block type, the spatial domain noise reduction result is obtained by adopting the following spatial domain noise reduction processing mode:

pixel_spatial(x, y) = ( Σ_(i,j) w(i, j) · pixel(i, j) ) / ( Σ_(i,j) w(i, j) )

wherein (x, y) represents the coordinates of a pixel in the first image block;
pixel(i, j) represents the pixel value of a neighborhood image block of the first image block; the neighborhood image block can be one or more adjacent image blocks or a full ring of adjacent image blocks, and can be set as required;
w(i, j) represents the weight of pixel(i, j);
Σ denotes the summation sign.
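The neighborhood-weighted average above can be sketched with Gaussian distance weights standing in for w(i, j); the weight function and neighborhood radius are illustrative choices, since the text leaves them open:

```python
import numpy as np

def spatial_denoise(img, sigma=1.0, radius=1):
    """Normalized sum of w(i, j) * pixel(i, j) over each pixel's neighborhood;
    w(i, j) is a Gaussian of spatial distance (an assumed example weight)."""
    h, w = img.shape
    out = np.zeros(img.shape, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    j, i = y + dy, x + dx
                    if 0 <= j < h and 0 <= i < w:
                        wij = np.exp(-(dy * dy + dx * dx) / (2 * sigma * sigma))
                        acc += wij * img[j, i]
                        wsum += wij
            out[y, x] = acc / wsum
    return out
```

Because the weights are normalized, a flat region is left unchanged while an isolated noise spike is pulled toward its neighbors.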
It should be noted that the above is only an example of a time domain noise reduction processing mode and a spatial domain noise reduction processing mode for the weak texture image block type; for the strong texture image block type, the time domain noise reduction processing mode may be replaced by, for example, a 3D-DCT noise reduction algorithm, and the spatial domain noise reduction processing mode may be replaced by, for example, a non-local means filtering noise reduction algorithm, a wavelet noise reduction algorithm, and the like, which is not specifically limited herein.
Optionally, when the denoising strategy is time-domain denoising and spatial denoising, after the step 102 performs motion estimation on the first image block to obtain a motion vector for the first image block, the method may further include the following steps:
determining a temporal weight for the temporal noise reduction and a spatial weight for the spatial noise reduction based on the motion vector of the first image block.
In the above embodiment, the time domain weight of the time domain noise reduction result and the spatial domain weight of the spatial domain noise reduction result are dynamically determined according to the magnitude of the motion vector, so that an optimal balance state can be achieved in terms of detail retention, noise point erasure and time performance. The proportion of the time domain weight and the space domain weight is determined by the size of the motion vector, the larger the motion vector is, the larger the weight of the space domain noise reduction result is, the smaller the weight of the time domain noise reduction result is, and vice versa.
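One possible mapping from motion magnitude to the two weights is sketched below. The functional form and the `scale` constant are assumptions; the text only fixes the monotonic relationship (larger motion vector, larger spatial weight and smaller temporal weight):

```python
import numpy as np

def fusion_weights(motion_vector, scale=4.0):
    """Map motion magnitude to (temporal, spatial) weights.

    scale is an assumed softness constant controlling how quickly weight
    shifts from the temporal to the spatial result as motion grows.
    """
    mag = float(np.hypot(*motion_vector))
    w_spatial = mag / (mag + scale)    # grows with motion magnitude
    w_temporal = 1.0 - w_spatial       # shrinks with motion magnitude
    return w_temporal, w_spatial
```

A zero motion vector gives all weight to the temporal result, and the weights always sum to one, so the later fusion step is already normalized.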
Optionally, thestep 105 performs weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result, which may specifically include the following steps:
calculating the product of the time domain noise reduction result and the time domain weight to obtain a first result;
calculating the product of the spatial domain noise reduction result and the spatial domain weight to obtain a second result;
adding the first result and the second result to obtain a third result;
and dividing the third result by the sum of the time domain weight and the space domain weight to obtain a target noise reduction result.
In the above embodiment, the target noise reduction result may be specifically calculated by the following formula:

denoised(x, y) = ( w_temporal · pixel_temporal(x, y) + w_spatial · pixel_spatial(x, y) ) / ( w_temporal + w_spatial )

wherein denoised(x, y) represents the target noise reduction result;
w_spatial represents the spatial domain weight;
pixel_spatial(x, y) represents the spatial domain noise reduction result;
w_temporal represents the time domain weight;
pixel_temporal(x, y) represents the time domain noise reduction result.
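The fusion steps described in the text (two products, their sum, then division by the weight sum) can be sketched as:

```python
def fuse(pixel_temporal, pixel_spatial, w_temporal, w_spatial):
    """Weighted fusion of the temporal and spatial noise reduction results,
    normalized by the sum of the two weights."""
    first = w_temporal * pixel_temporal    # first result
    second = w_spatial * pixel_spatial     # second result
    third = first + second                 # third result
    return third / (w_temporal + w_spatial)
```

With equal weights the output is the plain average of the two results; increasing the temporal weight pulls the output toward the temporal result.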
In the above embodiment, the video noise reduction algorithm aligns and fuses consecutive frame images to form one image. Alignment finds the correspondence between image blocks across the frame images; fusion performs a weighted average of the corresponding image blocks in the spatial domain or the frequency domain. The result of the alignment is not necessarily accurate, so it is necessary to confirm whether the alignment result is trustworthy before the fusion. The time domain weight and the spatial domain weight used for the fusion are adjusted according to the magnitude of the Motion Vector (MV): for a reference block with a small MV, the rotation/scaling/deformation of the object is small a priori and the confidence of the MV is high, so the proportion sampled from the time-domain filtering of the first frame image is high; for a reference block with a large MV, the rotation/scaling/deformation of the object is large a priori and the confidence of the MV is low, so the proportion sampled from the spatial-domain filtering of the first frame image is high. Filtering results with different time domain and spatial domain weights are selected adaptively according to the magnitude of the motion vector and the texture complexity, achieving the optimal balance of noise reduction effect, detail retention and restoration effect, and processing speed.
In summary, in the embodiment of the present invention, when the first image block is determined to belong to the weak texture image block type, an algorithm with low complexity and high cost-effectiveness is adopted to quickly repair the first image block; when the first image block is determined to belong to the strong texture image block type, an algorithm with an obvious noise reduction effect that is good at retaining texture details but has relatively high complexity is adopted for focused repair. That is, the method of adaptively adjusting the local noise reduction algorithm according to texture complexity during noise reduction can both retain the details of texture regions and recover the smoothness of non-texture regions, reducing unnecessary operations while obtaining a good noise reduction effect. Moreover, filtering results with different time domain and spatial domain weights are selected locally and adaptively according to the magnitude of the motion vector and the texture complexity, and a target noise reduction result with both good speed and good effect can be obtained by weighting and fusing the time domain and spatial domain results.
As shown in fig. 4, an embodiment of the present invention provides a noise reduction processing apparatus 400, which includes:
a first obtaining module 401, configured to obtain a first image block of a first frame image in a video to be processed;
a first estimation module 402, configured to perform motion estimation on the first image block to obtain a motion vector of the first image block;
a first determining module 403, configured to determine a denoising strategy for the first image block according to the motion vector, where the denoising strategy includes: temporal and/or spatial noise reduction;
a first processing module 404, configured to, when the denoising strategy is time domain denoising and spatial domain denoising, perform time domain denoising and spatial domain denoising on the first image block respectively to obtain a time domain denoising result and a spatial domain denoising result;
and a second processing module 405, configured to perform weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result.
In the above embodiment of the present invention, motion estimation is performed on the acquired first image block to obtain a motion vector of the first image block, and a denoising strategy for the first image block is determined according to the motion vector. When the denoising strategy is time domain denoising and spatial domain denoising, time domain denoising and spatial domain denoising are performed on the first image block respectively to obtain a time domain denoising result and a spatial domain denoising result. That is, according to the result of motion estimation, for a region with large motion the result of spatial filtering can be sampled and a smaller denoising strength adopted, and for a region with small motion the result of temporal filtering can be sampled and a larger denoising strength adopted. Then, weighted fusion processing is performed on the time domain denoising result and the spatial domain denoising result to obtain the target noise reduction result; that is, by weighting and combining the temporal and spatial results, a target noise reduction result that is good in both speed and effect is obtained.
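The strategy selection described above can be sketched as follows. This is a minimal illustration only: the thresholds `low` and `high` and the returned labels are assumptions for demonstration and are not specified by the embodiment.

```python
import math

def choose_strategy(motion_vector, low=1.0, high=8.0):
    """Map a motion vector to a denoising strategy (illustrative sketch).

    Small motion -> temporal denoising (larger strength can be used);
    large motion -> spatial denoising (smaller strength);
    otherwise both, with the two results fused afterwards.
    The thresholds `low` and `high` are assumed values, not from the source.
    """
    magnitude = math.hypot(*motion_vector)  # Euclidean length of the MV
    if magnitude <= low:
        return "temporal"
    if magnitude >= high:
        return "spatial"
    return "temporal+spatial"
```

In the combined case, the apparatus runs both filters and hands the two results to the weighted fusion step described later.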
Optionally, the first obtaining module 401 includes:
the first acquisition unit is used for acquiring a first frame image in a video to be processed;
the first processing unit is used for performing blurring processing on the first frame image to obtain a blurred image;
a first extraction unit, configured to extract an edge feature map of the blurred image;
the second processing unit is used for carrying out blocking processing on the edge feature map to obtain a first image block set subjected to blocking processing;
wherein the first image block is one of the image blocks in the first set of image blocks.
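The blur, edge-extraction, and blocking steps of the first obtaining module can be sketched as below. The concrete operators (a box blur, a gradient-magnitude edge map, non-overlapping 8x8 tiles) and all function names are illustrative assumptions; the embodiment does not fix a particular blur kernel, edge detector, or block size.

```python
import numpy as np

def blur(frame, k=3):
    """Box blur with a uniform k x k kernel (a simple stand-in for any blur)."""
    pad = k // 2
    padded = np.pad(frame.astype(np.float64), pad, mode="edge")
    out = np.zeros(frame.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def edge_feature_map(img, thresh=10.0):
    """Gradient-magnitude edge map; pixels below `thresh` are zeroed."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return np.where(mag >= thresh, mag, 0.0)

def split_into_blocks(img, block=8):
    """Partition the map into non-overlapping block x block tiles."""
    h, w = img.shape
    return [img[y:y + block, x:x + block]
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
```

The first image block of the claims would then be one element of the list returned by `split_into_blocks`.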
Optionally, after the first obtaining module 401, the apparatus further includes:
the second obtaining module is used for obtaining the texture complexity of the first image block;
the second determining module is used for determining the type of the first image block according to the texture complexity of the first image block;
and the third determining module is used for determining a time domain denoising processing mode and/or a space domain denoising processing mode of the first image block according to the type of the first image block.
Optionally, the second obtaining module includes:
a second obtaining unit, configured to obtain a first number of non-zero pixel values in the first image block;
and a first determining unit, configured to determine the texture complexity of the first image block according to the magnitude relationship between the first number and a first threshold.
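The second obtaining module's classification can be sketched as a count of non-zero pixels in the (edge-feature) block compared against the first threshold. The threshold value and the two type labels are illustrative assumptions; the embodiment only specifies the comparison itself.

```python
import numpy as np

def texture_complexity(block, threshold):
    """Classify a block by its number of non-zero pixel values.

    `threshold` corresponds to the "first threshold" of the embodiment;
    the labels "strong_texture"/"weak_texture" are illustrative names.
    """
    count = int(np.count_nonzero(block))  # the "first number"
    return "strong_texture" if count >= threshold else "weak_texture"
```

A weak-texture block would then be routed to the fast, low-complexity repair path, and a strong-texture block to the detail-preserving one.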
Optionally, when determining that the denoising strategy is time domain denoising and spatial domain denoising, the third determining module includes:
the second determining unit is used for determining a time domain denoising processing mode and a space domain denoising processing mode for the first image block according to the type of the first image block;
wherein the first processing module 404 includes:
the third processing unit is used for carrying out time domain noise reduction processing on the first image block according to the time domain noise reduction processing mode to obtain a time domain noise reduction result;
and the fourth processing unit is used for carrying out spatial domain noise reduction processing on the first image block according to the spatial domain noise reduction processing mode to obtain a spatial domain noise reduction result.
Optionally, the first estimating module 402 includes:
a third determining unit, configured to determine a reference block set related to the first image block in the M second frame images according to a matching degree between the first image block and each image block in each second frame image in the M second frame images; the M second frame images are M frame images adjacent to the first frame image in the video to be processed, and M is a positive integer;
a first estimating unit, configured to perform motion estimation on the first image block according to each reference block in the reference block set, so as to obtain a motion vector for the first image block.
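The matching of the first image block against blocks of a second frame image can be sketched as exhaustive block matching over a search window, using the sum of absolute differences (SAD) as the matching degree. The SAD criterion and the search-window size are illustrative assumptions; the embodiment does not prescribe a particular matching metric.

```python
import numpy as np

def best_match(block, ref_frame, top_left, search=4):
    """Exhaustive block matching in one reference frame.

    Returns the motion vector (dy, dx) of the reference block with the
    smallest sum of absolute differences (SAD), plus that SAD value.
    `search` (the half-width of the search window) is an assumed parameter.
    """
    h, w = block.shape
    y0, x0 = top_left
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            # Skip candidate positions that fall outside the frame.
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue
            sad = float(np.abs(ref_frame[y:y + h, x:x + w] - block).sum())
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Running this against each of the M adjacent second frame images yields the reference block set, from which the motion vector of the first image block is estimated.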
Optionally, after the first estimating module 402, the apparatus further includes:
a fourth determining module, configured to determine, according to the motion vector of the first image block, a temporal weight related to the temporal noise reduction and a spatial weight related to the spatial noise reduction.
Optionally, the second processing module 405 includes:
the first calculating unit is used for calculating the product of the time domain noise reduction result and the time domain weight to obtain a first result;
the second calculation unit is used for calculating the product of the spatial domain noise reduction result and the spatial domain weight to obtain a second result;
a third calculating unit, configured to add the first result and the second result to obtain a third result;
and the fourth calculating unit is used for dividing the third result by the sum of the time domain weight and the space domain weight to obtain a target noise reduction result.
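The four calculating units above compute (temporal_result * w_t + spatial_result * w_s) / (w_t + w_s). A sketch follows, together with one possible mapping from motion-vector magnitude to the two weights; that mapping (and its `scale` parameter) is an illustrative assumption, since the embodiment only states that the weights are determined from the motion vector.

```python
import math

def fusion_weights(motion_vector, scale=4.0):
    """Illustrative weight mapping: a small MV favours the temporal result,
    a large MV favours the spatial result. `scale` is an assumed parameter."""
    mag = math.hypot(*motion_vector)
    w_temporal = 1.0 / (1.0 + mag / scale)
    w_spatial = 1.0 - w_temporal
    return w_temporal, w_spatial

def fuse(temporal_result, spatial_result, w_t, w_s):
    """Weighted fusion exactly as the calculating units describe:
    first result = temporal * w_t, second result = spatial * w_s,
    third result = their sum, target = third result / (w_t + w_s)."""
    return (temporal_result * w_t + spatial_result * w_s) / (w_t + w_s)
```

With `fusion_weights`, a zero motion vector yields a purely temporal result, and the spatial share grows as the motion vector lengthens.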
It should be noted that the embodiment of the noise reduction processing apparatus is an apparatus corresponding to the above noise reduction processing method, and all implementation manners of the above embodiment are applicable to the embodiment of the apparatus, and can also achieve the same technical effect, which is not described herein again.
An embodiment of the present invention further provides an electronic device. As shown in fig. 5, the electronic device comprises a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504.
The memory 503 stores a computer program.
The processor 501 is configured to implement part or all of the steps of the noise reduction processing method provided by the embodiment of the present invention when executing the program stored in the memory 503.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In still another embodiment provided by the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the noise reduction processing method described in the above embodiment.
In yet another embodiment provided by the present invention, a computer program product containing instructions is also provided, which when run on a computer, causes the computer to execute the noise reduction processing method described in the above embodiment.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention are included in the protection scope of the present invention.