CN113012061A - Noise reduction processing method and device and electronic equipment - Google Patents

Noise reduction processing method and device and electronic equipment

Info

Publication number
CN113012061A
Authority
CN
China
Prior art keywords
noise reduction
image block
result
image
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110192918.0A
Other languages
Chinese (zh)
Inventor
郭莎
朱飞
杜凌霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigo Technology Pte Ltd
Original Assignee
Bigo Technology Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bigo Technology Pte Ltd
Priority to CN202110192918.0A
Publication of CN113012061A
Legal status: Pending

Abstract

The embodiment of the invention provides a noise reduction processing method and device and electronic equipment, and relates to the technical field of computers. The method comprises the following steps: acquiring a first image block of a first frame image in a video to be processed; performing motion estimation on the first image block to obtain a motion vector related to the first image block; determining a noise reduction strategy of the first image block according to the motion vector, wherein the noise reduction strategy comprises: temporal and/or spatial noise reduction; under the condition that the noise reduction strategy is time domain noise reduction and space domain noise reduction, respectively performing time domain noise reduction processing and space domain noise reduction processing on the first image block to obtain a time domain noise reduction result and a space domain noise reduction result; and performing weighted fusion processing on the time domain noise reduction result and the space domain noise reduction result to obtain a target noise reduction result. According to this scheme, a target noise reduction result with both good speed and good effect can be obtained by weighting and combining the time domain and space domain results.

Description

Noise reduction processing method and device and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a noise reduction processing method and apparatus, and an electronic device.
Background
Noise is a common distortion introduced during signal acquisition. Noise reduction not only improves the subjective quality of an image/video, but also keeps the encoder from wasting bit rate on coding noise when the image/video is compressed. Noise reduction algorithms can be divided into spatial noise reduction and temporal noise reduction. Spatial noise reduction does not fully exploit the effective signal provided by reference frames in the time domain, and is therefore often less effective than temporal noise reduction. Temporal noise reduction usually works best for static scenes or small motion; if large motion or occlusion occurs, improper temporal noise reduction produces artifacts or smearing. In addition to pixel fusion in the time domain, later video denoising work introduced temporal denoising algorithms in spectral domains such as the wavelet and Discrete Cosine Transform (DCT) domains. The effect of these algorithms is very prominent, but they are often too complex to be deployed in industry. Lightweight noise reduction algorithms, on the other hand, suffer from a poor noise reduction effect, incomplete noise removal, or loss of detail.
Disclosure of Invention
The invention provides a noise reduction processing method and device and electronic equipment, which are used to solve, to a certain extent, problems in the prior art such as a poor noise reduction effect.
In a first aspect of the present invention, there is provided a noise reduction processing method, including:
acquiring a first image block of a first frame image in a video to be processed;
performing motion estimation on the first image block to obtain a motion vector related to the first image block;
determining a denoising strategy of the first image block according to the motion vector, wherein the denoising strategy comprises: temporal and/or spatial noise reduction;
under the condition that the noise reduction strategy is time domain noise reduction and space domain noise reduction, respectively performing time domain noise reduction processing and space domain noise reduction processing on the first image block to obtain a time domain noise reduction result and a space domain noise reduction result;
and performing weighted fusion processing on the time domain noise reduction result and the space domain noise reduction result to obtain a target noise reduction result.
In a second aspect of the present invention, there is provided a noise reduction processing apparatus comprising:
the first acquisition module is used for acquiring a first image block of a first frame image in a video to be processed;
the first estimation module is used for carrying out motion estimation on the first image block to obtain a motion vector related to the first image block;
a first determining module, configured to determine a denoising strategy for the first image block according to the motion vector, where the denoising strategy includes: temporal and/or spatial noise reduction;
the first processing module is used for respectively carrying out time domain noise reduction processing and space domain noise reduction processing on the first image block under the condition that the noise reduction strategy is time domain noise reduction and space domain noise reduction to obtain a time domain noise reduction result and a space domain noise reduction result;
and the second processing module is used for performing weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result.
In a third aspect of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and a processor for implementing the steps of the noise reduction processing method when executing the program stored in the memory.
In a fourth aspect implemented by the present invention, there is also provided a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the noise reduction processing method as described above.
In a fifth aspect of the embodiments of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the noise reduction processing method as described above.
Compared with the prior art, the invention has the following advantages:
in the embodiment of the invention, motion estimation is carried out on the obtained first image block to obtain a motion vector related to the first image block, and a noise reduction strategy of the first image block is determined according to the motion vector. Under the condition that the noise reduction strategy is time domain noise reduction and space domain noise reduction, time domain noise reduction and space domain noise reduction are respectively carried out on the first image block to obtain a time domain noise reduction result and a space domain noise reduction result; that is, according to the result of the motion estimation, a large-motion area can be sampled from the result of spatial filtering with a smaller noise reduction intensity, while a small-motion area can be sampled from the result of temporal filtering with a larger noise reduction intensity. The time domain noise reduction result and the space domain noise reduction result are then weighted and fused to obtain the target noise reduction result; that is, by weighting and combining the time domain and space domain results, a target noise reduction result with both good speed and good effect is obtained.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly described below.
Fig. 1 is a flowchart of a noise reduction processing method according to an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario of the noise reduction processing method according to the embodiment of the present invention;
fig. 3 is a specific flowchart of a noise reduction processing method according to an embodiment of the present invention;
fig. 4 is a block diagram of a noise reduction processing apparatus according to an embodiment of the present invention;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the preceding and following objects are in an "or" relationship.
Currently, noise reduction algorithms can be classified into many different categories, such as linear/nonlinear and spatial/frequency domain, where the frequency domain includes the wavelet transform domain, the Fourier transform domain, and other transform domains. Noise reduction algorithms often need to balance speed against effect, and a noise reduction method that is both fast and good is difficult to realize in pure software. Another idea is to combine self-similarity with the transform domain. For each block, the image is searched to find a series of blocks similar to it. The classical Non-Local Means noise reduction algorithm weight-averages these similar blocks in the spatial domain. If, further, these similar blocks are transformed to the frequency domain, filtered and thresholded there, and then transformed back to the spatial domain, the method combines self-similarity and the transform domain; this is the principle used by the classical noise reduction algorithm BM3D (Block-Matching and 3D filtering). Similarly, combining self-similarity with sparse coding or with low-rank models can also achieve a good noise reduction effect.
However, one bottleneck of noise reduction algorithms is that there must be a trade-off between effect and performance: algorithms with an outstanding effect are excellent in temporal stability and noise reduction quality, but their complexity makes them difficult to bring to the industrial field; fast noise reduction algorithms, such as bilateral filtering and median filtering, cannot separate noise from detail signals well, and bring obvious detail loss while reducing noise.
Therefore, the embodiments of the present invention provide a noise reduction processing method, an apparatus and an electronic device. Based on the motion estimation result, a stationary or small-motion region tends to be sampled from the result of time-domain filtering, while a region with large motion tends to be sampled from the result of spatial-domain filtering of the current frame; the time-domain and spatial-domain results are then weighted and fused, so that a target noise reduction result with both good speed and good effect can be obtained.
Specifically, as shown in fig. 1 to fig. 3, an embodiment of the present invention provides a noise reduction processing method, which is mainly applied to a server preprocessing system. The method specifically comprises the following steps:
step 101, obtaining a first image block of a first frame image in a video to be processed.
In the above step 101, an application scenario of the noise reduction processing method is shown in fig. 2, which illustrates where the noise reduction processing sits in the pipeline and why the video needs noise reduction. First, a first frame image in the video to be processed is obtained; the first frame image may be the first frame of the video to be processed, may also be the last frame, and may also be any frame in the middle, which is not specifically limited herein. The first frame image may be composed of a plurality of image blocks, and the first image block is one of the image blocks in the first frame image.
The first frame image may be a video frame image in YUV space. YUV is a color format composed of three components Y, U and V: Y represents brightness, that is, the gray-scale value; U and V represent chrominance and describe the color and saturation of the image, specifying the color of each pixel.
As shown in fig. 2, in step A1, noise intensity estimation is performed on the first frame image to obtain the noise intensity of the video to be processed, and whether to perform noise reduction restoration subsequently may be determined according to the noise intensity.
Step A2, noise reduction processing is carried out on the first frame image; specifically, each image block in the first frame image is subjected to noise reduction processing to obtain a target noise reduction result of each image block, so that a final noise reduction result of the first frame image is obtained.
Step A3, image enhancement processing; specifically, the noise-reduced video is subjected to image enhancement processing to obtain a processed enhanced video.
Step A4, multi-tier transcoding; specifically, the enhanced video is transcoded into multiple tiers to obtain multiple types of video, such as high-definition video, standard-definition video, and the like. After multi-tier transcoding, the multiple types of video are delivered to the user side so that users can choose which version to watch.
It should be noted that, for a sequence without noise, the noise reduction processing of step A2 may be skipped and the subsequent image enhancement processing performed directly, to improve computational efficiency.
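As a rough, non-authoritative illustration of how steps A1 to A4 could be wired together in Python, consider the sketch below; the noise threshold, the stand-in bilateral filter, the unsharp-mask enhancement and the tier handling are all assumptions for illustration, not the patent's actual algorithms.

```python
import cv2
import numpy as np

def preprocess_frames(frames, noise_threshold=2.0):
    """Illustrative pipeline for steps A1-A4 (threshold and filters are assumptions)."""
    processed = []
    for frame in frames:  # frames: iterable of uint8 BGR images
        # Step A1: crude noise-strength estimate from a high-pass residual.
        blurred = cv2.GaussianBlur(frame, (5, 5), 0)
        noise_level = float(np.std(frame.astype(np.float32) - blurred.astype(np.float32)))

        # Step A2: denoise only noisy frames (stand-in filter; the patent instead
        # applies per-block temporal/spatial noise reduction as described below).
        if noise_level > noise_threshold:
            frame = cv2.bilateralFilter(frame, 5, 25, 25)

        # Step A3: simple enhancement placeholder (unsharp masking).
        soft = cv2.GaussianBlur(frame, (0, 0), 3)
        frame = cv2.addWeighted(frame, 1.2, soft, -0.2, 0)
        processed.append(frame)

    # Step A4: multi-tier transcoding (e.g. HD and SD renditions) would follow,
    # typically handled by an external encoder.
    return processed
```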
Step 102, performing motion estimation on the first image block to obtain a motion vector related to the first image block.
In the above step 102, as shown in fig. 3, motion estimation is performed on the first image block of the first frame image in the video to be processed, and the result of the motion estimation, i.e., the relative displacement between the first image block and the reference block (i.e., matching block), is obtained, so as to obtain the motion vector for the first image block. The number of motion vectors may be one or more, depending on the number of reference blocks.
The basic idea of motion estimation is as follows: each frame image in an image sequence (namely, the video to be processed) is divided into a plurality of non-overlapping image blocks, and the displacements of all pixels within an image block are considered to be the same; then, for each image block, the most similar block, i.e., the reference block, is found in the reference frame image within a given search range according to a matching criterion, and the relative displacement between the reference block and the current image block is the motion vector. When the video is compressed, the current image block can be completely restored by storing only the motion vector and the residual data.
Step 103, determining a noise reduction strategy of the first image block according to the motion vector, wherein the noise reduction strategy comprises: temporal noise reduction and/or spatial noise reduction.
In the above step 103, as shown in fig. 3, according to the result of the motion estimation, that is, the motion vector, the noise reduction strategy used for the first image block may be determined. The noise reduction strategy may be temporal noise reduction only, that is, entering step B1; or spatial noise reduction only, that is, entering step B2; or both temporal and spatial noise reduction, that is, entering step B3. Thus, the noise reduction strategy applied to the first image block is determined by the result of the motion estimation.
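A minimal sketch of how the branch into B1/B2/B3 could be decided from the motion vectors is given below; the two magnitude thresholds are illustrative assumptions, since the patent does not specify concrete values.

```python
import numpy as np

def choose_strategy(motion_vectors, low_thr=1.0, high_thr=8.0):
    """Return 'temporal' (B1), 'spatial' (B2) or 'both' (B3) for one image block.
    The thresholds are illustrative assumptions, not values from the patent."""
    magnitude = float(np.mean([np.hypot(dx, dy) for dx, dy in motion_vectors]))
    if magnitude < low_thr:    # nearly static: temporal filtering is reliable
        return "temporal"
    if magnitude > high_thr:   # large motion: fall back to spatial filtering
        return "spatial"
    return "both"              # otherwise fuse temporal and spatial results

# Example: small motion vectors lead to temporal-only noise reduction.
print(choose_strategy([(0.3, 0.2), (0.5, -0.1)]))  # -> temporal
```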
Step 104, respectively performing time domain noise reduction processing and spatial domain noise reduction processing on the first image block under the condition that the noise reduction strategy is time domain noise reduction and spatial domain noise reduction, to obtain a time domain noise reduction result and a spatial domain noise reduction result.
In step 104, if the noise reduction strategy is determined to be temporal noise reduction only, temporal noise reduction may be performed on the first image block alone to obtain a temporal noise reduction result, which is the final target noise reduction result of the first image block. If the noise reduction strategy is determined to be spatial noise reduction only, spatial noise reduction may be performed on the first image block alone to obtain a spatial noise reduction result, which is the final target noise reduction result; target noise reduction results of the other image blocks in the first frame image are obtained in the same way, so that the noise-reduced first frame image is obtained. If the noise reduction strategy is determined to be temporal and spatial noise reduction, the first image block is subjected to both temporal noise reduction, yielding a temporal noise reduction result, and spatial noise reduction, yielding a spatial noise reduction result; that is, a static or small-motion region tends to be sampled from the temporal filtering result, and a large-motion region tends to be sampled from the spatial filtering of the current frame.
Step 105, performing weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result.
In the above step 105, as shown in fig. 3, if the noise reduction strategy is determined to be temporal and spatial noise reduction, the temporal noise reduction result and the spatial noise reduction result are weighted and fused; that is, different weights are selected for combining the temporal noise reduction result and the spatial noise reduction result to obtain the final target noise reduction result, so as to achieve the best balance among noise reduction effect, detail retention and restoration, and processing speed.
In the above embodiment of the present invention, motion estimation is performed on the acquired first image block to obtain a motion vector related to the first image block, and a denoising strategy of the first image block is determined according to the motion vector. Under the condition that the denoising strategy is time domain denoising and space domain denoising, time domain denoising and space domain denoising are performed on the first image block respectively to obtain a time domain denoising result and a space domain denoising result; that is, according to the result of the motion estimation, a region with large motion can be sampled from the result of spatial filtering with a smaller denoising strength, and a region with small motion can be sampled from the result of temporal filtering with a larger denoising strength. The time domain and space domain noise reduction results are then weighted and fused to obtain the target noise reduction result; that is, by weighting and combining the time domain and space domain results, a target noise reduction result with both good speed and good effect is obtained.
Optionally, the step 101 of acquiring a first image block of a first frame image in a video to be processed may specifically include the following steps:
acquiring a first frame image in a video to be processed;
blurring the first frame image to obtain a blurred image;
extracting an edge feature map of the blurred image;
carrying out blocking processing on the edge feature map to obtain a first image block set subjected to blocking processing;
wherein the first image block is one of the image blocks in the first set of image blocks.
In the above embodiment, the video to be processed is first acquired, the video to be processed may be decoded to obtain a plurality of frame images, the first frame image may be acquired in a frame extraction manner, and the first frame image may be extracted at equal intervals or randomly, which is not limited specifically herein.
In the above embodiment, as shown in fig. 3, in step B0, after the first frame image is acquired, the first frame image is blurred to obtain a blurred image. For a clean image sequence, registration in the time domain is generally accurate, but as noise increases, temporal registration becomes difficult; therefore the noisy picture (i.e., the first frame image) is blurred before motion estimation to reduce the interference of noise with registration during motion estimation.
In the above-described embodiment, the edge feature map, i.e., the gradient map of the edge features, is extracted from the blurred image. For example, the first frame image may be regarded as a continuous function; because the pixel values at the edges differ obviously from the pixel values next to them, the edge information of the whole first frame image can be obtained by locally finding extreme values of that function. Since the first frame image is actually a two-dimensional discrete function, the derivative becomes a difference, which is referred to as the gradient of the first frame image. The blurred image can be obtained by performing low-pass blurring only on the Y channel, so that the Y channel of the blurred image is a blurred Y channel, and the edge feature map, namely the gradient map, is extracted from this blurred Y channel using the Canny operator.
In the above embodiment, after the edge feature map is obtained, the edge feature map is subjected to blocking processing, that is, the edge feature map of the first frame image is subjected to blocking and is divided into a plurality of image blocks, each image block is an M × N image block, and M and N may be the same or different; the image blocks are not overlapped, a plurality of image blocks are combined into a first image block set, and the first image block is one of the image blocks in the first image block set.
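A short sketch of these block-preparation steps (blur the Y channel, extract a Canny edge map, split into non-overlapping blocks) is given below; the kernel size, the Canny thresholds and the 16×16 block size are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def prepare_edge_blocks(frame_y, block_size=16):
    """Blur the Y channel, extract an edge feature (gradient) map, and cut it
    into non-overlapping blocks; parameters are illustrative assumptions."""
    blurred_y = cv2.GaussianBlur(frame_y, (5, 5), 0)   # low-pass blur to suppress noise
    edge_map = cv2.Canny(blurred_y, 50, 150)           # edge map on the blurred Y channel
    h, w = edge_map.shape
    blocks = []
    for y in range(0, h - h % block_size, block_size):
        for x in range(0, w - w % block_size, block_size):
            blocks.append(((y, x), edge_map[y:y + block_size, x:x + block_size]))
    return blocks  # list of ((row, col), edge_block) pairs

# Usage: frame_y = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
```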
Optionally, after the step 101 acquires a first image block of a first frame image in the video to be processed, the method may further include the following steps:
acquiring the texture complexity of the first image block;
determining the type of the first image block according to the texture complexity of the first image block;
and determining a mode of performing time domain noise reduction processing and/or a mode of performing spatial domain noise reduction processing on the first image block according to the type of the first image block.
In the above embodiment, first, a first image block of a first frame image in a video to be processed is obtained, and texture detection is performed on the first image block, that is, the texture complexity of the first image block is analyzed to obtain the texture complexity of the first image block. And then determining the type of the first image block according to the texture complexity of the first image block, namely determining whether the first image block belongs to a weak texture image block type or a strong texture image block type. If the first image block is determined to belong to the weak texture image block type, performing noise reduction on the first image block in a pixel fusion noise reduction mode, namely, rapidly repairing the first image block by adopting an algorithm with low complexity and high cost performance; if the first image block is determined to belong to the type of the strong texture image block, a noise reduction mode which is good at retaining texture details is adopted to carry out noise reduction processing on the first image block, namely, an algorithm which is obvious in noise reduction effect, good at retaining texture details and relatively high in complexity is adopted to carry out key repair.
In the above embodiment, if the noise reduction strategy is time domain noise reduction, determining a mode of performing time domain noise reduction processing on the first image block according to the type of the first image block; if the noise reduction strategy is spatial domain noise reduction, determining a mode for performing spatial domain noise reduction processing on the first image block according to the type of the first image block; if the noise reduction strategies are time domain noise reduction and space domain noise reduction, determining a mode of performing time domain noise reduction processing on the first image block and determining a mode of performing space domain noise reduction processing on the first image block according to the type of the first image block; the method for adaptively adjusting the local noise reduction algorithm according to the texture complexity in the noise reduction process can not only keep the details of the texture region, but also recover the characteristic of smoothness of the non-texture region, and reduce unnecessary operations while obtaining a better noise reduction effect.
As an alternative embodiment, in the step of performing texture detection on the first image block, for the sake of runtime performance, the texture complexity analysis of the first image block may be performed only on the Y channel.
Optionally, the step of obtaining the texture complexity of the first image block may specifically include the following steps:
acquiring a first number of non-0 pixel values in the first image block;
and determining the texture complexity of the first image block according to the size relation between the first quantity and a first threshold value.
In the above embodiment, for a first image block in the first frame image, a first number of non-0 pixel values may be counted on the edge feature map of the first image block, and the texture complexity of the first image block may be determined according to a size relationship between the first number and a first threshold.
As an alternative embodiment, the first number is compared with a first threshold. If the first number is greater than or equal to the first threshold, the first image block is determined to be a complex texture image block, that is, a strong texture image block, and a noise reduction processing mode that is good at detail restoration can be adopted for it; detected regions such as hair braids and facial features are strong texture regions with clear outlines or details, and a noise reduction algorithm good at retaining and restoring detail is used for such regions. If the first number is smaller than the first threshold, the first image block is determined to be a simple texture image block, namely a weak texture image block or a non-texture image block, and a noise reduction mode that is good at erasing flat-region noise can be adopted for it; regions such as the background, the floor and black clothing are weak texture regions, where the emphasis is on erasing noise, so a fast noise reduction algorithm good at erasing isolated noise is used.
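With this rule, classifying a block reduces to counting non-zero pixels in its edge map; the threshold below is an assumed example value, not one given by the patent.

```python
import numpy as np

def classify_texture(edge_block, first_threshold=40):
    """Label a block as strong or weak texture from its edge feature map.
    The threshold value is an illustrative assumption."""
    first_number = int(np.count_nonzero(edge_block))   # count of non-zero edge pixels
    return "strong_texture" if first_number >= first_threshold else "weak_texture"
```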
Optionally, the step 102 performs motion estimation on the first image block to obtain a motion vector related to the first image block, which may specifically include the following steps:
determining a reference block set related to the first image block in the M second frame images according to the matching degree of the first image block and each image block in each second frame image in the M second frame images; the M second frame images are M frame images adjacent to the first frame image in the video to be processed, and M is a positive integer;
and performing motion estimation on the first image block according to each reference block in the reference block set to obtain a motion vector related to the first image block.
In the above embodiment, because there is a relatively strong time-domain continuous relationship between the continuous frame images, M frame images adjacent to the first frame image may be obtained, which may be M frame images adjacent before the first frame image, M frame images adjacent after the first frame image, or M frame images adjacent before and after the first frame image, and this is not particularly limited. Searching an image Block with the highest Matching degree (namely, the most approximate) with a first image Block of a first frame image in each second frame image of M second frame images by a Block-Matching (BM) method to serve as a reference Block of the first image Block of the first frame image, wherein each second frame image has a reference Block of the first image Block, namely M second frame images contain M reference blocks, and the M reference blocks form a reference Block set.
As an optional embodiment, in each second frame image, the matching degree is represented by the distance between the first image block and each image block, and the smaller the distance, the higher the matching degree is represented; the distance between the first image block and each image block is calculated according to the formula:
distance(B_current, B_j) = ||B_current − B_j||_2 / (N × M)

wherein B_current represents the pixel values of the first image block;
B_j represents the pixel values of the j-th image block in the second frame image;
||B_current − B_j||_2 represents the residual value between the first image block and the j-th image block in the second frame image;
N × M represents the size (length × width) of the first image block;
distance(B_current, B_j) represents the distance between the first image block and the j-th image block in the second frame image.
According to the formula, the image block with the minimum distance from the first image block in each second frame image is obtained as the reference block, so that the reference block in each second frame image is obtained, and the reference block set is further obtained.
In the above embodiment, after obtaining the reference block set, for each reference block in the reference block set, calculating a relative displacement, i.e., a motion vector, between the reference block and the first image block, M motion vectors may be obtained.
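The reference-block search and the resulting motion vector can be sketched as a brute-force search consistent with the distance defined above; the search radius below is an assumed parameter, not one specified by the patent.

```python
import numpy as np

def find_reference_block(current_block, ref_frame, top_left, radius=8):
    """Search ref_frame around top_left for the block that minimizes the
    normalized L2 distance, returning that block and its motion vector.
    The search radius is an illustrative assumption."""
    n, m = current_block.shape
    cur = current_block.astype(np.float32)
    y0, x0 = top_left
    best_block, best_mv, best_dist = None, (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + m > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + n, x:x + m].astype(np.float32)
            dist = float(np.linalg.norm(cur - cand)) / (n * m)  # ||B_current - B_j||_2 / (N*M)
            if dist < best_dist:
                best_block, best_mv, best_dist = cand, (dy, dx), dist
    return best_block, best_mv
```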
Optionally, when determining that the denoising strategy is time-domain denoising and spatial denoising, the step of determining a time-domain denoising method and/or a spatial denoising method for the first image block according to the type of the first image block may specifically include the following steps:
determining a mode of performing time domain noise reduction processing and a mode of performing spatial domain noise reduction processing on the first image block according to the type of the first image block;
thestep 104 of performing time domain noise reduction processing and spatial domain noise reduction processing on the first image block respectively to obtain a time domain noise reduction result and a spatial domain noise reduction result includes:
performing time domain noise reduction processing on the first image block according to the time domain noise reduction processing mode to obtain a time domain noise reduction result;
and performing spatial domain noise reduction processing on the first image block according to the spatial domain noise reduction processing mode to obtain a spatial domain noise reduction result.
In the above embodiment, when the denoising strategy is time-domain denoising and spatial denoising, a time-domain denoising processing mode and a spatial denoising processing mode may be determined for the first image block according to its type. Time-domain denoising is performed on the first image block according to the time-domain denoising processing mode to obtain a time-domain denoising result, and spatial denoising is performed on the first image block according to the spatial denoising processing mode to obtain a spatial denoising result. The order of the time-domain and spatial denoising processing is not limited: the spatial denoising may be performed first, the time-domain denoising may be performed first, or the two may be performed simultaneously, which is not specifically limited herein.
As an optional embodiment, when the denoising strategy is time-domain denoising and spatial denoising, if the type of the first image block is a weak texture image block type, the time-domain denoising result may be obtained by using the following time-domain denoising processing method:
pixel_temporal = β · B_current + (1 − β) · ( Σ_i α_i · B_reference_i ) / ( Σ_i α_i )
wherein pixel_temporal represents the temporal noise reduction result;
B_current represents the pixel values of the first image block;
β represents a first coefficient, the value of β is between 0 and 1, and the optimal value may be 0.5;
B_reference represents the set of reference blocks of the first image block;
B_reference_i represents the i-th reference block in the reference block set;
α_i represents a second coefficient corresponding to the i-th reference block in the reference block set, the value of the second coefficient is between 0 and 1, the second coefficient is in direct proportion to the residual value between the first image block and the i-th reference block, and in inverse proportion to the magnitude of the motion vector between the first image block and the i-th reference block;
Σ denotes the summation sign.
If the type of the first image block is the weak texture image block type, obtaining a spatial domain noise reduction result by adopting a spatial domain noise reduction processing mode as follows:
pixel_spatial(x, y) = Σ_(i,j) w(i, j) · pixel(i, j) / Σ_(i,j) w(i, j)
wherein, (x, y) represents coordinates of pixels in the first image block;
pixel (i, j) represents the pixel value of a neighborhood image block of the first image block, the neighborhood image block can be one or more adjacent image blocks or a circle of adjacent image blocks, and can be set as required;
w (i, j) represents the weight of pixel (i, j);
Σ denotes a summation sign.
It should be noted that the above are only examples of the temporal noise reduction processing mode and the spatial noise reduction processing mode for the weak texture image block type. For the strong texture image block type, the temporal noise reduction processing mode may be replaced, for example, by a 3D-DCT denoising algorithm, and the spatial noise reduction processing mode may be replaced, for example, by a non-local means filtering denoising algorithm, a wavelet denoising algorithm, and the like, which are not specifically limited herein.
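As a minimal sketch of the two weak-texture branches, assuming the reconstructed formulas above (the normalization of the reference-block weights and the Gaussian kernel are assumptions, not the patent's mandated choices):

```python
import cv2
import numpy as np

def temporal_denoise(current_block, reference_blocks, alphas, beta=0.5):
    """Weak-texture temporal branch: blend the current block with its reference
    blocks; the normalization over the alpha weights is an assumption."""
    weighted = np.stack([a * r.astype(np.float32) for a, r in zip(alphas, reference_blocks)])
    fused_refs = weighted.sum(axis=0) / max(float(sum(alphas)), 1e-6)
    return beta * current_block.astype(np.float32) + (1.0 - beta) * fused_refs

def spatial_denoise(block, sigma=1.5):
    """Weak-texture spatial branch: a Gaussian-weighted neighborhood average,
    one possible choice of the weights w(i, j)."""
    return cv2.GaussianBlur(block.astype(np.float32), (5, 5), sigma)
```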
Optionally, when the denoising strategy is time-domain denoising and spatial denoising, after the step 102 performs motion estimation on the first image block to obtain a motion vector for the first image block, the method may further include the following steps:
determining a temporal weight for the temporal noise reduction and a spatial weight for the spatial noise reduction based on the motion vector of the first image block.
In the above embodiment, the time domain weight of the time domain noise reduction result and the spatial domain weight of the spatial domain noise reduction result are dynamically determined according to the magnitude of the motion vector, so that an optimal balance can be achieved in terms of detail retention, noise erasure and runtime performance. The proportion between the time domain weight and the spatial domain weight is determined by the magnitude of the motion vector: the larger the motion vector, the larger the weight of the spatial noise reduction result and the smaller the weight of the temporal noise reduction result, and vice versa.
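One possible mapping from motion-vector magnitude to the two weights is sketched below; the exponential form and the scale parameter are assumptions, as the patent only states that the spatial weight grows with motion.

```python
import numpy as np

def fusion_weights(motion_vector, scale=4.0):
    """Larger motion -> larger spatial weight, smaller temporal weight.
    The exponential mapping and 'scale' are illustrative assumptions."""
    magnitude = float(np.hypot(motion_vector[0], motion_vector[1]))
    w_temporal = float(np.exp(-magnitude / scale))  # close to 1 for nearly static blocks
    w_spatial = 1.0 - w_temporal                    # grows with the motion magnitude
    return w_temporal, w_spatial
```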
Optionally, the step 105 performs weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result, which may specifically include the following steps:
calculating the product of the time domain noise reduction result and the time domain weight to obtain a first result;
calculating the product of the spatial domain noise reduction result and the spatial domain weight to obtain a second result;
adding the first result and the second result to obtain a third result;
and dividing the third result by the sum of the time domain weight and the space domain weight to obtain a target noise reduction result.
In the above embodiment, the calculation of the target noise reduction result may be specifically performed by the following formula:
denoised(x, y) = ( w_temporal · pixel_temporal(x, y) + w_spatial · pixel_spatial(x, y) ) / ( w_temporal + w_spatial )
wherein denoised(x, y) represents the target noise reduction result;
w_spatial represents the spatial weight;
pixel_spatial(x, y) represents the spatial noise reduction result;
w_temporal represents the temporal weight;
pixel_temporal(x, y) represents the temporal noise reduction result.
In the above embodiment, the video noise reduction algorithm aligns and fuses consecutive frame images into one image. Alignment finds the correspondence between image blocks across the frame images; fusion performs a weighted average of the corresponding image blocks in the spatial or frequency domain. The alignment result is not necessarily accurate, so it is necessary to check whether the alignment result is trustworthy before fusion. The fused temporal weight and spatial weight are adjusted according to the magnitude of the Motion Vector (MV). For a reference block with a small MV, the rotation/scaling/deformation of the object is small a priori and the confidence of the MV is high, so the proportion sampled from the temporal filtering of the first frame image is high; for a reference block with a large MV, the rotation/scaling/deformation of the object is large a priori and the confidence of the MV is low, so the proportion sampled from the spatial filtering of the first frame image is high. By adaptively selecting filtering results with different temporal and spatial weights according to the motion vector magnitude and the texture complexity, the optimal balance among noise reduction effect, detail retention and restoration, and processing speed is achieved.
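Putting the weights and the two partial results together, the weighted fusion of step 105 reduces to the following sketch, matching the formula reconstructed above.

```python
def fuse_results(pixel_temporal, pixel_spatial, w_temporal, w_spatial):
    """Weighted fusion of the temporal and spatial noise reduction results."""
    return (w_temporal * pixel_temporal + w_spatial * pixel_spatial) / (w_temporal + w_spatial)

# Example (fusion_weights is the hypothetical helper from the earlier sketch):
# w_t, w_s = fusion_weights((0.4, 0.1))
# denoised_block = fuse_results(temporal_result, spatial_result, w_t, w_s)
```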
In summary, in the embodiment of the present invention, when the first image block is determined to belong to the weak texture image block type, an algorithm with low complexity and high cost performance is adopted to repair the first image block quickly; when the first image block is determined to belong to the strong texture image block type, the first image block is repaired mainly with an algorithm that has an obvious noise reduction effect, is good at retaining texture details and has relatively high complexity. That is, adaptively adjusting the local noise reduction algorithm according to the texture complexity during noise reduction can retain the details of texture regions and recover the smoothness of non-texture regions, reducing unnecessary operations while obtaining a good noise reduction effect. Furthermore, by locally and adaptively selecting filtering results with different temporal and spatial weights according to the motion vector magnitude and the texture complexity, and by weighting and fusing the temporal and spatial results, a target noise reduction result with both good speed and good effect can be obtained.
As shown in fig. 4, an embodiment of the present invention provides a noise reduction processing apparatus 400, which includes:
a first obtaining module 401, configured to obtain a first image block of a first frame image in a video to be processed;
a first estimation module 402, configured to perform motion estimation on the first image block to obtain a motion vector of the first image block;
a first determining module 403, configured to determine a denoising strategy for the first image block according to the motion vector, where the denoising strategy includes: temporal and/or spatial noise reduction;
a first processing module 404, configured to perform time domain denoising and spatial domain denoising on the first image block respectively to obtain a time domain denoising result and a spatial domain denoising result when the denoising strategy is time domain denoising and spatial domain denoising;
and a second processing module 405, configured to perform weighted fusion processing on the time domain noise reduction result and the spatial domain noise reduction result to obtain a target noise reduction result.
In the above embodiment of the present invention, motion estimation is performed on the acquired first image block to obtain a motion vector related to the first image block, and a denoising strategy of the first image block is determined according to the motion vector. Under the condition that the denoising strategy is time domain denoising and space domain denoising, time domain denoising and space domain denoising are performed on the first image block respectively to obtain a time domain denoising result and a space domain denoising result; that is, according to the result of the motion estimation, a region with large motion can be sampled from the result of spatial filtering with a smaller denoising strength, and a region with small motion can be sampled from the result of temporal filtering with a larger denoising strength. The time domain and space domain noise reduction results are then weighted and fused to obtain the target noise reduction result; that is, by weighting and combining the time domain and space domain results, a target noise reduction result with both good speed and good effect is obtained.
Optionally, the first obtaining module 401 includes:
the first acquisition unit is used for acquiring a first frame image in a video to be processed;
the first processing unit is used for carrying out fuzzy processing on the first frame image to obtain a fuzzy image;
a first extraction unit, configured to extract an edge feature map of the blurred image;
the second processing unit is used for carrying out blocking processing on the edge feature map to obtain a first image block set subjected to blocking processing;
wherein the first image block is one of the image blocks in the first set of image blocks.
Optionally, after the first obtaining module 401, the apparatus further includes:
the second obtaining module is used for obtaining the texture complexity of the first image block;
the second determining module is used for determining the type of the first image block according to the texture complexity of the first image block;
and the third determining module is used for determining a time domain denoising processing mode and/or a space domain denoising processing mode of the first image block according to the type of the first image block.
Optionally, the second obtaining module includes:
a second obtaining unit, configured to obtain a first number of non-0 pixel values in the first image block;
and the first determining unit is used for determining the texture complexity of the first image block according to the size relation between the first quantity and a first threshold.
Optionally, when determining that the denoising strategy is time domain denoising and spatial domain denoising, the third determining module includes:
the second determining unit is used for determining a time domain denoising processing mode and a space domain denoising processing mode for the first image block according to the type of the first image block;
wherein the first processing module 404 includes:
the third processing unit is used for carrying out time domain noise reduction processing on the first image block according to the time domain noise reduction processing mode to obtain a time domain noise reduction result;
and the fourth processing unit is used for carrying out spatial domain noise reduction processing on the first image block according to the spatial domain noise reduction processing mode to obtain a spatial domain noise reduction result.
Optionally, the first estimating module 402 includes:
a third determining unit, configured to determine a reference block set related to the first image block in the M second frame images according to a matching degree between the first image block and each image block in each second frame image in the M second frame images; the M second frame images are M frame images adjacent to the first frame image in the video to be processed, and M is a positive integer;
a first estimating unit, configured to perform motion estimation on the first image block according to each reference block in the reference block set, so as to obtain a motion vector for the first image block.
Optionally, after the first estimating module 402, the apparatus further includes:
a fourth determining module, configured to determine, according to the motion vector of the first image block, a temporal weight related to the temporal noise reduction and a spatial weight related to the spatial noise reduction.
Optionally, the second processing module 405 includes:
the first calculating unit is used for calculating the product of the time domain noise reduction result and the time domain weight to obtain a first result;
the second calculation unit is used for calculating the product of the spatial domain noise reduction result and the spatial domain weight to obtain a second result;
a third calculating unit, configured to add the first result and the second result to obtain a third result;
and the fourth calculating unit is used for dividing the third result by the sum of the time domain weight and the space domain weight to obtain a target noise reduction result.
It should be noted that the embodiment of the noise reduction processing apparatus is an apparatus corresponding to the above noise reduction processing method, and all implementation manners of the above embodiment are applicable to the embodiment of the apparatus, and can also achieve the same technical effect, which is not described herein again.
In summary, in the embodiment of the present invention, when the first image block is determined to belong to the weak texture image block type, an algorithm with low complexity and high cost performance is adopted to repair the first image block quickly; when the first image block is determined to belong to the strong texture image block type, the first image block is repaired mainly with an algorithm that has an obvious noise reduction effect, is good at retaining texture details and has relatively high complexity. That is, adaptively adjusting the local noise reduction algorithm according to the texture complexity during noise reduction can retain the details of texture regions and recover the smoothness of non-texture regions, reducing unnecessary operations while obtaining a good noise reduction effect. Furthermore, by locally and adaptively selecting filtering results with different temporal and spatial weights according to the motion vector magnitude and the texture complexity, and by weighting and fusing the temporal and spatial results, a target noise reduction result with both good speed and good effect can be obtained.
The embodiment of the invention also provides an electronic device. As shown in fig. 5, it comprises a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504.
The memory 503 stores a computer program.
The processor 501 is configured to implement part or all of the steps of the noise reduction processing method provided by the embodiment of the present invention when executing the program stored in the memory 503.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In still another embodiment provided by the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the noise reduction processing method described in the above embodiment.
In yet another embodiment provided by the present invention, a computer program product containing instructions is also provided, which when run on a computer, causes the computer to execute the noise reduction processing method described in the above embodiment.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (11)

Translated fromChinese
1. A noise reduction processing method, characterized in that the method comprises: acquiring a first image block of a first frame image in a video to be processed; performing motion estimation on the first image block to obtain a motion vector for the first image block; determining a noise reduction strategy for the first image block according to the motion vector, the noise reduction strategy comprising temporal noise reduction and/or spatial noise reduction; in the case that the noise reduction strategy is temporal noise reduction and spatial noise reduction, performing temporal noise reduction processing and spatial noise reduction processing on the first image block respectively to obtain a temporal noise reduction result and a spatial noise reduction result; and performing weighted fusion processing on the temporal noise reduction result and the spatial noise reduction result to obtain a target noise reduction result.

2. The method according to claim 1, characterized in that acquiring the first image block of the first frame image in the video to be processed comprises: acquiring the first frame image in the video to be processed; blurring the first frame image to obtain a blurred image; extracting an edge feature map of the blurred image; and performing block division on the edge feature map to obtain a first image block set; wherein the first image block is one of the image blocks in the first image block set.

3. The method according to claim 1, characterized in that, after acquiring the first image block of the first frame image in the video to be processed, the method further comprises: acquiring a texture complexity of the first image block; determining a type of the first image block according to the texture complexity of the first image block; and determining, according to the type of the first image block, a manner of performing temporal noise reduction processing and/or a manner of performing spatial noise reduction processing on the first image block.

4. The method according to claim 3, characterized in that acquiring the texture complexity of the first image block comprises: obtaining a first number of non-zero pixel values in the first image block; and determining the texture complexity of the first image block according to the magnitude relationship between the first number and a first threshold.

5. The method according to claim 3, characterized in that, in the case that the noise reduction strategy is determined to be temporal noise reduction and spatial noise reduction, determining, according to the type of the first image block, the manner of performing temporal noise reduction processing and/or the manner of performing spatial noise reduction processing on the first image block comprises: determining, according to the type of the first image block, the manner of performing temporal noise reduction processing and the manner of performing spatial noise reduction processing on the first image block; wherein performing temporal noise reduction processing and spatial noise reduction processing on the first image block respectively to obtain the temporal noise reduction result and the spatial noise reduction result comprises: performing temporal noise reduction processing on the first image block according to the determined manner of temporal noise reduction processing to obtain the temporal noise reduction result; and performing spatial noise reduction processing on the first image block according to the determined manner of spatial noise reduction processing to obtain the spatial noise reduction result.

6. The method according to claim 1, characterized in that performing motion estimation on the first image block to obtain the motion vector for the first image block comprises: determining, according to the degree of matching between the first image block and each image block in each of M second frame images, a reference block set for the first image block in the M second frame images, the M second frame images being M frame images adjacent to the first frame image in the video to be processed, and M being a positive integer; and performing motion estimation on the first image block according to each reference block in the reference block set to obtain the motion vector for the first image block.

7. The method according to claim 1, characterized in that, in the case that the noise reduction strategy is temporal noise reduction and spatial noise reduction, after performing motion estimation on the first image block to obtain the motion vector for the first image block, the method further comprises: determining, according to the motion vector of the first image block, a temporal weight for the temporal noise reduction and a spatial weight for the spatial noise reduction.

8. The method according to claim 7, characterized in that performing weighted fusion processing on the temporal noise reduction result and the spatial noise reduction result to obtain the target noise reduction result comprises: calculating the product of the temporal noise reduction result and the temporal weight to obtain a first result; calculating the product of the spatial noise reduction result and the spatial weight to obtain a second result; adding the first result and the second result to obtain a third result; and dividing the third result by the sum of the temporal weight and the spatial weight to obtain the target noise reduction result.

9. A noise reduction processing apparatus, characterized in that the apparatus comprises: a first acquisition module, configured to acquire a first image block of a first frame image in a video to be processed; a first estimation module, configured to perform motion estimation on the first image block to obtain a motion vector for the first image block; a first determining module, configured to determine a noise reduction strategy for the first image block according to the motion vector, the noise reduction strategy comprising temporal noise reduction and/or spatial noise reduction; a first processing module, configured to, in the case that the noise reduction strategy is temporal noise reduction and spatial noise reduction, perform temporal noise reduction processing and spatial noise reduction processing on the first image block respectively to obtain a temporal noise reduction result and a spatial noise reduction result; and a second processing module, configured to perform weighted fusion processing on the temporal noise reduction result and the spatial noise reduction result to obtain a target noise reduction result.

10. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the steps of the noise reduction processing method according to any one of claims 1 to 8 when executing the program stored in the memory.

11. A computer-readable storage medium storing a computer program, characterized in that, when the program is executed by a processor, the noise reduction processing method according to any one of claims 1 to 8 is implemented.
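
The weighted fusion recited in claims 7 and 8 and the non-zero-pixel texture check recited in claim 4 reduce to a few lines of array arithmetic. The Python sketch below is illustrative only: the function names, the "complex"/"flat" labels, and the fixed example weights are assumptions made for this sketch and are not taken from the patent text.

```python
import numpy as np


def classify_block_texture(edge_block: np.ndarray, first_threshold: int) -> str:
    """Claim 4 style check: count the non-zero pixels of an edge-feature block
    and compare the count against a threshold (the labels are illustrative)."""
    first_number = int(np.count_nonzero(edge_block))
    return "complex" if first_number > first_threshold else "flat"


def fuse_denoised_blocks(temporal_result: np.ndarray,
                         spatial_result: np.ndarray,
                         temporal_weight: float,
                         spatial_weight: float) -> np.ndarray:
    """Claim 8 style fusion: (temporal * w_t + spatial * w_s) / (w_t + w_s)."""
    first = temporal_result.astype(np.float32) * temporal_weight   # first result
    second = spatial_result.astype(np.float32) * spatial_weight    # second result
    third = first + second                                         # third result
    return third / (temporal_weight + spatial_weight)              # target result


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(128.0, 10.0, size=(8, 8)).astype(np.float32)
    temporal = block - 2.0   # stand-in for a temporally denoised block
    spatial = block + 1.0    # stand-in for a spatially denoised block
    # Hypothetical weights; in the claimed method they are derived from the
    # motion vector of the block (claim 7).
    fused = fuse_denoised_blocks(temporal, spatial,
                                 temporal_weight=0.7, spatial_weight=0.3)
    print(classify_block_texture(np.abs(block - block.mean()) > 15, first_threshold=8))
    print(float(fused.mean()))
```

Dividing by the sum of the weights keeps the fused block in the same intensity range as its inputs, so a block-wise choice of weights (for example, a larger temporal weight for static blocks) does not shift overall brightness.
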
CN202110192918.0A | 2021-02-20 | 2021-02-20 | Noise reduction processing method and device and electronic equipment | Pending | CN113012061A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110192918.0A (CN113012061A) | 2021-02-20 | 2021-02-20 | Noise reduction processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110192918.0A (CN113012061A) | 2021-02-20 | 2021-02-20 | Noise reduction processing method and device and electronic equipment

Publications (1)

Publication Number | Publication Date
CN113012061A | 2021-06-22

Family

ID=76404313

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110192918.0A (Pending, CN113012061A) | Noise reduction processing method and device and electronic equipment | 2021-02-20 | 2021-02-20

Country Status (1)

Country | Link
CN (1) | CN113012061A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1633167A (en) * | 2004-12-20 | 2005-06-29 | 海信集团有限公司 | Time-space domain self-adaptive weighted average interlacing removing method for digital video signal
CN101257630A (en) * | 2008-03-25 | 2008-09-03 | 浙江大学 | Video coding method and device combined with three-dimensional filtering
CN102724504A (en) * | 2012-06-14 | 2012-10-10 | 华为技术有限公司 | Filtering method and filtering device for video coding
CN105260998A (en) * | 2015-11-16 | 2016-01-20 | 华东交通大学 | MCMC sampling and threshold low-rank approximation-based image de-noising method
CN106251318A (en) * | 2016-09-29 | 2016-12-21 | 杭州雄迈集成电路技术有限公司 | A kind of denoising device and method of sequence image
CN108174056A (en) * | 2016-12-07 | 2018-06-15 | 南京理工大学 | A low-light video noise reduction method based on joint spatio-temporal domain
CN109410124A (en) * | 2016-12-27 | 2019-03-01 | 深圳开阳电子股份有限公司 | A kind of noise-reduction method and device of video image
CN108270945A (en) * | 2018-02-06 | 2018-07-10 | 上海通途半导体科技有限公司 | A kind of motion compensation denoising method and device
CN111652814A (en) * | 2020-05-26 | 2020-09-11 | 浙江大华技术股份有限公司 | Video image denoising method and device, electronic equipment and storage medium
CN112001122A (en) * | 2020-08-26 | 2020-11-27 | 合肥工业大学 | Non-contact physiological signal measuring method based on end-to-end generation countermeasure network
CN112132751A (en) * | 2020-09-28 | 2020-12-25 | 广西信路威科技发展有限公司 | Video streaming vehicle body panoramic image splicing device and method based on frequency domain transformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
倪永婧; 王成儒; 郭巍: "一种新的复杂纹理图像的去噪方法" (A new denoising method for images with complex texture), 微计算机信息 (Microcomputer Information), no. 33 *
赵俞剑; 陈耀武: "基于H.264的运动及纹理自适应去隔行算法" (A motion- and texture-adaptive de-interlacing algorithm based on H.264), 计算机工程 (Computer Engineering), no. 12, pages 2-3 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113327209A (en) * | 2021-06-29 | 2021-08-31 | Oppo广东移动通信有限公司 | Depth image generation method and device, electronic equipment and storage medium
CN113674316A (en) * | 2021-08-04 | 2021-11-19 | 浙江大华技术股份有限公司 | Video noise reduction method, device and equipment
CN114827386A (en) * | 2022-03-31 | 2022-07-29 | 艾酷软件技术(上海)有限公司 | Video image processing method and device
CN115439369A (en) * | 2022-09-26 | 2022-12-06 | Oppo广东移动通信有限公司 | Image noise reduction method and device, electronic equipment, storage medium
CN115439369B (en) * | 2022-09-26 | 2025-10-03 | Oppo广东移动通信有限公司 | Image noise reduction method and device, electronic device, and storage medium
CN116205810A (en) * | 2023-02-13 | 2023-06-02 | 爱芯元智半导体(上海)有限公司 | Video noise reduction method and device and electronic equipment
CN116205810B (en) * | 2023-02-13 | 2024-03-19 | 爱芯元智半导体(上海)有限公司 | Video noise reduction method and device and electronic equipment
CN116437024A (en) * | 2023-04-27 | 2023-07-14 | 深圳聚源视芯科技有限公司 | Video real-time noise reduction method and device based on motion estimation and noise estimation
CN116437024B (en) * | 2023-04-27 | 2024-04-09 | 深圳聚源视芯科技有限公司 | Video real-time noise reduction method and device based on motion estimation and noise estimation

Similar Documents

Publication | Title
CN113012061A (en) | Noise reduction processing method and device and electronic equipment
Chandel et al. | Image filtering algorithms and techniques: A review
US9779491B2 (en) | Algorithm and device for image processing
CN111275626A (en) | Video deblurring method, device and equipment based on ambiguity
US8908989B2 (en) | Recursive conditional means image denoising
CN113962905B (en) | Single image rain removing method based on multi-stage characteristic complementary network
CN111415317B (en) | Image processing method and device, electronic equipment and computer readable storage medium
CN115908154A (en) | Video late-stage particle noise removing method based on image processing
CN112862753A (en) | Noise intensity estimation method and device and electronic equipment
Buades et al. | Enhancement of noisy and compressed videos by optical flow and non-local denoising
CN106296591A (en) | Non local uniform numeral image de-noising method based on mahalanobis distance
Zuo et al. | Video Denoising Based on a Spatiotemporal Kalman‐Bilateral Mixture Model
CN106875396B (en) | Method and device for extracting salient regions of video based on motion characteristics
Özkan et al. | Steered-mixture-of-experts regression for image denoising with multi-model inference
Ollion et al. | Joint self-supervised blind denoising and noise estimation
Lin | A nonlocal means based adaptive denoising framework for mixed image noise removal
Banerjee et al. | Bacterial foraging-fuzzy synergism based image Dehazing
Palacios-Enriquez et al. | Sparse technique for images corrupted by mixed Gaussian-impulsive noise
Zhang et al. | Video super-resolution with registration-reliability regulation and adaptive total variation
CN118297835A (en) | Image time-space domain joint noise reduction method, device and equipment
Xiao et al. | Video denoising algorithm based on improved dual‐domain filtering and 3D block matching
Sadaka et al. | Efficient super-resolution driven by saliency selectivity
Mohan et al. | Image denoising with a convolution neural network using Gaussian filtered residuals
Lal et al. | A comparative study on CNN based low-light image enhancement
Robinson et al. | Blind deconvolution of Gaussian blurred images containing additive white Gaussian noise

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-06-22

