Movatterモバイル変換


[0]ホーム

URL:


CN112634161A - Reflected light removing method based on two-stage reflected light eliminating network and pixel loss - Google Patents

Reflected light removing method based on two-stage reflected light eliminating network and pixel loss
Download PDF

Info

Publication number
CN112634161A
CN112634161ACN202011573525.6ACN202011573525ACN112634161ACN 112634161 ACN112634161 ACN 112634161ACN 202011573525 ACN202011573525 ACN 202011573525ACN 112634161 ACN112634161 ACN 112634161A
Authority
CN
China
Prior art keywords
reflected light
network
image
generator
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011573525.6A
Other languages
Chinese (zh)
Other versions
CN112634161B (en
Inventor
赵东
汪磊
王青
李晨
张见
牛明
郜云波
马弘宇
陶旭
刘朝阳
杨成东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Tuodian Technology Co ltd
Original Assignee
Binjiang College of Nanjing University of Information Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Binjiang College of Nanjing University of Information EngineeringfiledCriticalBinjiang College of Nanjing University of Information Engineering
Priority to CN202011573525.6ApriorityCriticalpatent/CN112634161B/en
Publication of CN112634161ApublicationCriticalpatent/CN112634161A/en
Application grantedgrantedCritical
Publication of CN112634161BpublicationCriticalpatent/CN112634161B/en
Expired - Fee Relatedlegal-statusCriticalCurrent
Anticipated expirationlegal-statusCritical

Links

Images

Classifications

Landscapes

Abstract

The invention provides a reflected light removing method based on a two-stage reflected light eliminating network and pixel loss, which comprises the steps of firstly, setting a primary sub-network and a secondary sub-network of a generator in the two-stage reflected light eliminating network; then setting a loss function of a generator in a two-stage reflected light elimination network based on pixel loss; then setting a loss function of a discriminator in a two-stage reflected light elimination network; training the two-stage reflected light elimination network until the parameters of the two-stage reflected light elimination network are converged to obtain the trained two-stage reflected light elimination network; and finally, removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, and outputting a transmission diagram after the image reflection light is removed. The invention overcomes the defects of color distortion and detail loss in the prior art, so that the invention has more obvious removal effect on the reflected image and has no color distortion.

Description

Reflected light removing method based on two-stage reflected light eliminating network and pixel loss
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a reflected light removing method based on a two-stage reflected light eliminating network and pixel loss.
Background
The reflected light removal is an important component in the technical field of image quality improvement, and has very wide practical application in a plurality of systems such as a photoelectric imaging system, an image restoration system, an image quality improvement system and the like. In recent years, image reflection light removal methods based on deep learning are widely used in the field of image quality improvement.
In the existing image reflection light removing method, a CEILNet network is composed of two 32-layer sub-networks with the same structure, the total depth reaches 64 layers, wherein the first sub-network receives a reflection interference image and the gradient thereof as input, the output is the gradient prediction of transmission light, and the second sub-network takes the reflection interference image and the prediction gradient value as input, and finally the transmission light estimation is obtained. The two subnetworks are independent of each other and train inference independently. The method has the following defects: since the number of the enhancement features of the CEILNet network is small, there is color distortion in the result after the image reflection light is removed.
At present, a method for removing image reflection light by using a CRRN network also takes a reflection interference image and a gradient thereof as independent inputs, and the method is different in that two sub-networks are interconnected on a plurality of different scales, the gradient and image inference can be realized in parallel, the method is more compact than the method of CEILNet, and the two sub-networks do not need to be trained respectively. The method has the following defects: since the CRRN network directly estimates the transmission map, there is a problem that the details are lost after the image reflection light is removed.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a reflected light removing method based on a two-stage reflected light eliminating network and pixel loss, which comprises the steps of firstly constructing a training data set and a testing data set by utilizing simulation data and real data, and then setting a primary sub-network and a secondary sub-network of a generator in the two-stage reflected light eliminating network; then setting a loss function of a generator in a two-stage reflected light elimination network based on pixel loss; then setting a loss function of a discriminator in a two-stage reflected light elimination network; training the two-stage reflected light elimination network until the parameters of the two-stage reflected light elimination network are converged to obtain the trained two-stage reflected light elimination network; and finally, removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, and outputting a transmission diagram after the image reflection light is removed.
In order to achieve the purpose, the invention adopts the following technical scheme: a reflected light removing method based on a two-stage reflected light eliminating network and pixel loss comprises the following steps:
the method comprises the following steps of firstly, constructing a training data set and a testing data set by utilizing simulation data and real data;
step two, setting a primary sub-network of a generator in a two-stage reflected light elimination network;
step three, setting a secondary sub-network of a generator in the two-stage reflected light elimination network;
step four, constructing a loss function of a generator in the two-stage reflected light elimination network based on the pixel loss of the simulation data by using a real transmission image and reflection image of the simulation data in the training data set, a roughly estimated transmission image and reflection image and a transmission image after the reflected light of the image is removed;
step five, constructing a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss by using a real transmission diagram of real data in the training data set, a roughly estimated transmission diagram and a transmission diagram after image reflected light is removed;
weighting and adding a loss function of a generator in the two-stage reflected light elimination network based on the loss of the analog data pixels, a loss function of the generator in the two-stage reflected light elimination network based on the loss of the real data pixels and an original generator countervailing loss function to serve as a loss function of the generator in the two-stage reflected light elimination network;
step seven, setting a loss function of a discriminator in the two-stage reflected light elimination network;
step eight, training a two-stage reflected light elimination network, sequentially loading an Mth frame image in a training data set as a current frame image, inputting the current frame image into a primary sub-network of a generator to obtain a roughly estimated transmission image and a reflection image, and inputting the roughly estimated transmission image and the reflection image into a secondary sub-network of the generator to obtain a transmission image after image reflected light is removed; judging whether the current frame image is the last frame image of the training data set; if yes, finishing the round of training and entering the ninth step; if not, continuing to load the subsequent frame image for training, wherein M represents an integer greater than or equal to one;
step nine, judging whether the parameters of the two-stage reflected light elimination network are converged; if yes, finishing all training and entering the step ten; if not, returning to the step eight, and continuing the next round of training until a trained two-stage reflected light elimination network is obtained;
and step ten, removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, and outputting a transmission image after the image reflection light is removed.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, the second step is specifically realized by the following steps:
s201, setting an 8-layer coder-decoder, wherein the coder-decoder is provided with 4 rolling blocks with different scales;
s202, respectively connecting coding-decoding layers with the same scale by using 4 convolutional block attention units;
s203, constructing a full convolution neural network, wherein the number of channels of the first seven layers is 64, and the number of channels of the eighth layer is two three channels;
and S204, connecting the steps S201 to S203 together to serve as a primary sub-network of a generator in the two-stage reflected light elimination network.
Further, the third step is realized by the following steps:
s301, setting 9 characteristic extraction layers based on a gate convolution neural network;
s302, setting 1 layer of convolution network feature extraction layer;
and S303, connecting the steps S301 to S302 together to serve as a secondary sub-network of a generator in the two-stage reflected light elimination network.
Further, the fourth step specifically includes: setting a loss function of a generator in the two-stage reflected light elimination network based on the loss of the analog data pixel according to the following formula:
Figure BDA0002860343190000031
wherein L ispixelSRepresenting a loss function of the generator in a two-stage reflected light cancellation network based on simulated data pixel loss,
Figure BDA0002860343190000032
represents the gradient operator, | ·| non-conducting phosphor2Denotes the operation of two norms, eta denotes the constraint factor, lambda1Represents a weight value, λ2Representing the gradient weights, T representing the true transmission map,
Figure BDA0002860343190000033
a transmission map representing a coarse estimate of the transmission,
Figure BDA0002860343190000034
representing image reflected lightThe transmission map after removal, R represents the true reflection map,
Figure BDA0002860343190000035
representing a roughly estimated reflection map.
Further, the fifth step specifically includes: setting a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss according to the following formula:
Figure BDA0002860343190000036
wherein L ispixelRRepresenting a loss function of a generator in a two-stage reflected light cancellation network based on real data pixel loss.
Further, the sixth step specifically includes: the loss function L of the generator in the two-stage reflected light cancellation network is set according to the following formula:
L=αLA+βLpixelS+χLpixelR
LA=-E(D(I,G(I,θ)))
wherein, alpha, beta and chi are respectively LA、LpixelSAnd LpixelRWeight coefficient of (1), LAFor the raw generator immunity loss function, E (·) denotes the desired operation, D denotes the discriminator in the two-stage reflected light cancellation network, I denotes the input image, G denotes the raw generator, D (I, G (I, θ)) denotes the probability that G (I, θ) output by the discriminator in the two-stage reflected light cancellation network belongs to the transmission image given the input image and the image to be discriminated G (I, θ), and θ denotes the raw generator network parameters.
Further, the seventh step specifically includes: the loss function of the discriminator in the two-stage reflected light cancellation network is set according to the following formula:
Figure BDA0002860343190000037
wherein L isDIn networks representing two-stage reflection light cancellationLoss function of discriminator, T represents true transmission map, μ is
Figure BDA0002860343190000038
The weight coefficient of (2).
The invention has the beneficial effects that:
first, because the first-level sub-network and the second-level sub-network of the generator in the two-level reflected light elimination network described in the second step and the third step are adopted, the estimation accuracy of the transmission diagram is improved by further fully utilizing the information in the estimated reflected image, and the defect that the detail loss exists in the result after the reflected light is removed due to the fact that the transmission diagram is directly estimated in the prior art is overcome.
Secondly, because the invention adopts the loss function calculation mode based on pixel loss from the fourth step to the sixth step, namely, the network considers the roughly estimated transmission diagram and reflection diagram firstly, and uses the two estimated quantities and the extracted characteristics as the input of the secondary sub-network, the precision of the transmission diagram is further improved, namely, the network of the invention adopts a two-stage structure from rough to fine, the invention can effectively remove the reflected light of the images of various scenes, and overcomes the defect that the color distortion is easy to occur in the prior art.
Drawings
FIG. 1 is a flow chart of a reflected light removal method according to the present invention.
Fig. 2 is a schematic diagram of a two-stage reflection canceling network of the present invention.
FIG. 3 is a diagram illustrating an input image of analog data according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of a real transmission plot of simulated data in an embodiment of the present invention.
FIG. 5 is a graphical illustration of the true reflection of simulated data in an embodiment of the invention.
FIG. 6 is a graphical illustration of the transmission of a coarse estimate of the simulated data in an embodiment of the invention.
FIG. 7 is a graphical illustration of a reflection of a coarse estimate of simulated data in an embodiment of the invention.
FIG. 8 is a schematic diagram of the transmission after the reflected light removal of the simulation data in the embodiment of the present invention.
FIG. 9 is a diagram illustrating an input image of real data according to an embodiment of the present invention.
FIG. 10 is a schematic diagram of a real transmission plot of real data in an embodiment of the present invention.
FIG. 11 is a schematic diagram of a transmission plot of a coarse estimate of the real data in an embodiment of the present invention.
FIG. 12 is a schematic diagram of the transmission after the reflected light of the real data is removed in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. The present invention will now be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a reflected light removing method based on a two-stage reflected light eliminating network and pixel loss, which comprises the following steps as shown in fig. 1 and fig. 2:
step 1: building a training data set and a testing data set by using the simulation data and the real data;
specifically, in one embodiment of the present invention, the training data used by the two-stage reflected light cancellation network is a university of berkeley dataset, the constructed training dataset includes 13700 simulation data of transmission and reflection maps, and 90 real data, and the constructed test dataset includes 20 real data. Fig. 3 is a schematic diagram of an input image of analog data according to an embodiment of the present invention. Fig. 4 is a schematic diagram showing a real transmission diagram of simulation data according to an embodiment of the present invention. Fig. 5 is a schematic diagram showing real reflection of simulation data according to an embodiment of the present invention.
Step 2: setting a primary sub-network of a generator in a two-stage reflected light elimination network;
the method is realized by the following steps:
step 201, setting an 8-layer coder-decoder, wherein the coder-decoder has 4 rolling blocks with different scales;
specifically, the channel number of 8 convolutional layers of the encoder-decoder of the present invention is set to {64, 128, 256, 512, 512, 256, 128, 64}, the convolutional templates are all 3 × 3, and each convolutional layer contains one lreul active layer and a batch regularization operation.
Step 202, connecting coding-decoding layers with the same scale by using 4 convolutional block attention units respectively;
specifically, the convolution block attention unit implements feature enhancement mainly through two steps: firstly, aiming at channel characteristic enhancement, firstly, respectively carrying out maximum pooling and average pooling on each channel to form two characteristic vectors with the same length as the number of characteristic channels; then, processing the two eigenvectors through a three-layer full-connection network sharing the weight to finally obtain an enhanced vector; and finally, taking the value of an element in the enhancement vector as an enhancement coefficient, and multiplying the enhancement coefficient by each channel characteristic graph respectively to realize the channel enhancement of the characteristic. Secondly, aiming at spatial feature enhancement, performing spatial maximum pooling and average pooling on the features to obtain two feature maps; then, obtaining a spatial enhancement coefficient through convolution of parameter sharing and Sigmoid activation; and finally, multiplying the enhancement coefficient by the values of all channels at the same position of the original characteristic diagram respectively to obtain a final result.
Step 203, constructing a full convolution neural network, wherein the number of channels of the first seven layers is 64, and the eighth layer is two three channels;
specifically, the number of channels of the first 7 layers of the full convolution sub-network is set to be 64, hole convolution is introduced to increase the receptive field, the spatial span of the hole convolution is set to be {2, 4, 8, 16, 32, 64, 1, 1}, the sizes of convolution windows are 3 × 3, and the activation and normalization function setting of the first 7 layers is the same as that of the coding and decoding sub-network. The output of the last layer is 3 x 2 channels and is taken as two three-channel RGB images to represent the roughly estimated reflection and transmission maps, respectively.
And step 204, connecting the steps S201 to S203 together to serve as a primary sub-network of a generator in the two-stage reflected light elimination network.
And step 3: setting a secondary sub-network of a generator in the two-stage reflected light elimination network;
the method is realized by the following steps:
step 301, setting 9 characteristic extraction layers based on a gate convolution neural network;
specifically, the number of feature channels of 9 feature extraction layers based on the gate convolution neural network is 32, the adopted space span of the hole convolution is set to be {1, 2, 4, 8, 16, 32, 64, 1, 1} respectively, and the convolution window size is 3 × 3.
Step 302, setting 1 layer of convolution network feature extraction layer;
specifically, the last 1 layer of convolution network feature extraction layer is a common convolution layer and does not contain activation and normalization, and the output of the layer is 3 channels, namely, a transmission image after the reflected light of the transmission image in the RGB format is removed.
Step 303, connecting steps S301 to S302 together as a secondary sub-network of a generator in a two-stage reflected light cancellation network.
And 4, step 4: constructing a loss function of a generator in a two-stage reflected light elimination network based on the pixel loss of the simulation data by utilizing a real transmission diagram and a reflection diagram of the simulation data in a training data set, a roughly estimated transmission diagram and a reflection diagram and a transmission diagram after image reflected light is removed, and specifically comprising the following steps: setting a loss function of a generator in the two-stage reflected light elimination network based on the loss of the analog data pixel according to the following formula:
Figure BDA0002860343190000061
wherein L ispixelSRepresenting a loss function of the generator in a two-stage reflected light cancellation network based on simulated data pixel loss,
Figure BDA0002860343190000062
represents the gradient operator, | ·| non-conducting phosphor2Denotes the operation of two norms, eta denotes the constraint factor, lambda1Represents a weight value, λ2Representing the gradient weights, T representing the true transmission map,
Figure BDA0002860343190000063
a transmission map representing a coarse estimate of the transmission,
Figure BDA0002860343190000064
a transmission map after image reflection light removal, R a real reflection map,
Figure BDA0002860343190000065
representing a roughly estimated reflection map.
Specifically, eta is 0.5 and lambda is taken in the experiment1A value of 0.2, λ2The value is 0.4. The purpose of introducing the constraint factor is to expect that the error weight of the final transmitted light prediction is increased by the design, so that the precision is improved.
And 5: constructing a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss by using a real transmission diagram of real data in a training data set, a roughly estimated transmission diagram and a transmission diagram after image reflected light removal, and specifically comprising the following steps: setting a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss according to the following formula:
Figure BDA0002860343190000066
wherein L ispixelRRepresenting a loss function of a generator in a two-stage reflected light cancellation network based on real data pixel loss.
Specifically, for real data, no reflection error term is included in the loss function because there is no reflection reference map.
Step 6: weighting and adding a loss function of a generator in the two-stage reflected light elimination network based on the loss of a simulated data pixel, a loss function of the generator in the two-stage reflected light elimination network based on the loss of a real data pixel and a confrontation loss function of an original generator to serve as a loss function of the generator in the two-stage reflected light elimination network, specifically, setting the loss function of the generator in the two-stage reflected light elimination network according to the following formula:
L=αLA+βLpixelS+χLpixelR
LA=-E(D(I,G(I,θ)))
wherein, alpha, beta and chi are respectively LA、LpixelSAnd LpixelRWeight coefficient of (1), LAFor the raw generator immunity loss function, E (·) denotes the desired operation, D denotes the discriminator in the two-stage reflected light cancellation network, I denotes the input image, G denotes the raw generator, D (I, G (I, θ)) denotes the probability that G (I, θ) output by the discriminator in the two-stage reflected light cancellation network belongs to the transmission image given the input image and the image to be discriminated G (I, θ), and θ denotes the raw generator network parameters.
Specifically, α, β, and χ were all equal to 1 in the experiment.
And 7: setting a loss function of the discriminator in the two-stage reflected light elimination network, specifically, setting the loss function of the discriminator in the two-stage reflected light elimination network according to the following formula:
Figure BDA0002860343190000071
wherein L isDRepresents the loss function of the discriminator in a two-stage reflected light cancellation network, mu is
Figure BDA0002860343190000072
The weight coefficient of (2).
And 8: training a two-stage reflected light elimination network, sequentially loading the Mth frame image in a training data set as a current frame image, inputting the current frame image into a primary sub-network of a generator to obtain a roughly estimated transmission image and a reflection image, inputting the roughly estimated transmission image and the reflection image into a secondary sub-network of the generator to obtain a transmission image with image reflected light removed, judging whether the current frame image is the last frame image of the training data set or not, if so, finishing the training of the round, and entering step 9; if not, enabling M to be M +1, and continuing to load the subsequent frame image for training, wherein M represents an integer greater than or equal to one;
fig. 6 is a schematic diagram of transmission of rough estimation of simulation data in an embodiment of the present invention, fig. 7 is a schematic diagram of reflection of rough estimation of simulation data in an embodiment of the present invention, fig. 8 is a schematic diagram of transmission after reflection of simulation data is removed in an embodiment of the present invention, fig. 9 is a schematic diagram of an input image of real data in an embodiment of the present invention, fig. 10 is a schematic diagram of real transmission of real data in an embodiment of the present invention, fig. 11 is a schematic diagram of transmission of rough estimation of real data in an embodiment of the present invention, and fig. 12 is a schematic diagram of transmission after reflection of real data is removed in an embodiment of the present invention.
And step 9: judging whether the parameters of the two-stage reflected light elimination network are converged, if so, finishing all training and entering the step 10; if not, returning to the step 8, and making M equal to M +1, and continuing the next round of training until a trained two-stage reflected light elimination network is obtained;
specifically, the two-stage reflection light elimination network mentioned in the present invention is trained by Nvidia RTX Titan V and tenserflow 1.9.0 for 150 rounds (50 rounds of learning rate 0.0001, 0.00003 and 0.00001).
Step 10: and (4) removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, and outputting a transmission diagram after the image reflection light is removed.
The method comprises the steps of removing image reflected light by a reflected light removing method based on a two-stage reflected light removing network, and firstly setting a primary sub-network and a secondary sub-network of a generator in the two-stage reflected light removing network; then setting a loss function of a generator in the two-stage reflected light elimination network; then setting a loss function of a discriminator in a two-stage reflected light elimination network; training the two-stage reflected light elimination network until the parameters of the two-stage reflected light elimination network are converged to obtain the trained two-stage reflected light elimination network; and finally, removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, realizing a transmission image after removing the image reflection light, effectively removing the reflection light of the images of various scenes, and avoiding color distortion and detail loss.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (7)

1. The reflected light removing method based on the two-stage reflected light eliminating network and the pixel loss is characterized by comprising the following steps of:
the method comprises the following steps of firstly, constructing a training data set and a testing data set by utilizing simulation data and real data;
step two, setting a primary sub-network of a generator in a two-stage reflected light elimination network;
step three, setting a secondary sub-network of a generator in the two-stage reflected light elimination network;
step four, constructing a loss function of a generator in the two-stage reflected light elimination network based on the pixel loss of the simulation data by using a real transmission image and reflection image of the simulation data in the training data set, a roughly estimated transmission image and reflection image and a transmission image after the reflected light of the image is removed;
step five, constructing a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss by using a real transmission diagram of real data in the training data set, a roughly estimated transmission diagram and a transmission diagram after image reflected light is removed;
weighting and adding a loss function of a generator in the two-stage reflected light elimination network based on the loss of the analog data pixels, a loss function of the generator in the two-stage reflected light elimination network based on the loss of the real data pixels and an original generator countervailing loss function to serve as a loss function of the generator in the two-stage reflected light elimination network;
step seven, setting a loss function of a discriminator in the two-stage reflected light elimination network;
step eight, training a two-stage reflected light elimination network, sequentially loading an Mth frame image in a training data set as a current frame image, inputting the current frame image into a primary sub-network of a generator to obtain a roughly estimated transmission image and a reflection image, and inputting the roughly estimated transmission image and the reflection image into a secondary sub-network of the generator to obtain a transmission image after image reflected light is removed; judging whether the current frame image is the last frame image of the training data set; if yes, finishing the round of training and entering the ninth step; if not, continuing to load the subsequent frame image for training, wherein M represents an integer greater than or equal to one;
step nine, judging whether the parameters of the two-stage reflected light elimination network are converged; if yes, finishing all training and entering the step ten; if not, returning to the step eight, and continuing the next round of training until a trained two-stage reflected light elimination network is obtained;
and step ten, removing image reflection light of the test data set by using the trained two-stage reflection light elimination network, and outputting a transmission image after the image reflection light is removed.
2. The reflected light removal method of claim 1, wherein step two is specifically achieved by:
s201, setting an 8-layer coder-decoder, wherein the coder-decoder is provided with 4 rolling blocks with different scales;
s202, respectively connecting coding-decoding layers with the same scale by using 4 convolutional block attention units;
s203, constructing a full convolution neural network, wherein the number of channels of the first seven layers is 64, and the number of channels of the eighth layer is two three channels;
and S204, connecting the steps S201 to S203 together to serve as a primary sub-network of a generator in the two-stage reflected light elimination network.
3. The reflected light removal method of claim 1, wherein step three is specifically achieved by:
s301, setting 9 characteristic extraction layers based on a gate convolution neural network;
s302, setting 1 layer of convolution network feature extraction layer;
and S303, connecting the steps S301 to S302 together to serve as a secondary sub-network of a generator in the two-stage reflected light elimination network.
4. The reflected light removal method according to claim 1, wherein the fourth step specifically includes: setting a loss function of a generator in the two-stage reflected light elimination network based on the loss of the analog data pixel according to the following formula:
Figure FDA0002860343180000021
wherein L ispixelSRepresenting a loss function of the generator in a two-stage reflected light cancellation network based on simulated data pixel loss,
Figure FDA0002860343180000022
represents the gradient operator, | ·| non-conducting phosphor2Denotes the operation of two norms, eta denotes the constraint factor, lambda1Represents a weight value, λ2Representing the gradient weights, T representing the true transmission map,
Figure FDA0002860343180000023
a transmission map representing a coarse estimate of the transmission,
Figure FDA0002860343180000024
a transmission map after image reflection light removal, R a real reflection map,
Figure FDA0002860343180000025
representing a roughly estimated reflection map.
5. The reflected light removal method according to claim 4, wherein the step five specifically includes: setting a loss function of a generator in the two-stage reflected light elimination network based on real data pixel loss according to the following formula:
Figure FDA0002860343180000026
wherein L ispixelRRepresenting a loss function of a generator in a two-stage reflected light cancellation network based on real data pixel loss.
6. The reflected light removal method according to claim 1, wherein the sixth step specifically includes: the loss function L of the generator in the two-stage reflected light cancellation network is set according to the following formula:
L=αLA+βLpixelS+χLpixelR
LA=-E(D(I,G(I,θ)))
wherein, alpha, beta and chi are respectively LA、LpixelSAnd LpixelRWeight coefficient of (1), LAFor the raw generator immunity loss function, E (·) denotes the desired operation, D denotes the discriminator in the two-stage reflected light cancellation network, I denotes the input image, G denotes the raw generator, D (I, G (I, θ)) denotes the probability that G (I, θ) output by the discriminator in the two-stage reflected light cancellation network belongs to the transmission image given the input image and the image to be discriminated G (I, θ), and θ denotes the raw generator network parameters.
7. The reflected light removal method according to claim 6, wherein the seventh step specifically includes: the loss function of the discriminator in the two affix reflected light cancellation network is set according to the following formula:
Figure FDA0002860343180000027
wherein L isDRepresents the loss function of the discriminator in the two-stage reflected light elimination network, T represents the real transmission diagram, mu is
Figure FDA0002860343180000028
The weight coefficient of (2).
CN202011573525.6A2020-12-252020-12-25 Reflected light removal method based on two-stage reflected light removal network and pixel lossExpired - Fee RelatedCN112634161B (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN202011573525.6ACN112634161B (en)2020-12-252020-12-25 Reflected light removal method based on two-stage reflected light removal network and pixel loss

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN202011573525.6ACN112634161B (en)2020-12-252020-12-25 Reflected light removal method based on two-stage reflected light removal network and pixel loss

Publications (2)

Publication NumberPublication Date
CN112634161Atrue CN112634161A (en)2021-04-09
CN112634161B CN112634161B (en)2022-11-08

Family

ID=75325807

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202011573525.6AExpired - Fee RelatedCN112634161B (en)2020-12-252020-12-25 Reflected light removal method based on two-stage reflected light removal network and pixel loss

Country Status (1)

CountryLink
CN (1)CN112634161B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN107103590A (en)*2017-03-222017-08-29华南理工大学A kind of image for resisting generation network based on depth convolution reflects minimizing technology
CN107464227A (en)*2017-08-242017-12-12深圳市唯特视科技有限公司A kind of method that reflection and smoothed image are removed based on deep neural network
US20190147320A1 (en)*2017-11-152019-05-16Uber Technologies, Inc."Matching Adversarial Networks"
CN111507910A (en)*2020-03-182020-08-07南方电网科学研究院有限责任公司Single image reflection removing method and device and storage medium
CN112116537A (en)*2020-08-312020-12-22中国科学院长春光学精密机械与物理研究所Image reflected light elimination method and image reflected light elimination network construction method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN107103590A (en)*2017-03-222017-08-29华南理工大学A kind of image for resisting generation network based on depth convolution reflects minimizing technology
CN107464227A (en)*2017-08-242017-12-12深圳市唯特视科技有限公司A kind of method that reflection and smoothed image are removed based on deep neural network
US20190147320A1 (en)*2017-11-152019-05-16Uber Technologies, Inc."Matching Adversarial Networks"
CN111507910A (en)*2020-03-182020-08-07南方电网科学研究院有限责任公司Single image reflection removing method and device and storage medium
CN112116537A (en)*2020-08-312020-12-22中国科学院长春光学精密机械与物理研究所Image reflected light elimination method and image reflected light elimination network construction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SANGHYUN WOO 等: "CBAM: Convolutional Block Attention Module.", 《ARXIV:1807.06521V2 [CS.CV]》*

Also Published As

Publication numberPublication date
CN112634161B (en)2022-11-08

Similar Documents

PublicationPublication DateTitle
CN111476717B (en)Face image super-resolution reconstruction method based on self-attention generation countermeasure network
CN111242862B (en)Multi-scale fusion parallel dense residual convolution neural network image denoising method
CN110532859B (en) Remote sensing image target detection method based on deep evolutionary pruning convolutional network
CN110378844B (en) A Blind Image Deblurring Method Based on Recurrent Multiscale Generative Adversarial Networks
CN111123257B (en)Radar moving target multi-frame joint detection method based on graph space-time network
CN106204467B (en)Image denoising method based on cascade residual error neural network
CN110706181B (en)Image denoising method and system based on multi-scale expansion convolution residual error network
CN112818969B (en)Knowledge distillation-based face pose estimation method and system
CN112598598B (en)Image reflected light removing method based on two-stage reflected light eliminating network
CN110992275A (en)Refined single image rain removing method based on generation countermeasure network
CN112634146B (en)Multi-channel CNN medical CT image denoising method based on multiple attention mechanisms
CN109635763B (en)Crowd density estimation method
CN112183742A (en)Neural network hybrid quantization method based on progressive quantization and Hessian information
CN114359073A (en) A low-light image enhancement method, system, device and medium
CN112381897A (en)Low-illumination image enhancement method based on self-coding network structure
CN115081532A (en) Federated Continuous Learning Training Method Based on Memory Replay and Differential Privacy
CN113343796A (en)Knowledge distillation-based radar signal modulation mode identification method
CN112651917A (en)Space satellite low-illumination image enhancement method based on generation countermeasure network
CN106651829B (en)A kind of non-reference picture method for evaluating objective quality based on energy and texture analysis
CN114758293B (en) Deep learning crowd counting method based on auxiliary branch optimization and local density block enhancement
CN112734649A (en)Image degradation method and system based on lightweight neural network
CN110942106A (en) A pooled convolutional neural network image classification method based on square mean
CN113627597A (en)Countermeasure sample generation method and system based on general disturbance
CN109448039B (en)Monocular vision depth estimation method based on deep convolutional neural network
CN116433509A (en)Progressive image defogging method and system based on CNN and convolution LSTM network

Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
GR01Patent grant
GR01Patent grant
TR01Transfer of patent right
TR01Transfer of patent right

Effective date of registration:20221228

Address after:A401, 403, 405, Liye Building, No. 20 Qingyuan Road, Xinwu District, Wuxi City, Jiangsu Province, 214000

Patentee after:Wuxi Tuodian Technology Co.,Ltd.

Address before:No.333 Xishan Avenue, Wuxi City, Jiangsu Province

Patentee before:Binjiang College of Nanjing University of Information Engineering

CF01Termination of patent right due to non-payment of annual fee
CF01Termination of patent right due to non-payment of annual fee

Granted publication date:20221108


[8]ページ先頭

©2009-2025 Movatter.jp