CN112686794A - Watermark removal method based on a generative adversarial network - Google Patents

Watermark removal method based on a generative adversarial network
Download PDF

Info

Publication number
CN112686794A
Authority
CN
China
Prior art keywords
watermark
network
picture
attention
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011517946.7A
Other languages
Chinese (zh)
Other versions
CN112686794B (en)
Inventor
张西
王雷
居燕峰
朱坚
陆向东
赵庆勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujia Newland Software Engineering Co ltd
Original Assignee
Fujia Newland Software Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujia Newland Software Engineering Co ltd
Priority to CN202011517946.7A
Publication of CN112686794A
Application granted
Publication of CN112686794B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a watermark removal method based on a generative adversarial network, in the technical field of image processing, comprising the following steps: step S10, building a generator from a recursive attention-cycle network and a context autoencoder; step S20, building a discriminator from the recursive attention-cycle network and PatchGAN; step S30, inputting a number of watermarked sample pictures into the conditional generative adversarial network composed of the generator and the discriminator for adversarial training; and step S40, inputting a watermarked picture into the trained generator to generate a de-watermarked picture. The advantages of the invention are that watermark removal is automated and its quality is greatly improved.

Description

Watermark removal method based on a generative adversarial network
Technical Field
The invention relates to the technical field of image processing, and in particular to a watermark removal method based on a generative adversarial network.
Background
Watermarking is widely used to protect the copyright information of multimedia data such as images and videos, but watermarks added for malicious marketing can spoil the viewing of an image. Hence the need to remove watermarks.
Three methods are mainly used to remove watermarks: 1. directly erasing the watermark characters in the image with a software tool; 2. cropping the watermark away, which is suitable only when the watermark sits at the edge of the image and cropping does not spoil its overall impression; 3. covering the watermark characters with a brush of a similar colour, which is suitable only when the image is a pure colour, for example plain black or white.
All three methods require tools, can only be operated manually, and process one image at a time; the workflow is cumbersome, inefficient, and unsuited to large batches of images with complex backgrounds and complex watermarks. The prior art also builds watermark removers from fully convolutional networks: the input is the watermarked region of an image, and after several convolution layers the network outputs the image without the watermark. However, this approach requires the watermark region to be annotated before removal and is unsuited to images with complex watermarks.
Therefore, providing a watermark removal method based on a generative adversarial network that removes watermarks automatically and improves the removal quality has become an urgent problem.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a watermark removal method based on a generative adversarial network that removes watermarks automatically and improves the removal quality.
The invention is realized as follows. A watermark removal method based on a generative adversarial network comprises the following steps:
step S10, building a generator from a recursive attention-cycle network and a context autoencoder;
step S20, building a discriminator from the recursive attention-cycle network and PatchGAN;
step S30, inputting a number of watermarked sample pictures into the conditional generative adversarial network composed of the generator and the discriminator for adversarial training;
and step S40, inputting a watermarked picture into the trained generator to generate a de-watermarked picture.
Further, in step S10:
the recursive attention-cycle network comprises at least two layers of ResNet, a convolutional LSTM unit, and convolution layers (Convs) that generate an attention map A_N, where N is a positive integer; the recursive attention-cycle network locates the regions from which the watermark must be removed;
the context autoencoder consists of a U-Net structure of 16 Conv-ReLU blocks and removes the watermark from the regions located by the recursive attention-cycle network.
Further, in step S10, the loss function of the generator's generative network is:
L_G = 10^{-2} * L_GAN(O) + L_ATT({A}, M) + L_M({S}, {T}) + L_P(O, T);
L_GAN(O) = log(1 - D(O));
L_ATT({A}, M) = sum_{t=1..N} theta^{N-t} * L_MSE(A_t, M);
L_M({S}, {T}) = sum_i lambda_i * L_MSE(S_i, T_i);
L_P(O, T) = L_MSE(VGG(O), VGG(T));
where L_G is the loss value of the generative network; O is the de-watermarked picture produced by the generator; T is the watermark-free picture corresponding to O; D is the discriminator network; M is the binary mask; L_GAN(O) is the adversarial loss of the generative network; L_ATT({A}, M) is the loss of the recursive attention-cycle network, built from the mean squared error between the attention map A_t output at time step t and the binary mask M, with N = 5 and theta = 0.9; L_MSE(.) is the mean squared error; L_M({S}, {T}) is the multi-scale loss of the context autoencoder, where S_i is the i-th output extracted from the context autoencoder, T_i is the watermark-free picture rescaled to the same size as S_i, and lambda_i is the weight for each picture size; L_P(O, T) is the perceptual loss of the context autoencoder: several features are extracted from the pictures O and T with a trained VGG feature network, and their mean squared errors are summed.
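As a hedged illustration, the attention loss term L_ATT above can be sketched in a few lines of NumPy. This is a minimal sketch, not the patent's implementation; the array shapes and toy inputs are assumptions.

```python
import numpy as np

def attention_loss(attention_maps, mask, theta=0.9):
    # L_ATT({A}, M) = sum over t of theta^(N - t) * MSE(A_t, M),
    # where A_t is the attention map produced at time step t and
    # M is the binary watermark mask.
    n = len(attention_maps)
    return sum(theta ** (n - t) * np.mean((a - mask) ** 2)
               for t, a in enumerate(attention_maps, start=1))
```

With N = 5 time steps whose maps already equal the mask the loss is zero; earlier, less accurate maps are down-weighted by powers of theta, so later time steps matter most.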
Further, in step S20, the loss function of the discriminator network is:
L_D(T, O) = -log(D(T)) - log(1 - D(O)) + gamma * L_map(O, T, A_N);
L_map(O, T, A_N) = L_MSE(D_map(O), A_N) + L_MSE(D_map(T), 0);
where L_D(T, O) is the loss value of the discriminator network; gamma is the weight of the L_map(O, T, A_N) loss; L_map(O, T, A_N) is the difference between the attention mask generated by an inner layer of the discriminator and the attention map; D_map(.) is the process by which the discriminator generates the attention mask; and 0 denotes an attention map containing only zero values.
Further, step S30 specifically comprises:
step S31, inputting a number of watermarked sample pictures into the generator to generate de-watermarked sample pictures;
step S32, inputting the watermarked sample pictures and the de-watermarked sample pictures into the discriminator;
step S33, the discriminator judging whether every de-watermarked sample picture is real and whether it matches the corresponding watermarked sample picture; if so, the adversarial training is finished and the method proceeds to step S40; if not, the method returns to step S31 to continue the adversarial training.
Further, in step S30, the objective function of the conditional generative adversarial network is:
L_cGAN(G, D) = E_{s,x}[log D(s, x)] + E_{s}[log(1 - D(s, G(s)))];
where L_cGAN(G, D) is the objective function of the conditional generative adversarial network; s is a watermarked sample picture; x is the real picture corresponding to the watermarked sample picture; D(s, x) denotes inputting the watermarked sample picture and the real picture into the discriminator; D(s, G(s)) denotes inputting the watermarked sample picture and the de-watermarked sample picture produced by the generator into the discriminator; E_{s,x}[.] is the expectation over the joint distribution of watermarked sample pictures and their corresponding real pictures; and E_{s}[.] is the expectation over the distribution of the de-watermarked sample pictures.
The invention has the following advantages.
1. Watermarked sample pictures are input into a conditional generative adversarial network composed of a generator and a discriminator for adversarial training, and watermarked pictures are then input into the trained generator to produce de-watermarked pictures; that is, the trained generator removes watermarks automatically and in batches, which raises the efficiency of watermark removal and suits large batches of images with complex backgrounds and complex watermarks. The generator is built from a recursive attention-cycle network and a context autoencoder, and the discriminator from the recursive attention-cycle network and PatchGAN: the generator's recursive attention-cycle network produces an attention map that locates the regions from which the watermark must be removed, the context autoencoder removes the watermark from the located regions, and the discriminator concentrates its attention on those regions through the attention map. Unlike the prior art, the watermark regions need not be annotated in advance; all watermarked regions are noticed automatically, which suits pictures with complex watermarks. Watermark removal is thus automated and its quality greatly improved.
2. A conditional generative adversarial network (C-GAN) replaces the traditional generative adversarial network (GAN): the watermarked sample picture and the de-watermarked sample picture produced by the generator are both fed to the discriminator, which must judge not only whether the de-watermarked sample picture is real but also whether it matches the watermarked sample picture, greatly improving the removal quality.
3. PatchGAN replaces the traditional GAN in building the discriminator. It accounts for the influence of different parts of the image on the discriminator, makes the trained model attend to image detail, gives a finer representation of overall difference than a single scalar output, and fuses the local and global features of the image.
Drawings
The invention will be further described with reference to the accompanying drawings and the following examples.
Fig. 1 is a flowchart of the watermark removal method based on a generative adversarial network according to the present invention.
Fig. 2 is a schematic diagram of the structure of the recursive attention-cycle network of the present invention.
Fig. 3 is a schematic diagram of the structure of the context autoencoder of the present invention.
Fig. 4 is a schematic diagram of the structure of the discriminator of the present invention.
Detailed Description
The general idea of the technical scheme in the embodiments of this application is as follows: the traditional picture watermark removal task is recast as an image translation task that converts a watermarked picture into a de-watermarked picture; through continuous adversarial training between the generator and the discriminator, the de-watermarked pictures produced by the generator become real enough, achieving the desired removal quality.
Referring to figs. 1 to 4, a preferred embodiment of the watermark removal method based on a generative adversarial network according to the present invention comprises the following steps:
step S10, building a Generator from a recursive attention-cycle network and a context autoencoder; the context autoencoder takes as input the watermarked picture and the attention map generated by the recursive attention-cycle network, and outputs the de-watermarked picture;
step S20, building a Discriminator from the recursive attention-cycle network and PatchGAN; the generator generates the de-watermarked picture, and the discriminator judges whether it is real;
A GAN discriminator maps its input to a single real number, the probability that the input sample is real. A PatchGAN discriminator instead maps the input to an N x N matrix of patches X, where each entry X_{i,j} is the probability that the corresponding patch is real, and the mean of the X_{i,j} is the discriminator's final output. X is in fact a feature map output by a convolution layer: each entry can be traced back to a region of the original image, and the discriminator's result shows how much that region influences the final output. The discriminator therefore attends to the detail of the generated image, i.e. it is more sensitive to high frequencies, which is why PatchGAN replaces the traditional GAN in building the discriminator.
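The patch-averaging step described above is simple to state in code. A minimal sketch, assuming the discriminator has already produced the N x N patch probability matrix:

```python
import numpy as np

def patchgan_decision(patch_probs):
    # PatchGAN maps the input to an N x N matrix X whose entry X[i, j]
    # is the probability that the corresponding image patch is real;
    # the discriminator's final scalar output is the mean of X.
    return float(np.mean(patch_probs))
```

Because each X[i, j] is tied to one receptive field in the input image, penalising every entry pushes the generator to get local detail right, not just the global average.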
Step S30, inputting a number of watermarked sample pictures into the conditional generative adversarial network (C-GAN) composed of the generator and the discriminator for adversarial training.
A traditional generative adversarial network (GAN) only judges whether a de-watermarked sample picture is real; it cannot ensure that the input watermarked sample picture produced the corresponding de-watermarked sample picture, leaving a loophole that is easy to exploit.
Step S40, inputting a watermarked picture into the trained generator to generate a de-watermarked picture.
In step S10 above:
each time step of the recursive attention-cycle network comprises at least two layers of ResNet, a convolutional LSTM unit, and convolution layers (Convs) that generate an attention map A_N, where N is a positive integer. The recursive attention-cycle network locates the regions from which the watermark must be removed, so the generative network attends more to the watermark region and its surrounding structure, and the discriminator network can better assess the local consistency of the restored region.
The attention map is a matrix of values from 0 to 1, where larger values indicate more attention. It is a non-binary map: attention increases gradually from the non-watermarked area to the watermarked area, and even within the watermarked area the attention varies, because the transparency of the watermark differs from place to place; parts of the watermark that do not completely block the background still convey some background information.
The context autoencoder consists of a U-Net structure of 16 Conv-ReLU blocks and removes the watermark from the regions located by the recursive attention-cycle network.
The U-Net network is an encoder-decoder, but it differs from the traditional encoder-decoder in its skip-layer feature connections. In a traditional GAN generator network, all information must flow through every layer from input to output, which inevitably lengthens training. For the watermark task, although the input image must undergo a complex transformation into the target image, the input and output images share essentially the same structure: low-level information is common to both during image translation and need not be transformed, so forcing it through a traditional GAN generator network is wasteful. Adjusting the network structure to the needs of image translation, a U-Net structure lets this information be shared between input and output. The benefit of the U-Net structure lies in the connections between encoder and decoder parts of the same size, also called skip connections, which give the generative model the ability to skip some of the subsequent steps: low-level detail at each resolution is preserved, and during training part of the information is transmitted directly through these connections.
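The skip connections described above reduce to a one-line operation. A minimal NumPy sketch with channel-first arrays; the shapes are illustrative assumptions, not the patent's layer sizes:

```python
import numpy as np

def skip_connect(decoder_feat, encoder_feat):
    # U-Net skip connection: concatenate the encoder features of the
    # same spatial size onto the decoder features along the channel
    # axis, so low-level detail bypasses the bottleneck unchanged.
    assert decoder_feat.shape[1:] == encoder_feat.shape[1:]
    return np.concatenate([decoder_feat, encoder_feat], axis=0)
```

In a real network the concatenated tensor is then fed to the next decoder convolution; here the concatenation alone shows how detail is transmitted directly rather than through every layer.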
In step S10, the loss function of the generator's generative network is:
L_G = 10^{-2} * L_GAN(O) + L_ATT({A}, M) + L_M({S}, {T}) + L_P(O, T);
L_GAN(O) = log(1 - D(O));
L_ATT({A}, M) = sum_{t=1..N} theta^{N-t} * L_MSE(A_t, M);
L_M({S}, {T}) = sum_i lambda_i * L_MSE(S_i, T_i);
L_P(O, T) = L_MSE(VGG(O), VGG(T));
where L_G is the loss value of the generative network; O is the de-watermarked picture produced by the generator; T is the watermark-free picture corresponding to O; D is the discriminator network; M is the binary mask; L_GAN(O) is the adversarial loss of the generative network; L_ATT({A}, M) is the loss of the recursive attention-cycle network, built from the mean squared error between the attention map A_t output at time step t and the binary mask M, with N = 5 and theta = 0.9 (a larger N is expected to produce a better attention map, but a very large N needs more video memory, so N is set to 5); L_MSE(.) is the mean squared error; L_M({S}, {T}) is the multi-scale loss of the context autoencoder, where S_i is the i-th output extracted from the context autoencoder, T_i is the watermark-free picture rescaled to the same size as S_i, and lambda_i is the weight for each picture size; the lambda_i are set to 0.6, 0.8 and 1 respectively, so that the output pictures of the last, third-from-last and fifth-from-last layers of the context autoencoder are respectively 1/4, 1/2 and 1 times the original size; L_P(O, T) is the perceptual loss of the context autoencoder: several features are extracted from the pictures O and T with a trained VGG feature network, and their mean squared errors are summed.
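The multi-scale term L_M with the weights above can be sketched as follows. The nearest-neighbour downscaling stands in for whatever resizing the real pipeline uses and is an assumption, as are the toy array sizes:

```python
import numpy as np

def multiscale_loss(outputs, target, lambdas=(0.6, 0.8, 1.0)):
    # L_M({S}, {T}) = sum_i lambda_i * MSE(S_i, T_i), where the S_i are
    # intermediate decoder outputs at 1/4, 1/2 and full resolution and
    # T_i is the clean target resized to match S_i.
    loss = 0.0
    for s, lam in zip(outputs, lambdas):
        factor = target.shape[0] // s.shape[0]
        t = target[::factor, ::factor]  # crude nearest-neighbour resize
        loss += lam * np.mean((s - t) ** 2)
    return loss
```

Weighting the full-resolution output most strongly (lambda = 1) while still supervising the coarser scales encourages the decoder to be correct at every resolution, not only at the end.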
In step S20, the loss function of the discriminator network is:
L_D(T, O) = -log(D(T)) - log(1 - D(O)) + gamma * L_map(O, T, A_N);
L_map(O, T, A_N) = L_MSE(D_map(O), A_N) + L_MSE(D_map(T), 0);
where L_D(T, O) is the loss value of the discriminator network; gamma is the weight of the L_map(O, T, A_N) loss; L_map(O, T, A_N) is the difference between the attention mask generated by an inner layer of the discriminator and the attention map; D_map(.) is the process by which the discriminator generates the attention mask; and 0 denotes an attention map containing only zero values.
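Putting the two terms together, the discriminator loss can be sketched numerically. The text above does not fix gamma, so the 0.05 default here is purely an assumption for illustration:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, map_fake, map_real, attn_map,
                       gamma=0.05):
    # L_D(T, O) = -log D(T) - log(1 - D(O)) + gamma * L_map(O, T, A_N),
    # L_map = MSE(D_map(O), A_N) + MSE(D_map(T), 0): the mask extracted
    # from the fake picture should match the attention map, while the
    # mask extracted from the real picture should be all zeros.
    l_map = np.mean((map_fake - attn_map) ** 2) + np.mean(map_real ** 2)
    return -np.log(d_real) - np.log(1.0 - d_fake) + gamma * l_map
```

d_real and d_fake are the scalar discriminator outputs D(T) and D(O); the map arguments stand in for the inner-layer attention masks.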
Step S30 specifically comprises:
step S31, inputting a number of watermarked sample pictures into the generator to generate de-watermarked sample pictures;
step S32, inputting the watermarked sample pictures and the de-watermarked sample pictures into the discriminator;
step S33, the discriminator judging whether every de-watermarked sample picture is real and whether it matches the corresponding watermarked sample picture; if so, the adversarial training is finished and the method proceeds to step S40; if not, the method returns to step S31 to continue the adversarial training.
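Steps S31 to S33 form a loop. The sketch below shows only its control flow; the generator and discriminator are hypothetical callables standing in for the real networks, and a real implementation would also take gradient steps on the losses above inside each round:

```python
def adversarial_training(samples, generator, discriminator, max_rounds=100):
    # S31: generate a de-watermarked picture for every watermarked sample.
    # S32: feed each (watermarked, de-watermarked) pair to the discriminator.
    # S33: if every pair is judged real and matching, training is done;
    #      otherwise return to S31 and keep training.
    for rounds in range(1, max_rounds + 1):
        outputs = [generator(s) for s in samples]        # S31
        verdicts = [discriminator(s, o)                  # S32
                    for s, o in zip(samples, outputs)]
        if all(verdicts):                                # S33
            return rounds
    return max_rounds
```

The return value is simply the number of rounds run, which makes the stopping condition easy to exercise with stub callables.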
In step S30, the objective function of the conditional generative adversarial network is:
L_cGAN(G, D) = E_{s,x}[log D(s, x)] + E_{s}[log(1 - D(s, G(s)))];
where L_cGAN(G, D) is the objective function of the conditional generative adversarial network; s is a watermarked sample picture; x is the real picture corresponding to the watermarked sample picture; D(s, x) denotes inputting the watermarked sample picture and the real picture into the discriminator; D(s, G(s)) denotes inputting the watermarked sample picture and the de-watermarked sample picture produced by the generator into the discriminator; E_{s,x}[.] is the expectation over the joint distribution of watermarked sample pictures and their corresponding real pictures; and E_{s}[.] is the expectation over the distribution of the de-watermarked sample pictures.
The generator of the C-GAN algorithm normally generates an image from random noise, but here the random noise would be drowned out by the watermarked sample picture, so the random-noise input is omitted.
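A minibatch estimate of the objective above can be written directly from the formula. The probabilities in the usage are made-up discriminator outputs, not values from the patent:

```python
import numpy as np

def cgan_objective(d_real_pairs, d_fake_pairs):
    # L_cGAN(G, D) = E_{s,x}[log D(s, x)] + E_{s}[log(1 - D(s, G(s)))],
    # estimated over a minibatch of discriminator outputs. D is trained
    # to push this value up; G is trained to push the second term down.
    d_real_pairs = np.asarray(d_real_pairs, dtype=float)
    d_fake_pairs = np.asarray(d_fake_pairs, dtype=float)
    return float(np.mean(np.log(d_real_pairs))
                 + np.mean(np.log(1.0 - d_fake_pairs)))
```

When the discriminator is maximally confused, outputting 0.5 for both real and generated pairs, the objective settles at -2 log 2, the usual GAN equilibrium value.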
In summary, the invention has the following advantages.
1. Watermarked sample pictures are input into a conditional generative adversarial network composed of a generator and a discriminator for adversarial training, and watermarked pictures are then input into the trained generator to produce de-watermarked pictures; that is, the trained generator removes watermarks automatically and in batches, which raises the efficiency of watermark removal and suits large batches of images with complex backgrounds and complex watermarks. The generator is built from a recursive attention-cycle network and a context autoencoder, and the discriminator from the recursive attention-cycle network and PatchGAN: the generator's recursive attention-cycle network produces an attention map that locates the regions from which the watermark must be removed, the context autoencoder removes the watermark from the located regions, and the discriminator concentrates its attention on those regions through the attention map. Unlike the prior art, the watermark regions need not be annotated in advance; all watermarked regions are noticed automatically, which suits pictures with complex watermarks. Watermark removal is thus automated and its quality greatly improved.
2. A conditional generative adversarial network (C-GAN) replaces the traditional generative adversarial network (GAN): the watermarked sample picture and the de-watermarked sample picture produced by the generator are both fed to the discriminator, which must judge not only whether the de-watermarked sample picture is real but also whether it matches the watermarked sample picture, greatly improving the removal quality.
3. PatchGAN replaces the traditional GAN in building the discriminator. It accounts for the influence of different parts of the image on the discriminator, makes the trained model attend to image detail, gives a finer representation of overall difference than a single scalar output, and fuses the local and global features of the image.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that these embodiments are illustrative only and do not limit the scope of the invention; equivalent modifications and variations made by those skilled in the art without departing from the spirit of the invention fall within the scope defined by the appended claims.

Claims (6)

1. A watermark removal method based on a generative adversarial network, characterized in that it comprises the following steps:
step S10, building a generator from a recursive attention-cycle network and a context autoencoder;
step S20, building a discriminator from the recursive attention-cycle network and PatchGAN;
step S30, inputting a number of watermarked sample pictures into the conditional generative adversarial network composed of the generator and the discriminator for adversarial training;
and step S40, inputting a watermarked picture into the trained generator to generate a de-watermarked picture.
2. The watermark removal method based on a generative adversarial network according to claim 1, characterized in that, in step S10:
the recursive attention-cycle network comprises at least two layers of ResNet, a convolutional LSTM unit, and convolution layers (Convs) that generate an attention map A_N, where N is a positive integer; the recursive attention-cycle network locates the regions from which the watermark must be removed;
the context autoencoder consists of a U-Net structure of 16 Conv-ReLU blocks and removes the watermark from the regions located by the recursive attention-cycle network.
3. The watermark removal method based on a generative adversarial network according to claim 1, characterized in that, in step S10, the loss function of the generator's generative network is:
L_G = 10^{-2} * L_GAN(O) + L_ATT({A}, M) + L_M({S}, {T}) + L_P(O, T);
L_GAN(O) = log(1 - D(O));
L_ATT({A}, M) = sum_{t=1..N} theta^{N-t} * L_MSE(A_t, M);
L_M({S}, {T}) = sum_i lambda_i * L_MSE(S_i, T_i);
L_P(O, T) = L_MSE(VGG(O), VGG(T));
where L_G is the loss value of the generative network; O is the de-watermarked picture produced by the generator; T is the watermark-free picture corresponding to O; D is the discriminator network; M is the binary mask; L_GAN(O) is the adversarial loss of the generative network; L_ATT({A}, M) is the loss of the recursive attention-cycle network, built from the mean squared error between the attention map A_t output at time step t and the binary mask M, with N = 5 and theta = 0.9; L_MSE(.) is the mean squared error; L_M({S}, {T}) is the multi-scale loss of the context autoencoder, where S_i is the i-th output extracted from the context autoencoder, T_i is the watermark-free picture rescaled to the same size as S_i, and lambda_i is the weight for each picture size; L_P(O, T) is the perceptual loss of the context autoencoder: several features are extracted from the pictures O and T with a trained VGG feature network, and their mean squared errors are summed.
4. The watermark removal method based on a generative adversarial network according to claim 3, characterized in that, in step S20, the loss function of the discriminator network is:
L_D(T, O) = -log(D(T)) - log(1 - D(O)) + gamma * L_map(O, T, A_N);
L_map(O, T, A_N) = L_MSE(D_map(O), A_N) + L_MSE(D_map(T), 0);
where L_D(T, O) is the loss value of the discriminator network; gamma is the weight of the L_map(O, T, A_N) loss; L_map(O, T, A_N) is the difference between the attention mask generated by an inner layer of the discriminator and the attention map; D_map(.) is the process by which the discriminator generates the attention mask; and 0 denotes an attention map containing only zero values.
5. The watermark removal method based on a generative adversarial network according to claim 1, characterized in that step S30 specifically comprises:
step S31, inputting a number of watermarked sample pictures into the generator to generate de-watermarked sample pictures;
step S32, inputting the watermarked sample pictures and the de-watermarked sample pictures into the discriminator;
step S33, the discriminator judging whether every de-watermarked sample picture is real and whether it matches the corresponding watermarked sample picture; if so, the adversarial training is finished and the method proceeds to step S40; if not, the method returns to step S31 to continue the adversarial training.
6. The watermark removal method based on a generative adversarial network according to claim 5, characterized in that, in step S30, the objective function of the conditional generative adversarial network is:
L_cGAN(G, D) = E_{s,x}[log D(s, x)] + E_{s}[log(1 - D(s, G(s)))];
where L_cGAN(G, D) is the objective function of the conditional generative adversarial network; s is a watermarked sample picture; x is the real picture corresponding to the watermarked sample picture; D(s, x) denotes inputting the watermarked sample picture and the real picture into the discriminator; D(s, G(s)) denotes inputting the watermarked sample picture and the de-watermarked sample picture produced by the generator into the discriminator; E_{s,x}[.] is the expectation over the joint distribution of watermarked sample pictures and their corresponding real pictures; and E_{s}[.] is the expectation over the distribution of the de-watermarked sample pictures.
CN202011517946.7A | 2020-12-21 (priority) | 2020-12-21 (filed) | Watermark removal method based on generative adversarial network | Active | CN112686794B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011517946.7A | 2020-12-21 | 2020-12-21 | Watermark removal method based on generative adversarial network (CN112686794B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011517946.7A | 2020-12-21 | 2020-12-21 | Watermark removal method based on generative adversarial network (CN112686794B)

Publications (2)

Publication Number | Publication Date
CN112686794A (true) | 2021-04-20
CN112686794B | 2023-06-02

Family

ID=75449752

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011517946.7A (Active) | CN112686794B (en) | 2020-12-21 | 2020-12-21

Country Status (1)

Country | Link
CN | CN112686794B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20170243318A1 (en)* | 2011-04-26 | 2017-08-24 | Digimarc Corporation | Salient point-based arrangements
CN108805789A (en)* | 2018-05-29 | 2018-11-13 | 厦门市美亚柏科信息股份有限公司 | Method, apparatus, equipment and readable medium for removing a watermark based on an adversarial neural network
CN111062903A (en)* | 2019-12-06 | 2020-04-24 | 携程计算机技术(上海)有限公司 | Automatic processing method and system for image watermarks, electronic equipment and storage medium
CN111105336A (en)* | 2019-12-04 | 2020-05-05 | 山东浪潮人工智能研究院有限公司 | Image watermark removal method based on an adversarial network
CN111696046A (en)* | 2019-03-13 | 2020-09-22 | 北京奇虎科技有限公司 | Watermark removal method and device based on a generative adversarial network


Cited By (3) (* cited by examiner, † cited by third party):
- CN115272039A* — priority 2022-06-13, published 2022-11-01, Guangdong Polytechnic Normal University: A GAN-based watermark attack method and system, digital watermark embedding method
- CN115908387A* — priority 2022-12-23, published 2023-04-04, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences: Pig farm stain monitoring image decontamination method, device and electronic equipment
- CN115908387B* — same application, granted 2024-12-17: Live pig farm stain monitoring image decontamination method and device and electronic equipment



Legal Events:
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant
