CN113658066B - Image processing method and device, and electronic equipment - Google Patents

Image processing method and device, and electronic equipment

Info

Publication number
CN113658066B
Authority
CN
China
Prior art keywords
image
filter
color
training
processed
Prior art date
Legal status
Active
Application number
CN202110912037.1A
Other languages
Chinese (zh)
Other versions
CN113658066A (en)
Inventor
毛芳勤
郭桦
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110912037.1A
Publication of CN113658066A
Priority to PCT/CN2022/110522 (WO2023016365A1)
Application granted
Publication of CN113658066B
Status: Active
Anticipated expiration


Abstract

The application discloses an image processing method and device and electronic equipment, and belongs to the technical field of artificial intelligence. The method comprises the following steps: acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing with a target filter; determining the target filter based on the first filter image and the image to be processed; determining, based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, filter weight values respectively corresponding to the pixels in the image to be processed; and applying the target filter to each pixel in the image to be processed with the filter weight value corresponding to that pixel, to obtain a second filter image.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an image processing method and device and electronic equipment.
Background
In current photographing and album applications, filters have become an essential image processing function. Essentially, a filter adjusts the colors of a picture in order to change the picture's style.
To allow a user to apply the filter effect of one image to a target image, filter migration technology has emerged: the filter on a picture is extracted and applied to a new picture, so that the corresponding filter can be obtained without downloading dedicated software.
However, a filter usually corresponds to a solid-color image of a single color. When the extracted filter is applied to other images, it produces a special color effect, but also an effect similar to a mask, so that the color effect on the image looks quite hard and unnatural.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and device and electronic equipment, which can solve the prior-art problem that the color effect is hard and unnatural when a filter extracted from a filter image is applied to other images.
In a first aspect, an embodiment of the present application provides an image processing method, including:
Acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by a target filter;
determining the target filter based on the first filter image and the image to be processed;
determining filter weight values respectively corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
and applying the target filter to each pixel in the image to be processed with the filter weight value corresponding to that pixel, to obtain a second filter image.
In a second aspect, an embodiment of the present application provides an image processing apparatus including:
The apparatus comprises a first acquisition module, configured to acquire a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing with a target filter;
The first determining module is used for determining the target filter based on the first filter image and the image to be processed;
the second determining module is used for determining filter weight values respectively corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
And the filter module is used for applying the target filter to each pixel in the image to be processed by using the filter weight value corresponding to each pixel respectively to obtain a second filter image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the image to be processed, which has not been filter-processed. By taking the image to be processed into consideration, the interference that the content colors of the first filter image itself would cause if the target filter were extracted from the first filter image alone can be avoided. Then, filter weight values respectively corresponding to the pixels in the image to be processed are determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, to obtain a second filter image after filter processing.
Drawings
Fig. 1 is a flowchart of steps of an image processing method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of an actual application of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a process of processing pictures by a deep learning network model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a training process of a deep learning network model according to an embodiment of the present application;
fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an image processing method according to an embodiment of the present application includes:
Step 101: and acquiring a first filter image and an image to be processed.
In this step, the first filter image includes an image processed with a target filter; that is, the first filter image may be understood as an image that exhibits a color effect after a filter has been applied to some image, where the target filter is the applied filter. Applying a filter to an image creates a special color effect on the image, and a filter may be understood as a piece of color data, e.g., a solid-color map of a single color. Thus, each filter may indicate a color. The color effect here may be an effect produced by adjusting color, texture, or the like. Of course, for images containing faces, the color effects may also include effects of different make-up. The image to be processed may be any image selected by the user; specifically, it is an image selected by the user that has not been filter-processed. The user wants to apply the target filter to this image to achieve the same color effect as the first filter image.
Step 102: the target filter is determined based on the first filter image and the image to be processed.
In this step, a filter image with a color effect is generated after a filter is applied to an image, and the applied filter can be extracted from the images before and after the filter is applied. Here, the image to be processed stands in for the original image corresponding to the first filter image, i.e., the image that was filter-processed to obtain the first filter image. Of course, the filter applied to an image can also be obtained by extracting it directly from the filter image; that is, the filter color in the first filter image can be extracted to obtain the target filter.
Step 103: and determining filter weight values corresponding to the pixels in the image to be processed respectively based on the difference between the colors of the pixels in the image to be processed and the color indicated by the target filter.
In this step, the difference between colors can be understood as the difference between the color values of different colors in the same color space. The distance between two colors in the same color space can be used as a measurement of the difference between them: the larger the distance, the larger the difference between the colors; similarly, the smaller the distance, the smaller the difference, and a distance of zero means that the two colors are identical and have no difference.
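As an illustration only, the snippet below sketches this measurement as a Euclidean distance between color values; the use of Lab coordinates here is an assumption consistent with the color model discussed later in this application.

```python
# A minimal sketch: the difference between two colors measured as the
# Euclidean distance between their values in the same color space.
# A distance of zero means the two colors are identical.
import numpy as np

def color_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    return float(np.linalg.norm(c1 - c2))

pixel_color = np.array([52.0, -12.0, 30.0])   # hypothetical Lab color of a pixel
filter_color = np.array([50.0, -10.0, 28.0])  # hypothetical color indicated by the filter
difference = color_distance(pixel_color, filter_color)
```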
The filter weight value of each pixel is associated with the difference between the color of that pixel and the color indicated by the target filter; therefore, the filter weight values corresponding to pixels of different colors are different. It will be appreciated that the image to be processed is made up of a large number of pixels, each of which may be the same or different in color; pixels of the same color correspond to the same filter weight value, and pixels of different colors correspond to different filter weight values.
Step 104: and applying the target filter to each pixel in the image to be processed by using the filter weight value corresponding to each pixel respectively to obtain a second filter image.
In this step, the target filter is applied to each pixel in the image to be processed according to the filter weight value corresponding to that pixel; that is, the image to be processed is filter-processed through the target filter and the filter weight values. Specifically, for a target pixel of the image to be processed, the target filter is applied to the target pixel with the filter weight value corresponding to the target pixel, where the target pixel ranges over all pixels of the image to be processed, i.e., every pixel in the image to be processed needs to be filter-processed. It will be appreciated that the filter weight value is a specific value: when the target filter is applied to a pixel in the image to be processed with a certain filter weight value, the color value of the target filter is multiplied by the filter weight value to obtain a new color value, and this new color value is then applied to the pixel in the image to be processed. For example, if the color value of the target filter is 100 and the filter weight value of the target pixel on the image to be processed is 0.5, the process of applying the target filter to the target pixel includes: multiplying the color value 100 by the filter weight value 0.5 to obtain the new color value 50, and applying the color value 50 to the target pixel. Because the filter weight values corresponding to pixels of different colors in the image to be processed are different, the color values applied to pixels of different colors are different, and the color effects on pixels of different colors in the second filter image are different. For example, when the color indicated by the target filter is blue, applying the target filter with a larger filter weight value to the pixels of the blue-sky part of the image to be processed makes the sky bluer, while applying it with a smaller filter weight value to the pixels of a building part gives the building only a light, subtle blue, so that the overall color effect of the second filter image after filter processing is varied and layered rather than uniform.
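A minimal sketch of this per-pixel application follows; it assumes that "applying" the weighted filter color means adding it to the pixel value, which matches the reconstruction formula used later for the cyclic consistency loss.

```python
# Per-pixel filter application: scale the filter color by each pixel's
# filter weight value and add the result to that pixel.
import numpy as np

def apply_filter(image: np.ndarray, filter_color: np.ndarray,
                 weights: np.ndarray) -> np.ndarray:
    # image: (H, W, 3); filter_color: (3,); weights: (H, W) per-pixel weight values
    return image + weights[..., None] * filter_color   # second filter image

# Worked example from the text: color value 100 with weight 0.5 applies 50.
img = np.full((2, 2, 3), 60.0)
second_filter_image = apply_filter(
    img, np.array([100.0, 100.0, 100.0]), np.full((2, 2), 0.5))
```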
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the image to be processed, which has not been filter-processed. By taking the image to be processed into consideration, the interference that the content colors of the first filter image itself would cause if the target filter were extracted from the first filter image alone can be avoided. Then, filter weight values respectively corresponding to the pixels in the image to be processed are determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, to obtain a second filter image after filter processing.
Optionally, determining the target filter based on the first filter image and the image to be processed includes:
and inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on the deep learning network to obtain the target network model. The first filter image and the image to be processed are model inputs of a target network model.
And acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model.
In this step, the first color feature is a data feature associated with the color of the first filter image, and the second color feature is a data feature associated with the color of the image to be processed. Here, the color features may be an intermediate-stage output of the target network model rather than its final model output.
And acquiring a characteristic difference value of the first color characteristic and the second color characteristic through the target network model, and determining the characteristic difference value as a target filter.
In this step, the feature difference obtained by subtracting the first color feature from the second color feature may represent the target filter, and the color corresponding to the feature difference may be used as the color indicated by the target filter. It is understood that the feature difference may represent the color difference between the image to be processed and the first filter image, so that by processing the image to be processed with this color difference, an image having no color difference from the first filter image can be obtained. Therefore, the feature difference value can be regarded as the target filter of the first filter image.
In the embodiment of the application, the target filter is extracted by utilizing the pre-trained target network model, and the first filter image and the image to be processed are input as the model of the target network model, so that the target filter can be quickly and accurately obtained.
Optionally, acquiring, by the target network model, the first color feature of the first filter image and the second color feature of the image to be processed includes:
And acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed by an image feature extraction module of the target network model.
In this step, in order to facilitate image processing, the image may be first converted into a mathematical expression, and the first filter image and the image to be processed may be represented by using the mathematical expression. The first image feature vector is a mathematical expression of the first filter image, and may represent features of each dimension of the first filter image. The second image feature vector is a mathematical expression of the image to be processed, and may represent features of each dimension of the image to be processed. Wherein the dimensions of the image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations, so as to obtain the image feature vector of an image. For example, a first formula is used to calculate the first image feature vector and the second image feature vector.
A first formula: outFeature = wn * (wn-1 * ( ... (w1 * x + b1) ... ) + bn-1) + bn, where x is the input and outFeature is the output; outFeature is the first image feature vector when x is the first filter image, and the second image feature vector when x is the image to be processed; w1, ..., wn are n convolution kernels and b1, ..., bn are n offset values, where n is a positive integer.
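The following PyTorch sketch illustrates such a stack; the framework, layer count, and channel sizes are illustrative assumptions, since the formula itself only specifies n convolutions with offset values.

```python
import torch
import torch.nn as nn

def stacked_conv(channels: list[int]) -> nn.Sequential:
    """n stacked convolutions: wn*( ... (w1*x + b1) ... ) + bn."""
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
              for c_in, c_out in zip(channels[:-1], channels[1:])]
    return nn.Sequential(*layers)

# Image feature extraction module: x is an image tensor (3 Lab channels assumed).
feature_extractor = stacked_conv([3, 32, 64])
x = torch.rand(1, 3, 256, 256)        # placeholder input image
out_feature = feature_extractor(x)    # image feature vector
```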
And respectively inputting the first image feature vector and the second image feature vector into a color feature extraction module of the target network model, and acquiring a first color feature of the first filter image and a second color feature of the image to be processed.
In this step, the color feature extraction module may likewise be implemented as a series of stacked mathematical operations, so as to obtain the color feature from an image feature vector. For example, a second formula is used to calculate the first color feature and the second color feature.
The second formula: outColor = cwn * (cwn-1 * ( ... (cw1 * outFeature + cb1) ... ) + cbn-1) + cbn, where outFeature is the input and outColor is the output; outColor is the first color feature when outFeature is the first image feature vector, and the second color feature when outFeature is the second image feature vector; cw1, ..., cwn are n convolution kernels and cb1, ..., cbn are n offset values, where n is a positive integer.
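Continuing the sketch above, the color feature extraction module follows the same stacked pattern with its own kernels cw1…cwn and offsets cb1…cbn; the output channel sizes are again assumptions.

```python
# Color feature extraction module: consumes an image feature vector and
# produces a color feature, as in the second formula.
color_extractor = stacked_conv([64, 32, 3])
out_color = color_extractor(out_feature)  # first color feature when out_feature
                                          # came from the first filter image
```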
In the embodiment of the application, a stepwise processing mode is adopted, the image feature vector is obtained first, and then the color features related to the colors are screened based on the image feature vector, so that the whole process is convenient, simple and easy to realize.
Optionally, determining the filter weight value corresponding to each pixel in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter includes:
and inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on the deep learning network to obtain the target network model. The first filter image and the image to be processed are model inputs of a target network model.
Acquiring a first image feature vector of a first filter image and a second image feature vector of an image to be processed through an image feature extraction module of a target network model;
In this step, in order to facilitate image processing, the image may be first converted into a mathematical expression, and the first filter image and the image to be processed may be represented by using the mathematical expression. The first image feature vector is a mathematical expression of the first filter image, and may represent features of each dimension of the first filter image. The second image feature vector is a mathematical expression of the image to be processed, and may represent features of each dimension of the image to be processed. Wherein the dimensions of the image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations, so as to obtain the image feature vector of an image. For example, the first formula in the embodiment of the application is used to calculate the first image feature vector and the second image feature vector, which is not repeated here.
And obtaining a vector difference value of the first image feature vector and the second image feature vector through the target network model.
In this step, in the case where the first image feature vector and the second image feature vector have been acquired, a vector difference value can be obtained by subtracting the two image feature vectors.
And inputting the absolute value of the vector difference value into a weight branching module of the target network model, and acquiring filter weight values corresponding to pixels in the image to be processed, wherein the closer the color of the target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
In this step, the weight branch module may likewise be implemented as a series of stacked mathematical operations, so as to obtain the filter weight values. For example, a third formula may be used to calculate the filter weight values for different portions of the image to be processed. The third formula may be: WeightImg = wwn * (wwn-1 * ( ... (ww1 * |outFeature1 - outFeature2| + wb1) ... ) + wbn-1) + wbn, where |outFeature1 - outFeature2| is the absolute value of the vector difference, WeightImg is the output and represents the filter weight values corresponding to different pixels of the image to be processed, ww1, ..., wwn are n convolution kernels, and wb1, ..., wbn are n offset values, where n is a positive integer. It is understood that WeightImg may also be regarded as a global weight map of the image to be processed, containing the filter weight value corresponding to each pixel of the image to be processed.
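Continuing the same sketch, the weight branch consumes the absolute feature difference and emits a per-pixel weight map; the single-channel output is an assumption.

```python
# Weight branch module (third formula): WeightImg from |outFeature1 - outFeature2|.
weight_branch = stacked_conv([64, 32, 1])

feat_filter = feature_extractor(torch.rand(1, 3, 256, 256))  # first filter image
feat_source = feature_extractor(torch.rand(1, 3, 256, 256))  # image to be processed
weight_img = weight_branch(torch.abs(feat_filter - feat_source))  # global weight map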
In the embodiment of the application, the closer the color of the target pixel in the image to be processed is to the color indicated by the target filter, the larger the filter weight value of the target pixel is, so that the more obvious the color effect is when the target filter acts on the target pixel.
Optionally, before acquiring the first filter image and the image to be processed, the image processing method further includes:
Acquiring an initial network model and sample data; wherein the initial network model comprises: the system comprises an image feature extraction module, a color feature extraction module, a weight branching module and a content extraction module, wherein sample data comprise: the original image, the filter result image and the filter color image, wherein the filter result image is an image obtained by adding the filter color image to the original image.
In this step, the initial network model may be regarded as an untrained target network model. Different modules in the initial network model are used for executing different functions, such as an image feature extraction module is used for extracting image feature vectors of images, a color feature extraction module is used for extracting color features from the image feature vectors, a weight branching module is used for extracting weight values of pixels in an original image, and a content extraction module is used for extracting image content from the image feature vectors through decoupling learning.
Respectively inputting the filter result diagram and the original diagram into an image feature extraction module to obtain a first training image feature vector and a second training image feature vector;
In this step, each image may be in the RGB color mode, an industry color standard obtained by varying the three color channels of red (R), green (G), and blue (B) and superimposing them; RGB represents the colors of the red, green, and blue channels. Alternatively, the Lab color model may be used; Lab is a device-independent color model, and also a color model based on physiological characteristics. The Lab color model consists of three elements: the luminance L and the two color channels a and b. Channel a ranges from dark green (low values) through gray (middle values) to bright pink (high values); channel b ranges from bright blue (low values) through gray (middle values) to yellow (high values). Here, to facilitate decoupling color from luminance, each image in the RGB color mode is converted into the Lab color model, that is, the RGB space of the image is converted into Lab space. When extracting the training image feature vectors, the extraction may be performed based on the first formula described above.
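A sketch of this conversion step is shown below, assuming scikit-image is available; any equivalent colorimetric RGB-to-Lab conversion would serve the same purpose.

```python
# Convert an RGB image into the Lab color model to decouple color from luminance.
import numpy as np
from skimage import color

rgb = np.random.rand(256, 256, 3)   # placeholder RGB image with values in [0, 1]
lab = color.rgb2lab(rgb)            # L in [0, 100]; a and b are the color channels
```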
Respectively inputting the first training image feature vector and the second training image feature vector into a color feature extraction module to obtain a first training color feature and a second training color feature;
in this step, color features are extracted from the training image feature vector, and extraction may be performed based on the second formula described above.
And determining the difference value obtained by subtracting the first training color feature from the second training color feature as the predicted filter color.
In this step, the predicted filter color is calculated, namely the color indicated by the filter that was applied when the original graph was filter-processed to obtain the filter result graph.
And inputting the target absolute value into a weight branching module to obtain training filter weight values corresponding to each pixel in the original image, wherein the target absolute value is the absolute value of a difference value obtained by subtracting the second training image feature vector from the first training image feature vector.
In this step, the training filter weight value corresponding to each pixel in the original image may be extracted based on the third formula.
Inputting the first training image feature vector into a content extraction module to obtain a training content image related to the image content of the filter result image;
In this step, the image content of the filter result graph, i.e., the training content graph, may be extracted based on a fourth formula: ImageContent = iwn * (iwn-1 * ( ... (iw1 * outFeature + ib1) ... ) + ibn-1) + ibn, where outFeature is the input, ImageContent is the output, iw1, ..., iwn are n convolution kernels, and ib1, ..., ibn are n offset values, where n is a positive integer.
Determining model loss based on the predicted filter color, the training filter weight value, the training content map, and the filter color map, wherein the model loss includes image content loss, image color loss, and cyclic consistency loss;
In this step, the image content loss measures the content difference; specifically, image content loss = |ImgContent - ImgSource|, where ImgContent represents the training content graph and ImgSource represents the original graph. The image color loss measures the filter color difference by taking the difference between the value output by the color branch (the predicted filter color) and the pre-obtained label color (the filter color graph); specifically, color loss = |ColorGroundtruth - ColorPredicted|, where ColorGroundtruth represents the filter color graph and ColorPredicted denotes the predicted filter color. For the cyclic consistency loss, the value output by the color branch (the predicted filter color) is combined with the original graph using the training filter weight values output by the weight branch module for different pixels: the predicted color is multiplied by the training filter weight values and added to the original graph to obtain a result graph, and the L1 loss between this result graph and the input filter result graph measures the reconstruction accuracy; specifically, cyclic consistency loss = |ImgTarget - (ImgSource + ColorPredicted × WeightImg)|, where ImgTarget represents the filter result graph, ImgSource represents the original graph, ColorPredicted denotes the predicted filter color, and WeightImg denotes the training filter weight values.
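A sketch of the three loss terms follows. It assumes the reconstruction result = ImgSource + ColorPredicted × WeightImg as reconstructed above, and mean-reduced L1 terms; both are assumptions about details the text leaves open.

```python
# Model loss: image content loss + image color loss + cyclic consistency loss.
def model_loss(img_source, img_target, color_gt, color_pred, weight_img, img_content):
    content_loss = torch.abs(img_content - img_source).mean()  # image content loss
    color_loss = torch.abs(color_gt - color_pred).mean()       # image color loss
    result = img_source + color_pred * weight_img              # reconstructed filter result graph
    cycle_loss = torch.abs(img_target - result).mean()         # cyclic consistency loss (L1)
    return content_loss + color_loss + cycle_loss
```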
Updating model parameters, continuing training based on new sample data until model loss converges, and determining the initial network model after training is finished as a target network model.
In this step, the content loss, the color loss, and the cyclic consistency loss are computed, the partial derivatives with respect to the convolution kernels in the above formulas are calculated, and each convolution kernel is then updated: the new convolution kernel is obtained from the old convolution kernel by stepping along the partial derivative computed for that kernel in the previous training pass (in standard gradient descent, the partial derivative scaled by a learning rate is subtracted). Training proceeds in this way and ends after the model loss converges, at which point the model parameters, namely the convolution kernels in the above formulas, are saved. Here, model training needs to use different sample data, and the model parameters are updated once per training iteration.
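One training iteration under these assumptions might be sketched as follows, reusing the modules and model_loss sketched above; the optimizer choice is an assumption, and the subtraction direction for the predicted filter color is chosen so that ImgSource + ColorPredicted × WeightImg reconstructs the filter result graph.

```python
content_extractor = stacked_conv([64, 32, 3])   # content extraction module (fourth formula)
params = (list(feature_extractor.parameters()) + list(color_extractor.parameters())
          + list(weight_branch.parameters()) + list(content_extractor.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)    # gradient descent over the convolution kernels

def train_step(img_source, img_target, color_gt):
    feat_target = feature_extractor(img_target)   # first training image feature vector
    feat_source = feature_extractor(img_source)   # second training image feature vector
    # Predicted filter color: difference of the two branches' color features.
    color_pred = color_extractor(feat_target) - color_extractor(feat_source)
    weight_img = weight_branch(torch.abs(feat_target - feat_source))
    img_content = content_extractor(feat_target)  # training content graph
    loss = model_loss(img_source, img_target, color_gt,
                      color_pred, weight_img, img_content)
    optimizer.zero_grad()
    loss.backward()     # partial derivatives with respect to each kernel
    optimizer.step()    # update each convolution kernel
    return loss.item()
```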
In the embodiment of the application, model training is performed based on an original image, a filter result image and a filter color image in sample data, model parameters are updated based on the result of each training, and training is stopped after model loss converges, so that a trained target network model is obtained.
Optionally, inputting the first filter image and the image to be processed into a preset target network model includes:
In the case where the first filter image and the image to be processed are in the RGB color mode, the first filter image and the image to be processed are respectively converted into the Lab color model.
In the embodiment of the application, the color space of the image is converted, so that the subsequent extraction of the color features is convenient.
Fig. 2 is a schematic diagram of practical application of an image processing method according to an embodiment of the present application, where the method includes:
step 201: and obtaining a user image and a target filter image of a filter which the user wants to migrate, wherein the user image is the image to be processed in the embodiment of the application, and the target filter image is the first filter image in the embodiment of the application.
Step 202: and inputting the acquired picture into a deep learning network model.
Step 203: and obtaining a result diagram after the user diagram migrates the filter, namely a second filter diagram in the embodiment of the application.
Fig. 3 is a schematic diagram of the processing of pictures by the deep learning network model. The target filter image and the user image are each input into an image feature extraction module to obtain their respective image feature vectors; the image feature vectors are then input into the corresponding color feature extraction modules to obtain their respective color features. The color features obtained by the original-image processing branch are subtracted from the color features obtained by the filter-image processing branch to obtain the predicted color. Meanwhile, the absolute value of the difference between the two image feature vectors is input into the weight branch module of the original-image processing branch to obtain a global weight map. From the predicted color, the global weight map, and the user image, the result graph after the filter is migrated can be obtained: result graph = ImgSource + ColorPredicted × WeightImg, where ImgSource represents the user image, ColorPredicted denotes the predicted color, and WeightImg represents the global weight map. The processes of obtaining the image feature vector, the color feature, and the global weight map may refer to the first, second, and third formulas in the above embodiment and are not repeated here. It is noted that the convolution kernels are shared between the two image feature extraction modules in Fig. 3, and likewise between the two color feature extraction modules.
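Putting the pieces together, inference as in Fig. 3 can be sketched as below; sharing the convolution kernels between the two branches is modeled by reusing the same modules, and the additive reconstruction formula is the same assumption used throughout these sketches.

```python
def migrate_filter(user_img, target_filter_img):
    feat_user = feature_extractor(user_img)            # shared convolution kernels
    feat_filter = feature_extractor(target_filter_img)
    color_pred = (color_extractor(feat_filter)
                  - color_extractor(feat_user))        # predicted color
    weight_img = weight_branch(torch.abs(feat_filter - feat_user))  # global weight map
    return user_img + color_pred * weight_img          # result graph (second filter image)
```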
It will be appreciated that the training process of the deep learning network module (the target network model) is similar to the process in Fig. 3. As shown in Fig. 4, the training process uses sample data including: an original graph, a filter result graph, and a filter color graph, where the filter result graph is the image obtained by adding the filter color graph to the original graph. The original graph and the filter result graph are respectively input into the model to obtain a predicted color and a global weight map; this process is similar to inputting the user image and the target filter image in Fig. 3 to obtain the predicted color and the global weight map, and is not repeated here. It is noted that in the training process, the training content graph corresponding to the filter result graph can also be obtained through the image feature extraction module and the content extraction module of the model. In Fig. 4, the convolution kernels are shared between the two image feature extraction modules, and likewise between the two color feature extraction modules. After each item of data is obtained, the model loss is calculated based on the obtained data, and the model parameters are updated. Here, the model loss includes the content loss, the color loss, and the cyclic consistency loss. The content loss measures the difference in image content; specifically, image content loss = |ImgContent - ImgSource|, where ImgContent represents the training content graph and ImgSource represents the original graph. The color loss measures the filter color difference; specifically, color loss = |ColorGroundtruth - ColorPredicted|, where ColorGroundtruth represents the actual color, namely the filter color graph, and ColorPredicted denotes the predicted color. For the cyclic consistency loss, the predicted color output by the color branch is combined with the original graph using the training filter weight values output by the weight branch module for different pixels: the predicted color is multiplied by the training filter weight values and added to the original graph to obtain a result graph, and the L1 loss between this result graph and the input filter result graph measures the reconstruction accuracy; specifically, cyclic consistency loss = |ImgTarget - (ImgSource + ColorPredicted × WeightImg)|, where ImgTarget represents the filter result graph, ImgSource represents the original graph, ColorPredicted denotes the predicted color, and WeightImg denotes the training filter weight values. Through continuous training, the model parameters are continuously updated until the model loss converges, and training then stops.
The embodiment of the application can estimate the filter color more accurately, avoiding interference from the picture content. In addition, a global weight map is additionally output, so that the filter is applied with different weights at different positions of the image, which makes the result graph more natural and avoids the masking effect.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing apparatus is described by taking as an example the image processing method being performed by the image processing apparatus.
As shown in fig. 5, an embodiment of the present application further provides an image processing apparatus, including:
A first obtaining module 51, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image processed by a target filter;
A first determining module 52 for determining a target filter based on the first filter image and the image to be processed;
A second determining module 53, configured to determine filter weight values corresponding to each pixel in the image to be processed, based on a difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
The filter module 54 is configured to apply the target filter to each pixel in the image to be processed with a filter weight value corresponding to each pixel, so as to obtain a second filter image.
Optionally, the first determining module 52 includes:
The first input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the first model unit is used for acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model;
and the second model unit is used for acquiring the characteristic difference value of the first color characteristic and the second color characteristic through the target network model and determining the characteristic difference value as a target filter.
Optionally, the first model unit includes:
The first model subunit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
The second model subunit is configured to input the first image feature vector and the second image feature vector into a color feature extraction module of the target network model, respectively, to obtain a first color feature of the first filter image and a second color feature of the image to be processed.
Optionally, the second determining module 53 includes:
The second input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the third model unit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
A fourth model unit, configured to obtain a vector difference value between the first image feature vector and the second image feature vector through the target network model;
and the fifth model unit is used for inputting the absolute value of the vector difference value into the weight branching module of the target network model, and acquiring filter weight values corresponding to pixels in the image to be processed, wherein the closer the color of the target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
Optionally, the image processing apparatus further includes:
the second acquisition module is used for acquiring the initial network model and sample data; wherein the initial network model comprises: the system comprises an image feature extraction module, a color feature extraction module, a weight branching module and a content extraction module, wherein sample data comprise: the original image, the filter result image and the filter color image are obtained by adding the filter color image to the original image;
The first training module is used for respectively inputting the filter result image and the original image into the image feature extraction module to obtain a first training image feature vector and a second training image feature vector;
The second training module is used for respectively inputting the first training image feature vector and the second training image feature vector into the color feature extraction module to obtain a first training color feature and a second training color feature;
the third training module is used for determining the difference value obtained by subtracting the first training color characteristic from the second training color characteristic as the predicted filter color;
The fourth training module is used for inputting a target absolute value into the weight branching module to obtain training filter weight values corresponding to pixels in the original image respectively, wherein the target absolute value is the absolute value of a difference value obtained by subtracting the second training image feature vector from the first training image feature vector;
The fifth training module is used for inputting the feature vector of the first training image into the content extraction module to obtain a training content image related to the image content of the filter result image;
A sixth training module configured to determine model loss based on the predicted filter color, the training filter weight value, the training content map, and the filter color map, wherein the model loss includes an image content loss, an image color loss, and a cyclic consistency loss;
And the seventh training module is used for updating the model parameters, continuing training based on the new sample data until the model loss converges, and determining the initial network model after the training is finished as the target network model.
Optionally, the first input unit is specifically configured to convert the first filter image and the image to be processed into the Lab color model when the first filter image and the image to be processed are in the RGB color mode, respectively.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the image to be processed, which has not been filter-processed. By taking the image to be processed into consideration, the interference that the content colors of the first filter image itself would cause if the target filter were extracted from the first filter image alone can be avoided. Then, filter weight values respectively corresponding to the pixels in the image to be processed are determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, to obtain a second filter image after filter processing.
The image processing device in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc.; the embodiments of the present application are not specifically limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to fig. 4, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 6, the embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and capable of running on the processor 601, where the program or the instruction implements each process of the above-mentioned image processing method embodiment when executed by the processor 601, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so as to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
A processor 710, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image processed by a target filter;
A processor 710 for determining a target filter based on the first filter image and the image to be processed;
The processor 710 is further configured to determine filter weight values corresponding to each pixel in the image to be processed, based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter;
the processor 710 is further configured to apply the target filter to each pixel in the image to be processed with a filter weight value corresponding to each pixel, so as to obtain a second filter image.
In the embodiment of the application, the target filter used in the filter processing that produced the first filter image is determined from the filter-processed first filter image and the image to be processed, which has not been filter-processed. By taking the image to be processed into consideration, the interference that the content colors of the first filter image itself would cause if the target filter were extracted from the first filter image alone can be avoided. Then, filter weight values respectively corresponding to the pixels in the image to be processed are determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, to obtain a second filter image after filter processing.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 710 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (12)

Translated from Chinese
1. An image processing method, characterized in that the image processing method comprises:
acquiring a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by a target filter;
determining the target filter based on the first filter image and the image to be processed;
determining, based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, a filter weight value corresponding to each pixel in the image to be processed, wherein the filter weight value is inversely proportional to the magnitude of the color difference;
applying the target filter to each pixel in the image to be processed with the filter weight value corresponding to that pixel, to obtain a second filter image.

2. The image processing method according to claim 1, wherein determining the target filter based on the first filter image and the image to be processed comprises:
inputting the first filter image and the image to be processed into a preset target network model;
acquiring, through the target network model, a first color feature of the first filter image and a second color feature of the image to be processed;
acquiring, through the target network model, a feature difference between the first color feature and the second color feature, and determining the feature difference as the target filter.

3. The image processing method according to claim 2, wherein acquiring the first color feature of the first filter image and the second color feature of the image to be processed through the target network model comprises:
acquiring, through an image feature extraction module of the target network model, a first image feature vector of the first filter image and a second image feature vector of the image to be processed;
inputting the first image feature vector and the second image feature vector respectively into a color feature extraction module of the target network model, to acquire the first color feature of the first filter image and the second color feature of the image to be processed.

4. The image processing method according to claim 1, wherein determining the filter weight values corresponding to the pixels in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter comprises:
inputting the first filter image and the image to be processed into a preset target network model;
acquiring, through an image feature extraction module of the target network model, a first image feature vector of the first filter image and a second image feature vector of the image to be processed;
acquiring, through the target network model, a vector difference between the first image feature vector and the second image feature vector;
inputting the absolute value of the vector difference into a weight branch module of the target network model, to acquire the filter weight values corresponding to the pixels in the image to be processed, wherein the closer the color of a target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel, the target pixel comprising any pixel in the image to be processed.

5. The image processing method according to claim 4, characterized in that, before acquiring the first filter image and the image to be processed, the image processing method further comprises:
acquiring an initial network model and sample data, wherein the initial network model comprises the image feature extraction module, a color feature extraction module, the weight branch module and a content extraction module, and the sample data comprises an original image, a filter result image and a filter color image, the filter result image being an image obtained by adding the filter color image to the original image;
inputting the filter result image and the original image respectively into the image feature extraction module, to obtain a first training image feature vector and a second training image feature vector;
inputting the first training image feature vector and the second training image feature vector respectively into the color feature extraction module, to obtain a first training color feature and a second training color feature;
determining the difference obtained by subtracting the first training color feature from the second training color feature as a predicted filter color;
inputting a target absolute value into the weight branch module, to obtain a training filter weight value corresponding to each pixel in the original image, wherein the target absolute value is the absolute value of the difference obtained by subtracting the second training image feature vector from the first training image feature vector;
inputting the first training image feature vector into the content extraction module, to obtain a training content map related to the image content of the filter result image;
determining a model loss based on the predicted filter color, the training filter weight values, the training content map and the filter color image, wherein the model loss comprises an image content loss, an image color loss and a cycle consistency loss;
updating model parameters and continuing training based on new sample data until the model loss converges, and determining the initial network model obtained after training as the target network model.

6. An image processing apparatus, characterized in that the image processing apparatus comprises:
a first acquisition module, configured to acquire a first filter image and an image to be processed, wherein the first filter image comprises an image subjected to filter processing by a target filter;
a first determination module, configured to determine the target filter based on the first filter image and the image to be processed;
a second determination module, configured to determine, based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, a filter weight value corresponding to each pixel in the image to be processed, wherein the filter weight value is inversely proportional to the magnitude of the color difference;
a filter module, configured to apply the target filter to each pixel in the image to be processed with the filter weight value corresponding to that pixel, to obtain a second filter image.

7. The image processing apparatus according to claim 6, wherein the first determination module comprises:
a first input unit, configured to input the first filter image and the image to be processed into a preset target network model;
a first model unit, configured to acquire, through the target network model, a first color feature of the first filter image and a second color feature of the image to be processed;
a second model unit, configured to acquire, through the target network model, a feature difference between the first color feature and the second color feature, and determine the feature difference as the target filter.

8. The image processing apparatus according to claim 7, wherein the first model unit comprises:
a first model subunit, configured to acquire, through an image feature extraction module of the target network model, a first image feature vector of the first filter image and a second image feature vector of the image to be processed;
a second model subunit, configured to input the first image feature vector and the second image feature vector respectively into a color feature extraction module of the target network model, to acquire the first color feature of the first filter image and the second color feature of the image to be processed.

9. The image processing apparatus according to claim 6, wherein the second determination module comprises:
a second input unit, configured to input the first filter image and the image to be processed into a preset target network model;
a third model unit, configured to acquire, through an image feature extraction module of the target network model, a first image feature vector of the first filter image and a second image feature vector of the image to be processed;
a fourth model unit, configured to acquire, through the target network model, a vector difference between the first image feature vector and the second image feature vector;
a fifth model unit, configured to input the absolute value of the vector difference into a weight branch module of the target network model, to acquire the filter weight values corresponding to the pixels in the image to be processed, wherein the closer the color of a target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel, the target pixel comprising any pixel in the image to be processed.

10. The image processing apparatus according to claim 9, characterized in that the image processing apparatus further comprises:
a second acquisition module, configured to acquire an initial network model and sample data, wherein the initial network model comprises the image feature extraction module, a color feature extraction module, the weight branch module and a content extraction module, and the sample data comprises an original image, a filter result image and a filter color image, the filter result image being an image obtained by adding the filter color image to the original image;
a first training module, configured to input the filter result image and the original image respectively into the image feature extraction module, to obtain a first training image feature vector and a second training image feature vector;
a second training module, configured to input the first training image feature vector and the second training image feature vector respectively into the color feature extraction module, to obtain a first training color feature and a second training color feature;
a third training module, configured to determine the difference obtained by subtracting the first training color feature from the second training color feature as a predicted filter color;
a fourth training module, configured to input a target absolute value into the weight branch module, to obtain a training filter weight value corresponding to each pixel in the original image, wherein the target absolute value is the absolute value of the difference obtained by subtracting the second training image feature vector from the first training image feature vector;
a fifth training module, configured to input the first training image feature vector into the content extraction module, to obtain a training content map related to the image content of the filter result image;
a sixth training module, configured to determine a model loss based on the predicted filter color, the training filter weight values, the training content map and the filter color image, wherein the model loss comprises an image content loss, an image color loss and a cycle consistency loss;
a seventh training module, configured to update model parameters and continue training based on new sample data until the model loss converges, and determine the initial network model obtained after training as the target network model.

11. An electronic device, characterized by comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.

12. A readable storage medium, characterized in that the readable storage medium stores a program or instructions, and the program or instructions, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
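The core mechanism of claims 1 and 4 is a soft, per-pixel filter application: every pixel receives a weight that grows as its color approaches the color indicated by the target filter, so similar regions are tinted strongly while dissimilar regions are left largely alone, avoiding the hard mask-like overlay described in the background. The sketch below (illustrative only, not part of the claims) shows the idea in NumPy; in the patent the weights come from a learned weight branch module, so the Gaussian falloff, the sigma parameter and the function name are assumptions of this sketch.

```python
import numpy as np

def apply_filter_softly(image, filter_color, sigma=0.2):
    """Blend a solid filter color into an image with per-pixel weights.

    Pixels whose color is close to the filter color get a weight near 1;
    dissimilar pixels get a weight near 0 and stay almost unchanged.
    """
    img = image.astype(np.float32) / 255.0                # H x W x 3, in [0, 1]
    target = np.asarray(filter_color, dtype=np.float32) / 255.0

    # Per-pixel color difference to the filter color (Euclidean distance in RGB).
    diff = np.linalg.norm(img - target, axis=-1, keepdims=True)

    # Weight is inversely related to the color difference; this Gaussian
    # falloff stands in for the learned weight branch module (assumption).
    weight = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

    # Move each pixel toward the filter color in proportion to its weight.
    out = img + weight * (target - img)
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)
```

For example, `apply_filter_softly(photo, (255, 160, 80))` with a hypothetical warm orange filter color would recolor near-orange pixels strongly while leaving blue sky or dark shadows largely untouched.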
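Claim 5 (and its apparatus counterpart, claim 10) trains the network with a model loss combining an image content loss, an image color loss and a cycle consistency loss. The claim names these terms but does not give their formulas, so the sketch below is only one plausible reading: the mean-squared-error forms, the recomposition of content plus weighted predicted color used for the cycle term, and all argument names are assumptions.

```python
import numpy as np

def model_loss(pred_filter_color, train_weights, content_map, original,
               filter_result, filter_color,
               w_content=1.0, w_color=1.0, w_cycle=1.0):
    """One plausible combination of the three losses named in claim 5."""
    # Image content loss: the content map extracted from the filter result
    # should reproduce the filter-free content, i.e. the original image.
    content_loss = np.mean((content_map - original) ** 2)

    # Image color loss: the predicted filter color should match the known
    # filter color image used to synthesize the training sample.
    color_loss = np.mean((pred_filter_color - filter_color) ** 2)

    # Cycle consistency loss: recomposing content + weighted predicted color
    # should recover the filter result image the sample was built from.
    recomposed = content_map + train_weights * pred_filter_color
    cycle_loss = np.mean((recomposed - filter_result) ** 2)

    return w_content * content_loss + w_color * color_loss + w_cycle * cycle_loss
```

Training then proceeds as the claim describes: compute the loss on each batch of sample data, update the model parameters, and repeat on new samples until the loss converges.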
CN202110912037.1A | 2021-08-09 | 2021-08-09 | Image processing method and device, and electronic equipment | Active | CN113658066B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202110912037.1A (granted as CN113658066B) | 2021-08-09 | 2021-08-09 | Image processing method and device, and electronic equipment
PCT/CN2022/110522 (published as WO2023016365A1) | 2021-08-09 | 2022-08-05 | Image processing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110912037.1A (granted as CN113658066B) | 2021-08-09 | 2021-08-09 | Image processing method and device, and electronic equipment

Publications (2)

Publication Number | Publication Date
CN113658066A (en) | 2021-11-16
CN113658066B (en) | 2024-11-15

Family

Family ID: 78491066

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110912037.1A | CN113658066B (en) Image processing method and device, and electronic equipment (Active) | 2021-08-09 | 2021-08-09

Country Status (2)

Country | Link
CN (1) | CN113658066B (en)
WO (1) | WO2023016365A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113658066B (en)* | 2021-08-09 | 2024-11-15 | Vivo Mobile Communication Co Ltd | Image processing method and device, and electronic equipment
CN115205102A (en)* | 2022-02-22 | 2022-10-18 | Vivo Mobile Communication Co Ltd | Image processing method, image processing apparatus, electronic device, and readable storage medium
CN116721034A (en)* | 2023-06-25 | 2023-09-08 | CCB Fintech Co Ltd | Image processing method, device and equipment
CN118570111B (en)* | 2024-07-31 | 2025-02-14 | Hanshow Technology Co Ltd | Image display enhancement method, device, electronic device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109741283A (en)* | 2019-01-23 | 2019-05-10 | Wuhu Mingkai Medical Instrument Technology Co Ltd | A kind of method and apparatus for realizing smart filter

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105376640B (en)* | 2014-08-06 | 2019-12-03 | Tencent Technology (Beijing) Co Ltd | Filter processing method, device and electronic equipment
CN108961170B (en)* | 2017-05-24 | 2022-05-03 | Alibaba Group Holding Ltd | Image processing method, device and system
CN112529808A (en)* | 2020-12-15 | 2021-03-19 | Beijing Yingke Zhishi Network Technology Co Ltd | Image color adjusting method, device, equipment and medium
CN113014803A (en)* | 2021-02-04 | 2021-06-22 | Vivo Mobile Communication Co Ltd | Filter adding method and device and electronic equipment
CN113111791B (en)* | 2021-04-16 | 2024-04-09 | Shenzhen Geling Artificial Intelligence and Robotics Research Institute Co Ltd | Image filter conversion network training method and computer readable storage medium
CN113658066B (en)* | 2021-08-09 | 2024-11-15 | Vivo Mobile Communication Co Ltd | Image processing method and device, and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109741283A (en)* | 2019-01-23 | 2019-05-10 | Wuhu Mingkai Medical Instrument Technology Co Ltd | A kind of method and apparatus for realizing smart filter

Also Published As

Publication number | Publication date
CN113658066A (en) | 2021-11-16
WO2023016365A1 (en) | 2023-02-16

Similar Documents

Publication | Title
CN113658066B (en) | Image processing method and device, and electronic equipment
CN109741279A (en) | Image saturation adjustment method, device, storage medium and terminal
CN111369644A (en) | Face image makeup trial processing method and device, computer equipment and storage medium
CN111835982B (en) | Image acquisition method, image acquisition device, electronic device and storage medium
CN110211030B (en) | Image generation method and device
CA3154893C (en) | Image color transferring method, device, computer equipment and storage medium
CN106339224B (en) | Readability enhancing method and device
US11481927B2 (en) | Method and apparatus for determining text color
JP2017187994A (en) | Image processing apparatus, image processing method, image processing system, and program
CN113538223B (en) | Noise image generation method, device, electronic equipment and storage medium
WO2023045884A1 (en) | Screen light detection model training method, ambient light detection method, and apparatus
CN112991366B (en) | Method, device and mobile terminal for carrying out real-time chromaticity matting on image
WO2022042754A1 (en) | Image processing method and apparatus, and device
CN113676713A (en) | Image processing method, apparatus, device and medium
US20160316151A1 (en) | Filter realization method and apparatus of camera application
WO2025011490A1 (en) | Video special-effect adding method and apparatus, and device, storage medium and program product
CN109615620A (en) | The recognition methods of compression of images degree, device, equipment and computer readable storage medium
CN111768377A (en) | Image color evaluation method, device, electronic device and storage medium
CN108282643B (en) | Image processing method, image processing device and electronic equipment
WO2025011491A1 (en) | Video processing method and apparatus, device, storage medium and program product
WO2022083081A1 (en) | Image rendering method and apparatus, and device and storage medium
CN112468794A (en) | Image processing method and device, electronic equipment and readable storage medium
CN116456148A (en) | Method and device for determining similarity between video frames, electronic equipment and storage medium
CN114327715A (en) | Interface display method, interface display device, electronic equipment and readable storage medium
WO2018036526A1 (en) | Display method and device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
