Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, such that the embodiments of the present application may be implemented in sequences other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an image processing method according to an embodiment of the present application includes:
Step 101: acquiring a first filter image and an image to be processed.
In this step, the first filter image includes an image processed by a target filter; that is, the first filter image may be understood as an image that exhibits a color effect after a filter has been applied to some image, where the target filter is the applied filter. Applying a filter to an image creates a special color effect on the image, and a filter may be understood as a piece of color data, for example a solid-color map of a single color. Thus, each filter may indicate a color. The color effect here may be an effect produced by adjusting color, texture, or the like. Of course, for images containing faces, the color effect may also include effects from different make-ups. The image to be processed may be any image selected by a user; specifically, the image to be processed is an image selected by the user that has not been processed by a filter. The user wants to apply the target filter to this image to achieve the same color effect as the first filter image.
Step 102: determining the target filter based on the first filter image and the image to be processed.
In this step, a filter image with a color effect is generated after a filter is applied to an image, so the applied filter can be extracted from the images before and after the filter is applied. Here, the image to be processed takes the place of the original image corresponding to the first filter image, where that original image is the image on which the filter processing was performed to obtain the first filter image. Of course, the applied filter can also be obtained by extracting it directly from the filter image; that is, the filter color in the first filter image can be extracted to obtain the target filter.
Step 103: determining filter weight values respectively corresponding to the pixels in the image to be processed, based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter.
In this step, the difference between colors can be understood as the difference between the color values of different colors in the same color space. The distance between two colors in the same color space can be used as a measure of the difference between them: the larger the distance, the larger the difference between the colors; similarly, the smaller the distance, the smaller the difference; and if the distance is zero, the two colors are the same and there is no difference.
The filter weight value of each pixel is associated with the difference between the color of that pixel and the color indicated by the target filter; therefore, pixels of different colors correspond to different filter weight values. It will be appreciated that the image to be processed is made up of a large number of pixels whose colors may be the same or different: pixels of the same color share the same filter weight value, while pixels of different colors have different filter weight values.
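As a rough illustration of the relationship between color difference and filter weight, the following Python sketch measures each pixel's distance to a filter color and maps it to a weight. The Euclidean distance and the exponential falloff are assumptions made purely for illustration; in the embodiments below, the filter weight values are produced by a learned weight branching module.

```python
import numpy as np

def color_distance_weights(image, filter_color, falloff=50.0):
    """Toy weight map: pixels whose color is close to the filter color get a
    weight near 1, distant pixels get a smaller weight. `image` is HxWx3 and
    `filter_color` a length-3 vector in the same color space. The Euclidean
    distance and exponential falloff are illustrative assumptions only."""
    diff = image.astype(np.float32) - np.asarray(filter_color, dtype=np.float32)
    dist = np.linalg.norm(diff, axis=-1)      # per-pixel color distance
    return np.exp(-dist / falloff)            # smaller distance -> larger weight

# Example: a 2x2 image and a pure-blue filter color
img = np.array([[[0, 0, 255], [10, 10, 240]],
                [[200, 180, 40], [128, 128, 128]]], dtype=np.float32)
print(color_distance_weights(img, [0, 0, 255]))
```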
Step 104: applying the target filter to each pixel in the image to be processed with the filter weight value corresponding to that pixel, so as to obtain a second filter image.
In this step, the target filter is applied to each pixel in the image to be processed according to the filter weight value corresponding to that pixel; that is, the image to be processed is subjected to filter processing through the target filter and the filter weight values. Specifically, for a target pixel of the image to be processed, the target filter is applied to the target pixel with the filter weight value corresponding to the target pixel, where the target pixels are all pixels of the image to be processed; that is, every pixel in the image to be processed needs to undergo the filter processing. It will be appreciated that a filter weight value is a specific number: when the target filter is applied to a pixel of the image to be processed with a certain filter weight value, the color value of the target filter is multiplied by the filter weight value to obtain a new color value, and the new color value is then applied to that pixel. For example, if the color value of the target filter is 100 and the filter weight value of a target pixel in the image to be processed is 0.5, applying the target filter to the target pixel includes: multiplying the color value 100 by the filter weight value 0.5 to obtain the new color value 50, and applying the color value 50 to the target pixel. Because pixels of different colors in the image to be processed correspond to different filter weight values, the color values applied to pixels of different colors differ, and the color effects on pixels of different colors in the second filter image differ accordingly. For example, when the color indicated by the target filter is blue, applying the target filter with a larger filter weight value to the pixels of a blue-sky region in the image to be processed makes the sky even more blue, while applying it with a smaller filter weight value to the pixels of a building region gives the building only a faint, subtle blue, so that the color effect of the second filter image after the filter processing varies naturally across regions.
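The following Python sketch works through the per-pixel application just described, including the worked example in which a color value of 100 and a weight of 0.5 yield an applied value of 50. Treating "apply" as adding the weighted color onto the pixel is an assumption; it mirrors the reconstruction used later in the training loss, where the weighted filter color is added to the original image.

```python
import numpy as np

def apply_filter(image, filter_color, weight_map):
    """Scale the filter color by each pixel's weight and apply it to the pixel.
    Treating "apply" as adding the weighted color onto the pixel is an
    assumption made for this sketch."""
    filter_color = np.asarray(filter_color, dtype=np.float32)   # shape (3,)
    contribution = weight_map[..., None] * filter_color         # HxWx3
    return np.clip(image.astype(np.float32) + contribution, 0, 255)

# Worked example from the text: color value 100 with weight 0.5 -> 50 applied
img = np.zeros((1, 1, 3), dtype=np.float32)
print(apply_filter(img, [100, 100, 100], np.array([[0.5]])))   # [[[50. 50. 50.]]]
```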
In the embodiment of the application, the target filter that was used when the first filter image was obtained by filter processing is determined from the filter-processed first filter image and the unprocessed image to be processed. By taking the image to be processed into consideration, interference caused by the content colors of the first filter image itself, which would arise if the target filter were extracted based only on the first filter image, can be avoided. Filter weight values respectively corresponding to the pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, so as to obtain the filter-processed second filter image.
Optionally, determining the target filter based on the first filter image and the image to be processed includes:
Inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on the deep learning network to obtain the target network model. The first filter image and the image to be processed are model inputs of a target network model.
Acquiring a first color feature of the first filter image and a second color feature of the image to be processed through the target network model.
In this step, the first color feature is a data feature associated with the color of the first filter image, and the second color feature is a data feature associated with the color of the image to be processed. Here, the color features may be an intermediate output of the target network model rather than its final output.
Acquiring a feature difference value of the first color feature and the second color feature through the target network model, and determining the feature difference value as the target filter.
In this step, the feature difference value obtained by subtracting the second color feature from the first color feature may represent the target filter, and the color corresponding to the feature difference value may be used as the color indicated by the target filter. It is understood that the feature difference value may represent the color difference between the image to be processed and the first filter image, so that processing the image to be processed with this color difference yields an image having no color difference from the first filter image. Therefore, the feature difference value can be regarded as the target filter of the first filter image.
In the embodiment of the application, the target filter is extracted by using the pre-trained target network model, with the first filter image and the image to be processed serving as the model inputs of the target network model, so that the target filter can be obtained quickly and accurately.
Optionally, acquiring, by the target network model, the first color feature of the first filter image and the second color feature of the image to be processed includes:
Acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model.
In this step, in order to facilitate image processing, the image may be first converted into a mathematical expression, and the first filter image and the image to be processed may be represented by using the mathematical expression. The first image feature vector is a mathematical expression of the first filter image, and may represent features of each dimension of the first filter image. The second image feature vector is a mathematical expression of the image to be processed, and may represent features of each dimension of the image to be processed. Wherein the dimensions of the image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations, so as to obtain the image feature vector of an image. For example, a first formula is used to calculate the first image feature vector and the second image feature vector.
A first formula: outFeature = w_n*(w_(n-1)*(...(w_1*x + b_1)...) + b_(n-1)) + b_n, where x is the input and outFeature is the output; outFeature is the first image feature vector when x is the first filter image, and the second image feature vector when x is the image to be processed; w_1~w_n are n convolution kernels and b_1~b_n are n offset values, where n is a positive integer.
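A minimal PyTorch sketch of such a stacked-convolution feature extractor is shown below. The first formula lists only convolution kernels w_i and offsets b_i, so the sketch uses plain Conv2d layers (each supplying one kernel and one bias); the channel widths, kernel size, and depth n are illustrative assumptions, and a real implementation might interleave nonlinearities.

```python
import torch
import torch.nn as nn

class ImageFeatureExtractor(nn.Module):
    """Sketch of the first formula: outFeature = w_n*( ... (w_1*x + b_1) ... ) + b_n.
    Each Conv2d supplies one convolution kernel w_i and one offset (bias) b_i.
    Channel widths, kernel size and the value of n are illustrative assumptions."""
    def __init__(self, in_ch=3, width=32, n=4):
        super().__init__()
        chans = [in_ch] + [width] * n
        self.stack = nn.Sequential(*[
            nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1, bias=True)
            for i in range(n)
        ])

    def forward(self, x):
        return self.stack(x)   # image feature vector (a feature map) of the input

# Usage: the same module is run on the first filter image and the image to be processed
extractor = ImageFeatureExtractor()
feat = extractor(torch.rand(1, 3, 256, 256))
print(feat.shape)  # torch.Size([1, 32, 256, 256])
```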
Respectively inputting the first image feature vector and the second image feature vector into a color feature extraction module of the target network model, and acquiring the first color feature of the first filter image and the second color feature of the image to be processed.
In this step, the color feature extraction module may also be implemented as a series of stacked mathematical operations, so as to obtain the color feature of an image feature vector. For example, the first color feature and the second color feature are calculated using a second formula.
A second formula: outColor = cw_n*(cw_(n-1)*(...(cw_1*outFeature + cb_1)...) + cb_(n-1)) + cb_n, where outFeature is the input and outColor is the output; outColor is the first color feature when outFeature is the first image feature vector, and the second color feature when outFeature is the second image feature vector; cw_1~cw_n are n convolution kernels and cb_1~cb_n are n offset values, where n is a positive integer.
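The sketch below chains the two stages: a shared feature-extraction stack (first formula), a shared color branch (second formula), and the subtraction of the two color features to obtain a predicted filter color. Collapsing the difference map into a single color with a global average is an assumption made so the sketch yields one color value; the stack depths and widths are likewise illustrative.

```python
import torch
import torch.nn as nn

def conv_stack(in_ch, out_ch, width=32, n=3):
    """Small stack of conv+bias layers, mirroring the nested form of the
    first and second formulas; depth and widths are illustrative."""
    chans = [in_ch] + [width] * (n - 1) + [out_ch]
    return nn.Sequential(*[nn.Conv2d(chans[i], chans[i + 1], 3, padding=1)
                           for i in range(n)])

feature_extractor = conv_stack(3, 32)        # first formula (shared for both inputs)
color_branch      = conv_stack(32, 3)        # second formula (also shared)

filter_img = torch.rand(1, 3, 256, 256)      # first filter image
plain_img  = torch.rand(1, 3, 256, 256)      # image to be processed

f1 = feature_extractor(filter_img)           # first image feature vector
f2 = feature_extractor(plain_img)            # second image feature vector
c1 = color_branch(f1)                        # first color feature
c2 = color_branch(f2)                        # second color feature

# Feature difference taken as the target filter (filtered minus unfiltered);
# the global average that collapses it to one color is an assumption.
predicted_filter_color = (c1 - c2).mean(dim=(2, 3))
print(predicted_filter_color.shape)          # torch.Size([1, 3])
```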
In the embodiment of the application, a stepwise processing mode is adopted: the image feature vector is obtained first, and the color-related features are then extracted from the image feature vector, so that the whole process is simple and easy to implement.
Optionally, determining the filter weight value corresponding to each pixel in the image to be processed based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter includes:
Inputting the first filter image and the image to be processed into a preset target network model.
In this step, the preset target network model is a pre-trained network model. Here, an initial model may be trained based on the deep learning network to obtain the target network model. The first filter image and the image to be processed are model inputs of a target network model.
Acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through the image feature extraction module of the target network model;
In this step, in order to facilitate image processing, the image may be first converted into a mathematical expression, and the first filter image and the image to be processed may be represented by using the mathematical expression. The first image feature vector is a mathematical expression of the first filter image, and may represent features of each dimension of the first filter image. The second image feature vector is a mathematical expression of the image to be processed, and may represent features of each dimension of the image to be processed. Wherein the dimensions of the image may include a brightness dimension, a color dimension, and the like.
Specifically, the image feature extraction module may be implemented as a series of stacked mathematical operations, so as to obtain the image feature vector of an image. For example, the first formula in the embodiment of the application is used to calculate the first image feature vector and the second image feature vector, which is not repeated here.
Obtaining a vector difference value of the first image feature vector and the second image feature vector through the target network model.
In this step, once the first image feature vector and the second image feature vector have been acquired, the vector difference value can be obtained by subtracting one image feature vector from the other.
Inputting the absolute value of the vector difference value into a weight branching module of the target network model, and acquiring the filter weight values corresponding to the pixels in the image to be processed, wherein the closer the color of a target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel, and the target pixel includes any pixel in the image to be processed.
In this step, the weight branching module may also be implemented as a series of stacked mathematical operations, so as to obtain the filter weight values. For example, a third formula may be used to calculate the filter weight values for different portions of the image to be processed. The third formula may be: WeightImg = ww_n*(ww_(n-1)*(...(ww_1*|outFeature1 - outFeature2| + wb_1)...) + wb_(n-1)) + wb_n, where |outFeature1 - outFeature2| is the absolute value of the vector difference, WeightImg is the output and represents the filter weight values corresponding to the different pixels of the image to be processed, * is a convolution operation, ww_1~ww_n are n convolution kernels, wb_1~wb_n are n offset values, and n is a positive integer. It is understood that WeightImg may also be regarded as a global weight map of the image to be processed, containing the filter weight value corresponding to each pixel of the image to be processed.
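A hedged PyTorch sketch of such a weight branch is given below: it takes the absolute difference of the two image feature vectors and produces one weight per pixel. The final sigmoid, which keeps the weights in [0, 1], and the layer widths are assumptions not stated by the third formula.

```python
import torch
import torch.nn as nn

class WeightBranch(nn.Module):
    """Sketch of the third formula: a stack of conv+bias layers over the
    absolute feature difference produces WeightImg, one filter weight per
    pixel. The final sigmoid and the widths are illustrative assumptions."""
    def __init__(self, in_ch=32, n=3):
        super().__init__()
        chans = [in_ch] * n + [1]      # n conv layers, last one outputs 1 channel
        self.stack = nn.Sequential(*[nn.Conv2d(chans[i], chans[i + 1], 3, padding=1)
                                     for i in range(n)])

    def forward(self, feat_filter, feat_plain):
        diff = torch.abs(feat_filter - feat_plain)   # |outFeature1 - outFeature2|
        return torch.sigmoid(self.stack(diff))       # global weight map, Bx1xHxW

weight_map = WeightBranch()(torch.rand(1, 32, 64, 64), torch.rand(1, 32, 64, 64))
print(weight_map.shape)   # torch.Size([1, 1, 64, 64])
```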
In the embodiment of the application, the closer the color of a target pixel in the image to be processed is to the color indicated by the target filter, the larger the filter weight value of that target pixel, so the color effect is more obvious when the target filter acts on that pixel.
Optionally, before acquiring the first filter image and the image to be processed, the image processing method further includes:
Acquiring an initial network model and sample data, wherein the initial network model includes: an image feature extraction module, a color feature extraction module, a weight branching module and a content extraction module, and the sample data includes: an original image, a filter result image and a filter color image, the filter result image being an image obtained by adding the filter color image to the original image.
In this step, the initial network model may be regarded as the untrained target network model. The different modules in the initial network model perform different functions: the image feature extraction module extracts image feature vectors of images, the color feature extraction module extracts color features from the image feature vectors, the weight branching module extracts the weight values of the pixels in the original image, and the content extraction module extracts the image content from an image feature vector through decoupled learning.
Respectively inputting the filter result image and the original image into the image feature extraction module to obtain a first training image feature vector and a second training image feature vector;
In this step, each image may be in the RGB color mode, an industry color standard in which colors are obtained by varying and superimposing the three color channels red (R), green (G) and blue (B). Alternatively, the Lab color model may be used; Lab is a device-independent color model based on physiological characteristics. The Lab color model consists of three elements: one element is the luminance (L), and a and b are two color channels. The a channel ranges from deep green (low channel values) through gray (medium channel values) to bright pink (high channel values); the b channel ranges from bright blue (low channel values) through gray (medium channel values) to yellow (high channel values). Here, to facilitate the decoupling of color and luminance, each image in the RGB color mode is converted into the Lab color model; that is, the RGB space of the image is converted into the Lab space. The training image feature vectors may then be extracted based on the first formula described above.
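One common way to perform this RGB-to-Lab conversion is shown below, assuming scikit-image is available (OpenCV's cv2.cvtColor offers an equivalent conversion). Separating the L channel from the a and b channels is what makes the luminance/color decoupling convenient.

```python
import numpy as np
from skimage import color

# An RGB image with float values in [0, 1]; rgb2lab returns L in roughly
# [0, 100] and a, b roughly in [-128, 127], so luminance and color decouple.
rgb = np.random.rand(256, 256, 3)
lab = color.rgb2lab(rgb)

L_chan = lab[..., 0]   # luminance
a_chan = lab[..., 1]   # green <-> magenta/pink axis
b_chan = lab[..., 2]   # blue <-> yellow axis
print(lab.shape, float(L_chan.min()), float(L_chan.max()))
```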
Respectively inputting the first training image feature vector and the second training image feature vector into a color feature extraction module to obtain a first training color feature and a second training color feature;
In this step, color features are extracted from the training image feature vectors; the extraction may be performed based on the second formula described above.
Determining the difference value obtained by subtracting the second training color feature from the first training color feature as the predicted filter color.
In this step, the predicted filter color is calculated; it is an estimate of the color indicated by the filter that was applied when the original image was filter-processed to obtain the filter result image.
Inputting a target absolute value into the weight branching module to obtain the training filter weight values respectively corresponding to the pixels in the original image, wherein the target absolute value is the absolute value of the difference value obtained by subtracting the second training image feature vector from the first training image feature vector.
In this step, the training filter weight value corresponding to each pixel in the original image may be extracted based on the third formula.
Inputting the first training image feature vector into a content extraction module to obtain a training content image related to the image content of the filter result image;
In this step, the image content of the filter result image, that is, the training content image, may be extracted based on a fourth formula: ImageContent = iw_n*(iw_(n-1)*(...(iw_1*outFeature + ib_1)...) + ib_(n-1)) + ib_n, where outFeature is the input, ImageContent is the output, * is a convolution operation, iw_1~iw_n are n convolution kernels, and ib_1~ib_n are n offset values, where n is a positive integer.
Determining model loss based on the predicted filter color, the training filter weight values, the training content image, and the filter color image, wherein the model loss includes an image content loss, an image color loss, and a cyclic consistency loss;
In this step, the image content loss measures the content difference; specifically, image content loss = |ImgContent - ImgSource|, where ImgContent represents the training content image and ImgSource represents the original image. The image color loss measures the filter color difference by taking the difference between the value output by the color branch (the predicted filter color) and the label color obtained in advance (the filter color image); specifically, color loss = |ColorGroundtruth - ColorPredicted|, where ColorGroundtruth represents the filter color image and ColorPredicted represents the predicted filter color. For the cyclic consistency loss, the predicted filter color output by the color branch is combined with the original image using the training filter weight values output by the weight branching module for the different pixels: the predicted filter color is multiplied by the training filter weight values and added to the original image to obtain a reconstructed result image, and the L1 loss between this result image and the input filter result image measures the reconstruction accuracy; specifically, cyclic consistency loss = |ImgTarget - (ImgSource + ColorPredicted × WeightImg)|, where ImgTarget represents the filter result image, ImgSource represents the original image, ColorPredicted represents the predicted filter color, and WeightImg represents the training filter weight values.
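The three loss terms can be written compactly as L1 distances; the sketch below follows the descriptions above, with the reconstructed image formed by adding the weight-scaled predicted color to the original image. The tensor shapes and the equal weighting of the three terms are assumptions.

```python
import torch
import torch.nn.functional as F

def model_loss(img_source, img_target, img_content,
               color_pred, color_gt, weight_img):
    """The three loss terms described above, each as an L1 distance.
    img_source / img_target: original image and filter result image, Bx3xHxW.
    img_content: training content image decoded from the filter result image.
    color_pred / color_gt: predicted filter color and filter color label, Bx3.
    weight_img: training filter weight values per pixel, Bx1xHxW."""
    content_loss = F.l1_loss(img_content, img_source)
    color_loss = F.l1_loss(color_pred, color_gt)

    # Cyclic consistency: add the weight-scaled predicted color back onto the
    # original image and compare with the real filter result image.
    recon = img_source + color_pred[:, :, None, None] * weight_img
    cycle_loss = F.l1_loss(recon, img_target)

    # Equal weighting of the three terms is an assumption.
    return content_loss + color_loss + cycle_loss
```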
Updating model parameters, continuing training based on new sample data until model loss converges, and determining the initial network model after training is finished as a target network model.
In this step, the content loss, the color loss and the cyclic consistency loss are computed, the partial derivatives with respect to the convolution kernels in the above formulas are calculated, and each convolution kernel is then updated, the new convolution kernel being obtained by adjusting the old convolution kernel with the partial derivative computed for it in that training iteration. Training proceeds in this way and ends after the model loss converges, and the model parameters, namely the convolution kernels in the above formulas, are saved. Here, the model needs to be trained with different sample data, and the model parameters are updated once per training iteration.
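In a modern framework, the partial derivatives and kernel updates are handled by automatic differentiation and an optimizer; the sketch below illustrates the training loop under that assumption. The model interface, the data loader format, and the reuse of the model_loss sketch above are assumptions for illustration.

```python
import torch

def train(model, loader, epochs=10, lr=1e-4):
    """One possible training loop. `model` is assumed to bundle the four
    modules and return (color_pred, weight_img, img_content); `loader` is
    assumed to yield (original, filter_result, filter_color) batches; the
    model_loss sketch above supplies the loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for img_source, img_target, color_gt in loader:
            color_pred, weight_img, img_content = model(img_source, img_target)
            loss = model_loss(img_source, img_target, img_content,
                              color_pred, color_gt, weight_img)
            opt.zero_grad()
            loss.backward()   # partial derivatives w.r.t. every convolution kernel
            opt.step()        # update each kernel from its gradient
    return model
```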
In the embodiment of the application, model training is performed based on an original image, a filter result image and a filter color image in sample data, model parameters are updated based on the result of each training, and training is stopped after model loss converges, so that a trained target network model is obtained.
Optionally, inputting the first filter image and the image to be processed into a preset target network model includes:
In the case where the first filter image and the image to be processed are in RGB color mode, the first filter image and the image to be processed are respectively converted into Lab color mode.
In the embodiment of the application, the color space of the image is converted, so that the subsequent extraction of the color features is convenient.
Fig. 2 is a schematic diagram of practical application of an image processing method according to an embodiment of the present application, where the method includes:
Step 201: obtaining a user image and a target filter image carrying the filter that the user wants to migrate, wherein the user image is the image to be processed in the embodiment of the application, and the target filter image is the first filter image in the embodiment of the application.
Step 202: inputting the acquired images into a deep learning network model.
Step 203: obtaining a result image of the user image after the filter has been migrated onto it, namely the second filter image in the embodiment of the application.
Fig. 3 is a schematic diagram of how the deep learning network model processes the images. The target filter image and the user image are respectively input into an image feature extraction module to obtain their respective image feature vectors; the respective image feature vectors are input into the corresponding color feature extraction modules to obtain the respective color features; the color features obtained by the original-image processing branch are then subtracted from the color features obtained by the filter-image processing branch to obtain the predicted color. Meanwhile, the absolute value of the difference between the two image feature vectors is input into the weight branching module of the original-image processing branch to obtain a global weight map. The result image after the filter is migrated can then be obtained from the predicted color, the global weight map and the user image: result image = ImgSource + ColorPredicted × WeightImg, where ImgSource represents the user image, ColorPredicted represents the predicted color, and WeightImg represents the global weight map. The processes of obtaining the image feature vectors, the color features and the global weight map may refer to the first formula, the second formula and the third formula in the above embodiments and are not repeated here. It should be noted that the convolution kernels are shared between the two image feature extraction modules in fig. 3, and the convolution kernels are shared between the two color feature extraction modules.
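The following self-contained PyTorch sketch wires the fig. 3 pipeline together under the same illustrative assumptions as the earlier sketches: shared convolution stacks for the two inputs, a global average to collapse the color-feature difference into one predicted color, a sigmoid on the weight branch, and additive composition of the result.

```python
import torch
import torch.nn as nn

def conv_stack(in_ch, out_ch, width=32, n=3):
    chans = [in_ch] + [width] * (n - 1) + [out_ch]
    return nn.Sequential(*[nn.Conv2d(chans[i], chans[i + 1], 3, padding=1)
                           for i in range(n)])

class FilterTransfer(nn.Module):
    """Compact wiring of fig. 3: shared feature extractor and color branch,
    a weight branch on the absolute feature difference, and the composition
    result = user image + predicted color * global weight map.
    All widths/depths and the additive composition are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.features = conv_stack(3, 32)     # shared between both inputs
        self.color    = conv_stack(32, 3)     # shared color branch
        self.weight   = conv_stack(32, 1)     # weight branch

    def forward(self, user_img, target_filter_img):
        f_user, f_filt = self.features(user_img), self.features(target_filter_img)
        color_pred = (self.color(f_filt) - self.color(f_user)).mean(dim=(2, 3))
        weight_map = torch.sigmoid(self.weight(torch.abs(f_filt - f_user)))
        return user_img + color_pred[:, :, None, None] * weight_map

result = FilterTransfer()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(result.shape)   # torch.Size([1, 3, 256, 256])
```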
It will be appreciated that the training process of the deep learning network model (the target network model) is similar to the processing in fig. 3. As shown in fig. 4, the training process uses sample data including: an original image, a filter result image and a filter color image, where the filter result image is an image obtained by adding the filter color image to the original image. The original image and the filter result image are respectively input into the model to obtain a predicted color and a global weight map; this process is similar to respectively inputting the user image and the target filter image in fig. 3 to obtain the predicted color and the global weight map, and is not repeated here. It should be noted that during training, the training content image corresponding to the filter result image can also be obtained through the image feature extraction module and the content extraction module of the model. In fig. 4, the convolution kernels are shared between the two image feature extraction modules, and the convolution kernels are shared between the two color feature extraction modules.

After each item of data is obtained, the model loss is calculated based on the obtained data and the model parameters are updated. Here, the model loss includes the content loss, the color loss and the cyclic consistency loss. The content loss measures the difference of the image content; specifically, image content loss = |ImgContent - ImgSource|, where ImgContent represents the training content image and ImgSource represents the original image. The color loss measures the color difference of the filter; specifically, color loss = |ColorGroundtruth - ColorPredicted|, where ColorGroundtruth represents the actual color, namely the filter color image, and ColorPredicted represents the predicted color. For the cyclic consistency loss, the predicted color output by the color branch is combined with the original image using the training filter weight values output by the weight branching module for the different pixels: the predicted color is multiplied by the training filter weight values and added to the original image to obtain a reconstructed result image, and the L1 loss between this result image and the input filter result image measures the reconstruction accuracy; specifically, cyclic consistency loss = |ImgTarget - (ImgSource + ColorPredicted × WeightImg)|, where ImgTarget represents the filter result image, ImgSource represents the original image, ColorPredicted represents the predicted color, and WeightImg represents the training filter weight values. Through continuous training, the model parameters are updated until the model loss converges, and training is then stopped.
The embodiment of the application can estimate the filter color more accurately, avoiding interference from the picture content. In addition, a global weight map is additionally output, and the filter is applied at different positions of the image with different weights, so that the result image is more natural and a flat, mask-like overlay effect is avoided.
It should be noted that, for the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing apparatus is described by taking the image processing apparatus executing the image processing method as an example.
As shown in fig. 5, an embodiment of the present application further provides an image processing apparatus, including:
A first obtaining module 51, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image processed by a target filter;
A first determining module 52 for determining a target filter based on the first filter image and the image to be processed;
A second determining module 53, configured to determine filter weight values corresponding to each pixel in the image to be processed, based on a difference between the color of each pixel in the image to be processed and the color indicated by the target filter;
The filter module 54 is configured to apply the target filter to each pixel in the image to be processed with a filter weight value corresponding to each pixel, so as to obtain a second filter image.
Optionally, the first determining module 52 includes:
The first input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the first model unit is used for acquiring a first color characteristic of the first filter image and a second color characteristic of the image to be processed through the target network model;
and the second model unit is used for acquiring the characteristic difference value of the first color characteristic and the second color characteristic through the target network model and determining the characteristic difference value as a target filter.
Optionally, the first model unit includes:
The first model subunit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
The second model subunit is configured to input the first image feature vector and the second image feature vector into a color feature extraction module of the target network model, respectively, to obtain a first color feature of the first filter image and a second color feature of the image to be processed.
Optionally, the second determining module 53 includes:
The second input unit is used for inputting the first filter image and the image to be processed into a preset target network model;
the third model unit is used for acquiring a first image feature vector of the first filter image and a second image feature vector of the image to be processed through an image feature extraction module of the target network model;
A fourth model unit, configured to obtain a vector difference value between the first image feature vector and the second image feature vector through the target network model;
and the fifth model unit is used for inputting the absolute value of the vector difference value into the weight branching module of the target network model, and acquiring filter weight values corresponding to pixels in the image to be processed, wherein the closer the color of the target pixel is to the color indicated by the target filter, the larger the filter weight value corresponding to the target pixel is, and the target pixel comprises any pixel in the image to be processed.
Optionally, the image processing apparatus further includes:
The second acquisition module is used for acquiring an initial network model and sample data, wherein the initial network model includes: an image feature extraction module, a color feature extraction module, a weight branching module and a content extraction module, and the sample data includes: an original image, a filter result image and a filter color image, the filter result image being an image obtained by adding the filter color image to the original image;
The first training module is used for respectively inputting the filter result image and the original image into the image feature extraction module to obtain a first training image feature vector and a second training image feature vector;
The second training module is used for respectively inputting the first training image feature vector and the second training image feature vector into the color feature extraction module to obtain a first training color feature and a second training color feature;
the third training module is used for determining the difference value obtained by subtracting the first training color characteristic from the second training color characteristic as the predicted filter color;
The fourth training module is used for inputting a target absolute value into the weight branching module to obtain training filter weight values corresponding to pixels in the original image respectively, wherein the target absolute value is the absolute value of a difference value obtained by subtracting the second training image feature vector from the first training image feature vector;
The fifth training module is used for inputting the feature vector of the first training image into the content extraction module to obtain a training content image related to the image content of the filter result image;
A sixth training module configured to determine model loss based on the predicted filter color, the training filter weight values, the training content image, and the filter color image, wherein the model loss includes an image content loss, an image color loss, and a cyclic consistency loss;
And the seventh training module is used for updating the model parameters, continuing training based on the new sample data until the model loss converges, and determining the initial network model after the training is finished as the target network model.
Optionally, the first input unit is specifically configured to convert the first filter image and the image to be processed into the Lab color model when the first filter image and the image to be processed are in the RGB color mode, respectively.
In the embodiment of the application, the target filter that was used when the first filter image was obtained by filter processing is determined from the filter-processed first filter image and the unprocessed image to be processed. By taking the image to be processed into consideration, interference caused by the content colors of the first filter image itself, which would arise if the target filter were extracted based only on the first filter image, can be avoided. Filter weight values respectively corresponding to the pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, so as to obtain the filter-processed second filter image.
The image processing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to fig. 4, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 6, the embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and capable of running on the processor 601, where the program or the instruction implements each process of the above-mentioned image processing method embodiment when executed by the processor 601, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so as to perform functions such as managing charge, discharge, and power consumption via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
A processor 710, configured to obtain a first filter image and an image to be processed, where the first filter image includes an image processed by a target filter;
A processor 710 for determining a target filter based on the first filter image and the image to be processed;
The processor 710 is further configured to determine filter weight values corresponding to each pixel in the image to be processed, based on a difference between a color of each pixel in the image to be processed and a color indicated by the target filter;
the processor 710 is further configured to apply the target filter to each pixel in the image to be processed with a filter weight value corresponding to each pixel, so as to obtain a second filter image.
In the embodiment of the application, the target filter that was used when the first filter image was obtained by filter processing is determined from the filter-processed first filter image and the unprocessed image to be processed. By taking the image to be processed into consideration, interference caused by the content colors of the first filter image itself, which would arise if the target filter were extracted based only on the first filter image, can be avoided. Filter weight values respectively corresponding to the pixels in the image to be processed are then determined based on the difference between the color of each pixel in the image to be processed and the color indicated by the target filter, and the target filter is applied to the pixels of the image to be processed in combination with the filter weight values corresponding to the pixels, so as to obtain the filter-processed second filter image.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 710 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.