CN113643214B - Image exposure correction method and system based on artificial intelligence - Google Patents

Image exposure correction method and system based on artificial intelligence

Info

Publication number
CN113643214B
CN113643214B (granted publication of application CN202111184389.6A; application publication CN113643214A)
Authority
CN
China
Prior art keywords
image
exposure correction
exposure
neural network
subjected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111184389.6A
Other languages
Chinese (zh)
Other versions
CN113643214A (en)
Inventor
康然 (Kang Ran)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Weipei Communication Technology Development Co ltd
Original Assignee
Jiangsu Weipei Communication Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Weipei Communication Technology Development Co., Ltd.
Priority to CN202111184389.6A
Publication of CN113643214A
Application granted
Publication of CN113643214B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention relates to an image exposure correction method and system based on artificial intelligence. The method comprises the following specific steps: acquiring an image to be subjected to exposure correction and an exposure-corrected image; taking the image to be subjected to exposure correction as the input of an image exposure correction neural network and the exposure-corrected image as its output, and carrying out neural network training; constructing a loss function for the neural network; and outputting the exposure-corrected image after the neural network training is finished. The method can reduce noise, blur and other artifacts, and improve the performance of exposure correction.

Description

Image exposure correction method and system based on artificial intelligence
Technical Field
The invention relates to the field of image processing, in particular to an artificial intelligence-based image exposure correction method and system.
Background
Exposure correction, i.e. adjusting the lighting conditions of an image, is a classic and active problem in computer vision. Low-light conditions during capture can result in dark, noisy and blurred images. Better (and generally more expensive) digital cameras have a wider range of exposure settings and can therefore capture better-quality photographs in poor lighting; digital single-lens reflex cameras, for example, are equipped with advanced hardware for handling such scenes, including large apertures, slow shutter speeds and sensitive sensors. Smartphones, unlike digital single-lens reflex cameras, are constrained by consumer demand for lighter and thinner hardware, which limits sensor size, and smaller sensors perform poorly in low light.
Disclosure of Invention
In order to overcome the disadvantages of the prior art, the present invention provides an image exposure correction method and system based on artificial intelligence.
In order to achieve the above purpose, the invention adopts the following technical scheme, and an image exposure correction method based on artificial intelligence specifically comprises the following steps:
acquiring an image to be subjected to exposure correction and an image subjected to exposure correction;
image exposure correction neural network training: taking an image to be subjected to exposure correction as the input of an image exposure correction neural network, taking the image subjected to exposure correction as the output of the image exposure correction neural network, and carrying out neural network training;
the loss function formula included in the image exposure correction neural network training is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L = (1/N) · Σ_{i=1}^{N} w_i · (ŷ_i − y_i)² + λ · L_ms

in the formula: N is the number of pixels of the network input image; ŷ_i is the network output value of the i-th pixel of the input image; y_i is the image label value of the i-th pixel; w_i is the adaptive exposure weight of the i-th pixel; λ is the mapping coefficient; and L_ms is the multi-scale spectral difference loss function;
and outputting the exposure corrected image after the neural network training is finished.
Further, the adaptive exposure weight is obtained as follows: each exposure image is converted into the Lab color space through a color-space conversion, and the luminance channel in the Lab color space is normalized to obtain the adaptive exposure weight.
Further, the adaptive exposure weight expression is:

(The original equation image is not recoverable; the Gaussian form below is reconstructed from the accompanying description, under which the weight is largest where the luminance approaches (1 − μ).)

w_i = exp( −(L_i − (1 − μ))² / (2σ²) )

in the formula: μ represents the mean luminance of the pixels of the image to be exposure-corrected; σ represents the standard deviation of the pixel luminances of that image with respect to μ; and L_i represents the normalized luminance value of the luminance channel of the i-th pixel of the image to be exposure-corrected.
Further, the calculation formula of the multi-scale spectrum difference loss function is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L_ms = Σ_{i=1}^{M} α_i · Σ_{j=1}^{C} (1 / (W_{i,j} · H_{i,j})) · Σ | FFT(F_{i,j}) − FFT(Y_{i,j}) |

in the formula: M is the number of feature-map resolution levels; C is the number of channels of the feature map; W_{i,j} is the width of the feature map of the j-th channel at the i-th resolution; H_{i,j} is the height of that feature map; F_{i,j} is the feature map of the j-th channel at the i-th resolution; Y_{i,j} is the exposure-corrected image for the j-th channel at the i-th resolution; Σ denotes a pixel summation operation over the difference spectrogram; α_i is a scale factor; and FFT(·) denotes performing a fast Fourier transform on the image.
Further, the image exposure correction neural network includes a generator and a discriminator;
the generator is used for learning the data characteristics of the training set and generating similar data with the characteristics of the training set under the guidance of the discriminator;
the discriminator is used for distinguishing whether the input data is real or false data generated by the generator and feeding back the data to the generator.
Further, the generator is structured as an encoder-decoder; the input of the encoder is the preprocessed image to be subjected to exposure correction, and its output is a feature map; the input of the decoder is the feature map output by the encoder, and its outputs are feature maps at different resolutions and the exposure-corrected image.
Further, the image to be subjected to exposure correction is shot by a mobile phone, and the image subjected to exposure correction is shot by a digital camera.
An artificial intelligence based image exposure correction system comprising:
an input unit that inputs an acquired image to be subjected to exposure correction;
the processing unit, which inputs the image to be subjected to exposure correction received from the input unit into the trained image exposure correction neural network for processing, and acquires the resulting exposure-corrected image;
the loss function formula included in the image exposure correction neural network training is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L = (1/N) · Σ_{i=1}^{N} w_i · (ŷ_i − y_i)² + λ · L_ms

in the formula: N is the number of pixels of the network input image; ŷ_i is the network output value of the i-th pixel of the input image; y_i is the image label value of the i-th pixel; w_i is the adaptive exposure weight of the i-th pixel; λ is the mapping coefficient; and L_ms is the multi-scale spectral difference loss function;
and an output unit that outputs the exposure-corrected image processed by the processing unit.
The invention has the beneficial effects that:
1. The loss function of the present invention is differentiable, so the method trains the neural network in an end-to-end manner using back-propagation; it is also generic and can be added to any existing framework without additional overhead.
2. The invention computes the FFT at different scales, so that the network can see artifacts at different scales and learn a scale-independent representation, an ideal property for in-the-wild images. The loss function ultimately improves the performance of exposure correction by reducing noise, blur and other artifacts, such as color artifacts. The multi-scale spectral difference loss function explicitly guides the network to learn the true frequency components of the correctly exposed image distribution and to ignore the noise frequencies of the poorly exposed input.
3. The invention combines the depth map to obtain the plane in the image, and can provide plane information for the neural network, thereby eliminating the influence caused by different reflected light components of different objects.
4. The invention adopts an adaptive exposure weight method: larger weights can be assigned to dark areas during long exposures and to bright areas during short exposures, finally yielding an exposure weight for each pixel position of each image. The larger the weight, the more the brightness at that position needs to be compensated, so the network can better correct the differently exposed areas of the image.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a schematic structural diagram of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Example 1
The specific scenario addressed by the invention is photographing with a mobile phone, where weak lighting conditions during shooting may produce dark, noisy and blurred images.
In order to solve the above technical problem, the present invention provides an artificial intelligence based image exposure correction method, as shown in fig. 1. The method specifically comprises the following steps:
the first step is to acquire the image. Firstly, shooting by a mobile phone under a severe illumination condition to obtain an image to be subjected to exposure correction, and then collecting depth information of the shot image to be subjected to exposure correction by a depth camera to obtain a depth map. And secondly, acquiring an exposure-corrected image by using a digital camera, and using the exposure-corrected image as a label output by the neural network. Because digital cameras have a higher exposure setting range, it is possible to take a picture of better quality under poor lighting conditions.
And then constructing a neural network to realize the correction of image exposure.
The image exposure correction neural network is a generative adversarial network that produces an exposure-corrected image. The adversarial network comprises a generator and a discriminator. The generator, guided by the discriminator, learns the characteristics of the training set data and reconstructs an image that conforms as closely as possible to the real distribution of that data, thereby generating similar data with the characteristics of the training set. The discriminator extracts image features; its inputs are the exposure-corrected image produced by the generator and the image of the same scene shot by the digital camera, and it is responsible for distinguishing whether an input image is real or a fake generated by the generator and feeding the result back to the generator. The two networks are trained alternately and improve in step, until the data produced by the generator can pass for real and a certain equilibrium is reached with the discriminator.
The generator has an encoder-decoder structure and can adopt a suitable encoder-decoder network model (the specific model named in the original figure is not recoverable). The input of the encoder is the fused image obtained by performing a correlation (joint operation, i.e. channel concatenation) of the RGB image to be exposure-corrected with the depth map, and its output is a feature map. The feature map is then input to the decoder, which, after fitting and up-sampling, outputs feature maps at different resolutions; the empirical number of resolution levels is 5, and the last layer of each of these feature maps has 3 channels (RGB images have three channels). The decoder finally outputs the exposure-corrected image. Assuming the input image resolution is 512 × 512, the 4 generated resolutions are 32, 64, 128 and 256, and with the original 512 × 512 resolution there are 5 in total.
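As an illustrative sketch of the fusion step described above (the function name and nested-list image representation are assumptions, not the patent's implementation), concatenating the RGB image with the depth map along the channel axis could look like:

```python
def concat_channels(rgb, depth):
    """Fuse an H x W x 3 RGB image (nested lists) with an H x W depth map
    by concatenating along the channel axis, giving an H x W x 4 input."""
    fused = []
    for rgb_row, depth_row in zip(rgb, depth):
        # each pixel becomes [R, G, B, depth]
        fused.append([list(px) + [d] for px, d in zip(rgb_row, depth_row)])
    return fused

# Toy 2x2 image: every pixel gains a 4th (depth) channel.
rgb = [[(10, 20, 30), (40, 50, 60)],
       [(70, 80, 90), (5, 6, 7)]]
depth = [[1.0, 2.0],
         [3.0, 4.0]]
fused = concat_channels(rgb, depth)
```

In a real encoder this 4-channel tensor would be the network input; plain lists keep the sketch dependency-free.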
A loss function is then constructed for the neural network. Optimizing the network parameters against this loss function reduces noise, blur and other artifacts and improves the performance of exposure correction. The constructed loss function combines a multi-scale spectral difference loss with an exposure-weighted mean squared error loss; the formula is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L = (1/N) · Σ_{i=1}^{N} w_i · (ŷ_i − y_i)² + λ · L_ms

in the formula: N is the number of pixels of the network input image; ŷ_i is the network output value of the i-th pixel of the input image; y_i is the image label value of the i-th pixel; w_i is the adaptive exposure weight of the i-th pixel; λ is the mapping coefficient; and L_ms is the multi-scale spectral difference loss function.
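A minimal, dependency-free sketch of the exposure-weighted mean squared error term of this loss (function names are illustrative; the multi-scale spectral term L_ms is passed in as a precomputed value):

```python
def exposure_weighted_mse(pred, label, weights):
    """(1/N) * sum_i w_i * (pred_i - label_i)**2 over N pixels,
    with pixel values flattened into plain lists."""
    n = len(pred)
    return sum(w * (p - t) ** 2 for p, t, w in zip(pred, label, weights)) / n

def total_loss(pred, label, weights, lam, spectral_loss):
    """Weighted MSE plus the mapping-coefficient-scaled spectral term."""
    return exposure_weighted_mse(pred, label, weights) + lam * spectral_loss

# Toy example: two pixels, the second (dark-region) pixel weighted twice.
loss = total_loss([0.5, 0.2], [0.4, 0.0], [1.0, 2.0], lam=0.1, spectral_loss=0.5)
```

For the toy inputs the weighted MSE is (1·0.1² + 2·0.2²)/2 ≈ 0.045, and the combined loss adds λ·L_ms = 0.05 on top.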
The adaptive exposure weight in the multi-scale spectral difference loss and exposure-weighted mean squared error loss function is obtained as follows:
An exposure weight is computed from the luminance channel of each exposure image. First, each exposure image is converted into the Lab color space, where L is the luminance channel on which the following calculation is performed. The L component of the Lab color space represents pixel brightness and ranges over [0, 100], from pure black to pure white, so the L channel needs to be normalized.
The adaptive exposure weight is:
(The original equation image is not recoverable; the Gaussian form below is reconstructed from the accompanying description, under which the weight is largest where the luminance approaches (1 − μ).)

w_i = exp( −(L_i − (1 − μ))² / (2σ²) )

in the formula: μ represents the mean luminance of the pixels of the image to be exposure-corrected; σ represents the standard deviation of the pixel luminances of that image with respect to μ; and L_i represents the normalized luminance value of the luminance channel of the i-th pixel of the image to be exposure-corrected.
With this formula, the closer a pixel's luminance value is to (1 − μ), the larger its weight; the formula therefore highlights dark areas in long-exposure images and bright areas in short-exposure images. That is, when the whole image is bright (long exposure), dark areas are given greater weight; when the whole image is dark (short exposure), bright areas are given greater weight.
An exposure weight is finally obtained for each pixel position of each image; the larger the weight, the more the brightness value at that position is compensated. The resulting image is called an adaptive exposure weight map.
In summary, the process converts each exposure image into the Lab color space via a color-space conversion and normalizes the luminance channel to obtain the adaptive exposure weight. The advantage of this adaptive weight calculation is that dark areas can be assigned larger weights for long exposures and bright areas larger weights for short exposures.
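The weighting behaviour can be sketched as follows. The Gaussian form and the fixed `sigma` hyperparameter are assumptions (the patent computes a per-image standard deviation); what the sketch preserves is that weights peak where the normalized luminance approaches (1 − mean luminance):

```python
import math

def adaptive_exposure_weights(lum, sigma=0.2):
    """Per-pixel exposure weights from normalized Lab luminance in [0, 1].
    Weights are largest where luminance is closest to (1 - mean luminance):
    dark pixels of a bright (long-exposure) image get large weights, and
    bright pixels of a dark (short-exposure) image do as well."""
    mu = sum(lum) / len(lum)
    center = 1.0 - mu
    return [math.exp(-((v - center) ** 2) / (2.0 * sigma ** 2)) for v in lum]

bright = adaptive_exposure_weights([0.9, 0.9, 0.1])  # mostly-bright (long) exposure
dark = adaptive_exposure_weights([0.1, 0.1, 0.9])    # mostly-dark (short) exposure
```

In the bright image the lone dark pixel receives the largest weight, and in the dark image the lone bright pixel does, matching the behaviour described above.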
The calculation formula of the multi-scale spectrum difference loss function in the multi-scale spectrum difference loss and exposure weighted mean square error loss function is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L_ms = Σ_{i=1}^{M} α_i · Σ_{j=1}^{C} (1 / (W_{i,j} · H_{i,j})) · Σ | FFT(F_{i,j}) − FFT(Y_{i,j}) |

in the formula: M is the number of feature-map resolution levels (i.e. the 5 resolution feature maps); C is the number of channels of the feature map (here 3); W_{i,j} is the width of the feature map of the j-th channel at the i-th resolution; H_{i,j} is the height of that feature map; F_{i,j} is the feature map of the j-th channel at the i-th resolution; Y_{i,j} is the exposure-corrected image for the j-th channel at the i-th resolution (the different resolutions can be realized by down-sampling); Σ denotes a pixel summation operation over the difference spectrogram; α_i is a scale factor (since the feature size decreases by a factor of 2 at each level); and FFT(·) performs a fast Fourier transform on the image, after which a logarithmic transform maps the data to a uniform value range.
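The down-sampling that produces the matching-resolution exposure-corrected images can be sketched as repeated 2x average pooling (names are illustrative; a real pipeline would use a library resize):

```python
def downsample2x(img):
    """Halve a grayscale image (list of equal-length rows) by averaging
    each non-overlapping 2x2 block."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def pyramid(img, levels):
    """Resolution pyramid: the original image plus (levels - 1) halvings,
    mirroring the 512 -> 256 -> 128 -> 64 -> 32 sequence in the text."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample2x(out[-1]))
    return out

pyr = pyramid([[1, 1, 3, 3],
               [1, 1, 3, 3],
               [5, 5, 7, 7],
               [5, 5, 7, 7]], levels=3)
```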
During neural network training, the FFT of the label images and of the predicted feature maps is computed, and then the average of the absolute differences between the label-image spectra and the predicted-feature-map spectra is taken. The proposed multi-scale spectral difference loss function can explicitly guide the network to learn the true frequency components of the distribution of correctly exposed images and to ignore the noise frequencies of poorly exposed inputs.
The multi-scale spectral difference loss function has the following advantages. First, it is differentiable and therefore suitable for training neural networks end-to-end using back-propagation. Second, it is generic and can be added to any existing framework without additional overhead. Third, by computing the FFT at different resolution levels, the network can see artifacts at different scales and learn a scale-independent representation, an ideal property for images. This loss function improves the performance of exposure correction by reducing noise, blur and other artifacts.
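A single-scale sketch of the spectral comparison (a naive O(N²) DFT over plain lists, with `eps` guarding log(0); the patent applies this per resolution level and per channel and weights levels by the scale factor, all of which is omitted here):

```python
import cmath
import math

def dft2(img):
    """Naive 2D discrete Fourier transform of a small grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / h + v * x / w))
            out[u][v] = s
    return out

def spectral_diff(pred, label, eps=1e-8):
    """Mean absolute difference of the log-magnitude spectra of two images."""
    fp, fl = dft2(pred), dft2(label)
    h, w = len(pred), len(pred[0])
    return sum(abs(math.log(abs(fp[u][v]) + eps) - math.log(abs(fl[u][v]) + eps))
               for u in range(h) for v in range(w)) / (h * w)
```

Identical images give a spectral difference of zero, and any mismatch in frequency content raises it, which is the property the loss relies on.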
Therefore, the neural network can be optimized against this objective function by an optimization method such as gradient descent, and exposure correction of the image is finally realized by the generator of the adversarial network.
Example 2
The specific scenario addressed by the invention is photographing with a mobile phone, where weak lighting conditions during shooting may produce dark, noisy and blurred images.
As shown in fig. 2, the present invention provides an artificial intelligence based image exposure correction system, comprising:
the input unit, into which the image shot by the camera of the mobile phone, or an image stored in the memory of the mobile phone, is input;
the processing unit, which inputs the image received from the input unit into the trained image exposure correction neural network for processing and acquires the resulting exposure-corrected image;
and the output unit, which displays the exposure-corrected image processed by the processing unit on the display screen of the mobile phone or stores it in the memory of the mobile phone.
The above embodiments are merely illustrative of the present invention, and should not be construed as limiting the scope of the present invention, and all designs identical or similar to the present invention are within the scope of the present invention.

Claims (8)

1. An artificial intelligence based image exposure correction method, characterized by comprising the following steps:
acquiring an image to be subjected to exposure correction and an image subjected to exposure correction;
image exposure correction neural network training: taking an image to be subjected to exposure correction as the input of an image exposure correction neural network, taking the image subjected to exposure correction as the output of the image exposure correction neural network, and carrying out neural network training;
the loss function formula included in the image exposure correction neural network training is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L = (1/N) · Σ_{i=1}^{N} w_i · (ŷ_i − y_i)² + λ · L_ms

in the formula: N is the number of pixels of the network input image; ŷ_i is the network output value of the i-th pixel of the input image; y_i is the image label value of the i-th pixel; w_i is the adaptive exposure weight of the i-th pixel; λ is the mapping coefficient; and L_ms is the multi-scale spectral difference loss function;
and outputting the exposure corrected image after the neural network training is finished.
2. The artificial intelligence based image exposure correction method according to claim 1, wherein the adaptive exposure weight is obtained by: converting each exposure image into the Lab color space through a color-space conversion, and normalizing the luminance channel in the Lab color space to obtain the adaptive exposure weight.
3. The artificial intelligence based image exposure correction method according to claim 2, wherein the adaptive exposure weight expression is:

(The original equation image is not recoverable; the Gaussian form below is reconstructed from the description.)

w_i = exp( −(L_i − (1 − μ))² / (2σ²) )

in the formula: μ represents the mean luminance of the pixels of the image to be exposure-corrected; σ represents the standard deviation of the pixel luminances of that image with respect to μ; and L_i represents the normalized luminance value of the luminance channel of the i-th pixel of the image to be exposure-corrected.
4. The artificial intelligence based image exposure correction method of claim 1, wherein the multi-scale spectral difference loss function is calculated by the following formula:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L_ms = Σ_{i=1}^{M} α_i · Σ_{j=1}^{C} (1 / (W_{i,j} · H_{i,j})) · Σ | FFT(F_{i,j}) − FFT(Y_{i,j}) |

in the formula: M is the number of feature-map resolution levels; C is the number of channels of the feature map; W_{i,j} is the width of the feature map of the j-th channel at the i-th resolution; H_{i,j} is the height of that feature map; F_{i,j} is the feature map of the j-th channel at the i-th resolution; Y_{i,j} is the exposure-corrected image for the j-th channel at the i-th resolution; Σ denotes a pixel summation operation over the difference spectrogram; α_i is a scale factor; and FFT(·) denotes performing a fast Fourier transform on the image.
5. The artificial intelligence based image exposure correction method of claim 1, wherein the image exposure correction neural network comprises a generator and a discriminator;
the generator is used for learning the data characteristics of the training set and generating similar data with the characteristics of the training set under the guidance of the discriminator;
the discriminator is used for distinguishing whether the input data is real or false data generated by the generator and feeding back the data to the generator.
6. The artificial intelligence based image exposure correction method of claim 5, wherein the generator is structured as an encoder-decoder; the input of the encoder is the preprocessed image to be subjected to exposure correction, and its output is a feature map; the input of the decoder is the feature map output by the encoder, and its outputs are feature maps at different resolutions and the exposure-corrected image.
7. The artificial intelligence based image exposure correction method of claim 1, wherein the image to be subjected to exposure correction is photographed by a mobile phone, and the image after exposure correction is photographed by a digital camera.
8. An artificial intelligence based image exposure correction system, comprising:
an input unit that inputs an acquired image to be subjected to exposure correction;
the processing unit, which inputs the image to be subjected to exposure correction received from the input unit into the trained image exposure correction neural network for processing, and acquires the resulting exposure-corrected image;
the loss function formula included in the image exposure correction neural network training is as follows:
(The original equation image is not recoverable; the formula below is reconstructed from the variable definitions.)

L = (1/N) · Σ_{i=1}^{N} w_i · (ŷ_i − y_i)² + λ · L_ms

in the formula: N is the number of pixels of the network input image; ŷ_i is the network output value of the i-th pixel of the input image; y_i is the image label value of the i-th pixel; w_i is the adaptive exposure weight of the i-th pixel; λ is the mapping coefficient; and L_ms is the multi-scale spectral difference loss function;
and an output unit that outputs the exposure-corrected image processed by the processing unit.
CN202111184389.6A | 2021-10-12 | 2021-10-12 | Image exposure correction method and system based on artificial intelligence | Active | CN113643214B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111184389.6A (CN113643214B (en)) | 2021-10-12 | 2021-10-12 | Image exposure correction method and system based on artificial intelligence

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111184389.6A (CN113643214B (en)) | 2021-10-12 | 2021-10-12 | Image exposure correction method and system based on artificial intelligence

Publications (2)

Publication Number | Publication Date
CN113643214A | 2021-11-12
CN113643214B | 2022-02-11

Family

ID=78426497

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202111184389.6AActiveCN113643214B (en)2021-10-122021-10-12Image exposure correction method and system based on artificial intelligence

Country Status (1)

Country | Link
CN (1) | CN113643214B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114638764B (en)* | 2022-03-25 | 2023-01-24 | Jiangsu Yuanzhen Intelligent Technology Co., Ltd. | Multi-exposure image fusion method and system based on artificial intelligence
CN114862698B (en)* | 2022-04-12 | 2024-06-07 | Beijing Institute of Technology | A real over-exposure image correction method and device based on channel guidance
CN114820356B (en)* | 2022-04-12 | 2025-04-29 | China Jiliang University | Exposure anomaly correction method for power tower images based on improved MDIN network
CN115240022B (en)* | 2022-06-09 | 2025-06-27 | Peking University | A low-light image enhancement method using long exposure compensation
CN115082312A (en)* | 2022-06-21 | 2022-09-20 | Guangdong University of Technology | A method and system for infrared image super-resolution based on degenerate distillation network
CN116071268B (en)* | 2023-03-01 | 2023-06-23 | Civil Aviation Flight University of China | Image de-illumination model based on contrastive learning and its training method
CN120471809B (en)* | 2025-07-15 | 2025-09-26 | Jilin Agricultural University | A global and local feature fusion method for image exposure correction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number  Priority date  Publication date  Assignee  Title
CN104620519A (en) *  2012-09-10  2015-05-13  Koninklijke Philips N.V.  Light detection system and method
CN111640068A (en) *  2019-03-01  2020-09-08  Tongji University  Unsupervised automatic correction method for image exposure
CN111835983A (en) *  2020-07-23  2020-10-27  Fuzhou University  A method and system for multi-exposure high dynamic range imaging based on generative adversarial network
CN113191995A (en) *  2021-04-30  2021-07-30  Northeastern University  Video image automatic exposure correction method based on deep learning



Similar Documents

Publication  Publication Date  Title
CN113643214B (en)  Image exposure correction method and system based on artificial intelligence
US11403740B2 (en)  Method and apparatus for image capturing and processing
US11882357B2 (en)  Image display method and device
US20230214976A1 (en)  Image fusion method and apparatus and training method and apparatus for image fusion model
US11431915B2 (en)  Image acquisition method, electronic device, and non-transitory computer readable storage medium
CN108198152B (en)  Image processing method and device, electronic equipment and computer readable storage medium
CN108156369B (en)  Image processing method and device
CN110349163B (en)  Image processing method and apparatus, electronic device, computer-readable storage medium
JP2012217102A (en)  Image processing device and control method and program for the same
CN117135293B (en)  Image processing method and electronic device
CN114638764B (en)  Multi-exposure image fusion method and system based on artificial intelligence
CN114820405A (en)  Image fusion method, device, equipment and computer readable storage medium
US11671714B1 (en)  Motion based exposure control
CN107995396B (en)  Two camera modules and terminal
CN110047060A (en)  Image processing method, device, storage medium and electronic device
CN113691724A (en)  HDR scene detection method and device, terminal and readable storage medium
WO2020133331A1 (en)  Systems and methods for exposure control
CN107835351B (en)  Two camera modules and terminal
CN113962844B (en)  Image fusion method, storage medium and terminal device
CN108668124B (en)  Photosensitive chip testing method and equipment based on charge calculation
US12160670B2 (en)  High dynamic range (HDR) image generation using a combined short exposure image
CN115170420A (en)  Image contrast processing method and system
KR101039404B1 (en)  Image signal processor, smart phone and automatic exposure control method
JP6554009B2 (en)  Image processing apparatus, control method thereof, program, and recording medium
JP2021093694A (en)  Information processing apparatus and method for controlling the same

Legal Events

Date  Code  Title  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
