Disclosure of Invention
The high dynamic range image full-resolution reconstruction method, the high dynamic range image full-resolution reconstruction device and the electronic equipment can effectively realize a high dynamic range function in video, and improve the handling of image details and motion scenes, thereby improving the quality of the high dynamic range image.
In a first aspect, the present invention provides a full resolution reconstruction method for a high dynamic range image, the method comprising:
acquiring an image simultaneously containing long exposure pixels and short exposure pixels through a high dynamic range sensor;
calculating gradient values of the region of the current pixel point in all directions to obtain gradient information of the region of the current pixel point in four directions;
judging the direction of the region where the current pixel point is located according to the gradient information, if the gradient values in the four directions are all smaller than a set threshold value, the region is a flat region, when the region where the current pixel point is located is not the flat region, preliminarily determining the direction with the smaller gradient value as an interpolation direction, and further judging whether the direction is a preferential interpolation direction or a non-preferential interpolation direction;
when the direction with the smaller gradient is the non-priority interpolation direction, judging whether the ratio of the gradient of the current pixel point in the non-priority interpolation direction to the gradient of the priority interpolation direction is smaller than a set threshold value, and judging whether the area where the current pixel point is located is any one of a saturated area, a low-brightness area and a motion area, and if either condition is met, giving up the non-priority interpolation direction;
performing interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result;
performing motion detection on the area where the current pixel is located, judging whether the motion information exceeds a preset threshold value, performing motion compensation processing on pixel values where motion occurs, and removing ghosting;
and performing brightness estimation on the region of the current pixel, and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold value.
Optionally, the exposure ratio of the image acquired by the high dynamic range sensor is 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being the ratio of the long exposure time and the short exposure time of the image within the same frame.
Optionally, the calculating the gradient value of each direction of the region where the current pixel point is located includes:
converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and acquiring a horizontal gradient value, a vertical gradient value, an oblique 45-degree gradient value and an oblique 135-degree gradient value of the region where the converted current pixel point is located by using a corresponding gradient detection operator.
Optionally, the converting the pixel value of the short exposure position of the region where the current pixel point is located into a long exposure pixel value according to the exposure proportion includes: multiplying the pixel value of the short exposure position of the area where the current pixel point is located by the exposure proportion to obtain a long exposure pixel value of the short exposure position of the area where the current pixel point is located;
the step of converting the pixel value of the long exposure position of the area where the current pixel point is located into the short exposure pixel value according to the exposure proportion comprises the following steps: and dividing the pixel value of the long exposure position of the area where the current pixel point is located by the exposure proportion to obtain the short exposure pixel value of the long exposure position of the area where the current pixel point is located.
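The multiplication and division conversions described above can be sketched as follows; the function names and the scalar signature are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of the exposure-domain conversions described above.
# Function names and the scalar signature are illustrative assumptions.

def short_to_long(pixel_value, exposure_ratio):
    """Map a short-exposure pixel value into the long-exposure domain."""
    return pixel_value * exposure_ratio

def long_to_short(pixel_value, exposure_ratio):
    """Map a long-exposure pixel value into the short-exposure domain."""
    return pixel_value / exposure_ratio
```

In practice the same conversion would be applied per pixel over a block or the whole image; only the scalar form is shown here.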
Optionally, when the region where the current pixel is located is not a flat region, determining whether the direction in which the current pixel needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction includes:
when a pixel point different from the current pixel exposure time exists in the direction needing interpolation, judging that the interpolation direction is a preferential interpolation direction;
and when no pixel point with the exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction.
Optionally, the performing, according to the interpolation direction determination result, interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point includes:
when the direction needing interpolation is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point;
when the direction needing interpolation is a non-priority interpolation direction, the non-priority interpolation direction is selected with the following limitations:
and if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value, or the region in which the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, giving up to select the non-preferential interpolation direction, and performing interpolation calculation of the preferential interpolation direction on the current pixel point.
Optionally, when the direction to be interpolated is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point includes:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, directly interpolating by using the short exposure pixel in the direction needing interpolation to obtain a short exposure pixel value of the current pixel position;
when the current pixel point is a short exposure pixel and the pixel to be interpolated is a long exposure pixel, directly interpolating by using the long exposure pixel in the direction needing interpolation to obtain a long exposure pixel value of the current pixel position;
when the direction needing interpolation is a non-preferential interpolation direction, the interpolation calculation of the current pixel point in the non-preferential interpolation direction comprises the following steps:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, converting the long exposure pixel value in the direction needing interpolation into a short exposure pixel value, and then performing interpolation calculation, wherein the conversion method is that the long exposure pixel value is divided by an exposure proportion;
and when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value, and then performing interpolation calculation, wherein the conversion method is to multiply the short-exposure pixel value by an exposure proportion.
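The four interpolation cases above can be sketched as two helper functions, assuming a simple average of the neighbor values along the chosen direction as the interpolation kernel (the excerpt does not specify the exact kernel or weights):

```python
# Minimal sketch of priority vs. non-priority interpolation, assuming a
# two-neighbor average as the kernel; the real operator is not specified.

def interpolate_priority(neighbor_values):
    # Neighbors already have the target exposure: interpolate directly.
    return sum(neighbor_values) / len(neighbor_values)

def interpolate_non_priority(neighbor_values, exposure_ratio, target_is_short):
    # Neighbors share the current pixel's exposure, so convert them to the
    # target exposure domain before interpolating.
    if target_is_short:
        converted = [v / exposure_ratio for v in neighbor_values]  # long -> short
    else:
        converted = [v * exposure_ratio for v in neighbor_values]  # short -> long
    return sum(converted) / len(converted)
```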
Optionally, the method further comprises:
when the region where the current pixel point is located is a flat region, the current pixel point is a long exposure pixel, and the pixel to be interpolated is a short exposure pixel, directly interpolating all short exposure pixels around the current pixel point to obtain a short exposure pixel value of the current long exposure position;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Optionally, the performing of the brightness estimation on the region where the current pixel is located, and performing the fusion decision and the processing on the image by using the brightness value of the region where the current pixel is located and a preset fusion threshold includes:
calculating the brightness value of the area where the current pixel point is located;
comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
when the brightness value is smaller than the first threshold value, image fusion is carried out on the area where the current pixel point is located by adopting a long exposure pixel value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
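The two-threshold fusion decision above can be sketched as follows; mapping the short-exposure value into the long-exposure domain before blending, and the linear form of the blend weight, are assumptions not stated in the excerpt:

```python
# Hedged sketch of the two-threshold fusion decision described above.
# The linear blend weight and the short->long domain mapping are assumptions.

def fuse_pixel(luma, long_val, short_val, exposure_ratio, t1, t2):
    short_mapped = short_val * exposure_ratio  # bring short exposure to long domain
    if luma < t1:
        return long_val                        # dark region: use long exposure
    if luma > t2:
        return short_mapped                    # bright region: use short exposure
    w = (luma - t1) / (t2 - t1)                # assumed linear blend weight
    return (1 - w) * long_val + w * short_mapped
```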
In a second aspect, the present invention provides a high dynamic range image full resolution reconstruction apparatus, comprising:
the acquisition unit is used for acquiring an image simultaneously comprising long exposure pixels and short exposure pixels through the high dynamic range sensor;
the first calculation unit is used for calculating gradient values of the region where the current pixel point is located in all directions to obtain gradient information of the region where the current pixel point is located in four directions;
the first judgment unit is used for judging the direction of the region where the current pixel point is located according to the gradient information, and if the gradient values in the four directions are all smaller than a set threshold value, the region is a flat region;
a second judging unit, configured to preliminarily determine, when the region where the current pixel is located is not a flat region, that the direction with the smaller gradient value is an interpolation direction, and further judge whether the direction in which the current pixel needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction;
the third judging unit is used for judging, when the direction with the smaller gradient is the non-priority interpolation direction, whether the ratio of the gradient of the current pixel point in the non-priority interpolation direction to the gradient of the priority interpolation direction is smaller than a set threshold value, and judging whether the area where the current pixel point is located is any one of a saturated area, a low-brightness area and a motion area, and giving up the non-priority interpolation direction if either condition is met;
the second calculation unit is used for carrying out interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the judgment result of the interpolation direction;
the first processing unit is used for carrying out motion detection on the area where the current pixel is located, judging whether the motion information exceeds a preset threshold value, carrying out motion compensation processing on pixel values where motion occurs, and removing ghosting;
and the second processing unit is used for performing brightness estimation on the region of the current pixel and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold.
Optionally, an exposure ratio of an image acquired by the high dynamic range sensor is set to 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being a ratio of a long exposure time and a short exposure time of the image within the same frame.
Optionally, the first computing unit includes:
the conversion module is used for converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and the acquisition module is used for acquiring a horizontal gradient value, a vertical gradient value, an oblique 45-degree gradient value and an oblique 135-degree gradient value of the converted region where the current pixel point is located by using the corresponding gradient detection operator.
Optionally, the conversion module is configured to multiply the pixel value of the short exposure position of the region where the current pixel point is located by the exposure ratio to obtain a long exposure pixel value of the short exposure position of the region where the current pixel point is located, or divide the pixel value of the long exposure position of the region where the current pixel point is located by the exposure ratio to obtain a short exposure pixel value of the long exposure position of the region where the current pixel point is located.
Optionally, the second determining unit is configured to determine, when a pixel point different from the current pixel exposure time exists in a direction in which interpolation is required, that the interpolation direction is a preferential interpolation direction;
and when no pixel point with the exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction.
Optionally, the second computing unit includes:
the first calculation module is used for performing interpolation calculation of a preferential interpolation direction on the current pixel point when the direction needing interpolation is the preferential interpolation direction;
a second calculating module, configured to, when the direction needing to be interpolated is a non-priority interpolation direction, select the non-priority interpolation direction with the following limitations: and if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value, or the region in which the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, giving up to select the non-preferential interpolation direction, and performing interpolation calculation of the preferential interpolation direction on the current pixel point.
Optionally, the second calculation module comprises:
the first calculation submodule is used for directly carrying out interpolation by using the short-exposure pixel in the direction needing interpolation to obtain the short-exposure pixel value of the current pixel position when the current pixel point is the long-exposure pixel and the pixel to be interpolated is the short-exposure pixel;
the second calculation submodule is used for directly carrying out interpolation by using the long exposure pixel in the direction needing interpolation to obtain the long exposure pixel value of the current pixel position when the current pixel point is the short exposure pixel and the pixel to be interpolated is the long exposure pixel;
a third computing submodule, configured to convert the long-exposure pixel value in the direction requiring interpolation into a short-exposure pixel value when the current pixel point is a long-exposure pixel and the pixel to be interpolated is a short-exposure pixel, and then perform interpolation computation, where the conversion method is dividing the long-exposure pixel value by an exposure proportion;
and the fourth calculation submodule is used for converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is multiplied by the exposure proportion.
Optionally, the apparatus further comprises:
the third calculation unit is used for directly interpolating all the short-exposure pixels around the current pixel point to obtain a short-exposure pixel value of the current long-exposure position when the region where the current pixel point is located is a flat region, the current pixel point is a long-exposure pixel, and the pixel to be interpolated is a short-exposure pixel;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Optionally, the second processing unit comprises:
the brightness calculation module is used for calculating the brightness value of the area where the current pixel point is located;
the comparison module is used for comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
the fusion processing module is used for carrying out image fusion on the area where the current pixel point is located by adopting a long exposure pixel value when the brightness value is smaller than the first threshold value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
In a third aspect, the present invention provides an electronic device, which includes the above-mentioned high dynamic range image full resolution reconstruction apparatus.
The high dynamic range image full resolution reconstruction method, device and electronic equipment provided by the embodiments of the invention acquire, through a high dynamic range sensor, a high dynamic range image containing both long exposure pixels and short exposure pixels; calculate the gradient value of each direction of the region where the current pixel point is located; judge whether the region where the current pixel point is located is a flat region; when the region is not a flat region, judge whether the direction in which the current pixel point needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction, and whether the restriction conditions on the non-preferential interpolation direction are met; perform interpolation calculation in the preferential or non-preferential interpolation direction on the current pixel point according to the judgment results; and perform motion compensation and fusion processing on the region where the pixel point is located after the interpolation calculation. Compared with the prior art, the full-resolution reconstruction algorithm is realized by hardware, can effectively realize the high dynamic range function in video, and has better processing capability in terms of image resolving power, image details and motion scenes, and can therefore provide high-quality high dynamic range images.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Another direction of high dynamic range technology is in-camera HDR implemented in hardware by the high dynamic range sensor, which supports partitioned high dynamic range exposure. For example, the second-generation stacked image sensor IMX214 developed by Sony Corporation can precisely control the exposure time of each row of pixels within the same frame, and images with different exposure times can be obtained in the same frame either by alternating long and short exposures every two rows or in a zigzag pattern. Because the long and short exposure pixels each account for half of the total pixels of the whole image, the image collected by such a sensor needs full-resolution reconstruction: the full-resolution long exposure image and the full-resolution short exposure image are restored through interpolation, and the dynamic range is then improved through a fusion algorithm.
The invention provides a high dynamic range image full resolution reconstruction method, as shown in fig. 1, the method comprises the following steps:
s11, acquiring an image containing both long-exposure pixels and short-exposure pixels through a high dynamic range sensor;
an image shot by the high dynamic range sensor is shown in fig. 2, wherein white pixel points represent long exposure pixels, gray pixel points represent short exposure pixels, and the long and short exposure pixels are alternately arranged in a zigzag manner across the whole image. Long exposure pixels have a longer exposure time, so the image signal collected in low-brightness parts of the scene contains more effective information. Short exposure pixels have a shorter exposure time, so the image signal in high-brightness parts of the scene is not saturated and some detail information is retained. Full-resolution restoration of the original image containing long and short exposure pixel values yields a long exposure image and a short exposure image; fusing them then improves the dynamic range of the image, so that the detail information of both low-brightness and high-brightness parts is better presented in the same image.
The long exposure time and the short exposure time have a predetermined proportional relationship, defined as the exposure ratio. Optionally, the exposure ratio is set to 1:1, 1:2, 1:4, 1:8, 1:16, and so on; when the exposure ratio is set to 1:1 the exposure times are equal, which is equivalent to a common image sensor.
As shown in fig. 3, the full resolution reconstruction is performed by interpolating the missing short-exposure pixel values at the long-exposure positions and interpolating the missing long-exposure pixel values at the short-exposure positions. And obtaining a long exposure image with full resolution and a short exposure image with full resolution after the interpolation is finished.
The full resolution reconstruction method comprises the following steps: gradient calculation, interpolation direction selection, interpolation calculation, motion compensation and image fusion. The full-resolution reconstruction algorithm is realized by hardware, so that the problem of multi-frame fusion by adopting a software scheme can be better avoided, the high-dynamic-range function can be effectively realized in a video, the processing capability is better in the aspects of image analysis force, image details, motion scenes and the like, and the high-dynamic-range image with higher quality can be provided.
S12, calculating gradient values of the region where the current pixel point is located in all directions to obtain gradient information of the region where the current pixel point is located in four directions;
interpolation is implemented based on an n×n pixel block in the neighborhood of the current pixel point in the image arranged in the original zigzag manner. Optionally, the original input 5 × 5 pixel block is converted into a 5 × 5 long-exposure pixel block before calculating the gradient; the conversion method is to multiply the pixel value of each short-exposure position by the corresponding exposure ratio. As shown in fig. 4, the short-exposure pixel values at all gray positions in the image are multiplied by the exposure ratio and converted into long-exposure pixel values.
Alternatively, the original input 5 × 5 pixel block is converted to a 5 × 5 short-exposure pixel block by dividing the pixel value of the long-exposure position by the corresponding exposure ratio.
Gradient calculations are performed based on the transformed 5 x 5 long-exposure pixel blocks, and the gradient information will guide directional interpolation for full resolution reconstruction. Optionally, a corresponding gradient detection operator is used to obtain a horizontal gradient value (h), a vertical gradient value (v), an oblique 45-degree gradient value (45), an oblique 135-degree gradient value (135), and gradient values in four directions are gradient information of the current pixel neighborhood.
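The four-direction gradient computation can be sketched on the converted 5 × 5 block as follows; the sum-of-absolute-differences operators used here are illustrative, since the excerpt does not specify the exact gradient detection operators:

```python
import numpy as np

# Illustrative four-direction gradient estimation on a converted 5x5 block.
# The sum-of-absolute-differences operators are assumptions, not the
# disclosed detection operators.

def directional_gradients(block):
    """block: 5x5 array of pixel values in a single exposure domain."""
    b = np.asarray(block, dtype=float)
    h = np.abs(b[:, 1:] - b[:, :-1]).sum()         # horizontal
    v = np.abs(b[1:, :] - b[:-1, :]).sum()         # vertical
    d45 = np.abs(b[1:, :-1] - b[:-1, 1:]).sum()    # oblique 45 degrees
    d135 = np.abs(b[1:, 1:] - b[:-1, :-1]).sum()   # oblique 135 degrees
    return {"h": h, "v": v, "45": d45, "135": d135}
```

A block with a pure horizontal ramp, for instance, yields a large horizontal gradient and a zero vertical gradient, which would steer interpolation along the vertical (low-gradient) direction.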
S13, judging the direction of the area where the current pixel point is located according to the gradient information, if the gradient values in the four directions are all smaller than a set threshold value, the area is a flat area, when the area where the current pixel point is located is not a flat area, preliminarily determining the direction with the smaller gradient value as an interpolation direction, and further judging whether the direction is a preferential interpolation direction or a non-preferential interpolation direction;
firstly, judging a flat area according to the gradient information, if the gradient information meets the judgment logic of the flat area, the interpolation direction is flat, the interpolation of the flat area is non-directional interpolation, and the interpolation mode is to calculate the average value of the surrounding pixel values.
If the gradient information does not conform to the logic of the flat region, a direction determination is required. The interpolation direction is judged by utilizing the gradient information (h/v/45/135) extracted by the gradient calculation, and the interpolation is preferentially carried out along the direction with smaller gradient, thereby being beneficial to protecting the high-frequency information of the image, such as edge and detail, and reducing the loss of the image resolution.
The invention has a special definition mode for the interpolation direction, which comprises the following specific steps:
and if a pixel point with different exposure time from the current pixel exists in the interpolation direction to be carried out, defining the pixel point as a preferential interpolation direction. And if no pixel point with the exposure time different from that of the current pixel exists in the interpolation direction needing to be carried out, defining the direction as a non-preferential interpolation direction.
Specifically, since the positions of the long and short exposure pixel arrangements of the R/B channel and the G channel are different, the preferential interpolation direction and the non-preferential interpolation direction are different in both cases.
As shown in fig. 5, the central pixel point p(2,2) in graph (a) holds the long exposure pixel value B, and the short exposure pixel value at that position needs to be obtained through interpolation. Short exposure pixel points p(2,0) and p(2,4) exist in the horizontal direction, and short exposure pixel points p(0,2) and p(4,2) exist in the vertical direction; since their exposure time differs from that of the central pixel point, the short exposure value of the central pixel can be obtained directly by interpolating these pixel points, and a direction in which interpolation can be performed directly is a preferential interpolation direction. In the oblique 45-degree and 135-degree directions, the positions p(0,0) and p(4,4), and p(0,4) and p(4,0), are long exposure pixel points; the short exposure pixel value at the central position cannot be obtained from these values by direct interpolation and must be calculated in combination with the exposure ratio, so a direction that cannot be interpolated directly is a non-preferential interpolation direction.
For the R/B channel, horizontal and vertical are preferential interpolation directions, and the oblique 45-degree and 135-degree directions are non-preferential interpolation directions. Similarly, as shown in graph (b) of fig. 5, for the G channel, the oblique 45-degree and 135-degree directions are preferential interpolation directions, and horizontal and vertical are non-preferential interpolation directions.
In summary, the preferential interpolation direction and the non-preferential interpolation direction of the current pixel point can be obtained according to the pixel of which the current pixel point is the R/B channel or the G channel, and then the direction needing to be interpolated is determined according to the calculated gradient information, so that whether the direction needing to be interpolated is the preferential interpolation direction or the non-preferential interpolation direction can be known.
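The channel-dependent classification summarized above can be sketched as a small lookup; the direction labels and function name are illustrative:

```python
# Illustrative lookup for the channel-dependent direction classification.
# Direction labels ("h", "v", "45", "135") are assumed names.

PRIORITY_DIRECTIONS = {
    "RB": {"h", "v"},     # R/B channel: horizontal and vertical
    "G": {"45", "135"},   # G channel: the two oblique directions
}

def classify_direction(channel, direction):
    """Return 'priority' or 'non-priority' for the direction to interpolate."""
    return "priority" if direction in PRIORITY_DIRECTIONS[channel] else "non-priority"
```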
S14, when the direction with smaller gradient is the non-priority interpolation direction, judging whether the ratio of the gradient of the non-priority interpolation direction of the current pixel point to the gradient of the priority interpolation direction is smaller than a set threshold value, judging whether the area where the current pixel point is located is a saturated area, a low-brightness area or a motion area, and if the conditions are met, giving up the non-priority interpolation direction;
there are some constraints on the selection of the non-preferential interpolation direction; only the preferential interpolation direction can be selected under any of the following conditions:
(1) the ratio of the gradient of the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value;
(2) the region where the current pixel point is located is a saturated region, a low-brightness region or a motion region.
When any of these conditions is satisfied, the long exposure pixel values and the short exposure pixel values in the neighborhood do not maintain a fixed proportional relationship, and erroneous interpolation may occur if a non-preferential interpolation direction is selected.
As shown in steps S12, S13, and S14, the interpolation direction determination method includes the following steps:
calculating to obtain gradient information in the horizontal direction, the vertical direction, the 45-degree oblique direction and the 135-degree oblique direction;
judging whether the area where the pixel points are located is a flat area or not;
when the judgment result is a flat area, selecting non-directional interpolation;
when the judgment result is not a flat area, judging whether the current pixel point is an R/B channel or a G channel;
when the current pixel point is an R/B channel, judging whether the area where the current pixel point is located meets one of the limiting conditions of the non-preferential interpolation direction, and selecting the preferential interpolation direction (horizontal direction or vertical direction) when any one limiting condition is met; otherwise, selecting the interpolation direction directly according to whether the direction needing interpolation is a preferential interpolation direction or a non-preferential interpolation direction;
when the current pixel point is a G channel, judging whether the area where the current pixel point is located meets one of the limiting conditions of the non-preferential interpolation direction, and selecting the preferential interpolation direction (45-degree oblique direction or 135-degree oblique direction) when any one limiting condition is met; otherwise, the interpolation direction is selected directly according to whether the direction needing interpolation is a preferential interpolation direction or a non-preferential interpolation direction.
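The decision flow of the steps above could be sketched as follows; this is a minimal illustration, and the gradient dictionary keys, channel labels and the externally computed `constraint_met` flag are assumptions:

```python
def choose_interpolation_direction(gradients, flat_threshold, channel,
                                   constraint_met):
    """Pick the interpolation direction for the current pixel point.

    gradients: dict with 'h', 'v', 'd45', 'd135' gradient magnitudes.
    channel: 'RB' or 'G'.  constraint_met: True when any limiting
    condition of the non-preferential direction holds.
    """
    # Flat area: all four gradients below the threshold -> non-directional.
    if all(g < flat_threshold for g in gradients.values()):
        return 'non-directional'
    # Preferential directions depend on the channel of the current pixel.
    preferential = ('h', 'v') if channel == 'RB' else ('d45', 'd135')
    if constraint_met:
        # A limiting condition holds: restrict the choice to the
        # preferential interpolation directions.
        return min(preferential, key=lambda d: gradients[d])
    # Otherwise pick the direction with the smallest gradient overall.
    return min(gradients, key=gradients.get)
```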
S15, performing interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result;
According to the determination result of step S14, for a pixel point that satisfies any one of the limiting conditions of the non-preferential interpolation direction, interpolation calculation in the preferential interpolation direction is performed. For a pixel point that satisfies none of the limiting conditions, when the direction requiring interpolation is a preferential interpolation direction, interpolation calculation in the preferential interpolation direction is performed on the current pixel point; when the direction requiring interpolation is a non-preferential interpolation direction, interpolation calculation in the non-preferential interpolation direction is performed on the current pixel point.
Optionally, when the direction needing interpolation is a preferential interpolation direction, the interpolation calculation of the preferential interpolation direction on the current pixel point comprises the following steps: when the current pixel point is a long-exposure pixel, directly interpolating the short-exposure pixels in the direction needing interpolation to obtain a short-exposure pixel value of the current pixel point;
when the current pixel point is a short exposure pixel, directly interpolating the long exposure pixel in the direction needing interpolation to obtain a long exposure pixel value of the current pixel point;
when the direction needing interpolation is a non-preferential interpolation direction, the interpolation calculation of the non-preferential interpolation direction on the current pixel point comprises the following steps:
when the current pixel point is a long-exposure pixel, converting the long-exposure pixel values in the direction needing interpolation into short-exposure pixel values and then performing interpolation calculation, the conversion method being to divide the long-exposure pixel value by the exposure ratio;
and when the current pixel point is a short-exposure pixel, converting the short-exposure pixel values in the direction needing interpolation into long-exposure pixel values and then performing interpolation calculation, the conversion method being to multiply the short-exposure pixel value by the exposure ratio.
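The exposure-domain conversion used in the non-preferential case amounts to scaling by the exposure ratio. A minimal sketch, assuming the ratio is expressed as the long exposure time divided by the short exposure time:

```python
def to_short_exposure(long_value, exposure_ratio):
    # A long-exposure value divided by the exposure ratio yields the
    # equivalent short-exposure value.
    return long_value / exposure_ratio

def to_long_exposure(short_value, exposure_ratio):
    # A short-exposure value multiplied by the exposure ratio yields the
    # equivalent long-exposure value.
    return short_value * exposure_ratio
```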
Optionally, the method further comprises:
when the region where the current pixel point is located is a flat region and the current pixel point is a long exposure pixel, directly interpolating all short exposure pixels around the current pixel point to obtain a short exposure pixel value of the current long exposure position;
when the area where the current pixel point is located is a flat area and the current pixel point is a short-exposure pixel, directly interpolating all long-exposure pixels around the current pixel point to obtain a long-exposure pixel value of the current short-exposure position.
Specifically, according to the interpolation direction determined in step S14, the interpolation of the current pixel is completed within the input 5 × 5 pixel block:
When the current pixel is an R or B channel long-exposure pixel: if the interpolation direction is the horizontal or vertical direction, the interpolation belongs to the preferential interpolation direction, and the short-exposure pixels in that direction within the 5 × 5 pixel block are used to interpolate directly, obtaining the short-exposure pixel value of the current long-exposure position. If the interpolation direction is the 45-degree or 135-degree oblique direction, the interpolation belongs to the non-preferential interpolation direction; the long-exposure pixels in that direction within the 5 × 5 pixel block are used for interpolation, each long-exposure pixel value participating in the interpolation being first converted into a short-exposure pixel value by dividing it by the exposure ratio, so that the short-exposure pixel value of the current long-exposure position is obtained. If the current pixel is located in a flat area and has no obvious interpolation direction, all the short-exposure pixels around the current pixel in the 5 × 5 pixel block are used to interpolate directly, obtaining the short-exposure pixel value of the current long-exposure position.
When the current pixel is an R or B channel short-exposure pixel: if the interpolation direction is the horizontal or vertical direction, the interpolation belongs to the preferential interpolation direction, and the long-exposure pixels in that direction within the 5 × 5 pixel block are used to interpolate directly, obtaining the long-exposure pixel value of the current short-exposure position. If the interpolation direction is the 45-degree or 135-degree oblique direction, the interpolation belongs to the non-preferential interpolation direction; the short-exposure pixels in that direction within the 5 × 5 pixel block are used for interpolation, each short-exposure pixel value participating in the interpolation being first converted into a long-exposure pixel value by multiplying it by the exposure ratio, so that the long-exposure pixel value of the current short-exposure position is obtained. If the current pixel is located in a flat area and has no obvious interpolation direction, all the long-exposure pixels around the current pixel in the 5 × 5 pixel block are used to interpolate directly, obtaining the long-exposure pixel value of the current short-exposure position.
The R/B channel interpolation calculation method is shown in table 1:
TABLE 1
Current pixel | Interpolation direction | Interpolation method
R/B long exposure | horizontal / vertical (preferential) | directly interpolate with the short-exposure pixels in that direction
R/B long exposure | 45° / 135° oblique (non-preferential) | interpolate with the long-exposure pixels in that direction, each divided by the exposure ratio
R/B long exposure | none (flat area) | directly interpolate with all surrounding short-exposure pixels
R/B short exposure | horizontal / vertical (preferential) | directly interpolate with the long-exposure pixels in that direction
R/B short exposure | 45° / 135° oblique (non-preferential) | interpolate with the short-exposure pixels in that direction, each multiplied by the exposure ratio
R/B short exposure | none (flat area) | directly interpolate with all surrounding long-exposure pixels
When the current pixel is a G-channel long-exposure pixel: if the interpolation direction is the 45-degree or 135-degree oblique direction, the interpolation belongs to the preferential interpolation direction, and the short-exposure pixels in that direction within the 5 × 5 pixel block are used to interpolate directly, obtaining the short-exposure pixel value of the current long-exposure position. If the interpolation direction is the horizontal or vertical direction, the interpolation belongs to the non-preferential interpolation direction; the long-exposure pixels in that direction within the 5 × 5 pixel block are used for interpolation, each long-exposure pixel value participating in the interpolation being first converted into a short-exposure pixel value by dividing it by the exposure ratio, so that the short-exposure pixel value of the current long-exposure position is obtained. If the current pixel is located in a flat area and has no obvious interpolation direction, all the short-exposure pixels around the current pixel in the 5 × 5 pixel block are used to interpolate directly, obtaining the short-exposure pixel value of the current long-exposure position.
When the current pixel is a G-channel short-exposure pixel: if the interpolation direction is the 45-degree or 135-degree oblique direction, the interpolation belongs to the preferential interpolation direction, and the long-exposure pixels in that direction within the 5 × 5 pixel block are used to interpolate directly, obtaining the long-exposure pixel value of the current short-exposure position. If the interpolation direction is the horizontal or vertical direction, the interpolation belongs to the non-preferential interpolation direction; the short-exposure pixels in that direction within the 5 × 5 pixel block are used for interpolation, each short-exposure pixel value participating in the interpolation being first converted into a long-exposure pixel value by multiplying it by the exposure ratio, so that the long-exposure pixel value of the current short-exposure position is obtained. If the current pixel is located in a flat area and has no obvious interpolation direction, all the long-exposure pixels around the current pixel in the 5 × 5 pixel block are used to interpolate directly, obtaining the long-exposure pixel value of the current short-exposure position.
The G channel interpolation calculation method is shown in table 2:
TABLE 2
Current pixel | Interpolation direction | Interpolation method
G long exposure | 45° / 135° oblique (preferential) | directly interpolate with the short-exposure pixels in that direction
G long exposure | horizontal / vertical (non-preferential) | interpolate with the long-exposure pixels in that direction, each divided by the exposure ratio
G long exposure | none (flat area) | directly interpolate with all surrounding short-exposure pixels
G short exposure | 45° / 135° oblique (preferential) | directly interpolate with the long-exposure pixels in that direction
G short exposure | horizontal / vertical (non-preferential) | interpolate with the short-exposure pixels in that direction, each multiplied by the exposure ratio
G short exposure | none (flat area) | directly interpolate with all surrounding long-exposure pixels
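For illustration, the directional interpolation within the 5 × 5 pixel block could be sketched as follows; the neighbor offsets and the plain averaging are assumptions, since the disclosure does not specify the exact interpolation kernel:

```python
import numpy as np

def interpolate_direction(block, direction, long_mask, want_short,
                          exposure_ratio):
    """Average the neighbors of the 5x5 block center along one direction.

    block: 5x5 array of pixel values; long_mask: True where a pixel is
    long-exposure; want_short: whether a short-exposure value is wanted
    at the center position.  Offsets and averaging are illustrative.
    """
    offsets = {
        'h':    [(0, -2), (0, -1), (0, 1), (0, 2)],
        'v':    [(-2, 0), (-1, 0), (1, 0), (2, 0)],
        'd45':  [(-2, 2), (-1, 1), (1, -1), (2, -2)],
        'd135': [(-2, -2), (-1, -1), (1, 1), (2, 2)],
    }[direction]
    samples = []
    for dy, dx in offsets:
        y, x = 2 + dy, 2 + dx
        value, is_long = float(block[y, x]), bool(long_mask[y, x])
        # Non-preferential case: same-exposure neighbors must first be
        # converted into the wanted exposure domain.
        if want_short and is_long:
            value /= exposure_ratio
        elif not want_short and not is_long:
            value *= exposure_ratio
        samples.append(value)
    return sum(samples) / len(samples)
```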
S16, performing motion detection on the area where the current pixel is located, judging whether the motion information exceeds a preset threshold value, performing motion compensation processing on pixel values where motion occurs, and removing ghosting;
Motion compensation addresses the problem that, when a moving object exists in the scene, the different exposure times cause the object to occupy different positions in the two frames of images participating in fusion, producing ghost images in the fused result.
When the central pixel of the input 5 × 5 pixel block is a long-exposure pixel, motion detection calculation is performed within the pixel block to obtain motion information (motion). Since the current pixel is long-exposure, interpolation of the short-exposure pixels must be completed through the complete full-resolution recovery interpolation algorithm to obtain the short-exposure pixel value at the current pixel position; motion compensation processing is then performed using the detected motion information, the long-exposure pixel value (input) and the short-exposure pixel value (interpolated).
When the central pixel of the input 5 × 5 pixel block is a short-exposure pixel, motion detection calculation is performed within the pixel block to obtain motion information (motion). Since the current pixel is short-exposure, interpolation of the long-exposure pixels must be completed through the complete full-resolution recovery interpolation algorithm to obtain the long-exposure pixel value at the current pixel position; motion compensation processing is then performed using the detected motion information, the short-exposure pixel value (input) and the long-exposure pixel value (interpolated).
When the current pixel is short-exposure, noise reduction processing needs to be performed on the short-exposure pixel, specifically: frequency detection, noise reduction processing and texture enhancement processing are performed on the short-exposure pixel, and the processed short-exposure pixel then undergoes motion compensation.
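The motion compensation step could, for illustration, be a motion-weighted blend between the interpolated value and the value derived from the input pixel (both assumed already converted to the same exposure domain). The linear blend below is an assumption, since the disclosure only states that the detected motion information drives the compensation:

```python
def motion_compensate(input_value, interpolated_value, motion,
                      motion_threshold):
    # Static region: the interpolated value is trusted as-is.
    if motion <= motion_threshold:
        return interpolated_value
    # Moving region: blend toward the value derived from the input pixel
    # to suppress ghosting; the linear weight is an assumption.
    weight = min(1.0, (motion - motion_threshold) / motion_threshold)
    return (1.0 - weight) * interpolated_value + weight * input_value
```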
And S17, performing brightness estimation on the region of the current pixel, and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold.
The luminance estimation is performed on the basis of a 3 × 3 pixel block of the long-exposure image; the luminance value L is compared with the set fusion thresholds S1 and S2 to determine the final fusion strategy:
when L < S1, the brightness of the area where the pixel is located is low, and the brightness of the area can be effectively improved and the noise level can be reduced by completely adopting the long exposure pixel value.
When L > S2, the brightness of the area where the pixel is located is high, and the brightness saturation can be reduced by completely adopting the short-exposure pixel value, so that the image details of the high-brightness area are increased.
When S1 < L < S2, the area where the pixel is located has normal brightness, and the fusion value is a linear weighted average of the long-exposure pixel value T1 and the short-exposure pixel value T2: fusion value = ((S2 − L) × T1 + (L − S1) × T2) / (S2 − S1). This achieves a smooth transition in the fused image and prevents an obvious fusion boundary line from appearing.
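The fusion decision above can be written directly as code, with S1 and S2 the fusion thresholds and T1, T2 the long- and short-exposure pixel values as in the formula:

```python
def fuse(L, T1, T2, S1, S2):
    """Fusion decision driven by the long-exposure luminance L
    (S1 < S2; T1 = long-exposure value, T2 = short-exposure value)."""
    if L < S1:
        return T1          # low-brightness area: use the long exposure
    if L > S2:
        return T2          # high-brightness area: use the short exposure
    # Normal brightness: linear weighted average for a smooth transition.
    return ((S2 - L) * T1 + (L - S1) * T2) / (S2 - S1)
```

Note that the weighted average equals T1 at L = S1 and T2 at L = S2, so the transition is continuous at both thresholds.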
The fusion method provided by the invention adopts a luminance-domain-guided fusion strategy based on the long-exposure image, and can effectively suppress false color at the boundary between high-brightness and low-brightness regions.
The high dynamic range image full-resolution reconstruction method provided by the embodiment of the invention acquires, through a high dynamic range sensor, a high dynamic range image simultaneously containing long-exposure pixels and short-exposure pixels; calculates the gradient values of the region where the current pixel point is located in all directions; judges whether that region is a flat region; when it is not a flat region, judges whether the direction in which the current pixel point needs to be interpolated is a preferential or a non-preferential interpolation direction and whether a limiting condition of the non-preferential interpolation direction is met; performs interpolation calculation in the preferential or non-preferential interpolation direction on the current pixel point according to the judgment result; and performs motion compensation and fusion processing on the region where the pixel point is located after the interpolation calculation. Compared with the prior art, the full-resolution reconstruction algorithm is implemented in hardware, can effectively realize the high-dynamic-range function in video, and has better processing capability in terms of image resolving power, image detail and motion scenes, so that high-quality high-dynamic-range images can be provided.
An embodiment of the present invention further provides a high dynamic range image full resolution reconstruction apparatus, as shown in fig. 6, the apparatus includes:
the acquisition unit 11 is used for acquiring an image simultaneously containing long exposure pixels and short exposure pixels through a high dynamic range sensor;
the first calculating unit 12 is configured to calculate gradient values of the region where the current pixel point is located in each direction, and obtain gradient information of the region where the current pixel point is located in four directions;
a first judging unit 13, configured to judge, according to the gradient information, the direction of the region where the current pixel is located, where, if the gradient values in the four directions are all smaller than a set threshold, the region is a flat region;
a second determining unit 14, configured to preliminarily determine, when the region where the current pixel is located is not a flat region, that the direction with the smaller gradient value is the interpolation direction, and further determine whether the direction in which the current pixel needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction;
a third determining unit 15, configured to determine, when the direction with the smaller gradient is the non-preferential interpolation direction, whether the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold, and to determine whether the region where the current pixel point is located is a saturated region, a low-brightness region or a motion region, the non-preferential interpolation direction being abandoned if these conditions are met;
the second calculating unit 16 is configured to perform interpolation calculation in the preferential interpolation direction or in the non-preferential interpolation direction on the current pixel point according to the interpolation direction determination result;
the first processing unit 17 is configured to perform motion detection on the area where the current pixel is located, determine whether the motion information exceeds a preset threshold, perform motion compensation processing on pixel values where motion occurs, and remove ghosting;
and the second processing unit 18 is configured to perform brightness estimation on the region where the current pixel is located, and perform fusion decision and processing on the image by using the brightness value of the region where the current pixel is located and a preset fusion threshold.
Optionally, the exposure ratio of an image acquired by the high dynamic range sensor is set to 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being the ratio between the long exposure time and the short exposure time of the image within the same frame.
Optionally, the first computing unit 12 includes:
the conversion module is used for converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and the acquisition module is used for acquiring the horizontal gradient value, the vertical gradient value, the 45-degree oblique gradient value and the 135-degree oblique gradient value of the converted region where the current pixel point is located, using the corresponding gradient detection operators.
Optionally, the conversion module is configured to multiply the pixel value of the short exposure position of the region where the current pixel point is located by the exposure ratio to obtain a long exposure pixel value of the short exposure position of the region where the current pixel point is located, or divide the pixel value of the long exposure position of the region where the current pixel point is located by the exposure ratio to obtain a short exposure pixel value of the long exposure position of the region where the current pixel point is located.
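The four directional gradients acquired by the acquisition module could, for illustration, be computed on the converted region as mean absolute neighbor differences; the exact gradient detection operators are not specified in the disclosure, so the operators below are assumptions:

```python
import numpy as np

def directional_gradients(block):
    """Gradient magnitudes of a (converted) pixel block in four directions.

    Each gradient is the mean absolute difference between adjacent
    pixels along that direction -- an illustrative choice of operator.
    """
    b = np.asarray(block, dtype=float)
    grad_h    = np.mean(np.abs(b[:, 1:] - b[:, :-1]))    # horizontal
    grad_v    = np.mean(np.abs(b[1:, :] - b[:-1, :]))    # vertical
    grad_d45  = np.mean(np.abs(b[1:, :-1] - b[:-1, 1:])) # 45-degree oblique
    grad_d135 = np.mean(np.abs(b[1:, 1:] - b[:-1, :-1])) # 135-degree oblique
    return {'h': grad_h, 'v': grad_v, 'd45': grad_d45, 'd135': grad_d135}
```

A horizontal edge (rows constant, values changing between rows) then yields a zero horizontal gradient and a nonzero vertical gradient, which is what drives the flat-area and direction decisions above.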
Optionally, the second determining unit 14 is configured to determine that the interpolation direction is a preferential interpolation direction when a pixel point whose exposure time differs from that of the current pixel exists in the direction needing interpolation;
and to determine that the interpolation direction is a non-preferential interpolation direction when no pixel point whose exposure time differs from that of the current pixel exists in the direction needing interpolation.
Optionally, the second calculating unit 16 includes:
the first calculation module is used for performing interpolation calculation of a preferential interpolation direction on the current pixel point when the direction needing interpolation is the preferential interpolation direction;
a second calculating module, configured to, when the direction needing interpolation is a non-preferential interpolation direction, select the non-preferential interpolation direction subject to the following limitation: if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold value, or the region where the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, the non-preferential interpolation direction is abandoned and interpolation calculation in the preferential interpolation direction is performed on the current pixel point.
Optionally, the second computing module comprises:
the first calculation submodule is used for directly carrying out interpolation by using the short-exposure pixel in the direction needing interpolation to obtain the short-exposure pixel value of the current pixel position when the current pixel point is the long-exposure pixel and the pixel to be interpolated is the short-exposure pixel;
the second calculation submodule is used for directly carrying out interpolation by using the long exposure pixel in the direction needing interpolation to obtain the long exposure pixel value of the current pixel position when the current pixel point is the short exposure pixel and the pixel to be interpolated is the long exposure pixel;
a third computing submodule, configured to convert the long-exposure pixel value in the direction requiring interpolation into a short-exposure pixel value when the current pixel point is a long-exposure pixel and the pixel to be interpolated is a short-exposure pixel, and then perform interpolation computation, where the conversion method is dividing the long-exposure pixel value by an exposure proportion;
and the fourth calculation submodule is used for converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is multiplied by the exposure proportion.
Optionally, the apparatus further comprises:
the third calculation unit is used for directly interpolating all the short-exposure pixels around the current pixel point to obtain a short-exposure pixel value of the current long-exposure position when the region where the current pixel point is located is a flat region, the current pixel point is a long-exposure pixel, and the pixel to be interpolated is a short-exposure pixel;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short-exposure pixel, and the pixel to be interpolated is a long-exposure pixel, directly interpolating all long-exposure pixels around the current pixel point to obtain a long-exposure pixel value of the current short-exposure position.
Optionally, the second processing unit 18 includes:
the brightness calculation module is used for calculating the brightness value of the area where the current pixel point is located;
the comparison module is used for comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
the fusion processing module is used for carrying out image fusion on the area where the current pixel point is located by adopting a long exposure pixel value when the brightness value is smaller than the first threshold value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
The high dynamic range image full-resolution reconstruction device provided by the embodiment of the invention acquires, through a high dynamic range sensor, a high dynamic range image simultaneously containing long-exposure pixels and short-exposure pixels; calculates the gradient values of the region where the current pixel point is located in all directions; judges whether that region is a flat region; when it is not a flat region, judges whether the direction in which the current pixel point needs to be interpolated is a preferential or a non-preferential interpolation direction and whether a limiting condition of the non-preferential interpolation direction is met; performs interpolation calculation in the preferential or non-preferential interpolation direction on the current pixel point according to the judgment result; and performs motion compensation and fusion processing on the region where the pixel point is located after the interpolation calculation. Compared with the prior art, the full-resolution reconstruction algorithm is implemented in hardware, can effectively realize the high-dynamic-range function in video, and has better processing capability in terms of image resolving power, image detail and motion scenes, so that high-quality high-dynamic-range images can be provided.
The embodiment of the invention also provides electronic equipment which comprises the high dynamic range image full-resolution reconstruction device.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.