CN110418081B - High dynamic range image full-resolution reconstruction method and device and electronic equipment - Google Patents

High dynamic range image full-resolution reconstruction method and device and electronic equipment

Info

Publication number
CN110418081B
Authority
CN
China
Prior art keywords
exposure
interpolation
current pixel
value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810394367.4A
Other languages
Chinese (zh)
Other versions
CN110418081A (en)
Inventor
霍星
蔡进
孟春芝
李怀东
王微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ziguang Zhanrui Communication Technology Co Ltd
Original Assignee
Beijing Ziguang Zhanrui Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ziguang Zhanrui Communication Technology Co Ltd
Priority to CN201810394367.4A
Publication of CN110418081A
Application granted
Publication of CN110418081B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

The invention provides a high dynamic range image full-resolution reconstruction method, which comprises the following steps: acquiring an image containing both long-exposure and short-exposure pixels; calculating gradient values of the current pixel point in all directions to obtain gradient information in four directions; judging the direction of the region where the current pixel point is located according to the gradient information and, when that region is not a flat region, preliminarily taking the direction with the smaller gradient value as the interpolation direction; when that direction is a non-preferential interpolation direction, judging whether the ratio of the gradient in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold; performing interpolation on the current pixel point in the preferential or non-preferential interpolation direction according to the judgment result; performing motion detection on the area where the current pixel is located and applying motion compensation to pixel values where motion is found; and estimating the brightness of the region where the current pixel is located, then making and executing a fusion decision for the image using that brightness value and preset fusion thresholds.

Description

High dynamic range image full-resolution reconstruction method and device and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a high dynamic range image full-resolution reconstruction method and device and electronic equipment.
Background
Since the dynamic range of a common image sensor is smaller than that of a real scene, information in the highlights or shadows of a shot image is lost. Using High Dynamic Range (HDR) technology can greatly increase the dynamic range of the generated image, and HDR is now widely applied in electronic devices such as smartphones and tablet computers.
The most common HDR technique in mobile phones is multiple-exposure synthesis, generally implemented in software: two successively captured frames with long and short exposure, or three frames with long, medium and short exposure, are fused, with the short-exposure image used in highlights and the long-exposure image used in shadows to preserve detail, thereby improving the dynamic range of the image. However, this method has some disadvantages: a time difference exists between the successively shot frames, so image alignment problems arise during fusion, and factors such as the long running time of multi-frame fusion software limit its application to high-dynamic-range video.
Disclosure of Invention
The high dynamic range image full-resolution reconstruction method, device and electronic equipment provided by the invention can effectively realize a high-dynamic-range function in video and improve the handling of image details and motion scenes, thereby improving the quality of the high-dynamic-range image.
In a first aspect, the present invention provides a full resolution reconstruction method for a high dynamic range image, the method comprising:
acquiring an image simultaneously containing long exposure pixels and short exposure pixels through a high dynamic range sensor;
calculating gradient values of the region of the current pixel point in all directions to obtain gradient information of the region of the current pixel point in four directions;
judging the direction of the region where the current pixel point is located according to the gradient information, if the gradient values in the four directions are all smaller than a set threshold value, the region is a flat region, when the region where the current pixel point is located is not the flat region, preliminarily determining the direction with the smaller gradient value as an interpolation direction, and further judging whether the direction is a preferential interpolation direction or a non-preferential interpolation direction;
when the direction with the smaller gradient is the non-preferential interpolation direction, judging whether the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold, and judging whether the area where the current pixel point is located is a saturated area, a low-brightness area or a motion area; if any of these conditions is met, giving up the non-preferential interpolation direction;
performing interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result;
performing motion detection on the area where the current pixel is located, judging whether motion information exceeds a preset threshold value, performing motion compensation processing on a pixel value which moves, and removing ghost;
and performing brightness estimation on the region of the current pixel, and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold value.
Optionally, the exposure ratio of the image acquired by the high dynamic range sensor is 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being the ratio of the long exposure time to the short exposure time of the image within the same frame.
Optionally, the calculating the gradient value of each direction of the region where the current pixel point is located includes:
converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and acquiring a horizontal gradient value, a vertical gradient value, an oblique 45-degree gradient value and an oblique 135-degree gradient value of the region where the converted current pixel point is located by using a corresponding gradient detection operator.
Optionally, the converting the pixel value of the short exposure position of the region where the current pixel point is located into a long exposure pixel value according to the exposure proportion includes: multiplying the pixel value of the short exposure position of the area where the current pixel point is located by the exposure proportion to obtain a long exposure pixel value of the short exposure position of the area where the current pixel point is located;
the step of converting the pixel value of the long exposure position of the area where the current pixel point is located into the short exposure pixel value according to the exposure proportion comprises the following steps: and dividing the pixel value of the long exposure position of the area where the current pixel point is located by the exposure proportion to obtain the short exposure pixel value of the long exposure position of the area where the current pixel point is located.
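The two conversions above are mutually inverse scalings by the exposure ratio. As a rough illustration (not code from the patent; the function names and scalar interface are assumptions), they can be sketched in Python:

```python
# Hedged sketch of the exposure conversions described above.
# "exposure_ratio" is the long-to-short exposure-time ratio from the text.

def short_to_long(short_value, exposure_ratio):
    """Estimate the long-exposure value at a short-exposure position
    by multiplying by the exposure ratio."""
    return short_value * exposure_ratio

def long_to_short(long_value, exposure_ratio):
    """Estimate the short-exposure value at a long-exposure position
    by dividing by the exposure ratio."""
    return long_value / exposure_ratio
```

With an exposure ratio of 4, a short-exposure value of 10 maps to a long-exposure estimate of 40, and back again.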
Optionally, when the region where the current pixel is located is not a flat region, determining whether the direction in which the current pixel needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction includes:
when a pixel point with an exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a preferential interpolation direction;
and when no pixel point with the exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction.
Optionally, the performing, according to the interpolation direction determination result, interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point includes:
when the direction needing interpolation is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point;
when the direction needing interpolation is a non-priority interpolation direction, the non-priority interpolation direction is selected with the following limitations:
and if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value, or the region in which the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, giving up to select the non-preferential interpolation direction, and performing interpolation calculation of the preferential interpolation direction on the current pixel point.
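The fallback rule above can be sketched as a small decision function. This is an illustrative Python rendering of the condition exactly as the text states it (gradient ratio below the threshold, or a saturated/low-brightness/motion region, abandons the non-preferential direction); the function name and boolean-flag interface are assumptions:

```python
def select_interp_direction(grad_nonpref, grad_pref, ratio_threshold,
                            is_saturated, is_low_brightness, is_motion):
    """Return which interpolation direction to use for the current pixel.

    Per the text: the non-preferential direction is abandoned when the
    ratio of its gradient to the preferential gradient is below the set
    threshold, or when the region is saturated, low-brightness, or moving.
    """
    ratio = grad_nonpref / grad_pref
    if ratio < ratio_threshold or is_saturated or is_low_brightness or is_motion:
        return "preferential"
    return "non-preferential"
```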
Optionally, when the direction to be interpolated is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point includes:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, directly interpolating by using the short exposure pixel in the direction needing interpolation to obtain a short exposure pixel value of the current pixel position;
when the current pixel point is a short exposure pixel and the pixel to be interpolated is a long exposure pixel, directly interpolating by using the long exposure pixel in the direction needing interpolation to obtain a long exposure pixel value of the current pixel position;
when the direction needing interpolation is a non-preferential interpolation direction, the interpolation calculation of the current pixel point in the non-preferential interpolation direction comprises the following steps:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, converting the long exposure pixel value in the direction needing interpolation into a short exposure pixel value, and then performing interpolation calculation, wherein the conversion method is that the long exposure pixel value is divided by an exposure proportion;
and when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value, and then performing interpolation calculation, wherein the conversion method is to multiply the short-exposure pixel value by an exposure proportion.
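The four cases above share one pattern: in the preferential direction the neighbors already carry the target exposure and are interpolated directly, while in the non-preferential direction they carry the current pixel's exposure and must be converted first. A minimal sketch (simple averaging is an assumption; the patent does not fix the interpolation kernel):

```python
def interpolate_missing_exposure(neighbors, current_is_long, preferential,
                                 exposure_ratio):
    """Interpolate the missing exposure value at the current pixel.

    neighbors: pixel values along the chosen interpolation direction.
    Preferential direction: neighbors already have the target exposure.
    Non-preferential direction: neighbors share the current pixel's
    exposure, so they are converted via the exposure ratio first.
    """
    if preferential:
        values = neighbors
    elif current_is_long:
        # Target is a short-exposure value: divide long values by the ratio.
        values = [v / exposure_ratio for v in neighbors]
    else:
        # Target is a long-exposure value: multiply short values by the ratio.
        values = [v * exposure_ratio for v in neighbors]
    return sum(values) / len(values)  # plain average as a placeholder kernel
```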
Optionally, the method further comprises:
when the region where the current pixel point is located is a flat region, the current pixel point is a long exposure pixel, and the pixel to be interpolated is a short exposure pixel, directly interpolating all short exposure pixels around the current pixel point to obtain a short exposure pixel value of the current long exposure position;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Optionally, the performing of the brightness estimation on the region where the current pixel is located, and performing the fusion decision and the processing on the image by using the brightness value of the region where the current pixel is located and a preset fusion threshold includes:
calculating the brightness value of the area where the current pixel point is located;
comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
when the brightness value is smaller than the first threshold value, image fusion is carried out on the area where the current pixel point is located by adopting a long exposure pixel value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
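The two-threshold fusion rule above can be illustrated as follows. This is a sketch under stated assumptions: the patent does not specify the linear weights or whether the short-exposure value is gain-compensated to the long-exposure scale before blending, so both choices here are illustrative:

```python
def fuse_pixel(long_val, short_val, brightness, t1, t2, exposure_ratio):
    """Two-threshold fusion decision for one pixel.

    Dark regions (brightness < t1) keep the long exposure, bright regions
    (brightness > t2) keep the short exposure (scaled to the long-exposure
    range here, as an assumption), and mid-tones blend linearly.
    """
    short_scaled = short_val * exposure_ratio  # assumed gain compensation
    if brightness < t1:
        return long_val
    if brightness > t2:
        return short_scaled
    w = (brightness - t1) / (t2 - t1)  # linear weight across the band
    return (1 - w) * long_val + w * short_scaled
```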
In a second aspect, the present invention provides a high dynamic range image full resolution reconstruction apparatus, comprising:
the acquisition unit is used for acquiring an image simultaneously comprising long exposure pixels and short exposure pixels through the high dynamic range sensor;
the first calculation unit is used for calculating gradient values of the region where the current pixel point is located in all directions to obtain gradient information of the region where the current pixel point is located in four directions;
the first judgment unit is used for judging the direction of the region where the current pixel point is located according to the gradient information, and if the gradient values in the four directions are all smaller than a set threshold value, the region is a flat region;
a second judging unit, configured to preliminarily determine, when the region where the current pixel is located is not a flat region, that the direction with the smaller gradient value is an interpolation direction, and further judge whether the direction in which the current pixel needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction;
the third judging unit is used for judging whether the ratio of the gradient of the current pixel point in the non-priority interpolation direction to the gradient of the priority interpolation direction is smaller than a set threshold value or not when the direction with smaller gradient is the non-priority interpolation direction, judging whether the area where the current pixel point is located is a saturated area, a low-brightness area and a motion area or not, and giving up the non-priority interpolation direction if the conditions are met;
the second calculation unit is used for carrying out interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the judgment result of the interpolation direction;
the first processing unit is used for carrying out motion detection on the area where the current pixel is located, judging whether motion information exceeds a preset threshold value, carrying out motion compensation processing on a pixel value which moves, and removing ghost;
and the second processing unit is used for performing brightness estimation on the region of the current pixel and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold.
Optionally, an exposure ratio of an image acquired by the high dynamic range sensor is set to 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being the ratio of the long exposure time to the short exposure time of the image within the same frame.
Optionally, the first computing unit includes:
the conversion module is used for converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and the acquisition module is used for acquiring the horizontal gradient value, the vertical gradient value, the gradient value of 45 degrees at an incline and the gradient value of 135 degrees at an incline of the converted region where the current pixel point is located by using the corresponding gradient detection operator.
Optionally, the conversion module is configured to multiply the pixel value of the short exposure position of the region where the current pixel point is located by the exposure ratio to obtain a long exposure pixel value of the short exposure position of the region where the current pixel point is located, or divide the pixel value of the long exposure position of the region where the current pixel point is located by the exposure ratio to obtain a short exposure pixel value of the long exposure position of the region where the current pixel point is located.
Optionally, the second determining unit is configured to determine, when a pixel point different from the current pixel exposure time exists in a direction in which interpolation is required, that the interpolation direction is a preferential interpolation direction;
and when no pixel point with the exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction.
Optionally, the second computing unit includes:
the first calculation module is used for performing interpolation calculation of a preferential interpolation direction on the current pixel point when the direction needing interpolation is the preferential interpolation direction;
a second calculating module, configured to, when the direction needing to be interpolated is a non-priority interpolation direction, select the non-priority interpolation direction with the following limitations: and if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value, or the region in which the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, giving up to select the non-preferential interpolation direction, and performing interpolation calculation of the preferential interpolation direction on the current pixel point.
Optionally, the second computing module comprises:
the first calculation submodule is used for directly carrying out interpolation by using the short-exposure pixel in the direction needing interpolation to obtain the short-exposure pixel value of the current pixel position when the current pixel point is the long-exposure pixel and the pixel to be interpolated is the short-exposure pixel;
the second calculation submodule is used for directly carrying out interpolation by using the long exposure pixel in the direction needing interpolation to obtain the long exposure pixel value of the current pixel position when the current pixel point is the short exposure pixel and the pixel to be interpolated is the long exposure pixel;
a third computing submodule, configured to convert the long-exposure pixel value in the direction requiring interpolation into a short-exposure pixel value when the current pixel point is a long-exposure pixel and the pixel to be interpolated is a short-exposure pixel, and then perform interpolation computation, where the conversion method is dividing the long-exposure pixel value by an exposure proportion;
and the fourth calculation submodule is used for converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is multiplied by the exposure proportion.
Optionally, the apparatus further comprises:
the third calculation unit is used for directly interpolating all the short-exposure pixels around the current pixel point to obtain a short-exposure pixel value of the current long-exposure position when the region where the current pixel point is located is a flat region, the current pixel point is a long-exposure pixel, and the pixel to be interpolated is a short-exposure pixel;
and a fourth calculation unit, used for, when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Optionally, the second processing unit comprises:
the brightness calculation module is used for calculating the brightness value of the area where the current pixel point is located;
the comparison module is used for comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
the fusion processing module is used for carrying out image fusion on the area where the current pixel point is located by adopting a long exposure pixel value when the brightness value is smaller than the first threshold value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
In a third aspect, the present invention provides an electronic device, which includes the above-mentioned high dynamic range image full resolution reconstruction apparatus.
The high dynamic range image full-resolution reconstruction method, device and electronic equipment provided by the embodiments of the invention collect, through a high dynamic range sensor, a high dynamic range image containing both long-exposure and short-exposure pixels; calculate the gradient value of each direction of the region where the current pixel point is located; judge whether that region is a flat region; when it is not, judge whether the direction in which the current pixel point needs to be interpolated is a preferential or non-preferential interpolation direction and whether the restriction conditions on the non-preferential direction are met; perform interpolation in the preferential or non-preferential direction according to the judgment result; and perform motion compensation and fusion processing on the region after interpolation. Compared with the prior art, this full-resolution reconstruction algorithm is realized in hardware, can effectively realize a high-dynamic-range function in video, and handles image resolving power, image details and motion scenes better, thereby providing high-quality high-dynamic-range images.
Drawings
FIG. 1 is a flowchart of a full resolution reconstruction method for a high dynamic range image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image taken by a high dynamic range sensor;
FIG. 3 is a diagram illustrating a full resolution reconstruction process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a short exposure pixel being converted to a long exposure pixel in accordance with one embodiment of the present invention;
FIG. 5 is a schematic diagram of a center pixel being an R/B channel long exposure pixel and a G channel long exposure pixel, respectively;
fig. 6 is a schematic structural diagram of a high dynamic range image full resolution reconstruction apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Another direction for high dynamic range technology is in-camera HDR implemented in hardware on the high dynamic range sensor itself, which supports partitioned high-dynamic-range exposure. For example, in the second-generation stacked image sensor IMX214 developed by Sony Corporation, the exposure time of each row of pixels can be precisely controlled within the same frame, and images with different exposure times can be obtained in the same frame either by alternating every two rows between long and short exposure or in a zigzag pattern. Because the long- and short-exposure pixels each account for half of the pixels of the whole image, the image collected by such a sensor needs full-resolution reconstruction: the full-resolution long-exposure image and short-exposure image are restored through interpolation, and the dynamic range is then improved through a fusion algorithm.
The invention provides a high dynamic range image full resolution reconstruction method, as shown in fig. 1, the method comprises the following steps:
s11, acquiring an image containing both long-exposure pixels and short-exposure pixels through a high dynamic range sensor;
An image shot by the high dynamic range sensor is shown in fig. 2, where white pixel points represent long-exposure pixels and gray pixel points represent short-exposure pixels, arranged alternately in a zigzag pattern across the whole image. The long-exposure pixels have a longer exposure time, so the image signal collected in the low-brightness parts of the scene contains more effective information. The short-exposure pixels have a short exposure time, so the image signal in high-brightness parts is not saturated and some detail information is retained. The original image containing both long- and short-exposure pixel values is restored at full resolution to obtain a long-exposure image and a short-exposure image, and fusion then improves the dynamic range of the image, better presenting the detail of both low-brightness and high-brightness parts in the same image.
The long exposure time and the short exposure time have a predetermined proportional relationship, and the exposure time ratio is defined as an exposure ratio (exposure ratio), optionally, the exposure ratio is set to 1:1, 1:2, 1:4, 1:8, 1:16, and the like, and when the exposure ratio is set to 1:1, the exposure time is equal, which is equivalent to that of a common image sensor.
As shown in fig. 3, the full resolution reconstruction is performed by interpolating the missing short-exposure pixel values at the long-exposure positions and interpolating the missing long-exposure pixel values at the short-exposure positions. And obtaining a long exposure image with full resolution and a short exposure image with full resolution after the interpolation is finished.
The full resolution reconstruction method comprises the following steps: gradient calculation, interpolation direction selection, interpolation calculation, motion compensation and image fusion. The full-resolution reconstruction algorithm is realized by hardware, so that the problem of multi-frame fusion by adopting a software scheme can be better avoided, the high-dynamic-range function can be effectively realized in a video, the processing capability is better in the aspects of image analysis force, image details, motion scenes and the like, and the high-dynamic-range image with higher quality can be provided.
S12, calculating gradient values of the region where the current pixel point is located in all directions to obtain gradient information of the region where the current pixel point is located in four directions;
Interpolation is implemented on an N×N pixel block in the neighborhood of the current pixel point of the image in its original zigzag arrangement. Optionally, the original input 5×5 pixel block is converted into a 5×5 long-exposure pixel block before the gradient is calculated; the conversion multiplies the pixel value at each short-exposure position by the corresponding exposure ratio. As shown in fig. 4, the short-exposure pixel values at all gray positions in the image are multiplied by the exposure ratio and converted into long-exposure pixel values.
Alternatively, the original input 5×5 pixel block is converted into a 5×5 short-exposure pixel block by dividing the pixel value at each long-exposure position by the corresponding exposure ratio.
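The block-normalization step described above amounts to scaling one subset of positions by the exposure ratio. A minimal NumPy sketch (the mask-based interface is an assumption, not from the patent):

```python
import numpy as np

def to_long_exposure_block(block, is_short_mask, exposure_ratio):
    """Convert a mixed-exposure pixel block (e.g. the 5x5 neighborhood)
    into an all-long-exposure block by multiplying the values at
    short-exposure positions by the exposure ratio."""
    out = block.astype(np.float64).copy()
    out[is_short_mask] *= exposure_ratio
    return out
```

The converse (an all-short-exposure block) divides the long-exposure positions by the ratio instead.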
Gradient calculations are performed based on the transformed 5 x 5 long-exposure pixel blocks, and the gradient information will guide directional interpolation for full resolution reconstruction. Optionally, a corresponding gradient detection operator is used to obtain a horizontal gradient value (h), a vertical gradient value (v), an oblique 45-degree gradient value (45), an oblique 135-degree gradient value (135), and gradient values in four directions are gradient information of the current pixel neighborhood.
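The conversion and gradient steps above can be sketched as follows. The exposure ratio value and the absolute-difference operators are assumptions for illustration; the patent does not name the concrete gradient detection operators it uses.

```python
# Hypothetical exposure ratio (long exposure time / short exposure time).
EXPOSURE_RATIO = 4.0

def to_long_exposure(block, short_positions):
    """Scale the short-exposure positions of a 5x5 block up by the exposure
    ratio so the whole block is in the long-exposure domain (cf. fig. 4)."""
    out = [row[:] for row in block]
    for r, c in short_positions:
        out[r][c] *= EXPOSURE_RATIO
    return out

def four_direction_gradients(block):
    """Illustrative absolute-difference gradient measures in the four
    directions (h, v, 45-degree, 135-degree)."""
    n = len(block)
    h = sum(abs(block[r][c] - block[r][c + 1]) for r in range(n) for c in range(n - 1))
    v = sum(abs(block[r][c] - block[r + 1][c]) for r in range(n - 1) for c in range(n))
    d45 = sum(abs(block[r + 1][c] - block[r][c + 1]) for r in range(n - 1) for c in range(n - 1))
    d135 = sum(abs(block[r][c] - block[r + 1][c + 1]) for r in range(n - 1) for c in range(n - 1))
    return {'h': h, 'v': v, '45': d45, '135': d135}
```

On a block whose rows are constant, the horizontal gradient is zero while the vertical and diagonal gradients are large, which is exactly the cue the direction selection below relies on.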
S13, judging the direction of the area where the current pixel point is located according to the gradient information, if the gradient values in the four directions are all smaller than a set threshold value, the area is a flat area, when the area where the current pixel point is located is not a flat area, preliminarily determining the direction with the smaller gradient value as an interpolation direction, and further judging whether the direction is a preferential interpolation direction or a non-preferential interpolation direction;
firstly, judging a flat area according to the gradient information, if the gradient information meets the judgment logic of the flat area, the interpolation direction is flat, the interpolation of the flat area is non-directional interpolation, and the interpolation mode is to calculate the average value of the surrounding pixel values.
If the gradient information does not conform to the logic of the flat region, a direction determination is required. The interpolation direction is judged by utilizing the gradient information (h/v/45/135) extracted by the gradient calculation, and the interpolation is preferentially carried out along the direction with smaller gradient, thereby being beneficial to protecting the high-frequency information of the image, such as edge and detail, and reducing the loss of the image resolution.
The invention defines the interpolation direction in a specific way, as follows:
If a pixel point whose exposure time differs from that of the current pixel exists in the direction in which interpolation is to be carried out, that direction is defined as a preferential interpolation direction. If no pixel point whose exposure time differs from that of the current pixel exists in the direction in which interpolation is to be carried out, that direction is defined as a non-preferential interpolation direction.
Specifically, since the positions of the long and short exposure pixel arrangements of the R/B channel and the G channel are different, the preferential interpolation direction and the non-preferential interpolation direction are different in both cases.
As shown in fig. 5, the central pixel point p(2,2) in graph (a) is the long-exposure pixel value B, and the short-exposure pixel value at that position needs to be obtained through interpolation. Short-exposure pixel points p(2,0) and p(2,4) exist in the horizontal direction, and short-exposure pixel points p(0,2) and p(4,2) exist in the vertical direction; because their exposure time differs from that of the central pixel point, the short-exposure value of the centre can be obtained by directly interpolating these pixels, and a direction in which direct interpolation is possible is a preferential interpolation direction. In the 45-degree and 135-degree oblique directions, the positions p(0,0) and p(4,4), and p(0,4) and p(4,0), are long-exposure pixel points; the short-exposure pixel value at the central position cannot be obtained from these values by direct interpolation and must be calculated in combination with the exposure ratio, so a direction in which direct interpolation is not possible is a non-preferential interpolation direction.
For the R/B channel, horizontal and vertical are preferential interpolation directions, and 45 and 135 degrees skew are non-preferential interpolation directions. Similarly, as shown in fig. 5, in (b), for the G channel, 45 degrees and 135 degrees are tilted as preferential interpolation directions, and horizontal and vertical are non-preferential interpolation directions.
In summary, the preferential interpolation direction and the non-preferential interpolation direction of the current pixel point can be obtained according to the pixel of which the current pixel point is the R/B channel or the G channel, and then the direction needing to be interpolated is determined according to the calculated gradient information, so that whether the direction needing to be interpolated is the preferential interpolation direction or the non-preferential interpolation direction can be known.
S14, when the direction with the smaller gradient is the non-preferential interpolation direction, judging whether the ratio of the gradient in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold, and judging whether the region where the current pixel point is located is a saturated region, a low-brightness region or a motion region; if any of these conditions is met, the non-preferential interpolation direction is abandoned;
there are limiting conditions on selecting the non-preferential interpolation direction; only the preferential interpolation direction can be selected when any of the following holds:
(1) the ratio of the gradient in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold;
(2) the region where the current pixel point is located is a saturated region, a low-brightness region or a motion region.
When one of these conditions is satisfied, the long-exposure and short-exposure pixel values in the neighbourhood do not maintain a fixed proportional relationship, and selecting the non-preferential interpolation direction for interpolation may produce erroneous results.
As shown in steps S12, S13, and S14, the interpolation direction determination method includes the following steps:
calculating to obtain gradient information in the horizontal direction, the vertical direction, the 45-degree oblique direction and the 135-degree oblique direction;
judging whether the area where the pixel points are located is a flat area or not;
when the judgment result is a flat area, selecting non-directional interpolation;
when the judgment result is not a flat area, judging whether the current pixel point is an R/B channel or a G channel;
when the current pixel point is an R/B channel, judging whether the area where the current pixel point is located meets one of the limiting conditions of the non-preferential interpolation direction, and selecting the preferential interpolation direction (horizontal direction or vertical direction) when any one limiting condition is met; otherwise, selecting the interpolation direction directly according to whether the direction needing interpolation is a preferential interpolation direction or a non-preferential interpolation direction;
when the current pixel point is a G channel, judging whether the area where the current pixel point is located meets one of the limiting conditions of the non-preferential interpolation direction, and selecting the preferential interpolation direction (45-degree oblique direction or 135-degree oblique direction) when any one limiting condition is met; otherwise, the interpolation direction is selected directly according to whether the direction needing interpolation is a preferential interpolation direction or a non-preferential interpolation direction.
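The decision flow of steps S12–S14 can be sketched as one function. The threshold values and the gradient dictionary layout are assumptions; the fallback-to-preferential rule follows the limiting conditions stated above (ratio below threshold, or a saturated/low-brightness/motion region).

```python
def select_interpolation_direction(grads, channel, flat_thresh, ratio_thresh,
                                   saturated=False, low_luma=False, moving=False):
    """grads maps 'h'/'v'/'45'/'135' to gradient values; a smaller gradient
    indicates a stronger directional correlation."""
    # Flat region: all four gradients below the flat threshold.
    if all(g < flat_thresh for g in grads.values()):
        return 'flat'                                  # non-directional average
    # Preferential directions depend on the channel (fig. 5).
    preferential = {'h', 'v'} if channel in ('R', 'B') else {'45', '135'}
    direction = min(grads, key=grads.get)              # smallest gradient wins
    if direction in preferential:
        return direction
    # Candidate is non-preferential: apply the limiting conditions.
    best_pref = min(preferential, key=lambda d: grads[d])
    if (grads[direction] / max(grads[best_pref], 1e-9) < ratio_thresh
            or saturated or low_luma or moving):
        return best_pref                               # fall back to preferential
    return direction
```

For an R/B pixel whose smallest gradient lies on a diagonal, the function returns the better of the horizontal/vertical directions whenever a limiting condition fires, matching the flow described above.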
S15, performing interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result;
performing a preferential interpolation calculation for a pixel point satisfying one of the constraints of any one of the non-preferential interpolation directions according to the determination result of step S14; for a pixel point which does not meet one of the limiting conditions of any one non-preferential interpolation direction, when the direction needing interpolation is a preferential interpolation direction, performing preferential interpolation direction interpolation calculation on the current pixel point; and when the direction needing interpolation is a non-priority interpolation direction, performing non-priority interpolation direction interpolation calculation on the current pixel point.
Optionally, when the current pixel point is a long-exposure pixel, directly interpolating the short-exposure pixel in the direction needing interpolation to obtain a short-exposure pixel value of the current pixel point;
when the current pixel point is a short exposure pixel, directly interpolating the long exposure pixel in the direction needing interpolation to obtain a long exposure pixel value of the current pixel point;
when the direction needing interpolation is a non-preferential interpolation direction, the interpolation calculation of the non-preferential interpolation direction on the current pixel point comprises the following steps:
when the current pixel point is a long exposure pixel, converting the long exposure pixel value in the direction needing interpolation into a short exposure pixel value, and then performing interpolation calculation, wherein the conversion method is that the long exposure pixel value is divided by an exposure proportion;
and when the current pixel point is a short-exposure pixel, converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is multiplied by an exposure proportion.
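The two interpolation cases above reduce to one averaging rule: direct averaging in a preferential direction, and a divide-or-multiply conversion by the exposure ratio in a non-preferential one. A minimal sketch, with the ratio value assumed:

```python
EXPOSURE_RATIO = 4.0  # hypothetical long:short exposure-time ratio

def directional_interpolate(neighbours, preferential, target_is_short):
    """Average the neighbour values along the chosen direction. In a
    preferential direction the neighbours already carry the opposite
    exposure, so the average is used directly; in a non-preferential
    direction they share the centre's exposure and are converted via the
    exposure ratio (divide when estimating a short value, multiply for a
    long one), as the text describes."""
    avg = sum(neighbours) / len(neighbours)
    if preferential:
        return avg
    return avg / EXPOSURE_RATIO if target_is_short else avg * EXPOSURE_RATIO
```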
Optionally, the method further comprises:
when the region where the current pixel point is located is a flat region and the current pixel point is a long exposure pixel, directly interpolating all short exposure pixels around the current pixel point to obtain a short exposure pixel value of the current long exposure position;
when the area where the current pixel point is located is a flat area and the current pixel point is a short exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Specifically, according to the interpolation direction determined in step S14, the interpolation of the current pixel is completed within the input 5 × 5 pixel block:
when the current pixel value is R or B channel long exposure pixel, if the interpolation direction is horizontal direction or vertical direction, it belongs to the interpolation of the preferential interpolation direction, the interpolation uses the short exposure pixel in the direction in the 5 x 5 pixel block to directly interpolate, and the short exposure pixel value of the current long exposure position is obtained. If the interpolation direction is 45-degree oblique direction or 135-degree oblique direction, the interpolation belongs to the interpolation direction of non-priority, the interpolation uses the long exposure pixel in the direction in the 5 × 5 pixel block for interpolation, the long exposure pixel value participating in the interpolation needs to be converted into the short exposure pixel value, the conversion method is to divide the exposure proportion, and then the short exposure pixel value of the current long exposure position is obtained. If the current pixel is located in the flat area and has no obvious interpolation direction, all the short-exposure pixels around the current pixel in the 5 multiplied by 5 pixel block are used for directly interpolating to obtain the short-exposure pixel value of the current long-exposure position.
When the current pixel value is R or B channel short exposure pixel, if the interpolation direction is horizontal direction or vertical direction, it belongs to the interpolation of the preferential interpolation direction, the interpolation uses the long exposure pixel in the direction in the 5 x 5 pixel block to directly interpolate, and obtains the long exposure pixel value of the current short exposure position. If the interpolation direction is 45-degree oblique direction or 135-degree oblique direction, the interpolation belongs to the interpolation direction of non-priority, the interpolation uses the short exposure pixel in the direction in the 5 × 5 pixel block for interpolation, the short exposure pixel value participating in the interpolation needs to be converted into the long exposure pixel value, the conversion method is to multiply the exposure proportion, and then the long exposure pixel value of the current short exposure position is obtained. If the current pixel is located in the flat area and has no obvious interpolation direction, all long exposure pixels around the current pixel in the 5 multiplied by 5 pixel block are used for directly interpolating to obtain the long exposure pixel value of the current short exposure position.
The R/B channel interpolation calculation method is shown in table 1:
TABLE 1
Current pixel    Interpolation direction                  Calculation of the missing value
long exposure    horizontal / vertical (preferential)     direct interpolation of the short-exposure pixels in that direction
long exposure    45° / 135° oblique (non-preferential)    long-exposure pixels in that direction, divided by the exposure ratio
long exposure    flat (no direction)                      average of all surrounding short-exposure pixels
short exposure   horizontal / vertical (preferential)     direct interpolation of the long-exposure pixels in that direction
short exposure   45° / 135° oblique (non-preferential)    short-exposure pixels in that direction, multiplied by the exposure ratio
short exposure   flat (no direction)                      average of all surrounding long-exposure pixels
When the current pixel value is a G-channel long exposure pixel, if the interpolation direction is a 45-degree oblique direction or a 135-degree oblique direction, the interpolation belongs to the interpolation of the preferential interpolation direction, and the interpolation directly carries out interpolation by using the short exposure pixel in the direction in the 5 multiplied by 5 pixel block to obtain the short exposure pixel value of the current long exposure position. If the interpolation direction is the horizontal direction or the vertical direction, the interpolation belongs to the interpolation in the non-preferential interpolation direction, the interpolation uses the long exposure pixel in the direction in the 5 x 5 pixel block for interpolation, the long exposure pixel value participating in the interpolation needs to be converted into the short exposure pixel value, the conversion method is to divide the exposure proportion, and then the short exposure pixel value of the current long exposure position is obtained. If the current pixel is located in the flat area and has no obvious interpolation direction, all the short-exposure pixels around the current pixel in the 5 multiplied by 5 pixel block are used for directly interpolating to obtain the short-exposure pixel value of the current long-exposure position.
When the current pixel value is a G-channel short-exposure pixel, if the interpolation direction is a 45-degree oblique direction or a 135-degree oblique direction, the interpolation belongs to the interpolation of the preferential interpolation direction, and the interpolation directly carries out interpolation by using the long-exposure pixel in the direction in the 5 multiplied by 5 pixel block to obtain the long-exposure pixel value of the current short-exposure position. If the interpolation direction is the horizontal direction or the vertical direction, the interpolation belongs to the interpolation in the non-preferential interpolation direction, the interpolation uses the short-exposure pixel in the direction in the 5 x 5 pixel block for interpolation, the short-exposure pixel value participating in the interpolation needs to be converted into a long-exposure pixel value, the conversion method is to multiply the exposure proportion, and then the long-exposure pixel value of the current short-exposure position is obtained. If the current pixel is located in the flat area and has no obvious interpolation direction, all long exposure pixels around the current pixel in the 5 multiplied by 5 pixel block are used for directly interpolating to obtain the long exposure pixel value of the current short exposure position.
The G channel interpolation calculation method is shown in table 2:
TABLE 2
Current pixel    Interpolation direction                  Calculation of the missing value
long exposure    45° / 135° oblique (preferential)        direct interpolation of the short-exposure pixels in that direction
long exposure    horizontal / vertical (non-preferential) long-exposure pixels in that direction, divided by the exposure ratio
long exposure    flat (no direction)                      average of all surrounding short-exposure pixels
short exposure   45° / 135° oblique (preferential)        direct interpolation of the long-exposure pixels in that direction
short exposure   horizontal / vertical (non-preferential) short-exposure pixels in that direction, multiplied by the exposure ratio
short exposure   flat (no direction)                      average of all surrounding long-exposure pixels
S16, performing motion detection on the area where the current pixel is located, judging whether the motion information exceeds a preset threshold value, performing motion compensation processing on the pixel value which moves, and removing ghost;
the motion compensation solves the problem that when a moving object exists in a scene, due to different exposure time, the positions of the moving object in two frames of images participating in fusion are different, and the fused images have ghost images.
When the central pixel of the input 5 × 5 pixel block is a long-exposure pixel, motion detection is performed within the pixel block to obtain motion information (motion). Because the current pixel is a long exposure, the short-exposure pixel value at the current position is first obtained through the complete full-resolution recovery interpolation algorithm; motion compensation is then performed using the detected motion information, the long-exposure pixel value (input) and the short-exposure pixel value (interpolated).
When the central pixel of the input 5 × 5 pixel block is a short-exposure pixel, motion detection is likewise performed within the pixel block to obtain motion information (motion). Because the current pixel is a short exposure, the long-exposure pixel value at the current position is first obtained through the complete full-resolution recovery interpolation algorithm; motion compensation is then performed using the detected motion information, the short-exposure pixel value (input) and the long-exposure pixel value (interpolated).
When the current pixel is a short exposure, noise reduction is additionally performed on the short-exposure pixel, specifically: frequency detection, noise-reduction processing and texture-enhancement processing are applied to the short-exposure pixel, and motion compensation is then performed on the processed short-exposure pixel.
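The text states only that the motion information, the input value and the interpolated value are combined; the combination rule below is one plausible, hypothetical scheme, not the patent's actual formula. When motion exceeds the threshold, the missing exposure is derived from the ghost-free input pixel via the exposure ratio instead of from spatial neighbours that may straddle the moving object.

```python
def motion_compensate(input_val, interp_val, motion, motion_thresh,
                      input_is_long, ratio=4.0):
    """Hypothetical S16 compensation: keep the spatial interpolation in
    static areas; in moving areas, rescale the input pixel by the (assumed)
    exposure ratio so both exposures describe the same instant."""
    if motion <= motion_thresh:
        return interp_val          # static: spatial interpolation is reliable
    return input_val / ratio if input_is_long else input_val * ratio
```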
And S17, performing brightness estimation on the region of the current pixel, and performing fusion decision and processing on the image by using the brightness value of the region of the current pixel and a preset fusion threshold.
The luminance estimation is performed on 3 × 3 pixel blocks of the long-exposure image; the luminance value L is compared with the set fusion thresholds S1 and S2 (S1 < S2) to determine the final fusion policy:
when L < S1, the brightness of the area where the pixel is located is low, and the brightness of the area can be effectively improved and the noise level can be reduced by completely adopting the long exposure pixel value.
When L > S2, the brightness of the area where the pixel is located is high, and the brightness saturation can be reduced by completely adopting the short-exposure pixel value, so that the image details of the high-brightness area are increased.
When S1 < L < S2, the region where the pixel is located has normal brightness, and the fused value is a linear weighted average of the long-exposure pixel value T1 and the short-exposure pixel value T2: F = ((S2 − L) × T1 + (L − S1) × T2) / (S2 − S1). This achieves a smooth transition in the fused image and prevents an obvious fusion boundary from appearing.
The fusion method provided by the invention refers to the brightness domain guidance fusion strategy of the long-exposure image, and can effectively inhibit the false color at the boundary of the high-brightness and low-brightness regions.
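The fusion decision of S17 can be sketched as follows. The text leaves T1/T2 unnamed, so mapping them to the long- and short-exposure values is an assumption, with the short value presumed already scaled into a common luminance domain.

```python
def fuse_pixel(L, t_long, t_short, s1, s2):
    """Three-way fusion decision driven by the long-exposure-image luminance
    L and the thresholds s1 < s2."""
    if L < s1:
        return t_long              # low brightness: long exposure only
    if L > s2:
        return t_short             # high brightness: short exposure only
    # normal brightness: linear weighted average for a smooth transition
    return ((s2 - L) * t_long + (L - s1) * t_short) / (s2 - s1)
```

At L exactly midway between the thresholds the result is the plain average of the two values, which is what prevents a visible fusion boundary.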
The high dynamic range image full resolution reconstruction method provided by the embodiment of the invention acquires, through a high dynamic range sensor, a high dynamic range image containing both long-exposure and short-exposure pixels. Gradient values in each direction of the region where the current pixel point is located are calculated, and whether that region is a flat region is judged. When it is not a flat region, it is judged whether the direction in which the current pixel point needs to be interpolated is a preferential or non-preferential interpolation direction and whether the limiting conditions of the non-preferential interpolation direction are met; interpolation calculation in the preferential or non-preferential interpolation direction is then performed on the current pixel point according to the judgment result, and motion compensation and fusion processing are performed on the region where the pixel point is located after the interpolation calculation. Compared with the prior art, the full-resolution reconstruction algorithm is implemented in hardware, can effectively realize the high-dynamic-range function in video, and has better processing capability in terms of image resolving power, image detail, motion scenes and the like, so that high-quality high-dynamic-range images can be provided.
An embodiment of the present invention further provides a high dynamic range image full resolution reconstruction apparatus, as shown in fig. 6, the apparatus includes:
the acquisition unit 11 is used for acquiring an image simultaneously containing long exposure pixels and short exposure pixels through a high dynamic range sensor;
the first calculating unit 12 is configured to calculate gradient values of the region where the current pixel point is located in each direction, and obtain gradient information of the region where the current pixel point is located in four directions;
a first judging unit 13, configured to judge, according to the gradient information, the direction of the region where the current pixel point is located, where, if the gradient values in the four directions are all smaller than a set threshold, the region is a flat region;
a second determining unit 14, configured to preliminarily determine, when the region where the current pixel point is located is not a flat region, the direction with the smaller gradient value as the interpolation direction, and further determine whether the direction in which the current pixel point needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction;
a third determining unit 15, configured to determine, when the direction with the smaller gradient is the non-preferential interpolation direction, whether the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold, and whether the region where the current pixel point is located is a saturated region, a low-luminance region or a motion region; if any of these conditions is met, the non-preferential interpolation direction is abandoned;
the second calculating unit 16 is configured to perform interpolation calculation in the preferential interpolation direction or in the non-preferential interpolation direction on the current pixel point according to the interpolation direction determination result;
the first processing unit 17 is configured to perform motion detection on the region where the current pixel is located, determine whether the motion information exceeds a preset threshold, perform motion compensation processing on pixel values where motion occurs, and remove ghosts;
and the second processing unit 18 is configured to perform brightness estimation on the region where the current pixel is located, and perform fusion decision and processing on the image by using the brightness value of the region where the current pixel is located and a preset fusion threshold.
Optionally, an exposure ratio of an image acquired by the high dynamic range sensor is set to 1:1, 1:2, 1:4, 1:8 or 1:16, the exposure ratio being a ratio of a long exposure time and a short exposure time of the image within the same frame.
Optionally, the first calculating unit 12 includes:
the conversion module is used for converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and the acquisition module is used for acquiring the horizontal gradient value, the vertical gradient value, the gradient value of 45 degrees at an incline and the gradient value of 135 degrees at an incline of the converted region where the current pixel point is located by using the corresponding gradient detection operator.
Optionally, the conversion module is configured to multiply the pixel value of the short exposure position of the region where the current pixel point is located by the exposure ratio to obtain a long exposure pixel value of the short exposure position of the region where the current pixel point is located, or divide the pixel value of the long exposure position of the region where the current pixel point is located by the exposure ratio to obtain a short exposure pixel value of the long exposure position of the region where the current pixel point is located.
Optionally, the second determiningunit 14 is configured to determine, when a pixel point different from the current pixel exposure time exists in a direction in which interpolation is required, that the interpolation direction is a preferential interpolation direction;
and when no pixel point with the exposure time different from that of the current pixel exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction.
Optionally, the second calculating unit 16 includes:
the first calculation module is used for performing interpolation calculation of a preferential interpolation direction on the current pixel point when the direction needing interpolation is the preferential interpolation direction;
a second calculating module, configured to, when the direction needing to be interpolated is a non-priority interpolation direction, select the non-priority interpolation direction with the following limitations: and if the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient of the preferential interpolation direction is smaller than a set threshold value, or the region in which the current pixel point is located is any one of a saturated region, a low-brightness region and a motion region, giving up to select the non-preferential interpolation direction, and performing interpolation calculation of the preferential interpolation direction on the current pixel point.
Optionally, the second computing module comprises:
the first calculation submodule is used for directly carrying out interpolation by using the short-exposure pixel in the direction needing interpolation to obtain the short-exposure pixel value of the current pixel position when the current pixel point is the long-exposure pixel and the pixel to be interpolated is the short-exposure pixel;
the second calculation submodule is used for directly carrying out interpolation by using the long exposure pixel in the direction needing interpolation to obtain the long exposure pixel value of the current pixel position when the current pixel point is the short exposure pixel and the pixel to be interpolated is the long exposure pixel;
a third computing submodule, configured to convert the long-exposure pixel value in the direction requiring interpolation into a short-exposure pixel value when the current pixel point is a long-exposure pixel and the pixel to be interpolated is a short-exposure pixel, and then perform interpolation computation, where the conversion method is dividing the long-exposure pixel value by an exposure proportion;
and the fourth calculation submodule is used for converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is multiplied by the exposure proportion.
Optionally, the apparatus further comprises:
the third calculation unit is used for directly interpolating all the short-exposure pixels around the current pixel point to obtain a short-exposure pixel value of the current long-exposure position when the region where the current pixel point is located is a flat region, the current pixel point is a long-exposure pixel, and the pixel to be interpolated is a short-exposure pixel;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
Optionally, the second processing unit 18 includes:
the brightness calculation module is used for calculating the brightness value of the region where the current pixel point is located;
the comparison module is used for comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value;
the fusion processing module is used for carrying out image fusion on the area where the current pixel point is located by adopting a long exposure pixel value when the brightness value is smaller than the first threshold value; when the brightness value is between a first threshold and a second threshold, carrying out image fusion on the region where the current pixel point is located by adopting a linear weighted average value of a long exposure pixel value and a short exposure pixel value; and when the brightness value is larger than the second threshold value, carrying out image fusion on the region where the current pixel point is located by adopting the short-exposure pixel value.
The high dynamic range image full resolution reconstruction device provided by the embodiment of the invention acquires, through a high dynamic range sensor, a high dynamic range image containing both long-exposure and short-exposure pixels. Gradient values in each direction of the region where the current pixel point is located are calculated, and whether that region is a flat region is judged. When it is not a flat region, it is judged whether the direction in which the current pixel point needs to be interpolated is a preferential or non-preferential interpolation direction and whether the limiting conditions of the non-preferential interpolation direction are met; interpolation calculation in the preferential or non-preferential interpolation direction is then performed on the current pixel point according to the judgment result, and motion compensation and fusion processing are performed on the region where the pixel point is located after the interpolation calculation. Compared with the prior art, the full-resolution reconstruction algorithm is implemented in hardware, can effectively realize the high-dynamic-range function in video, and has better processing capability in terms of image resolving power, image detail, motion scenes and the like, so that high-quality high-dynamic-range images can be provided.
The embodiment of the invention also provides electronic equipment which comprises the high dynamic range image full-resolution reconstruction device.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for full resolution reconstruction of a high dynamic range image, the method comprising:
acquiring an image simultaneously containing long exposure pixels and short exposure pixels through a high dynamic range sensor;
calculating gradient values of the region of the current pixel point in all directions to obtain gradient information of the region of the current pixel point in four directions;
judging the direction of the region where the current pixel point is located according to the gradient information, if the gradient values in four directions are all smaller than a set threshold value, the region is a flat region, when the region where the current pixel point is located is not a flat region, preliminarily determining the direction with the smaller gradient value as an interpolation direction, further judging whether the direction is a preferential interpolation direction or a non-preferential interpolation direction, and specifically: when a pixel point with different exposure time from the current pixel point exists in the direction needing interpolation, judging that the interpolation direction is a preferential interpolation direction; when no pixel point with the exposure time different from that of the current pixel point exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction;
when the direction with the smaller gradient is a non-preferential interpolation direction, judging whether the ratio of the gradient of the current pixel point in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold value, and judging whether the area where the current pixel point is located is a saturated area, a low-brightness area or a motion area; if the ratio of the gradient in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than the set threshold value, or the area where the current pixel point is located is a saturated area, a low-brightness area or a motion area, abandoning the non-preferential interpolation direction;
carrying out interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result to obtain a short exposure pixel value of the long exposure position of the current pixel point or a long exposure pixel value of the short exposure position, and obtaining a long exposure image with full resolution and a short exposure image with full resolution after the interpolation is finished;
performing motion detection on the area where the current pixel point is located to obtain motion information, and performing motion compensation processing by using the motion information obtained by detection, the input long-exposure pixel value and the short-exposure pixel value obtained by interpolation, or performing motion compensation processing by using the motion information obtained by detection, the input short-exposure pixel value and the long-exposure pixel value obtained by interpolation;
performing brightness estimation on the region where the current pixel point of the long-exposure image is located, and performing fusion decision and processing on the long-exposure image and the short-exposure image by using the brightness value of the region where the current pixel point of the long-exposure image is located, a preset fusion threshold and a motion compensation result;
the brightness estimation of the area where the current pixel point of the long-exposure image is located, and the fusion decision and processing of the long-exposure image and the short-exposure image by using the brightness value of the area where the current pixel point of the long-exposure image is located, a preset fusion threshold and a motion compensation result comprise: calculating the brightness value of the area where the current pixel point of the long exposure image is located; comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value; when the brightness value is smaller than the first fusion threshold value, adopting a long exposure pixel value as an image fusion result for the area where the current pixel point is located; when the brightness value is between the first fusion threshold value and the second fusion threshold value, the linear weighted average value of the long exposure pixel value and the short exposure pixel value is adopted as an image fusion result for the area where the current pixel point is located; and when the brightness value is larger than the second fusion threshold value, adopting the short-exposure pixel value as an image fusion result for the area where the current pixel point is located.
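The decision chain of claim 1 (flat-region test, preliminary choice of the smaller-gradient direction, preferential/non-preferential classification, and the abandon conditions) can be sketched as follows. The threshold values, the direction names, and the fall-back to the best preferential direction are assumptions for illustration; the claim itself only says "a set threshold":

```python
def decide_direction(grads, pref_dirs, special_region,
                     flat_thresh=8.0, ratio_thresh=0.5):
    """Sketch of the direction decision in claim 1.

    grads          : gradient magnitude per direction, e.g.
                     {'h': 3.0, 'v': 20.0, 'd45': 12.0, 'd135': 15.0}
    pref_dirs      : set of preferential directions, i.e. directions
                     along which pixels of the other exposure time exist
    special_region : True if the region is saturated, low-luminance,
                     or moving
    Returns ('flat', None), or (mode, direction) with mode either
    'preferential' or 'non-preferential'."""
    # Flat region: every directional gradient is below the threshold.
    if all(g < flat_thresh for g in grads.values()):
        return ('flat', None)
    # Preliminary choice: the direction with the smallest gradient.
    best = min(grads, key=grads.get)
    if best in pref_dirs:
        return ('preferential', best)
    # Non-preferential candidate: apply the abandon conditions stated
    # in the claim (gradient ratio below the set threshold, or a
    # saturated / low-luminance / motion region), falling back to the
    # best preferential direction when they hold.
    best_pref = min(pref_dirs, key=lambda d: grads[d])
    ratio = grads[best] / max(grads[best_pref], 1e-9)
    if ratio < ratio_thresh or special_region:
        return ('preferential', best_pref)
    return ('non-preferential', best)
```

The abandon test mirrors the condition as the claim states it; the threshold of 0.5 is purely a placeholder.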
2. The method of claim 1, wherein the image captured by the high dynamic range sensor has an exposure ratio of 1:1, 1:2, 1:4, 1:8, or 1:16, the exposure ratio being the ratio of short exposure time to long exposure time of the image within the same frame.
3. The method of claim 2, wherein the calculating the gradient values of the regions in which the current pixel is located in all directions comprises:
converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and acquiring a horizontal gradient value, a vertical gradient value, an oblique 45-degree gradient value and an oblique 135-degree gradient value of the region where the converted current pixel point is located by using a corresponding gradient detection operator.
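The four directional gradients can be computed as sums of absolute differences over a small window whose pixel values have already been converted to one exposure scale. The exact "corresponding gradient detection operators" are not given in the text, so the simple first-difference taps below are an illustrative stand-in:

```python
def directional_gradients(win):
    """Sum of absolute first differences along the horizontal,
    vertical, 45-degree and 135-degree directions in a square
    window (a list of equal-length rows). The single-tap
    differences stand in for the unspecified detection operators."""
    n = len(win)
    grads = {'h': 0.0, 'v': 0.0, 'd45': 0.0, 'd135': 0.0}
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                grads['h'] += abs(win[r][c + 1] - win[r][c])
            if r + 1 < n:
                grads['v'] += abs(win[r + 1][c] - win[r][c])
            if r + 1 < n and c + 1 < n:        # down-right diagonal
                grads['d135'] += abs(win[r + 1][c + 1] - win[r][c])
            if r + 1 < n and c - 1 >= 0:       # down-left diagonal
                grads['d45'] += abs(win[r + 1][c - 1] - win[r][c])
    return grads
```

On a window of horizontal stripes the horizontal gradient is zero and the vertical gradient dominates, so the smaller-gradient rule of claim 1 would select the horizontal direction.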
4. The method of claim 3, wherein converting the pixel value of the short exposure position of the region where the current pixel is located into a long exposure pixel value according to the exposure ratio comprises: dividing the pixel value of the short exposure position of the area where the current pixel point is located by the exposure proportion to obtain the long exposure pixel value of the short exposure position of the area where the current pixel point is located;
the step of converting the pixel value of the long exposure position of the area where the current pixel point is located into the short exposure pixel value according to the exposure proportion comprises the following steps: and multiplying the pixel value of the long exposure position of the area where the current pixel point is located by the exposure proportion to obtain the short exposure pixel value of the long exposure position of the area where the current pixel point is located.
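Numerically, with the exposure ratio r = t_short / t_long of claim 2 (so r = 0.25 for a 1:4 ratio), the two conversions of claim 4 are a single divide or multiply:

```python
def to_long_scale(short_val, exposure_ratio):
    """Map a short-exposure pixel value onto the long-exposure
    scale: divide by r = t_short / t_long (claim 4)."""
    return short_val / exposure_ratio

def to_short_scale(long_val, exposure_ratio):
    """Map a long-exposure pixel value onto the short-exposure
    scale: multiply by r (claim 4)."""
    return long_val * exposure_ratio
```

The two operations are exact inverses of each other, so a round trip returns the original value (up to floating-point precision).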
5. The method of claim 1, wherein the performing interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction determination result comprises:
when the direction needing interpolation is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point;
and when the direction needing interpolation is a non-preferential interpolation direction and the condition for abandoning the non-preferential interpolation direction is met, performing interpolation calculation of the preferential interpolation direction on the current pixel point.
6. The method according to claim 5, wherein when the direction to be interpolated is a preferential interpolation direction, performing interpolation calculation of the preferential interpolation direction on the current pixel point comprises:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, directly interpolating by using the short exposure pixel in the direction needing interpolation to obtain a short exposure pixel value of the current pixel position;
when the current pixel point is a short exposure pixel and the pixel to be interpolated is a long exposure pixel, directly interpolating by using the long exposure pixel in the direction needing interpolation to obtain a long exposure pixel value of the current pixel position;
when the direction needing interpolation is a non-preferential interpolation direction, the interpolation calculation of the current pixel point in the non-preferential interpolation direction comprises the following steps:
when the current pixel point is a long exposure pixel and the pixel to be interpolated is a short exposure pixel, converting the long exposure pixel value in the direction needing interpolation into a short exposure pixel value, and then performing interpolation calculation, wherein the conversion method is to multiply the long exposure pixel value by an exposure proportion;
and when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is divided by the exposure proportion.
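The four interpolation cases of claim 6 reduce to "average directly" in a preferential direction and "convert, then average" in a non-preferential one. The sketch below assumes a plain average of the directional neighbours, which the claim does not fix:

```python
def interpolate(neighbors, target_is_short, neighbors_are_short,
                exposure_ratio):
    """Interpolate the missing exposure value at the current pixel
    from the neighbours lying along the chosen direction.

    Preferential direction: the neighbours already carry the target
    exposure and are used as-is. Non-preferential direction: the
    neighbours carry the current pixel's own exposure and are first
    converted (multiply by r for long->short, divide by r for
    short->long, with r = t_short / t_long)."""
    if neighbors_are_short == target_is_short:
        vals = neighbors                                   # use as-is
    elif target_is_short:
        vals = [v * exposure_ratio for v in neighbors]     # long -> short
    else:
        vals = [v / exposure_ratio for v in neighbors]     # short -> long
    return sum(vals) / len(vals)
```

For a 1:4 ratio (r = 0.25), two long-exposure neighbours of 100 and 120 interpolate a short-exposure value of 27.5, and two short-exposure neighbours of 25 and 30 interpolate a long-exposure value of 110.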
7. The method according to claim 1, wherein after the determining the direction of the region where the current pixel point is located according to the gradient information, the method further comprises:
when the region where the current pixel point is located is a flat region, the current pixel point is a long exposure pixel, and the pixel to be interpolated is a short exposure pixel, directly interpolating all short exposure pixels around the current pixel point to obtain a short exposure pixel value of the current long exposure position;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
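The flat-region rule of claim 7 amounts to a mean over all surrounding pixels of the opposite exposure; the (value, is_long) neighbourhood representation below is an assumption for illustration:

```python
def flat_region_interpolate(window, current_is_long):
    """In a flat region, the missing exposure at the centre is taken
    as the mean of all surrounding pixels of the opposite exposure.
    `window` is a list of (value, is_long) pairs for the
    neighbourhood, excluding the centre pixel itself."""
    vals = [v for v, is_long in window if is_long != current_is_long]
    return sum(vals) / len(vals)
```

No directional selection is needed here, since the flat-region test of claim 1 has already established that all four gradients are below the threshold.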
8. A high dynamic range image full resolution reconstruction apparatus, the apparatus comprising:
the acquisition unit is used for acquiring an image simultaneously comprising long exposure pixels and short exposure pixels through the high dynamic range sensor;
the first calculation unit is used for calculating gradient values of the region where the current pixel point is located in all directions to obtain gradient information of the region where the current pixel point is located in four directions;
the first judgment unit is used for judging the direction of the region where the current pixel point is located according to the gradient information, and if the gradient values in the four directions are all smaller than a set threshold value, the region is a flat region;
the second judging unit is used for preliminarily determining that the direction with the smaller gradient value is an interpolation direction when the region where the current pixel point is located is not a flat region, and further judging whether the direction in which the current pixel point needs to be interpolated is a preferential interpolation direction or a non-preferential interpolation direction, and specifically comprises the following steps: when a pixel point with different exposure time from the current pixel point exists in the direction needing interpolation, judging that the interpolation direction is a preferential interpolation direction; when no pixel point with the exposure time different from that of the current pixel point exists in the direction needing interpolation, judging that the interpolation direction is a non-preferential interpolation direction;
a third determining unit, configured to determine, when the direction with the smaller gradient is a non-preferential interpolation direction, whether the ratio of the gradient of the current pixel in the non-preferential interpolation direction to the gradient in the preferential interpolation direction is smaller than a set threshold, and to determine whether the region where the current pixel is located is a saturated region, a low-luminance region, or a motion region; if the ratio is smaller than the set threshold, or the region where the current pixel is located is a saturated region, a low-luminance region, or a motion region, the unit abandons the non-preferential interpolation direction;
the second calculation unit is used for carrying out interpolation calculation in a preferential interpolation direction or interpolation calculation in a non-preferential interpolation direction on the current pixel point according to the interpolation direction judgment result to obtain a short exposure pixel value of the long exposure position or a long exposure pixel value of the short exposure position of the current pixel point, and obtaining a long exposure image with full resolution and a short exposure image with full resolution after interpolation is finished;
the first processing unit is used for carrying out motion detection on the area where the current pixel point is located to obtain motion information, and carrying out motion compensation processing by using the motion information obtained by detection, the input long-exposure pixel value and the short-exposure pixel value obtained by interpolation, or carrying out motion compensation processing by using the motion information obtained by detection, the input short-exposure pixel value and the long-exposure pixel value obtained by interpolation;
the second processing unit is used for carrying out brightness estimation on the area where the current pixel point of the long-exposure image is located, and carrying out fusion decision and processing on the long-exposure image and the short-exposure image by using the brightness value of the area where the current pixel point of the long-exposure image is located, a preset fusion threshold and a motion compensation result;
the second processing unit includes: the brightness calculation module is used for calculating the brightness value of the area where the current pixel point of the long exposure image is located; the comparison module is used for comparing the brightness value with a set first fusion threshold value and a set second fusion threshold value; the fusion processing module is used for adopting a long exposure pixel value as an image fusion result for the area where the current pixel point is located when the brightness value is smaller than the first fusion threshold value; when the brightness value is between the first fusion threshold value and the second fusion threshold value, the linear weighted average value of the long exposure pixel value and the short exposure pixel value is adopted as an image fusion result for the area where the current pixel point is located; and when the brightness value is larger than the second fusion threshold value, adopting the short-exposure pixel value as an image fusion result for the area where the current pixel point is located.
9. The apparatus of claim 8, wherein the image captured by the high dynamic range sensor has an exposure ratio of 1:1, 1:2, 1:4, 1:8, or 1:16, and wherein the exposure ratio is a ratio of a short exposure time and a long exposure time of the image within the same frame.
10. The apparatus of claim 9, wherein the first computing unit comprises:
the conversion module is used for converting the pixel value of the short exposure position of the area where the current pixel point is located into a long exposure pixel value according to the exposure proportion, or converting the pixel value of the long exposure position of the area where the current pixel point is located into a short exposure pixel value according to the exposure proportion;
and the acquisition module is used for acquiring the horizontal gradient value, the vertical gradient value, the gradient value of 45 degrees at an incline and the gradient value of 135 degrees at an incline of the converted region where the current pixel point is located by using the corresponding gradient detection operator.
11. The apparatus according to claim 10, wherein the converting module is configured to divide the pixel value of the short exposure position of the area where the current pixel is located by the exposure ratio to obtain a long exposure pixel value of the short exposure position of the area where the current pixel is located, or multiply the pixel value of the long exposure position of the area where the current pixel is located by the exposure ratio to obtain a short exposure pixel value of the long exposure position of the area where the current pixel is located.
12. The apparatus of claim 8, wherein the second computing unit comprises:
the first calculation module is used for performing interpolation calculation of a preferential interpolation direction on the current pixel point when the direction needing interpolation is the preferential interpolation direction;
and the second calculation module is used for performing interpolation calculation of the preferential interpolation direction on the current pixel point when the direction needing interpolation is a non-preferential interpolation direction and the condition for abandoning the non-preferential interpolation direction is met.
13. The apparatus of claim 12, wherein the second computing module comprises:
the first calculation submodule is used for directly carrying out interpolation by using the short-exposure pixel in the direction needing interpolation to obtain the short-exposure pixel value of the current pixel position when the current pixel point is the long-exposure pixel and the pixel to be interpolated is the short-exposure pixel;
the second calculation submodule is used for directly carrying out interpolation by using the long exposure pixel in the direction needing interpolation to obtain the long exposure pixel value of the current pixel position when the current pixel point is the short exposure pixel and the pixel to be interpolated is the long exposure pixel;
a third computing submodule, configured to convert the long-exposure pixel value in the direction requiring interpolation into a short-exposure pixel value when the current pixel point is a long-exposure pixel and the pixel to be interpolated is a short-exposure pixel, and then perform interpolation computation, where the conversion method is to multiply the long-exposure pixel value by an exposure ratio;
and the fourth calculation submodule is used for converting the short-exposure pixel value in the direction needing interpolation into a long-exposure pixel value when the current pixel point is a short-exposure pixel and the pixel to be interpolated is a long-exposure pixel, and then performing interpolation calculation, wherein the conversion method is that the short-exposure pixel value is divided by the exposure proportion.
14. The apparatus of claim 8, further comprising:
the third calculation unit is used for directly interpolating all the short-exposure pixels around the current pixel point to obtain a short-exposure pixel value of the current long-exposure position when the region where the current pixel point is located is a flat region, the current pixel point is a long-exposure pixel, and the pixel to be interpolated is a short-exposure pixel;
and when the region where the current pixel point is located is a flat region, the current pixel point is a short exposure pixel, and the pixel to be interpolated is a long exposure pixel, directly interpolating all long exposure pixels around the current pixel point to obtain a long exposure pixel value of the current short exposure position.
15. An electronic device, characterized in that it comprises a high dynamic range image full resolution reconstruction apparatus according to any one of claims 8 to 14.
CN201810394367.4A | filed 2018-04-27 | High dynamic range image full-resolution reconstruction method and device and electronic equipment | Active | granted as CN110418081B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810394367.4A (CN110418081B) | 2018-04-27 | 2018-04-27 | High dynamic range image full-resolution reconstruction method and device and electronic equipment

Publications (2)

Publication NumberPublication Date
CN110418081A CN110418081A (en)2019-11-05
CN110418081Btrue CN110418081B (en)2021-12-24

Family

ID=68346743

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810394367.4A (Active, granted as CN110418081B (en)) | High dynamic range image full-resolution reconstruction method and device and electronic equipment | 2018-04-27 | 2018-04-27

Country Status (1)

Country | Link
CN | CN110418081B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN114450934B (en)* | 2020-08-31 | 2023-06-09 | Huawei Technologies Co., Ltd. | Method, apparatus, device and computer readable storage medium for acquiring image
CN112689100B (en)* | 2020-12-25 | 2022-08-02 | Beijing Lynxi Technology Co., Ltd. | Image detection method, device, equipment and storage medium
WO2023094870A1 (en)* | 2021-11-29 | 2023-06-01 | Weta Digital Limited | Increasing dynamic range of a virtual production display

Citations (8)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN101755286A (en)* | 2007-07-24 | 2010-06-23 | Sharp Corporation | Image upscaling based upon directional interpolation
CN101816171A (en)* | 2007-10-03 | 2010-08-25 | Nokia Corporation | Multi-exposure pattern for enhancing dynamic range of images
CN102647565A (en)* | 2012-04-18 | 2012-08-22 | GalaxyCore Microelectronics (Shanghai) Co., Ltd. | Arrangement method of pixel array, image sensor and image sensing method
CN102868890A (en)* | 2011-07-06 | 2013-01-09 | Sony Corporation | Image processing apparatus, imaging apparatus, image processing method, and program
CN104253946A (en)* | 2013-06-27 | 2014-12-31 | Altek Semiconductor Corp. | Method for generating high dynamic range image and image sensor thereof
CN104349069A (en)* | 2013-07-29 | 2015-02-11 | Quanta Computer Inc. | Method for shooting high dynamic range film
CN104639920A (en)* | 2013-11-13 | 2015-05-20 | Shanghai Weirui Intelligent Technology Co., Ltd. | Wide dynamic fusion method based on single-frame double-pulse exposure mode
CN105430402A (en)* | 2010-04-12 | 2016-03-23 | Panasonic Intellectual Property Corporation of America | Image encoding method and image encoding device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
JP4358055B2 (en)* | 2004-07-21 | 2009-11-04 | Toshiba Corporation | Interpolated pixel generation circuit
CN102075688B (en)* | 2010-12-28 | 2012-07-25 | Qingdao Hisense Network Technology Co., Ltd. | Wide dynamic processing method for single-frame double-exposure image
JP2013026722A (en)* | 2011-07-19 | 2013-02-04 | Toshiba Corp | Image processing apparatus
JP2013066142A (en)* | 2011-08-31 | 2013-04-11 | Sony Corp | Image processing apparatus, image processing method, and program
US9007488B2 (en)* | 2012-03-08 | 2015-04-14 | Semiconductor Components Industries, LLC | Systems and methods for generating interpolated high-dynamic-range images
US20140063300A1 (en)* | 2012-09-06 | 2014-03-06 | Aptina Imaging Corporation | High dynamic range imaging systems having clear filter pixel arrays

Also Published As

Publication Number | Publication Date
CN110418081A (en) | 2019-11-05

Similar Documents

Publication | Title
US10424049B2 (en) | Image processing device, method, and recording medium
CN110418065B (en) | High dynamic range image motion compensation method and device and electronic equipment
US6445833B1 (en) | Device and method for converting two-dimensional video into three-dimensional video
US7764827B2 (en) | Multi-view image generation
US8526729B2 (en) | Image processing apparatus and method, and program
US9262811B2 (en) | System and method for spatio-temporal video image enhancement
KR101460688B1 (en) | Image processing apparatus and control method of the same
JP4578566B2 (en) | Image generating method, apparatus, program thereof, and recording medium recording program
US8411205B2 (en) | Noise reducing image processing apparatus
CN112311962A (en) | Video denoising method and apparatus, and computer-readable storage medium
US20100067818A1 (en) | System and method for high quality image and video upscaling
US20170256067A1 (en) | Image processing device, image processing method, and solid-state imaging device
CN110418081B (en) | High dynamic range image full-resolution reconstruction method and device and electronic equipment
US8503531B2 (en) | Image processing apparatus and method, recording medium, and program
JP3674186B2 (en) | Image information conversion apparatus and method
CN111294545A (en) | Image data interpolation method and device, storage medium and terminal
US9007494B2 (en) | Image processing apparatus, method for controlling the same, and storage medium
JP5933690B2 (en) | Image processing apparatus and method, and image processing program
KR101299196B1 (en) | Apparatus for up-converting frame rate of video signal and method thereof
KR101158847B1 (en) | Deinterlacing apparatus and method using edge map
KR100741517B1 (en) | High-resolution color interpolation method considering correlation between channels
GB2514557A (en) | Image processing
TWI386868B (en) | Method of motion detection using content adaptive penalty
CN109754370B (en) | Image denoising method and device
JP3922286B2 (en) | Coefficient learning apparatus and method

Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
CB02Change of applicant information
CB02Change of applicant information

Address after:100191, Haidian District, Zhichun Road, Beijing No. 7 to the real building, block B, 18

Applicant after:Beijing Ziguang zhanrui Communication Technology Co.,Ltd.

Address before:100191, Haidian District, Zhichun Road, Beijing No. 7 to the real building, block B, 18

Applicant before:BEIJING SPREADTRUM HI-TECH COMMUNICATIONS TECHNOLOGY Co.,Ltd.

GR01Patent grant
GR01Patent grant
