CN107370963B - Image processing method, image processing device and electronic equipment - Google Patents

Image processing method, image processing device and electronic equipment

Info

Publication number
CN107370963B
CN107370963B · CN107370963A · CN201710748575.5A
Authority
CN
China
Prior art keywords
image
mirror
partial
luminance
partial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710748575.5A
Other languages
Chinese (zh)
Other versions
CN107370963A (en)
Inventor
李江涛
韩建辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Information Technology Co Ltd
Original Assignee
Beijing Horizon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Information Technology Co Ltd
Priority to CN201710748575.5A
Publication of CN107370963A
Application granted
Publication of CN107370963B
Status: Active
Anticipated expiration

Abstract

An image processing method, an image processing apparatus, and an electronic device are disclosed. The image processing method comprises the following steps: imaging a subject to obtain an initial image, the initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; dividing the initial image into at least a first partial image and a second partial image; and synthesizing the first partial image and the second partial image to obtain a synthesized image, the synthesized image having a high dynamic range with respect to the same image content. In this way, a single image captured in one shot and containing portions of different brightness can be processed into an image having a high dynamic range.

Description

Image processing method, image processing device and electronic equipment
Technical Field
The present invention relates generally to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and an electronic device.
Background
High Dynamic Range (HDR) photography has now been widely used to compensate for the limited dynamic range of most digital imaging sensors. The dynamic range of a photograph refers to the luminance range between the darkest color and the brightest color, and may also represent the hue range. Specifically, in an outdoor scene in bright sunlight, the brightness range from a shadow area to the brightest highlight area far exceeds the capturing capability of the digital camera. If the exposure setting of the camera is biased toward the shadow, the highlight area will be overexposed, becoming a pure white color without detail. Conversely, if the exposure setting of the camera is biased toward the highlight region, the shadow portion becomes a black patch.
A high dynamic range image may provide a higher dynamic range and more image detail than a normal image. Typically, normal images with different exposure times are first acquired and then synthesized into the final HDR image. That is, an HDR photograph integrates multiple photographs of the same scene, e.g., 2 or more, each taken with a different exposure setting. A dynamic range that cannot be achieved by a single exposure is created by combining the dark-area detail captured in the over-exposed frame with the bright-area detail captured in the under-exposed frame.
However, HDR implemented in this way in the prior art is not real-time, because multiple exposures must be composited. When the photographed object moves between the first and second shots (or the camera/photographing device itself moves), the frames taken at different exposure times no longer align, so the final composite suffers from motion blur or ghosting. Moreover, this approach cannot be used for video recording.
Thus, there is a need for improved image processing schemes.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the present application provide an image processing method, an image processing apparatus, and an electronic device capable of processing a single image having different brightness portions captured at a time into an image having a high dynamic range.
According to an aspect of the present application, there is provided an image processing method including: imaging a subject to obtain an initial image, the initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; dividing the initial image into at least a first partial image and a second partial image; and synthesizing the first partial image and the second partial image to obtain a synthesized image, the synthesized image having a high dynamic range with respect to the same image content.
According to another aspect of the present application, there is provided an image processing apparatus including: an image capturing unit configured to image a subject to obtain an initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; an image dividing unit for dividing the initial image into at least a first partial image and a second partial image; and an image synthesizing unit configured to synthesize the first partial image and the second partial image to obtain a synthesized image, the synthesized image having a high dynamic range with respect to the same image content.
According to still another aspect of the present application, there is provided an image forming apparatus including: a lens; and an image processing apparatus as described above.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory in which computer program instructions are stored which, when executed by the processor, cause the processor to perform the image processing method as described above.
According to yet another aspect of the present application, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the image processing method as described above.
Compared with the prior art, with the image processing method, the image processing apparatus, and the electronic device according to the embodiments of the present application, a subject can be imaged to obtain an initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; the initial image is divided into at least the first partial image and the second partial image; and the first partial image and the second partial image are synthesized to obtain a synthesized image having a high dynamic range in the overlapping region. A single image with portions of different brightness, captured in a single shot, can thus be processed into an image having a high dynamic range, so that real-time HDR shooting can be performed, CMOS sensor parameters do not need to be adjusted for each exposure, and the method is suitable for HDR video and HDR monitoring applications.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 illustrates a schematic diagram of an optical device according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an optical device according to an embodiment of the present application secured to a lens of an imaging device;
FIG. 3 is a schematic view of a field of view formed by an optic according to an embodiment of the present application;
FIG. 4 is a schematic view of a field of view formed by an optic relative to a lens according to an embodiment of the present application;
FIG. 5 is a schematic view of a mirror with a transmissive film according to an embodiment of the present application;
FIG. 6 is a schematic illustration of an imaging process according to an embodiment of the present application;
FIGS. 7A and 7B are schematic diagrams illustrating the manner in which mirror dimensions are determined in an optical device according to an embodiment of the present application;
FIG. 8 illustrates a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic illustration of an initial image of an image processing method according to an embodiment of the present application;
FIGS. 10A and 10B are schematic diagrams of initial image segmentation in an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic illustration of split image mirroring in an image processing method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of image composition in an image processing method according to an embodiment of the present application;
FIG. 13 illustrates a schematic block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 14 illustrates a schematic diagram of an imaging system according to an embodiment of the present application;
FIG. 15 illustrates a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, HDR in the prior art is not achieved in real time because it requires multiple exposures to be synthesized; a moving object photographed at different exposure times leaves ghosting in the composite, and video recording is not possible.
In view of the above-described problems, a basic idea of the present application is to propose an image processing method, an image processing apparatus, and an electronic device, in which a captured image is a single image including a plurality of regions having different brightnesses, so that a plurality of regions of the same subject having different brightnesses included in the single image are divided and synthesized to obtain an image having a high dynamic range.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Overview of image processing method
The image processing method according to the embodiment of the application comprises the following steps: imaging a subject to obtain an initial image, the initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; dividing the initial image into at least a first partial image and a second partial image; and synthesizing the first partial image and the second partial image to obtain a synthesized image, the synthesized image having a high dynamic range with respect to the same image content.
By the image processing method according to the embodiment of the present application, it is possible to obtain an initial image including a first partial image and a second partial image having first luminance and second luminance different from each other, respectively, wherein image contents of the first partial image and the second partial image are at least partially identical. Therefore, by dividing the obtained initial image into a first partial image and a second partial image and synthesizing, the same image content having different brightness in the first partial image and the second partial image is synthesized into an image portion having a high dynamic range in the synthesized image.
In this way, by the image processing method according to the embodiment of the present application, the same image content having different brightnesses is contained in the initial image obtained by imaging the subject, and since the first partial image and the second partial image are obtained by shooting at the same time, the same image content is identical except for the difference in brightness therebetween. Thus, by synthesizing the first partial image and the second partial image, a completely real-time HDR portion is obtained in the synthesized image, thereby realizing real-time HDR photographing of the image.
Here, in the image processing method according to the embodiment of the present application, an initial image including a first partial image and a second partial image having first luminance and second luminance different from each other, respectively, may be obtained by a variety of methods. For example, in one example, the image processing method according to the embodiment of the present application can image a subject through a fly-eye lens. Specifically, a plurality of images of an object can be obtained simultaneously by imaging the object with the fly-eye lens, and by setting the transmittances of the plurality of lenses in the fly-eye lens to be different from each other, a single image including a plurality of areas having different brightnesses can be obtained, and the image contents of the partial images of the plurality of areas are at least partially identical, if not identical.
In addition, in the image processing method according to the embodiment of the present application, an initial image including a first partial image and a second partial image having first luminance and second luminance different from each other, respectively, may be obtained using a specific optical device.
In one example, the optical device includes: a housing including at least a first inner surface and a second inner surface, the first inner surface and the second inner surface extending in a first direction and being parallel to each other; a first mirror disposed on the first inner surface and having a first reflectivity; and a second mirror disposed on the second inner surface and having the first reflectivity.
By imaging the subject with the optical device, the obtained image may include a transmission region transmitted through the housing of the optical device and a reflection region reflected by the first mirror and the second mirror. Here, it is apparent that the transmission image of the transmission region and the reflection image of the reflection region have at least partially the same image content. And, since the first mirror and the second mirror have the first reflectivity, the luminances of the transmission image and the reflection image differ; for example, the luminance of the transmission image is greater than that of the reflection image. Thus, by dividing the obtained image into a transmission image and a reflection image and combining them, a combined image having a high dynamic range with respect to the same image content in the transmission image and the reflection image can be obtained. The optical device according to embodiments of the present application will be described in further detail below.
Exemplary optical devices
Fig. 1 illustrates a schematic diagram of an optical device according to an embodiment of the present application. As shown in fig. 1, the optical device 100 according to the embodiment of the present application includes a housing 110, the housing 110 includes at least a first inner surface 111 and a second inner surface 112, and the first inner surface 111 and the second inner surface 112 extend in a first direction and are parallel to each other. For example, the first and second inner surfaces 111 and 112 may extend in a horizontal direction or a vertical direction and have a certain scale in a depth direction perpendicular to the horizontal and vertical directions.
The optical device 100 according to an embodiment of the present application further comprises a first mirror disposed on the first inner surface 111 and having a first reflectivity; and a second mirror disposed on the second inner surface 112 and having the first reflectivity.
For example, the housing 110 may have a first end and a second end. When used in the optical device 100 to image or view a subject, the first end may be oriented toward the lens or the human eye and the second end may be oriented toward the subject.
In this way, the optical device according to the embodiment of the present application can form a transmission image of a subject at a first end by aligning a second end with the subject, and further can form two reflection images of the subject at the first end by means of a first mirror provided on the first inner surface 111 and a second mirror provided on the second inner surface 112, so that a single image including a plurality of areas having different brightness can be formed subsequently in combination with the transmission image for image processing (for example, synthesizing an HDR image or for parallax detection).
Further, as shown in fig. 1, the housing 110 further includes a third inner surface 113 and a fourth inner surface 114, the third inner surface 113 and the fourth inner surface 114 extending in a second direction and being parallel to each other, and the first direction and the second direction being perpendicular to each other. For example, the third inner surface 113 and the fourth inner surface 114 may extend in a vertical direction or a horizontal direction, and have a certain dimension in a depth direction perpendicular to the vertical direction and the horizontal direction. The first, second, third and fourth inner surfaces 111, 112, 113 and 114 may constitute a rectangular parallelepiped shape. That is, the four inner surfaces 111, 112, 113, and 114 of the case 110 of the optical device 100 according to the embodiment of the present application are respectively rectangular in shape and are respectively plane-symmetrical with respect to the center line of the case 110 as an axis.
For example, in practice, the housing 110 may be formed to have a rectangular cylindrical shape, and as shown in fig. 1, the cross-sectional shape of the rectangular cylindrical shape of the housing is also rectangular. Here, it will be understood by those skilled in the art that although the four inner surfaces are each considered to be rectangular in shape in fig. 1, the shape thereof may be square. For example, the first and second inner surfaces parallel to each other may be square in shape, or the third and fourth inner surfaces parallel to each other may be square in shape, or the first to fourth inner surfaces may all be square. Accordingly, the cross section of the rectangular cylindrical housing may be rectangular or square depending on the shape of the four inner surfaces. For example, the lengths of the four inner surfaces in the first direction and the length in the second direction may be equal to each other, so that the rectangular-cylinder-shaped housing has a square cross section. Further, the four inner surfaces may each be square, so that the rectangular-cylindrical-shaped housing has a square cross section. Therefore, in the optical device 100 according to the embodiment of the present application, the rectangular cylindrical shape of the housing 110 also covers the case where the housing 110 is in a square cylindrical shape. Further, in the embodiment of the present application, it is only required that the four inner surfaces of the housing 110 have a rectangular parallelepiped shape or a square shape, and the outer surfaces thereof may have any shape, such as a cylinder shape, a prism shape, a cone shape, etc., depending on different industrial designs.
The optical device according to the embodiment of the present application further includes a fixing member for fixing a positional relationship between the optical device and a lens of the imaging apparatus such that a center line of the housing is collinear with a center line of the lens.
For example, the securing means may be any mechanical structure, such as a thread, a hinge, etc., formed at the first end opening for forming a mechanical connection with a corresponding mechanical structure on the lens. Furthermore, the fixing member may also be a mechanical structure independent of the optics and the lens, such as a separate clamp or bracket, etc., as long as it can be used to accurately position the optics on the lens.
Fig. 2 is a schematic view of an optical device according to an embodiment of the present application secured to a lens of an imaging apparatus. As shown in fig. 2, in order to ensure that the transmitted image passing through the housing of the optical device and the reflected image formed by the mirror on the inner surface are imaged by the imaging apparatus via the lens, the center line of the housing of the optical device is collinear with the center line of the lens, so that the housing of the optical device is aligned with the lens of the imaging apparatus. In this way, the transmission image passing through the housing may be located at the center of the single image imaged by the imaging device, and the two reflection images formed by the two reflection mirrors on the inner surface may be located on both sides of the single image imaged by the imaging device, respectively. Also, depending on the position where the mirror is disposed, the reflected image formed by the mirror on the inner surface may be located on both upper and lower sides of a single image imaged by the imaging device, or on both left and right sides thereof.
Also, as shown in fig. 2, the first end of the housing of the optical device according to the embodiment of the present application faces the lens, i.e., the inner edge of the housing may be at a predetermined distance from the plane of the lens. Here, as will be appreciated by those skilled in the art, the inner edge of the housing may also continue to extend toward the lens, for example until it is aligned with the plane of the lens. In addition, the housing may extend even further toward the imaging device, but any portion extending beyond the lens plane substantially cannot contribute to imaging through the lens by reflection. Therefore, in the optical device according to the embodiment of the present application, if a reflected image identical to the transmitted image is to be obtained, the size of the mirror on the inner surface of the housing needs to be defined.
That is, in the optical device according to the embodiment of the present application, the first mirror and the second mirror have at least a first size, and the first size is such that the first mirror and the second mirror cover a quarter of the field of view of the lens in the second direction, respectively.
As described above, when the first and second inner surfaces extend in the horizontal direction as shown in fig. 1, the first and second mirrors are located on the upper and lower inner surfaces 111 and 112 of the housing, respectively. In this case, the reflected images formed via the first mirror and the second mirror are located on the upper side and the lower side of the transmitted image. Therefore, in order to obtain a reflected image identical to the transmitted image, the first dimensions of the first mirror and the second mirror are set such that each mirror covers a quarter of the field of view of the lens in the vertical direction as shown in fig. 1. In this way, the transmitted image occupies one half of the field of view of the lens, and the upper and lower reflected images obtained by reflection at the first and second mirrors each occupy one quarter of the field of view. After the upper and lower reflected images are stitched, a reflected image identical to the transmitted image can be obtained.
Likewise, when the first and second inner surfaces extend in the vertical direction as shown in fig. 1, the first and second mirrors are located on the left and right inner surfaces 113 and 114 of the housing, respectively. In this case, the reflected images formed via the first mirror and the second mirror are located on the left and right sides of the transmitted image. Therefore, in order to obtain a reflected image identical to the transmitted image, the first dimensions of the first mirror and the second mirror are set such that each mirror covers a quarter of the field of view of the lens in the horizontal direction as shown in fig. 1. In this way, the transmitted image occupies one half of the field of view of the lens, and the left and right reflected images obtained by reflection at the first and second mirrors each occupy one quarter of the field of view. After the left and right reflected images are stitched, a reflected image identical to the transmitted image can be obtained.
Of course, those skilled in the art will appreciate that in some application scenarios it may not be necessary to obtain a reflected image exactly identical to the transmitted image; it may be sufficient that the transmitted and reflected images are only partially identical. In such a case, the size of the mirror need not satisfy the above condition of covering one quarter of the field of view of the lens. Likewise, the reflected images formed via the first mirror and the second mirror do not necessarily need to be of identical size; in such a case, the first mirror and the second mirror need not have identical sizes and may have different sizes.
In addition, if the reflected image contains information redundant with the transmitted image, for example if each reflected image is larger than one half of the transmitted image, the stitched reflected image contains repeated image information. In that case, a reflected image identical to the transmitted image can still be obtained by cropping the reflected image. Therefore, the first mirror and the second mirror have at least a first size capable of covering a quarter of the field of view of the lens; that is, the first mirror and the second mirror may also have a size larger than the first size.
Also, it will be appreciated by those skilled in the art that although the expressions transmission image and reflection image are used in the description of the embodiments of the present application, the transmission image and the reflection image are in substance different areas of a single image imaged through the optical device. Therefore, in both the above and the following description, the transmission image and the reflection image do not refer to separate images, but to different image areas within a single image, i.e., partial images having different brightness.
Alternatively or additionally, the optical device according to an embodiment of the present application may further include: a third mirror disposed on the third inner surface and having a second reflectivity different from the first reflectivity; and a fourth mirror disposed on the fourth inner surface and having the second reflectivity.
And, the third mirror and the fourth mirror have at least a second dimension, and the second dimension is such that the third mirror and the fourth mirror each cover a quarter of the field of view of the lens in the first direction.
Here, the third mirror and the fourth mirror are similar to the first mirror and the second mirror described earlier in terms of specific arrangement and operation, except that the arrangement is different, for example, in the case where the first mirror and the second mirror extend in the horizontal direction, the third mirror and the fourth mirror extend in the vertical direction; and in the case where the first and second mirrors extend in the vertical direction, the third and fourth mirrors extend in the horizontal direction.
In addition, in the optical device according to the embodiment of the present application, if only a transmission image and one reflection image having a brightness different from that of the transmission image need to be formed, there may be only a pair of mirrors, for example, a first mirror and a second mirror. In contrast, if it is necessary to form a transmission image and two reflection images having different brightness from the transmission image, it is necessary to employ two pairs of mirrors, for example, a first mirror and a second mirror, and a third mirror and a fourth mirror, simultaneously.
Further, since the first reflectance of the first and second mirrors differs from the second reflectance of the third and fourth mirrors, the brightness of the reflected images formed by the first and second mirrors also differs from the brightness of the images formed by the third and fourth mirrors. In this way, the optical device according to embodiments of the present application can form a single image that includes regions of different brightness which, after corresponding image processing, can be used for the HDR function of the imaging apparatus or for binocular stereoscopic imaging as mentioned above.
Fig. 3 is a schematic view of a field of view formed by an optical device according to an embodiment of the present application. In fig. 3, it is assumed that the lens (first end of the housing) is in the out-of-paper direction, and the subject (second end of the housing) is in the into-the-paper direction. As shown in fig. 3, with the rectangular inner surfaces of the housing of the optical device according to the embodiment of the present application, the middle of the field of view of the lens passes directly through the housing and is used for transmission imaging, while the periphery of the field of view is covered by a mirror over one quarter of the field in the horizontal direction and in the vertical direction, respectively, and is used for reflection imaging.
Fig. 4 is a schematic view of a field of view formed by an optical device according to an embodiment of the present application relative to a lens. Fig. 4 is substantially the same as fig. 3, except that the lens (first end of the housing) is assumed to be in the into-the-paper direction and the subject (second end of the housing) in the out-of-paper direction, and the relationship of the field of view with respect to the lens is plotted. As shown in fig. 4, the optical device includes a first mirror 120, a second mirror 130, a third mirror 140, and a fourth mirror 150, and the lens is positioned at the center of the transmission field of view formed by the optical device, that is, the center line of the housing is collinear with the center line of the lens.
For example, in the optical device according to the embodiment of the present application, in the case where only the first mirror and the second mirror are employed, the first reflectance may be 66%, for example.
Also, in the optical device according to the embodiment of the present application, in the case where the first and second mirrors and the third and fourth mirrors are all employed, the first reflectance may be, for example, 66%, and the second reflectance may be 33%.
Here, it will be appreciated by those skilled in the art that the values of the first and second reflectivities described above are merely examples, and that other specific reflectivity values may also be employed by the optical device according to embodiments of the present application. Preferably, for convenience of image processing, when it is necessary to form a transmission image and two reflection images having different brightness, brightness values of the transmission image and the two reflection images may constitute an arithmetic progression. Accordingly, since the reflectivity of the transmission image can be regarded as 1, the first reflectivity, the second reflectivity and 1 also constitute an arithmetic series.
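As a quick check of this arithmetic-progression condition with the example values above (purely illustrative; 66% and 33% are presumably rounded values of 2/3 and 1/3):
1 − 2/3 = 2/3 − 1/3 = 1/3
so the relative brightnesses 1, 2/3 (≈66%) and 1/3 (≈33%) of the transmission image and the two reflection images decrease in equal steps.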
Further, in order to achieve the first reflectivities of the first and second mirrors, and the second reflectivities of the third and fourth mirrors, various approaches may be employed.
For example, the mirror surfaces of the first mirror and the second mirror may be directly made of a material having a first reflectance, such as a glass material having a reflectance less than total reflection, and the mirror surfaces of the third mirror and the fourth mirror may be made of a material having a second reflectance.
In addition, the first reflecting mirror and the second reflecting mirror with the first reflectivity, and the third reflecting mirror and the fourth reflecting mirror with the second reflectivity can be obtained by adopting a mode of coating a film on a plane mirror for realizing total reflection.
Specifically, in the optical device according to the embodiment of the present application, the first mirror includes a first plane mirror and a first transmissive film provided on a surface of the first plane mirror, and the first transmissive film has a first transmittance that is associated with the first reflectance; the second mirror includes a second planar mirror and a second transmissive film disposed on a surface of the second planar mirror, and the second transmissive film has a first transmittance.
Fig. 5 is a schematic view of a mirror with a transmissive film according to an embodiment of the present application. As shown in fig. 5, the mirror 200 includes a flat mirror 201 and a transmissive film 202 provided on the surface of the flat mirror 201.
Here, in the case where a transmissive film is provided on the surface of the flat mirror, light rays actually pass through the transmissive film twice when being reflected via the mirror. That is, the incident light is first incident on the plane mirror through the transmissive film, and then reflected by the plane mirror, and then passes through the transmissive film. Therefore, in the case of achieving the same reflectance, the transmittance of the transmissive film should be the square root of the reflectance.
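In formula form, assuming the underlying plane mirror is totally reflecting so the only losses are the two passes through the film:
R_eff = T × R_mirror × T = T² (with R_mirror = 1), hence T = √R_eff.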
That is, in the optical device according to the embodiment of the present application, the first transmittance corresponding to the first reflectance of 66% is √0.66, i.e., approximately 81%.
Similarly, in the optical device according to the embodiment of the present application, the third mirror includes a third plane mirror and a third transmissive film provided on a surface of the third plane mirror, and the third transmissive film has a second transmittance that is associated with the second reflectance; and the fourth reflecting mirror includes a fourth plane mirror and a fourth transmissive film disposed on a surface of the fourth plane mirror, and the fourth transmissive film has a second transmittance.
And, in the optical device according to the embodiment of the present application, the first transmittance corresponding to the first reflectance of 66% is √0.66 ≈ 81%, and the second transmittance corresponding to the second reflectance of 33% is √0.33 ≈ 57%.
Here, the second mirror (second plane mirror and second transmissive film), the third mirror (third plane mirror and third transmissive film), and the fourth mirror (fourth plane mirror and fourth transmissive film) are constructed similarly to the first mirror, which includes the first plane mirror and the first transmissive film as described above. Also, the third and fourth transmissive films have the second transmittance, similarly to the above description of the first and second transmissive films having the first transmittance. The description is therefore not repeated here in order to avoid redundancy.
Fig. 6 is a schematic diagram of an imaging process according to an embodiment of the present application. As shown in fig. 6, a mirror 300 having a transmissive film 301 is disposed on the object side of the lens group 310 of the lens, and imaging light transmitted past the mirror and light reflected by the mirror 300 are projected onto a CMOS image sensor 320 through the lens group 310 to form an image. In fig. 6, only a pair of upper and lower mirrors is shown for convenience, but in practical applications a pair of left and right mirrors may be used instead of, or in addition to, the pair of upper and lower mirrors.
As described above, in order for a mirror extending in the horizontal direction to cover a quarter of the field of view of the lens in the vertical direction, and/or for a mirror extending in the vertical direction to cover a quarter of the field of view of the lens in the horizontal direction, the size of the mirror extending in the horizontal or vertical direction must be defined. In the following, an example of the calculation of the mirror size is given.
First, it is defined that, in the optical device according to the embodiment of the present application, the first direction corresponds to the length direction of the photosensitive chip of the imaging apparatus, and the second direction corresponds to the width direction of the photosensitive chip of the imaging apparatus.
Generally, when the image forming apparatus is in a normal operation mode (is held horizontally), the length direction of the photosensitive chip is the horizontal direction, and the width direction of the photosensitive chip is the vertical direction. However, since the imaging apparatus is not necessarily operated in the horizontal posture, that is, the imaging apparatus may also be operated in the vertical posture (held vertically). Therefore, in the optical device according to the embodiment of the present application, the calculation method of the mirror size is described in terms of the length direction and the width direction of the photosensitive chip, not in the horizontal direction and the vertical direction.
This is also because the shape of the field of view of the lens is substantially based on the shape of the photosensitive chip. That is, in the case where the photosensitive chip is rectangular, the shape of the formed image is also rectangular, and accordingly, the field of view of the lens is also rectangular. In the case that the photosensitive chip is square, the shape of the formed image is square, and accordingly, the field of view of the lens is square.
Moreover, as described above, since a pair of mirrors parallel to each other need to cover a quarter of the field of view in the corresponding direction, the size of the mirrors is also related to the shape of the photosensitive chip. That is, the cross-sectional shape of the rectangular parallelepiped housing surrounded by the four reflecting mirrors substantially coincides with the shape of the photosensitive chip.
Fig. 7A and 7B are schematic diagrams illustrating a manner of determining the size of a mirror in an optical device according to an embodiment of the present application. Wherein fig. 7A shows the arrangement of the optics relative to the lens and the manner in which the specific parameters of the mirror dimensions are defined. As shown in fig. 7A, it is assumed that first and second mirrors, and third and fourth mirrors are disposed on opposite inner surfaces of four inner surfaces of the housing, respectively. For example, the respective mirrors are arranged starting from an edge of the second end of the housing and extend in a direction towards the first end.
The side lengths of the first dimension and the second dimension of the mirrors in the direction perpendicular to the lens plane are defined as h1 and h2, respectively (collectively denoted by h in the figure), and the side lengths of the first and second dimensions of the mirrors in a section parallel to the lens plane are defined as L and W, respectively. That is, the first size and the second size are rectangles having one side h and the other side L or W. It is understood that L is the side length in the direction parallel to the length of the photosensitive chip, and W is the side length in the direction parallel to the width of the photosensitive chip.
Also, fig. 7B shows a schematic diagram of imaging of an optical device with respect to an imaging apparatus according to an embodiment of the present application. As described above, for the mirror extending in the length direction parallel to the photosensitive chip, its side length L determines whether the mirror extending in the width direction parallel to the photosensitive chip can cover a quarter of the field of view of the lens in the length direction parallel to the photosensitive chip (or vice versa). Therefore, as shown in fig. 7B, consider the angle of view Y1 of the lens of the imaging device in the length direction of the photosensitive chip (for convenience of explanation, shown here as a horizontal viewing angle): it is necessary that the length of the left mirrored portion VL equals the length of the right mirrored portion VR, each being equal to one half the length of the transmissive region V. Also, the length of the transmissive region V is the side length L shown in fig. 7A, and the width of the transmissive region V is the side length W shown in fig. 7A.
Since the shape of the transmissive region shown in fig. 7B follows the shape of the mirrors extending in the horizontal direction, the upper side and the lower side of the transmissive region are parallel and equal in length. Thus, based on fig. 7B and using the principle of parallel bisection, it can be appreciated that in the optical device of the embodiment of the present application, the side length h1 of the first dimension in the direction perpendicular to the lens plane is expressed by the following formula:
h1 = d
wherein d is a distance from a projection point of one end of the first and second reflecting mirrors adjacent to the lens on a center line of the lens to an optical center of the lens. Here, the optical center of the lens refers to the optical center of the lens group 310 of the lens as shown in fig. 6. Specifically, if the lens group includes only one lens, the center position of the lens is the center position of the lens. And if the lens group includes a plurality of lenses, it means the equivalent center of the lens group constituted by the plurality of lenses.
And, the side length L of the first dimension in the direction parallel to the length of the photosensitive chip is expressed by the following formula:
L = 2d · tan(Y1/2)
wherein Y1 is the angle of view of the imaging device in the length direction of the photosensitive chip.
And, in the above optical device, the side length h2 of the second dimension in the direction perpendicular to the lens plane is expressed by the following formula:
h2 = h1 = d.
And, the side length W of the second dimension in the direction parallel to the width of the photosensitive chip is expressed by the following formula:
W = L / r
where r is the aspect ratio (length to width) of the photosensitive chip of the imaging apparatus. For example, the aspect ratio of the photosensitive chip is typically 4:3 or 16:9.
In addition, the angle of view may also be the angle of view of the image forming apparatus in the width direction of the photosensitive chip, unlike the case described above. For example, when in a horizontal posture of the imaging apparatus, it is a viewing angle in the vertical direction. That is, the first direction corresponds to a width direction of a photosensitive chip of the image forming apparatus, and the second direction corresponds to a length direction of the photosensitive chip of the image forming apparatus.
Therefore, in the optical device according to the embodiment of the present application, the side length h1 of the first dimension in the direction perpendicular to the lens plane is expressed by the following formula:
h1 = d
wherein d is a distance from a projection point of one end of the first and second reflecting mirrors adjacent to the lens on a center line of the lens to an optical center of the lens.
And, the side length W of the first dimension in the direction parallel to the width of the photosensitive chip is expressed by the following formula:
W = 2d · tan(Y2/2)
wherein Y2 is the angle of view of the imaging device in the width direction of the photosensitive chip.
And, in the above optical device, the side length h2 of the second dimension in the direction perpendicular to the lens plane is expressed by the following formula:
h2 = h1 = d.
And, the side length L of the second dimension in the direction parallel to the length of the photosensitive chip is expressed by the following formula:
L = W × r
where r is the aspect ratio of the photosensitive chip of the image forming apparatus.
In addition, it will be appreciated by those skilled in the art that the above-described manner calculates one of the length and the width of the cross section of the housing and then derives the other from it, for example via the aspect ratio of the photosensitive chip. However, it is also possible to calculate the length and the width of the cross section of the housing separately, based on the angles of view of the imaging apparatus in the length and width directions of the photosensitive chip, respectively.
Thus, the above-described manner of determining the dimensions of the mirror is merely an example, and one skilled in the art will appreciate that it is only necessary to ensure that the dimensions of the mirror are one-fourth of the field of view of the lens in the corresponding direction, and that other manners of determining the dimensions of the mirror may be employed, and the embodiments of the present application are not intended to be limiting in any way.
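As an illustration of the size calculation above, the following sketch computes the mirror dimensions for the case where the angle of view is given in the length direction of the photosensitive chip. The function name and the numeric example are assumptions for illustration; the formulas follow the relations h1 = d, L = 2d·tan(Y1/2) and W = L/r given above.

```python
import math

def mirror_dimensions(d, view_angle_deg, aspect_ratio):
    """Compute the mirror sizes for a rectangular housing.

    d              -- distance from the projection of the mirror end nearest the
                      lens (on the lens centre line) to the optical centre
    view_angle_deg -- angle of view Y1 of the lens in the length direction of the chip
    aspect_ratio   -- r = chip length / chip width (e.g. 4/3 or 16/9)
    """
    h = d                                                        # h1 = h2 = d
    length = 2 * d * math.tan(math.radians(view_angle_deg) / 2)  # L = 2*d*tan(Y1/2)
    width = length / aspect_ratio                                # W = L / r
    return {"h": h, "L": length, "W": width}

# Illustrative numbers only: d = 20 mm, 60 degree angle of view, 4:3 chip.
print(mirror_dimensions(20.0, 60.0, 4 / 3))
```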
Here, it is understood by those skilled in the art that in the optical device of the embodiment of the present application, the size of the mirror needs to be defined in order to form the transmission image and a reflection image identical to the transmission image, but defining the size of the mirror does not require the mirror to be the same size as the inner surface of the housing. As described above, the housing may extend further toward the lens from its inner edge (first end), for example to form a fixing member fixed to the lens. However, based on the optical path shown in fig. 7B above, the outer edge of the housing remote from the lens (the second end) is preferably flush with the outer edge of the mirror so as not to affect the optical path. Of course, if the housing itself is made of a transparent material, such as glass, the above limitation does not apply.
Furthermore, as mentioned above, the mirror has at least the dimensions determined in the manner described above; however, the size of the mirror can also be enlarged further. For example, taking the optical path shown in fig. 7B, when the mirror is extended, the lengths of the left mirrored portion VL and the right mirrored portion VR also increase accordingly and become greater than half the length of the transmissive region V. In this way, the resulting reflected image is redundant with respect to the transmitted image, and a reflected image identical to the transmitted image can be obtained by cropping.
Also, in the optical device of the embodiment of the present application, as described above, the fixing member is provided at the first end of the housing toward the lens and detachably coupled with the lens.
In addition, in the optical device of the embodiment of the present application, in order to ensure the imaging effect, it is desirable that all light rays passing through the housing are incident on the photosensitive chip via the lens and are thus used for imaging. Therefore, preferably, when the fixing member is coupled with the lens, light entering the housing through the second end of the housing (opposite to the first end) does not leak out at the first end past the lens; in other words, the coupling is light-tight.
Exemplary image processing method
Fig. 8 illustrates a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 8, the image processing method according to the embodiment of the present application includes: s410 of imaging a subject to obtain an initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; s420, dividing the initial image into at least a first partial image and a second partial image; and S430, synthesizing the first partial image and the second partial image to obtain a synthesized image, wherein the synthesized image has a high dynamic range relative to the same image content.
Also, as described above, in the image processing method according to the embodiment of the present application, imaging the subject to obtain the initial image S410 includes: imaging a subject with an optical device to obtain an initial image, the optical device comprising: a housing including at least a first inner surface and a second inner surface, the first inner surface and the second inner surface extending in a first direction and being parallel to each other; a first mirror disposed on the first inner surface and having a first reflectivity; and a second mirror disposed on the second inner surface and having the first reflectivity. Here, the optical device according to the embodiment of the present application has been described in detail above with reference to fig. 1 to 7B, and other aspects of an image processing method employing the optical device will be further described below. It is to be noted that, in the embodiment of the present application, imaging is not limited to the use of the above-described optical device shown in fig. 1 to 7B, but other optical devices (for example, fly's eye lenses) may also be employed as long as it is possible to obtain a single image including a plurality of regions having different brightnesses by a single imaging.
In an image processing method according to an embodiment of the present application, imaging a subject with an optical device to obtain an initial image includes: determining a predetermined exposure parameter according to the global photometry and/or the first reflectivity; and imaging the subject with the optical device based on the predetermined exposure parameter to obtain an initial image.
Here, when imaging a subject using the optical device according to the embodiment of the present application described above, in order to obtain a high dynamic range image, the predetermined exposure parameter needs to be determined so that the transmission image formed directly through the housing is overexposed and the reflection image formed via the mirrors is underexposed. Thus, when the transmission image and the reflection image are synthesized, a dynamic range that cannot be achieved by one exposure in the prior art can be created by combining the dark-area detail retained in the overexposed transmission image with the bright-area detail retained in the underexposed reflection image, thereby obtaining a high dynamic range image that retains both highlight and shadow detail.
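A minimal sketch of one way such an exposure parameter could be chosen is given below. The mapping from reflectivity to exposure stops is an assumption for illustration, not the method prescribed by the application: the globally metered exposure time is lengthened by half the brightness span between the brightest path (transmission) and the darkest path (lowest reflectivity), so the transmission is over-exposed by as many stops as the darkest reflection is under-exposed.

```python
import math

def choose_exposure_time(metered_time_s, reflectivities=(0.66, 0.33)):
    """Lengthen the globally metered exposure so the transmitted region
    (relative gain 1.0) is over-exposed while the darkest mirrored region
    stays under-exposed (sketch under the assumptions stated above)."""
    span_stops = math.log2(1.0 / min(reflectivities))   # e.g. log2(3) ~ 1.6 stops
    return metered_time_s * 2 ** (span_stops / 2)        # ~1.7x the metered time

# Example: a 1/100 s metered exposure becomes roughly 1/60 s.
print(choose_exposure_time(1 / 100))
```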
In one example, in the image processing method according to the embodiment of the present application, dividing the initial image into at least a first partial image and a second partial image S420 includes: dividing the initial image into at least a first luminance image and a second luminance image, the first luminance image being a first partial image; and mirror-image processing and stitching are carried out on the second brightness image to obtain the second partial image.
As described above, with the optical device according to the embodiment of the present application, the reflection image can be formed on both sides, for example, the upper side and the lower side, or the left side and the right side, of the transmission image transmitted through the housing. And, the reflected images formed on both sides of the transmitted image are in mirror image relationship. Therefore, in dividing the initial image, the first partial image and the second partial image cannot be obtained by directly cropping the initial image, but the initial image needs to be divided into a first luminance image and a second luminance image according to the luminance of different areas. Wherein the first luminance image corresponds to the transmission image and thus can be directly used as the first partial image. And the second luminance image corresponds to the reflected image, it is necessary to mirror and stitch the second luminance image to obtain a second partial image.
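A minimal sketch of this division and mirror-stitch step is shown below, assuming the band layout of the optical device above: reflections in the top and bottom quarters, transmission in the central half, and a frame height divisible by four. The fixed indexing is an assumption for illustration; in practice the band boundaries would come from calibration or luminance identification.

```python
import numpy as np

def divide_and_stitch(initial):
    """Split one captured frame into the first partial image (transmission)
    and the mirror-corrected, stitched second partial image (reflections)."""
    h = initial.shape[0]                    # assumed divisible by 4
    top = initial[: h // 4]                 # upper reflection (mirror image)
    middle = initial[h // 4 : 3 * h // 4]   # transmission, first luminance image
    bottom = initial[3 * h // 4 :]          # lower reflection (mirror image)
    first_partial = middle
    # Flip each reflected strip back about its shared edge with the
    # transmission, then stitch them into one image of the same size.
    second_partial = np.vstack([np.flipud(top), np.flipud(bottom)])
    return first_partial, second_partial
```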
According to one embodiment, in an image processing method according to an embodiment of the present application, dividing an initial image into at least a first luminance image and a second luminance image includes: the initial image is divided into at least a first luminance image and a second luminance image based on the optical device and a dimensional relationship of its first or second inner surface.
In the image processing method according to the embodiment of the present application, in the case where an initial image is obtained by photographing an object with the optical device as described above, since the transmission image and the reflection image in the initial image correspond to the structure of the optical device, the initial image can be divided according to the structural size of the optical device. For example, as previously shown in fig. 3, the initial image is formed with a transmission image in the middle and reflection images on the upper, lower and/or left and right sides, the proportion of the reflection image and the transmission image being determined according to the dimensional relationship of the first inner surface with respect to the whole optical device. Since the dimensions of the optical device and its first or second inner surface can be obtained simply by measurement, the initial image can be divided accordingly into a transmission image with a first brightness and a reflection image with a second brightness. This applies to both cases where the reflected image is located on the upper and lower sides of the transmitted image, and where the reflected image is located on the left and right sides of the transmitted image.
In a more complex scenario, since the parameters of the lens may differ, the distortion introduced by the lens and the mirrors may be calibrated in advance when the lens and the optical device are assembled, so that in the above steps the initial image can be divided according to the pre-calibrated positional correspondence.
According to another embodiment, in an image processing method according to an embodiment of the present application, dividing an initial image into at least a first luminance image and a second luminance image includes: the initial image is luminance-identified to divide the initial image into at least a first luminance image and a second luminance image based on the first luminance and the second luminance.
In addition, in the image processing method according to the embodiment of the present application, the image division may also be performed according to brightness. As described above, since the transmission image has the first luminance and the reflection image has the second luminance, the luminance within the transmission image and the luminance within the reflection image are each substantially uniform. By performing luminance recognition on the initial image, the area having the first luminance can be extracted as the first luminance image, that is, the transmission image, and the area having the second luminance can be extracted as the second luminance image, that is, the reflection image.
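For illustration only, a crude luminance-based division can be sketched as follows; it classifies each row of a grayscale frame by comparing its mean brightness with the global mean, assuming (as in the text) that the reflection bands lie above and below the transmission image and are uniformly darker. The function name and the row-wise heuristic are assumptions, not the patented procedure.

```python
import numpy as np

def split_by_luminance(initial):
    """Classify rows of a grayscale image as belonging to the bright
    transmission region (first luminance) or the darker reflection regions
    (second luminance) based on their mean brightness."""
    row_means = initial.mean(axis=1)
    is_bright = row_means > initial.mean()    # True for transmission rows
    first_luminance = initial[is_bright]      # transmission image
    second_luminance = initial[~is_bright]    # reflection rows (upper + lower bands)
    return first_luminance, second_luminance, is_bright

# The boolean mask `is_bright` can then be used to locate the contiguous
# upper and lower reflection bands before mirroring and stitching them.
```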
Therefore, it will be understood by those skilled in the art that in the image processing method according to the embodiment of the present application, the obtained initial image may be divided in various ways, and the embodiment of the present application is not intended to limit this in any way.
In one example, in the image processing method according to the embodiment of the present application, synthesizing the first partial image and the second partial image to obtain a synthesized image S430 includes: determining weighting coefficients used to generate the composite image; and performing weighted synthesis on each pixel in the first partial image and the second partial image by using the weighting coefficient.
For example, the above-described weighting coefficients may be determined using the shading of a reference shading image, that is, an image used as a reference for deciding how a low-luminance image and a high-luminance image should be synthesized to create an HDR image. Alternatively, the core object of the imaging may be determined by scene analysis of the initial image, and the weighting coefficients may be determined based on the imaging effect of that core object in the composite image. Alternatively, in the case where n partial images exist (n being a positive integer), each of the n weighting coefficients may simply be set to 1/n.
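A minimal sketch of the weighted per-pixel synthesis, assuming the partial images are already aligned and of identical shape, could look like the following; `weighted_synthesis` and its default 1/n weights are illustrative, and per-pixel weight maps can be passed in place of scalars.

```python
import numpy as np

def weighted_synthesis(partial_images, weights=None):
    """Weighted per-pixel synthesis of n aligned partial images.

    If no weights are given, every image receives the simple 1/n weight
    mentioned above. Each weight may be a scalar or a per-pixel weight map
    of the same shape as the images."""
    n = len(partial_images)
    if weights is None:
        weights = [1.0 / n] * n
    acc = np.zeros(partial_images[0].shape, dtype=np.float64)
    total = np.zeros_like(acc)
    for img, w in zip(partial_images, weights):
        w_map = np.full(acc.shape, w, dtype=np.float64) if np.isscalar(w) else np.asarray(w, dtype=np.float64)
        acc += w_map * img.astype(np.float64)
        total += w_map
    return acc / np.maximum(total, 1e-8)
```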
According to a further embodiment, in the above image processing method, the initial image further includes a third partial image having a third luminance, and the third luminance is different from the first luminance or the second luminance; dividing the initial image into at least a first partial image and a second partial image S420 includes: dividing the initial image into a first partial image, a second partial image and a third partial image; synthesizing the first partial image and the second partial image to obtain a synthesized image S430 includes: and synthesizing the first partial image, the second partial image and the third partial image to obtain a synthesized image.
According to one embodiment, the housing of the optical device further comprises a third inner surface and a fourth inner surface, the third inner surface and the fourth inner surface extending in a second direction and being parallel to each other, the second direction and the first direction being perpendicular to each other, the first inner surface, the second inner surface, the third inner surface and the fourth inner surface constituting a cuboid shape; the optical device further includes: a third mirror disposed on the third inner surface and having a second reflectivity, the second reflectivity being different from the first reflectivity; and a fourth mirror disposed on the fourth inner surface and having the second reflectivity.
In the previous description of the optical device, it has already been mentioned that the optical device according to the embodiment of the present application may have mirrors on two opposite inner surfaces, or may be provided with a mirror on each of the four inner surfaces. In this way, a single image including areas with three different brightness levels can be obtained. Moreover, by dividing this image, three images each having a different brightness can be obtained and then synthesized into an image with a high dynamic range.
Here, it will be understood by those skilled in the art that the processing manner of the single image including the three areas of different brightness is substantially the same as that of the previous single image including the two areas of different brightness, and will not be described in detail to avoid redundancy.
Image processing effect
Fig. 9 illustrates a schematic diagram of an initial image in the image processing method according to the embodiment of the present application. In the case where mirrors are provided on all four inner surfaces of the housing of the optical device, as shown in fig. 9, the image is symmetric about the mirror boundaries due to the reflection of the four mirrors, so that three complete views of the scene are actually obtained: the transmission image in the middle, the pair of partial images on the upper and lower sides, and the pair of partial images on the left and right sides. Further, since the reflectance of the left and right mirrors differs from that of the upper and lower mirrors, for example because the transmittance of the coating film differs, the upper/lower partial images and the left/right partial images have different brightness levels.
As described above, in the actual imaging process, the exposure parameters used are determined from global metering or from the brightness of the mirror-reflected portions. Therefore, the transmission image is brighter and overexposed, while the upper, lower, left, and right partial images are darker and underexposed.
Next, image segmentation is performed on the obtained initial image, as shown in fig. 10A and 10B. Fig. 10A and 10B illustrate diagrams of the segmentation of the initial image in the image processing method according to the embodiment of the present application. Here, it will be understood by those skilled in the art that, in the image shown in fig. 9, the portions at the four corners of the image are actually redundant portions caused by the mirror imaging; therefore, in the actual image division process, only the images directly above, below, left of, and right of the transmission image shown in fig. 10A need to be retained. That is, at the time of image cropping, it is sufficient to crop the four rectangular images in the up, down, left, and right directions that match the rectangular shape of the transmission image, while removing the redundant portions located at the four corners. After cropping, the images shown in fig. 10B are obtained: from left to right, the left mirror-reflected image, the right mirror-reflected image, the upper mirror-reflected image, the lower mirror-reflected image, and the transmission image.
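A sketch of this cropping step, assuming the row/column bounds of the transmission image are known from the pre-calibrated geometry (the `center_box` parameter is an illustrative assumption), might read:

```python
import numpy as np

def crop_five_regions(image, center_box):
    """Crop the transmission image and the four mirror-reflected strips,
    discarding the redundant corner regions of the initial image.

    center_box = (top, bottom, left, right): row/column bounds of the
    transmission image, assumed to come from pre-calibration."""
    t, b, l, r = center_box
    transmission = image[t:b, l:r]
    upper = image[:t, l:r]    # strip directly above the transmission image
    lower = image[b:, l:r]    # strip directly below
    left  = image[t:b, :l]    # strip directly to the left
    right = image[t:b, r:]    # strip directly to the right
    return transmission, upper, lower, left, right
```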
Fig. 11 is a schematic diagram of the mirroring and stitching of the segmented images in the image processing method according to the embodiment of the present application. As shown in fig. 11, the left-side and right-side images shown in fig. 10B are each mirrored and then stitched together to form the lowest-brightness image, the upper-side and lower-side images shown in fig. 10B are each mirrored and then stitched together to form the middle-brightness image, and the transmission image directly serves as the highest-brightness image, so that three images of the same scene, but with different exposure levels, are obtained.
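The mirror-and-stitch operation can be sketched as below; the exact flip direction and stacking order depend on the mirror geometry and the pre-calibrated correspondence, so the function bodies are only illustrative.

```python
import numpy as np

def stitch_upper_lower(upper, lower):
    """Un-mirror the upper and lower reflected strips and stack them into one
    full-frame image (the middle-brightness image in fig. 11)."""
    return np.vstack([np.flipud(upper), np.flipud(lower)])

def stitch_left_right(left, right):
    """Un-mirror the left and right reflected strips and place them side by
    side (the lowest-brightness image in fig. 11)."""
    return np.hstack([np.fliplr(left), np.fliplr(right)])
```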
Fig. 12 is a schematic diagram of the image composition in the image processing method according to the embodiment of the present application. As shown in fig. 12, after the three images of the same scene with different exposure levels are obtained (shown below, in the same order as in fig. 11), a weighted synthesis of the three images can be performed according to the positional correspondence calibrated in advance, and an HDR image (shown above) with a dynamic range that cannot be achieved by a single exposure in the related art is created by combining the dark-portion details of the overexposed image with the bright-portion details of the underexposed images.
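Purely as an illustration of such a merge, the snippet below fuses three aligned grayscale images with a per-pixel "well-exposedness" weight (high weight near mid-gray, low weight in clipped shadows and highlights). This is a common exposure-fusion heuristic used here as an assumed stand-in, not necessarily the calibrated weighting of the embodiment.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse aligned low-, middle-, and high-brightness 8-bit grayscale images
    of the same scene into a single image with extended dynamic range."""
    imgs = [im.astype(np.float64) / 255.0 for im in images]
    # Weight each pixel by how close it is to mid-gray (well-exposedness).
    weights = [np.exp(-((im - 0.5) ** 2) / (2.0 * sigma ** 2)) + 1e-6 for im in imgs]
    total = sum(weights)
    fused = sum(w * im for w, im in zip(weights, imgs)) / total
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```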
In this way, since the three images are physically exposed at the same time, the camera parameters do not need to be adjusted between exposures, so video can be recorded at the normal frame rate and the method can be used for HDR synthesis in video surveillance scenes. In addition, because the three images come from the same light source and are synchronized by hardware, the blurring or ghosting that occurs in techniques that combine multiple photographs taken at different exposures is completely avoided, even when the photographed object (or the camera/photographing device) is moving.
Therefore, the image processing method of the embodiment of the present application makes it possible to acquire images of the same scene with different brightness simultaneously, in a single exposure on the same CMOS sensor, and to synthesize them into a high dynamic range image.
In addition, the image processing method of the embodiment of the present application requires no modification of the imaging equipment or the CMOS sensor and is compatible with existing hardware monitoring equipment.
Moreover, the image processing method of the embodiment of the present application enables the imaging equipment to perform real-time HDR shooting without adjusting the CMOS sensor parameters for every exposure, which makes it suitable for HDR video and for HDR monitoring applications.
In addition, the image processing method of the embodiment of the present application enables the imaging device to photograph with a higher dynamic range without producing blurring or ghosting, even when a moving object is photographed.
Further, an imaging apparatus employing the image processing method of the embodiment of the present application does not require dual or even multiple CMOS sensors, thereby realizing three-level real-time HDR at a lower cost.
Exemplary image processing apparatus
Next, an image processing apparatus according to an embodiment of the present application will be described with reference to fig. 13.
Fig. 13 illustrates a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 13, an image processing apparatus 500 according to an embodiment of the present application includes: an image capturing unit 510 for imaging a subject to obtain an initial image, the initial image including at least a first partial image having a first luminance and a second partial image having a second luminance, the first partial image and the second partial image having at least partially the same image content, and the first luminance being different from the second luminance; an image dividing unit 520 for dividing the initial image into at least a first partial image and a second partial image; and an image synthesizing unit 530 for synthesizing the first partial image and the second partial image to obtain a synthesized image having a high dynamic range with respect to the same image content.
In one example, the image capturing unit 510 is configured to determine a predetermined exposure parameter according to global light metering and/or the first reflectivity; and imaging a subject with the optical device based on the predetermined exposure parameter to obtain the initial image.
In one example, the image segmentation unit 520 is configured to divide the initial image into at least a first luminance image and a second luminance image, the first luminance image serving as the first partial image, and to perform mirror-image processing and stitching on the second luminance image to obtain the second partial image.
In one example, the image segmentation unit 520 is configured to divide the initial image into at least the first luminance image and the second luminance image based on the optical device and the dimensional relationship of the first mirror or the second mirror thereof.
In one example, the image segmentation unit 520 is configured to perform luminance recognition on the initial image to divide the initial image into at least the first luminance image and the second luminance image based on the first luminance and the second luminance.
In one example, the image synthesis unit 530 is configured to determine weighting coefficients used for generating the synthesized image; and performing weighted synthesis on each pixel in the first partial image and the second partial image by using the weighting coefficient.
In one example, the initial image obtained by the image capturing unit 510 further includes a third partial image having a third brightness, and the third brightness is different from the first brightness or the second brightness; the image segmentation unit 520 is configured to divide the initial image into a first partial image, a second partial image, and a third partial image; and the image synthesizing unit 530 is configured to synthesize the first partial image, the second partial image, and the third partial image to obtain a synthesized image.
Here, it will be understood by those skilled in the art that the remaining details of the image processing apparatus according to the embodiment of the present application correspond to the details described above for the image processing method according to the embodiment of the present application, and are not repeated here in order to avoid redundancy.
Exemplary imaging apparatus
Next, an imaging system according to an embodiment of the present application is described with reference to fig. 14.
Fig. 14 is a schematic diagram of an imaging system according to an embodiment of the present application.
As shown in fig. 14, an imaging system 600 according to an embodiment of the present application includes the optical device 610 and the imaging apparatus 620 described above; the imaging apparatus 620 includes a lens 621 and images a subject via the lens 621 by means of the optical device 610.
The specific functions and connection relationships of the respective units and modules in the optical device 610 have been described in detail above with reference to fig. 1 to 7B (reference numerals 611 to 615 in fig. 14 correspond to 110 to 114 in fig. 1, respectively), and thus, repetitive descriptions thereof will be omitted herein.
In one example, the imaging system according to the embodiment of the present application further includes a driving part mechanically connected to the optical device, the driving part being configured to move the optical device, in response to receiving a first trigger signal, so that the imaging apparatus images a subject directly via the lens without the optical device, and to move the optical device, in response to receiving a second trigger signal, so that the imaging apparatus images the subject via the lens by means of the optical device.
In particular, the optical device according to the embodiment of the present application essentially trades off the total number of pixels in order to obtain a single image comprising multiple regions with different brightness. Therefore, the imaging system according to the embodiment of the present application further includes a driving part for switching the function of the optical device on or off.
Here, the driving section may cause the imaging apparatus to image the subject via the lens by means of the optical device or directly image the subject via the lens without by means of the optical device in response to the received trigger signal. Here, the trigger signal may be manually triggered by a user, or may be based on an automatic recognition function of a scene to determine whether the optical device is required to achieve a richer imaging effect. In this way, a balance between the total number of pixels and a rich imaging effect can be obtained.
Also, in the imaging system according to the embodiment of the present application, the specific configuration of the driving part and the manner in which it drives the optical device are not limited. For example, the driving part may translate the optical device as a whole out of the field of view of the lens, so that the imaging apparatus directly images the subject via the lens without the aid of the optical device. Alternatively, the driving part may drive the four surfaces of the housing of the optical device to spread outward at the second end, opening like a lotus, so that each surface lies outside the field of view of the lens.
In one example, the imaging device is a monitoring camera.
The image processing apparatus 500 according to the embodiment of the present application may be implemented in the imaging device 620, and in addition, the image processing apparatus 500 may also be a stand-alone device independent of the imaging device 620.
In one example, the image processing apparatus 500 according to embodiments of the present application may be integrated into the imaging device 620 as a software module and/or hardware module. For example, the image processing apparatus 500 may be a software module in the operating system of the imaging device, or may be an application developed for the imaging device; of course, the image processing apparatus 500 may also be one of a plurality of hardware modules of the imaging device.
Alternatively, in another example, the image processing apparatus 500 and the imaging device 620 may be separate devices, and the image processing apparatus 500 may be connected to the imaging device through a wired and/or wireless network and exchange interaction information in an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 15.
Fig. 15 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 15, the electronic device 700 includes one or more processors 710 and memory 720.
Processor 710 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in electronic device 700 to perform desired functions.
The memory 720 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 710 to implement the image processing methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as the initial image, the partial images, and the composite image may also be stored in the computer-readable storage medium.
In one example, the electronic device 700 may further include: an input device 730 and an output device 740, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is a stand-alone device, the input means 730 may be a communication network connector for receiving the acquired input signal from the imaging device.
In addition, the input device 730 may include, for example, a keyboard, a mouse, and the like.
The output device 740 may output various information to the outside, for example, a synthesized high dynamic range image to an imaging apparatus or other image display apparatus. The output device 740 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 700 that are relevant to the present application are shown in fig. 15; components such as buses and input/output interfaces are omitted. In addition, the electronic device 700 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the image processing method according to the various embodiments of the present application described in the "exemplary methods" section of the present specification.
The computer program product may be written with program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the image processing method according to the various embodiments of the present application described in the above-mentioned "exemplary methods" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to," and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that, in the apparatuses, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (15)

CN201710748575.5A | 2017-08-28 | 2017-08-28 | Image processing method, image processing device and electronic equipment | Active | CN107370963B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710748575.5A | CN107370963B (en) | 2017-08-28 | 2017-08-28 | Image processing method, image processing device and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710748575.5A | CN107370963B (en) | 2017-08-28 | 2017-08-28 | Image processing method, image processing device and electronic equipment

Publications (2)

Publication Number | Publication Date
CN107370963A CN107370963A (en)2017-11-21
CN107370963Btrue CN107370963B (en)2023-08-08

Family

ID=60311363

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710748575.5AActiveCN107370963B (en)2017-08-282017-08-28Image processing method, image processing device and electronic equipment

Country Status (1)

Country | Link
CN (1) | CN107370963B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114554050B (en)* | 2022-02-08 | 2024-02-27 | Vivo Mobile Communication Co., Ltd. | Image processing method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102420944A (en)* | 2011-04-25 | 2012-04-18 | Spreadtrum Communications (Shanghai) Co., Ltd. | High dynamic range image synthesis method and device
CN103581565A (en)* | 2012-07-20 | 2014-02-12 | Canon Inc. | Image capture apparatus, method of controlling image capture apparatus, and electronic device
CN105163039A (en)* | 2015-09-18 | 2015-12-16 | Lenovo (Beijing) Co., Ltd. | Control method and control device
CN105578068A (en)* | 2015-12-21 | 2016-05-11 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | A high dynamic range image generation method, device and mobile terminal
CN107045715A (en)* | 2017-02-22 | 2017-08-15 | Southwest University of Science and Technology | A kind of method that single width low dynamic range echograms generates high dynamic range images


Also Published As

Publication number | Publication date
CN107370963A (en) | 2017-11-21

Similar Documents

Publication | Publication Date | Title
JP7371081B2 (en) Night view photography methods, devices, electronic devices and storage media
CN107959778B (en) Imaging method and device based on dual cameras
Tocci et al. A versatile HDR video production system
JP6263623B2 (en) Image generation method and dual lens apparatus
CN108718373B (en)Image device
ES2282429T3 (en) PROCEDURE AND SYSTEM TO PRODUCE FORMATED INFORMATION RELATED TO GEOMETRIC DISTORSIONS.
US8988567B2 (en)Multiple image high dynamic range imaging from a single sensor array
JP6455601B2 (en) Control system, imaging apparatus, and program
CN111526299B (en) A high dynamic range image synthesis method and electronic device
CN110213494B (en)Photographing method and device, electronic equipment and computer readable storage medium
CN107948500A (en)Image processing method and device
CN103780840A (en)High-quality imaging double camera shooting and imaging device and method thereof
JPH09181913A (en)Camera system
JPH09181966A (en)Image processing method and device
JP2013504940A (en) Full beam image splitter system
TW200917833A (en)Image sensor having checkerboard pattern
CN110266966A (en)Image generation method and device, electronic equipment and computer readable storage medium
CN107846556A (en) Imaging method, device, mobile terminal and storage medium
WO2011095026A1 (en)Method and system for photography
CN112019734A (en) Image acquisition method, apparatus, electronic device, and computer-readable storage medium
Montabone. Beginning digital image processing: using free tools for photographers
CN107370963B (en)Image processing method, image processing device and electronic equipment
JP5914881B2 (en) Three-dimensional imaging apparatus, image processing apparatus, image processing method, and program
JP2021127998A (en) Distance information acquisition device and distance information acquisition method
CN112104796B (en) Image processing methods and devices, electronic equipment, computer-readable storage media

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
