US10506155B2 - Image capturing apparatus and image stitching method thereof - Google Patents

Image capturing apparatus and image stitching method thereof

Info

Publication number
US10506155B2
Authority
US
United States
Prior art keywords
image
overlapping region
auxiliary
photographing information
image sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires 2038-03-16
Application number
US15/890,363
Other versions
US20180376059A1 (en)
Inventor
Sergio CANTERO CLARES
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc
Assigned to ACER INCORPORATED. Assignment of assignors interest (see document for details). Assignors: CANTERO CLARES, SERGIO
Publication of US20180376059A1
Application granted
Publication of US10506155B2
Status: Active
Adjusted expiration


Abstract

An image capturing apparatus and an image stitching method thereof are provided, where the method includes the following steps. A scene is detected by using a first image sensor and a second image sensor of the image capturing apparatus to generate first photographing information and second photographing information. The scene is captured by the first image sensor according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image. The scene is captured by the second image sensor according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image. The first image and the first auxiliary image are fused, and the second image and the second auxiliary image are fused so as to obtain fused results corresponding to overlapping regions, and a stitched image is accordingly generated.

Description

CROSS-REFERENCE TO RELATED APPLICATION
This application claims the priority benefit of Taiwan application serial no. 106120861, filed on Jun. 22, 2017. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
TECHNICAL FIELD
The disclosure relates to an image capturing apparatus and an image stitching technique thereof.
BACKGROUND
With the development of technology, various smart image capturing apparatuses, such as tablet computers, personal digital assistants, and smart phones, have become indispensable tools for people nowadays. Camera lenses equipped in high-end smart image capturing apparatuses provide the same or better specifications and effects than those of traditional consumer cameras, and some even provide pixel quality nearly equivalent to that of digital single-lens reflex cameras.
Using a panoramic camera as an example, images simultaneously captured by multiple camera lenses are concatenated by leveraging an image stitching technique to produce a larger scene image that gives a viewer an immersive experience. However, since different camera lenses view the same scene from different angles of view, the detected scene information differs slightly, which makes image stitching difficult. For example, when the sunlight comes from a direction closer to the left camera lens, the exposure levels of the images respectively captured by the left and right camera lenses would be different, and an obvious stitching line or unnatural color transition would appear in the resulting stitched image.
SUMMARY OF THE DISCLOSURE
Accordingly, an image capturing apparatus and an image stitching method thereof are proposed, where the quality of a stitched image is greatly enhanced.
According to one of the exemplary embodiments, the method is applicable to an image capturing apparatus having a first image sensor and a second image sensor and includes the following steps. A scene is detected by using the first image sensor and the second image sensor to generate first photographing information corresponding to the first image sensor and second photographing information corresponding to the second image sensor. The scene is captured by using the first image sensor according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image, where each of the first image and the first auxiliary image includes a first overlapping region. The scene is captured by the second image sensor according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image, where each of the second image and the second auxiliary image includes a second overlapping region, and where the first overlapping region corresponds to the second overlapping region. The first image and the first auxiliary image are fused, and the second image and the second auxiliary image are fused so as to accordingly generate a stitched image.
According to one of the exemplary embodiments, the image capturing apparatus includes a first image sensor, a second image sensor, and a processor, where the first image sensor and the second image sensor are coupled to each other, and the processor is coupled to the first image sensor and the second image sensor. The first image sensor and the second image sensor are configured to detect a scene and capture images of the scene. The processor is configured to detect a scene by using the first image sensor and the second image sensor to generate first photographing information corresponding to the first image sensor and second photographing information corresponding to the second image sensor, capture the scene by using the first image sensor according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image, capture the scene by using the second image sensor according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image, fuse the first image and the first auxiliary image as well as fuse the second image and the second auxiliary image so as to accordingly generate a stitched image, where each of the first image and the first auxiliary image includes a first overlapping region, each of the second image and the second auxiliary image includes a second overlapping region, and the second overlapping region corresponds to the first overlapping region.
According to one of the exemplary embodiments, the method is applicable to an image capturing apparatus having only one image sensor and includes the following steps. A scene is detected by using the image sensor from a first angle of view to generate first photographing information corresponding to the first angle of view, and the scene is captured by using the image sensor from the first angle of view according to the first photographing information to generate a first image. The scene is detected by using the image sensor from a second angle of view to generate second photographing information corresponding to the second angle of view, and the scene is captured by using the image sensor from the second angle of view according to the second photographing information and the first photographing information to respectively generate a second image and an auxiliary image, where the first image includes a first overlapping region, each of the second image and the auxiliary image includes a second overlapping region, and the first overlapping region corresponds to the second overlapping region. The second image and the auxiliary image are fused to generate a fused result, and a stitched image is generated according to the first image, the fused result, and the second image.
According to one of the exemplary embodiments, the image capturing apparatus includes only one image sensor and a processor, where the processor is coupled to the image sensor. The image sensor is configured to detect a scene and capture images of the scene. The processor is configured to detect a scene by using the image sensor from a first angle of view to generate first photographing information corresponding to the first angle of view, capture the scene by using the image sensor from the first angle of view according to the first photographing information to generate a first image, detect the scene by using the image sensor from a second angle of view to generate second photographing information corresponding to the second angle of view, capture the scene by using the image sensor from the second angle of view according to the second photographing information and the first photographing information to respectively generate a second image and an auxiliary image, fuse the second image and the auxiliary image to generate a fused result, and generate a stitched image according to the first image, the fused result, and the second image, where the first image includes a first overlapping region, each of the second image and the auxiliary image includes a second overlapping region, and wherein the first overlapping region corresponds to the second overlapping region.
In order to make the aforementioned features and advantages of the present disclosure comprehensible, preferred embodiments accompanied with figures are described in detail below. It is to be understood that both the foregoing general description and the following detailed description are exemplary, and are intended to provide further explanation of the disclosure as claimed.
It should be understood, however, that this summary may not contain all of the aspects and embodiments of the present disclosure and is therefore not meant to be limiting or restrictive in any manner. Also, the present disclosure includes improvements and modifications which are obvious to one skilled in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 illustrates a schematic diagram of an image capturing apparatus in accordance with one of the exemplary embodiments of the disclosure.
FIG. 2 illustrates a flowchart of an image stitching method of an image capturing device in accordance with one of the exemplary embodiments of the disclosure.
FIG. 3 illustrates a second image in accordance with an exemplary embodiment of the disclosure.
FIG. 4 illustrates a functional flowchart of an image stitching method of an image capturing device in accordance with one of the exemplary embodiments of the disclosure.
FIG. 5 illustrates a schematic diagram of an image capturing apparatus in accordance with another exemplary embodiment of the disclosure.
FIG. 6 illustrates a schematic diagram of overlapping regions in accordance with an exemplary embodiment of the disclosure.
FIG. 7A illustrates a schematic diagram of an image capturing apparatus in accordance with an exemplary embodiment of the disclosure.
FIG. 7B illustrates a flowchart of an image stitching method of an image capturing apparatus in accordance with an exemplary embodiment of the disclosure.
To make the above features and advantages of the application more comprehensible, several embodiments accompanied with drawings are described in detail as follows.
DESCRIPTION OF THE EMBODIMENTS
Some embodiments of the disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the application are shown. Indeed, various embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
FIG. 1 illustrates a schematic diagram of an image capturing apparatus in accordance with one of the exemplary embodiments of the disclosure. All components of the image capturing apparatus and their configurations are first introduced in FIG. 1. The functionalities of the components are disclosed in more detail in conjunction with FIG. 2.
Referring to FIG. 1, an image capturing apparatus 100 would include a first image sensor 10A, a second image sensor 10B, and a processor 20. In the present exemplary embodiment, the image capturing apparatus 100 may be, for example, a digital camera, a single-lens reflex camera, a digital camcorder, or another device with an image capturing feature such as a smart phone, a tablet computer, a personal digital assistant, a head-mounted display, and so forth. The disclosure is not restricted in this regard.
Each of the first image sensor 10A and the second image sensor 10B would include a camera lens module with a lens, an actuator, and a sensing element. The actuators may be stepping motors, voice coil motors (VCMs), piezoelectric actuators, or other actuators able to mechanically move the lenses. The sensing elements are configured to sense the light intensity entering the lenses to thereby generate images. The sensing elements may be, for example, charge-coupled device (CCD) elements or complementary metal-oxide-semiconductor (CMOS) elements. The disclosure is not limited in this regard. It should be noted that the first image sensor 10A and the second image sensor 10B would be coupled to each other and configured to transmit detected photographing information to each other. More details would be provided later on.
The processor 20 may be, for example, a central processing unit (CPU) or another programmable general-purpose or special-purpose device such as a microprocessor, a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD), another similar device or circuit, or a combination of the above-mentioned devices. The processor 20 would be coupled to the first image sensor 10A and the second image sensor 10B and configured to control the overall operation of the image capturing apparatus 100.
It would be apparent to those skilled in the art that the image capturing apparatus 100 would further include a data storage device. The data storage device would be configured to store images and data and may be one or a combination of a stationary or mobile random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive, or other similar devices or circuits.
Detailed steps of how the image capturing apparatus 100 performs the proposed image stitching method would be illustrated along with each component of the image capturing apparatus 100 hereafter.
FIG. 2 illustrates a flowchart of an image stitching method of an image capturing device in accordance with one of the exemplary embodiments of the disclosure. The steps of FIG. 2 could be implemented by the image capturing apparatus 100 as illustrated in FIG. 1.
Referring to FIG. 2 along with FIG. 1, before the image capturing apparatus 100 captures images of a scene, the processor 20 would detect the scene by using the first image sensor 10A to generate first photographing information corresponding to the first image sensor 10A (Step S202A) and detect the scene by using the second image sensor 10B to generate second photographing information corresponding to the second image sensor 10B (Step S202B). The first photographing information may be related information analyzed from the scene detected by the first image sensor 10A by leveraging the 3A algorithm. The second photographing information may be related information analyzed from the scene detected by the second image sensor 10B by leveraging the 3A algorithm as well. In the present exemplary embodiment, the first photographing information and the second photographing information may be, for example, exposure level and color temperature.
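As a rough illustration only, the exchanged photographing information could be modeled as a small record. The structure and field names below are assumptions based on the exposure-level and color-temperature examples in this paragraph, not a format defined by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class PhotographingInfo:
    """Scene parameters from a 3A-style metering pass (illustrative only)."""
    exposure_value: float       # exposure level, e.g. an EV estimate
    color_temperature_k: float  # white-balance estimate in Kelvin

# Each sensor meters the scene independently (Steps S202A/S202B); the two
# records are then exchanged so each sensor can capture once with its own
# parameters and once with its neighbor's.
info_a = PhotographingInfo(exposure_value=12.0, color_temperature_k=5600.0)
info_b = PhotographingInfo(exposure_value=10.5, color_temperature_k=4800.0)
```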
Next, the first image sensor 10A and the second image sensor 10B would transmit the first photographing information and the second photographing information to each other, and the processor 20 would capture images of the scene by using the first image sensor 10A according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image (Step S204A) and capture images of the scene by using the second image sensor 10B according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image (Step S204B). Since the first image sensor 10A and the second image sensor 10B capture the same scene from different angles of view, there would exist an overlapping region with the same captured contents in each of the first image and the second image. Similarly, since the first auxiliary image and the second auxiliary image are captured by the first image sensor 10A and the second image sensor 10B in the same way as the first image and the second image but according to the exchanged photographing information, there would also exist an overlapping region in each of the first auxiliary image and the second auxiliary image, the same as that in the first image and the second image. For convenience purposes, the overlapping region in each of the first image and the first auxiliary image captured by the first image sensor 10A would be referred to as “a first overlapping region”, and the overlapping region in each of the second image and the second auxiliary image captured by the second image sensor 10B would be referred to as “a second overlapping region”.
An overlapping region in each two images would highly affect the quality of image stitching. To ensure natural continuity across the stitch, the processor 20 would fuse the first image and the first auxiliary image (Step S206A) and fuse the second image and the second auxiliary image (Step S206B) to obtain a fused result corresponding to the first overlapping region and the second overlapping region, and thereby generate a stitched image according to the first image, the fused result, and the second image (Step S208). In other words, in the follow-up steps of image stitching, the portion corresponding to the original first overlapping region and the original second overlapping region would be replaced by the fused result. Since the fused result is generated based on all the photographing information detected by the two image sensors, an obvious stitching line or unnatural color transition would be prevented in the stitched image.
In detail, in terms of the first image sensor 10A, the processor 20 would perform fusing on the first overlapping region in the first image and the first overlapping region in the first auxiliary image, and in terms of the second image sensor 10B, the processor 20 would perform fusing on the second overlapping region in the second image and the second overlapping region in the second auxiliary image. Herein, the first overlapping region would include a first overlapping boundary line and a first stitching line, and the second overlapping region would include a second overlapping boundary line and a second stitching line, where the first stitching line in the first image and the second stitching line in the second image would be the seams for stitching the two images. For the first image, the processor 20 would replace a region between the first overlapping boundary line and the first stitching line by the fused result, and such region would be referred to as “a first fused overlapping region”. For the second image, the processor 20 would replace a region between the second overlapping boundary line and the second stitching line by the fused result, and such region would be referred to as “a second fused overlapping region”. The processor 20 would generate the stitched image according to the first image, the first fused overlapping region, the second fused overlapping region, and the second image. In the present exemplary embodiment, assume that an area of the first fused overlapping region is equal to that of the second fused overlapping region. That is, one half of the stitched overlapping region would be formed based on the first image, and the other half would be formed based on the second image. However, this is merely for illustrative purposes; the disclosure is not limited in this regard.
To be specific, FIG. 3 illustrates a second image in accordance with an exemplary embodiment of the disclosure to describe the steps of generating a second fused overlapping region; the steps of generating a first fused overlapping region may be deduced in a similar fashion.
Referring to FIG. 3, a second image Img2 captured by the second image sensor 10B would include a second overlapping boundary line $L_O$, a second stitching line $L_S$, and a second image boundary line $L_B$. A region between the second overlapping boundary line $L_O$ and the second image boundary line $L_B$ would be the second overlapping region, and a region between the second stitching line $L_S$ and the second image boundary line $L_B$ would be a stitched region, where the stitched region would only be used for image stitching but would not appear in the resulting stitched image.
Herein, a region between the second overlapping boundary line $L_O$ and the second stitching line $L_S$ would be replaced by the second fused overlapping region. The processor 20 would perform image fusing on the same region respectively in the second image Img2 and a second auxiliary image (not shown) to generate the second fused overlapping region. For example, assume that a pixel P is a pixel in the second fused overlapping region. The processor 20 would calculate a distance $d_O$ between the pixel P and the second overlapping boundary line $L_O$ as well as a distance $d_S$ between the pixel P and the second stitching line $L_S$ to generate a second weight ratio. Next, the processor 20 would calculate a weighted sum of a pixel value corresponding to the pixel P in the second image and a pixel value corresponding to the pixel P in the second auxiliary image according to the second weight ratio to generate a pixel value of the pixel P in the second fused overlapping region as follows,
$$p_{x,y,O} = f_A\big(T(d_O), T(d_S)\big) \times p_{x,y,A} + f_B\big(T(d_O), T(d_S)\big) \times p_{x,y,B}$$
where $p_{x,y,O}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the second fused overlapping region, $p_{x,y,A}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the second auxiliary image captured by using the first photographing information of the first image sensor 10A, $p_{x,y,B}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the second image captured by using the second photographing information of the second image sensor 10B, $T$ is a coordinate transfer function between the image capturing apparatuses, and $f_A$ and $f_B$ are arbitrary functions that satisfy $f_A(x,y)+f_B(x,y)=1$. Moreover, when the pixel P is on the second overlapping boundary line $L_O$ (i.e., $d_O=0$), which is also the furthest from the first image in the second fused overlapping region, the pixel value of the pixel P would be the closest to the original pixel value in the second image captured by using the second photographing information. On the other hand, when the pixel P is on the second stitching line $L_S$ (i.e., $d_S=0$), the pixel value of the pixel P would be set according to its pixel values in the second image and in the second auxiliary image respectively captured based on the second photographing information and the first photographing information (e.g., $p_{x,y,O} = \frac{p_{x,y,A}+p_{x,y,B}}{2}$).
In the present exemplary embodiment, the processor 20 may generate the pixel value of the pixel P according to the following equation, whose weights vary linearly with the two distances and satisfy both boundary conditions above:

$$p_{x,y,O} = \frac{d_O(x)}{d_S(x)+2d_O(x)} \times p_{x,y,A} + \frac{d_S(x)+d_O(x)}{d_S(x)+2d_O(x)} \times p_{x,y,B}$$
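To make the weighting concrete, the following is a minimal NumPy sketch of this fusion, assuming the second image and the second auxiliary image are pixel-aligned and that the overlapping boundary line and the stitching line are vertical image columns x_o and x_s; the function and variable names are illustrative, not from the disclosure:

```python
import numpy as np

def fuse_overlap_columns(img_b, img_aux, x_o, x_s):
    """Distance-weighted fusion of the region between the overlapping
    boundary column x_o and the stitching column x_s.

    img_b   : image captured with the sensor's own photographing information
    img_aux : auxiliary image captured with the exchanged information
    """
    fused = img_b.astype(np.float64)
    for x in range(x_o, x_s + 1):
        d_o = x - x_o              # distance to the overlapping boundary line
        d_s = x_s - x              # distance to the stitching line
        denom = d_s + 2 * d_o
        w_aux = 0.5 if denom == 0 else d_o / denom  # weight of the auxiliary capture
        fused[:, x] = w_aux * img_aux[:, x] + (1.0 - w_aux) * img_b[:, x]
    return fused.astype(img_b.dtype)
```

On the boundary column the output equals the original capture, and on the stitching column both captures contribute equally, matching the two boundary conditions discussed above.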
For better understanding, FIG. 4 illustrates a functional flowchart of an image stitching method of an image capturing device in accordance with one of the exemplary embodiments of the disclosure to integrate the aforementioned steps.
Referring to FIG. 4, the first image sensor 10A would detect a scene to generate first photographing information PI1, and the second image sensor 10B would detect the scene to generate second photographing information PI2. The first image sensor 10A would transmit the first photographing information PI1 to the second image sensor 10B, and the second image sensor 10B would transmit the second photographing information PI2 to the first image sensor 10A.
Next, the first image sensor 10A would capture an image of the scene according to the first photographing information PI1 to generate a first image Img1 and capture an image of the scene according to the second photographing information PI2 to generate a first auxiliary image Img12. The processor 20 would perform an image fusing process IBP on the first image Img1 and the first auxiliary image Img12. On the other hand, the second image sensor 10B would capture an image of the scene according to the second photographing information PI2 to generate a second image Img2 and capture an image of the scene according to the first photographing information PI1 to generate a second auxiliary image Img21. The processor 20 would perform the image fusing process IBP on the second image Img2 and the second auxiliary image Img21.
Next, the processor 20 would perform an image stitching process SP on the first image Img1 and the second image Img2 along with the fused results to generate a stitched image Img′. Details of these steps may refer to the previous exemplary embodiments and would not be repeated hereinafter.
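Putting the flow of FIG. 4 together, a compact sketch might look as follows. Here `meter`, `capture`, `stitch`, and the column positions are hypothetical stand-ins for sensor, ISP, and geometry operations that the disclosure does not spell out, and `fuse_overlap_columns` is the sketch given earlier:

```python
def stitch_two_sensor_scene(sensor_a, sensor_b, x_o1, x_s1, x_o2, x_s2):
    # Detect the scene with both sensors and exchange the results.
    info_a = sensor_a.meter()            # first photographing information PI1
    info_b = sensor_b.meter()            # second photographing information PI2

    # Each sensor captures twice: once with its own parameters and once
    # with the parameters received from the other sensor.
    img1 = sensor_a.capture(info_a)      # first image Img1
    img1_aux = sensor_a.capture(info_b)  # first auxiliary image Img12
    img2 = sensor_b.capture(info_b)      # second image Img2
    img2_aux = sensor_b.capture(info_a)  # second auxiliary image Img21

    # Fuse each overlapping region, then stitch along the seam lines.
    img1_fused = fuse_overlap_columns(img1, img1_aux, x_o1, x_s1)
    img2_fused = fuse_overlap_columns(img2, img2_aux, x_o2, x_s2)
    return stitch(img1_fused, img2_fused)  # hypothetical seam concatenation
```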
The aforementioned exemplary embodiments may be extended to an image capturing apparatus having three or more image sensors. When all the image sensors of the image capturing apparatus are collinearly arranged, an overlapping region in each image captured by any two neighboring image sensors may be used for concatenating a stitched image according to the flowchart illustrated in FIG. 2. On the other hand, when all the image sensors of the image capturing apparatus are not collinearly arranged (e.g. an image capturing apparatus on the market configured to capture 360-degree panoramic images), a joint overlapping region simultaneously captured by all the image sensors would be further considered for image stitching.
In detail, FIG. 5 illustrates a schematic diagram of an image capturing apparatus in accordance with another exemplary embodiment of the disclosure.
Referring to FIG. 5, an image capturing apparatus 100′ would include a first image sensor 10A, a second image sensor 10B, and a third image sensor 10C coupled to each other. In other words, the image capturing apparatus 100′ may be viewed as the image capturing apparatus 100 with the third image sensor 10C additionally configured, where the third image sensor 10C would be positioned between the first image sensor 10A and the second image sensor 10B but would not be collinear therewith. For simplicity, the positions of the first image sensor 10A, the second image sensor 10B, and the third image sensor 10C would be described as “left”, “right”, and “center” hereinafter.
In the present exemplary embodiment, image stitching of the overlapping region in the images respectively captured by the first image sensor 10A and the second image sensor 10B would be performed based on the flowchart in FIG. 2. On the other hand, the third image sensor 10C would also detect the scene to generate third photographing information and capture a third image accordingly. Meanwhile, the third image sensor 10C would also receive the first photographing information and the second photographing information from the first image sensor 10A and the second image sensor 10B and capture images of the scene accordingly, where the captured images are referred to as a left auxiliary image and a right auxiliary image hereafter. Each of the third image, the left auxiliary image, and the right auxiliary image has an overlapping region with the first image (referred to as “a left overlapping region” hereafter), an overlapping region with the second image (referred to as “a right overlapping region” hereafter), and an overlapping region with both the first image and the second image (referred to as “a joint overlapping region” hereafter). FIG. 6 illustrates a schematic diagram of overlapping regions in accordance with an exemplary embodiment of the disclosure to explain a fusing method of the overlapping regions.
Referring to FIG. 6, a region OA is a portion of the third image. A region on the left-hand side of $L_{OL}$ is the left overlapping region overlapped with the first image, a region on the right-hand side of $L_{OR}$ is the right overlapping region overlapped with the second image, and the intersection of the left overlapping region and the right overlapping region is the joint overlapping region of the first image, the second image, and the third image. Moreover, in terms of a 360-degree spherical space, a stitched region of each overlapping region may be, for example, within a range of 10 degrees. The processor 20 would fuse the left overlapping region by using the third image and the left auxiliary image, fuse the right overlapping region by using the third image and the right auxiliary image, and fuse the joint overlapping region by using the third image, the left auxiliary image, and the right auxiliary image. A pixel value of a fused pixel P′ may be expressed by the following equation:
$$p_{x,y,O} = \sum_{i=0}^{N} f_i\big(T(x,y)\big) \times p_{x,y,i} = f_C\big(T(x,y)\big) \times p_{x,y,C} + f_L\big(T(x,y)\big) \times p_{x,y,L} + f_R\big(T(x,y)\big) \times p_{x,y,R}$$
where $p_{x,y,O}$ is a pixel value of a fused pixel with a coordinate $(x, y)$, $p_{x,y,L}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the left auxiliary image captured by using the first photographing information, $p_{x,y,R}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the right auxiliary image captured by using the second photographing information, and $p_{x,y,C}$ is a pixel value of a pixel with a coordinate $(x, y)$ in the third image captured by using the third photographing information. Also, $T$ denotes a coordinate transfer function between a Cartesian coordinate system and a 360-degree spherical space.
In the present exemplary embodiment, the processor 20 may generate the pixel value of the pixel P′ according to the following equations:
$$p_{x,y,O} = \begin{cases} p_{x,y,C} & (x,y) \notin O_L,\ (x,y) \notin O_R \\[4pt] \dfrac{\Gamma_C - r_C}{\Gamma_C - \Gamma_L}\, p_{x,y,C} + \dfrac{r_C - \Gamma_L}{\Gamma_C - \Gamma_L}\, p_{x,y,L} & (x,y) \in O_L,\ (x,y) \notin O_R \\[4pt] \dfrac{\Gamma_C - r_C}{\Gamma_C - \Gamma_R}\, p_{x,y,C} + \dfrac{r_C - \Gamma_R}{\Gamma_C - \Gamma_R}\, p_{x,y,R} & (x,y) \notin O_L,\ (x,y) \in O_R \\[4pt] \dfrac{2\Gamma_C - 2r_C}{2\Gamma_C - \Gamma_R - \Gamma_L}\, p_{x,y,C} + \dfrac{r_C - \Gamma_R}{2\Gamma_C - \Gamma_R - \Gamma_L}\, p_{x,y,R} + \dfrac{r_C - \Gamma_L}{2\Gamma_C - \Gamma_R - \Gamma_L}\, p_{x,y,L} & (x,y) \in O_L,\ (x,y) \in O_R \end{cases}$$
where $O_R$ is the right overlapping region, $O_L$ is the left overlapping region, and a region belonging to both $O_R$ and $O_L$ is the joint overlapping region. Also, $r_C$ is the distance between the pixel P′ and the center of the joint overlapping region, and $\Gamma_R$, $\Gamma_L$, and $\Gamma_C$ are the distance between the pixel P′ and the right overlapping region silhouette, the distance between the pixel P′ and the left overlapping region silhouette, and the distance between the pixel P′ and the joint overlapping region silhouette, respectively.
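As an illustration only, the piecewise blend could be vectorized as below, assuming the region masks, the Γ distances, and the radial distance $r_C$ have already been computed from the overlap geometry; all names are hypothetical, and a small epsilon guards the silhouette cases where a denominator vanishes:

```python
import numpy as np

def fuse_three_way(p_c, p_l, p_r, in_ol, in_or, r_c, gam_l, gam_r, gam_c):
    """Piecewise blend of the center image with the left/right auxiliary
    images; all arguments are arrays of one common shape."""
    eps = 1e-9
    out = np.array(p_c, dtype=np.float64)  # outside both overlaps: center only

    # Left overlap only: blend center and left auxiliary captures.
    w = (gam_c - r_c) / (gam_c - gam_l + eps)
    out = np.where(in_ol & ~in_or, w * p_c + (1 - w) * p_l, out)

    # Right overlap only: blend center and right auxiliary captures.
    w = (gam_c - r_c) / (gam_c - gam_r + eps)
    out = np.where(~in_ol & in_or, w * p_c + (1 - w) * p_r, out)

    # Joint overlap: all three captures contribute.
    denom = 2 * gam_c - gam_r - gam_l + eps
    w_c = (2 * gam_c - 2 * r_c) / denom
    w_r = (r_c - gam_r) / denom
    w_l = (r_c - gam_l) / denom
    out = np.where(in_ol & in_or, w_c * p_c + w_r * p_r + w_l * p_l, out)
    return out
```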
After all the overlapping regions are fused, the processor 20 would perform image stitching by using the first image, the first fused overlapping region, the second image, the second fused overlapping region, the third image, and the fused result of the left overlapping region, the right overlapping region, and the joint overlapping region of the third image to generate a stitched image.
The aforementioned concept may also be implemented in an image capturing apparatus with a single image sensor. In detail, FIG. 7A illustrates a schematic diagram of an image capturing apparatus in accordance with an exemplary embodiment of the disclosure.
Referring to FIG. 7A, an image capturing apparatus 700 would include an image sensor 710 and a processor 720. In the present exemplary embodiment, structures and features of the image sensor 710 and the processor 720 would be similar to those of the image sensors 10A/10B and the processor 20 of the image capturing apparatus 100 in FIG. 1. Detailed descriptions may refer to related paragraphs and would not be repeated for brevity's sake.
FIG. 7B illustrates a flowchart of an image stitching method of an image capturing apparatus in accordance with an exemplary embodiment of the disclosure. The steps in FIG. 7B would be applicable to the image capturing apparatus 700 in FIG. 7A.
Referring to FIG. 7A and FIG. 7B, the processor 720 of the image capturing apparatus 700 would detect the scene from a first angle of view by using the image sensor 710 to generate first photographing information corresponding to the first angle of view (Step S702) and capture an image of the scene from the first angle of view by using the image sensor 710 according to the first photographing information to generate a first image (Step S704). Next, the processor 720 would detect the scene from a second angle of view by using the image sensor 710 to generate second photographing information corresponding to the second angle of view (Step S706) and capture images of the scene by using the image sensor 710 according to the second photographing information and the first photographing information to respectively generate a second image and an auxiliary image (Step S708). In other words, the concept of capturing the second image of the scene from the second angle of view by using the image sensor 710 is the same as that of capturing the second image of the scene by using the second image sensor 10B in FIG. 1; the only difference is that the image capturing apparatus 700 in the present exemplary embodiment would be moved to a position corresponding to the second angle of view before the second image is captured.
Since the image sensor 710 captures the same scene from different angles of view, there would exist an overlapping region with the same captured contents in each of the first image and the second image. Similarly, since the auxiliary image is captured by the image sensor 710 from the same angle of view as the second image but according to different photographing information, there would also exist an overlapping region in the auxiliary image, the same as the ones in the first image and the second image. For convenience purposes, the overlapping region in the first image would be referred to as “a first overlapping region”, and the overlapping region in each of the second image and the auxiliary image would be referred to as “a second overlapping region”.
In the present exemplary embodiment, the processor 720 would fuse the second image and the auxiliary image to generate a fused result (Step S710). Different from the previous exemplary embodiment, the first overlapping region herein would be discarded, and the second overlapping region would be replaced by the fused result. A weight ratio used to fuse the second image and the auxiliary image may be based on a ratio of distances between a pixel and two overlapping boundary lines, and yet the disclosure is not limited in this regard. Next, the processor 720 would generate a stitched image according to the first image, the fused result, and the second image (Step S712).
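The single-sensor flow of FIG. 7B can be sketched in the same vocabulary as the two-sensor pipeline above; `meter`, `capture`, `stitch`, and the column positions remain hypothetical stand-ins, and `fuse_overlap_columns` is reused from the earlier sketch:

```python
def stitch_single_sensor_scene(sensor, x_o, x_s):
    # First angle of view: meter, then capture with its own parameters.
    info_1 = sensor.meter()                # Step S702
    img1 = sensor.capture(info_1)          # first image (Step S704)

    # ... the apparatus is moved to the second angle of view here ...

    # Second angle of view: meter, then capture twice.
    info_2 = sensor.meter()                # Step S706
    img2 = sensor.capture(info_2)          # second image (Step S708)
    img2_aux = sensor.capture(info_1)      # auxiliary image (Step S708)

    # Only the second overlapping region is fused; the first image's
    # overlapping region is discarded before stitching (Steps S710/S712).
    img2_fused = fuse_overlap_columns(img2, img2_aux, x_o, x_s)
    return stitch(img1, img2_fused)
```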
In summary, as for the image capturing apparatus and the image stitching method thereof proposed in the disclosure, when images of a scene are captured by two image sensors with their respective photographing information as well as with each other's photographing information, an overlapping region of each image captured with different photographing information would be fused for image stitching. As such, a resulting stitched image would look much closer to the scene, and obvious stitching lines or unnatural color transitions would also be avoided for image quality enhancement. Moreover, the disclosure may also be implemented by an image capturing apparatus with a single image sensor, or with three or more image sensors, to provide more practical applications.
No element, act, or instruction used in the detailed description of disclosed embodiments of the present application should be construed as absolutely critical or essential to the present disclosure unless explicitly described as such. Also, as used herein, each of the indefinite articles “a” and “an” could include more than one item. If only one item is intended, the terms “a single” or similar languages would be used. Furthermore, the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of”, “any combination of”, “any multiple of”, and/or “any combination of” multiples of the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Further, as used herein, the term “set” is intended to include any number of items, including zero. Further, as used herein, the term “number” is intended to include any number, including zero.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.

Claims (8)

What is claimed is:
1. An image stitching method, applicable to an image capturing apparatus comprising a first image sensor and a second image sensor, wherein the method comprises steps of:
detecting a scene by using the first image sensor and the second image sensor to generate first photographing information corresponding to the first image sensor and second photographing information corresponding to the second image sensor;
capturing the scene by using the first image sensor according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image, wherein each of the first image and the first auxiliary image comprises a first overlapping region;
capturing the scene by using the second image sensor according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image, wherein each of the second image and the second auxiliary image comprises a second overlapping region, and wherein the second overlapping region corresponds to the first overlapping region; and
fusing the first image and the first auxiliary image, and fusing the second image and the second auxiliary image so as to accordingly generate a stitched image.
2. The method according to claim 1, wherein the step of fusing the first image and the first auxiliary image, and fusing the second image and the second auxiliary image so as to accordingly generate the stitched image comprises:
fusing the first overlapping region in the first image and the first overlapping region in the first auxiliary image to generate a first fused overlapping region, and fusing the second overlapping region in the second image and the second overlapping region in the second auxiliary image to generate a second fused overlapping region; and
generating the stitched image according to the first image, the first fused overlapping region, the second fused overlapping region, and the second image.
3. The method according to claim 2, wherein the first overlapping region comprises a first overlapping boundary line and a first stitching line, wherein the first fused overlapping region corresponds to a region between the first overlapping boundary line and the first stitching line, wherein the second overlapping region comprises a second overlapping boundary line and a second stitching line, and wherein the second fused overlapping region corresponds to a region between the second overlapping boundary line and the second stitching line.
4. The method according to claim 3,
wherein steps of generating each first pixel in the first fused overlapping region comprise:
calculating a distance between the first pixel and the first overlapping boundary line as well as a distance between the first pixel and the first stitching line to generate a first weight ratio; and
calculating a weighted sum of a pixel value corresponding to the first pixel in the first image and a pixel value corresponding to the first pixel in the first auxiliary image according to the first weight ratio; and
wherein steps of generating each second pixel in the second fused overlapping region comprise:
calculating a distance between the second pixel and the second overlapping boundary line as well as a distance between the second pixel and the second stitching line to generate a second weight ratio; and
calculating a weighted sum of a pixel value corresponding to the second pixel in the second image and a pixel value corresponding to the second pixel in the second auxiliary image according to the second weight ratio.
5. The method according to claim 1, wherein the image capturing apparatus further comprises a third image sensor, and wherein the method further comprises steps of:
detecting the scene by using the third image sensor to generate third photographing information corresponding to the third image sensor;
capturing the scene by using the third image sensor according to the third photographing information, the first photographing information, and the second photographing information to respectively generate a third image, a left auxiliary image, and a right auxiliary image, wherein each of the third image, the left auxiliary image, and the right auxiliary image comprises a left overlapping region associated with the first image, a right overlapping region associated with the second image, and a joint overlapping region associated with both the first image and the second image; and
fusing the left overlapping region in the third image and the left overlapping region in the left auxiliary image, fusing the right overlapping region in the third image and the right overlapping region in the right auxiliary image, and fusing the joint overlapping region in the third image, the joint overlapping region in the left auxiliary image, and the joint overlapping region in the right auxiliary image to generate a fused result associated with the third image.
6. The method according to claim 5, wherein the step of fusing the first image and the first auxiliary image, and fusing the second image and the second auxiliary image so as to accordingly generate the stitched image further comprises:
generating the stitched image by using the first image, the first fused overlapping region, the second fused overlapping region, the second image, the third image, and the fused result associated with the third image.
7. An image capturing apparatus comprising:
a first image sensor, configured to detect a scene and capture images of the scene;
a second image sensor, coupled to the first image sensor, and configured to capture images; and
a processor, coupled to the first image sensor and the second image sensor, and configured to:
detect the scene by using the first image sensor and the second image sensor to generate first photographing information corresponding to the first image sensor and second photographing information corresponding to the second image sensor;
capture the scene by using the first image sensor according to the first photographing information and the second photographing information to respectively generate a first image and a first auxiliary image, wherein each of the first image and the first auxiliary image comprises a first overlapping region;
capture the scene by using the second image sensor according to the second photographing information and the first photographing information to respectively generate a second image and a second auxiliary image, wherein each of the second image and the second auxiliary image comprises a second overlapping region, and wherein the second overlapping region corresponds to the first overlapping region;
fuse the first image and the first auxiliary image, and fuse the second image and the second auxiliary image so as to accordingly generate a stitched image.
8. The image capturing apparatus according to claim 7 further comprising a third image sensor coupled to the first image sensor, the second image sensor, and the processor, wherein the processor is further configured to:
detect the scene by using the third image sensor to generate third photographing information corresponding to the third image sensor;
capture the scene by using the third image sensor according to the third photographing information, the first photographing information, and the second photographing information to respectively generate a third image, a left auxiliary image, and a right auxiliary image, wherein the third image, the left auxiliary image, and the right auxiliary image all comprise a left overlapping region associated with the first image, a right overlapping region associated with the second image, and a joint overlapping region associated with both the first image and the second image; and
fuse the left overlapping region in the third image and the left overlapping region in the left auxiliary image, fuse the right overlapping region in the third image and the right overlapping region in the right auxiliary image, and fuse the joint overlapping region in the third image, the joint overlapping region in the left auxiliary image, and the joint overlapping region in the right auxiliary image to generate a fused result associated with the third image.
US15/890,363 · Priority date 2017-06-22 · Filed 2018-02-07 · Image capturing apparatus and image stitching method thereof · Active, expires 2038-03-16 · US10506155B2 (en)

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
TW106120861 | 2017-06-22
TW106120861A (TWI617195B (en)) | 2017-06-22 | 2017-06-22 | Image capturing device and image mosaic method thereof
TW106120861A | 2017-06-22

Publications (2)

Publication Number | Publication Date
US20180376059A1 (en) | 2018-12-27
US10506155B2 (en) | 2019-12-10

Family

ID=62189254

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
US15/890,363 (US10506155B2 (en), Active, expires 2038-03-16) | Image capturing apparatus and image stitching method thereof | 2017-06-22 | 2018-02-07

Country Status (2)

Country | Link
US (1) | US10506155B2 (en)
TW (1) | TWI617195B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110276717B (en)* | 2019-06-26 | 2023-05-05 | 图码思(成都)科技有限公司 | Image splicing method and terminal
CN111988540A (en)* | 2020-08-20 | 2020-11-24 | 合肥维信诺科技有限公司 | Image acquisition method and system and display panel
CN114430456A (en)* | 2020-10-29 | 2022-05-03 | 中兴通讯股份有限公司 | Image capturing method, image capturing apparatus, and storage medium
CN114567716A (en)* | 2022-02-28 | 2022-05-31 | 业成科技(成都)有限公司 | Camera module, mobile terminal and vehicle-mounted image shooting system
CN114659987A (en)* | 2022-04-28 | 2022-06-24 | 芯视界(北京)科技有限公司 | Imaging device, filter film, shape determining method, image splicing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20070081081A1 (en)* | 2005-10-07 | 2007-04-12 | Cheng Brett A | Automated multi-frame image capture for panorama stitching using motion sensor
US20070132863A1 (en)* | 2005-12-14 | 2007-06-14 | Sony Corporation | Image taking apparatus, image processing method, and image processing program
US20100097442A1 (en)* | 2008-10-16 | 2010-04-22 | Peter Lablans | Controller in a Camera for Creating a Panoramic Image
US20100097443A1 (en)* | 2008-10-16 | 2010-04-22 | Peter Lablans | Controller in a Camera for Creating a Panoramic Image
US20140118480A1 (en)* | 2012-10-30 | 2014-05-01 | Donald S. Rimai | System for making a panoramic image
US20140347501A1 (en)* | 2013-05-22 | 2014-11-27 | Sony Corporation | Information processing apparatus, information processing method, and program
US20170148138A1 (en)* | 2015-11-20 | 2017-05-25 | Vivotek Inc. | Image stitching method and camera system with an image stitching function

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110242355A1 (en)* | 2010-04-05 | 2011-10-06 | Qualcomm Incorporated | Combining data from multiple image sensors
CN103597810B (en)* | 2011-05-27 | 2017-02-15 | 诺基亚技术有限公司 | Image stitching
US9516225B2 (en)* | 2011-12-02 | 2016-12-06 | Amazon Technologies, Inc. | Apparatus and method for panoramic video hosting
US20170006219A1 (en)* | 2015-06-30 | 2017-01-05 | Gopro, Inc. | Image stitching in a multi-camera array
US9842624B2 (en)* | 2015-11-12 | 2017-12-12 | Intel Corporation | Multiple camera video image stitching by placing seams for scene objects


Also Published As

Publication number | Publication date
TW201906398A (en) | 2019-02-01
US20180376059A1 (en) | 2018-12-27
TWI617195B (en) | 2018-03-01

Similar Documents

Publication | Title
US10506155B2 (en) | Image capturing apparatus and image stitching method thereof
US9325899B1 (en) | Image capturing device and digital zooming method thereof
US9179059B2 (en) | Image capture device and image display method
CN104782110B (en) | Image processing device, imaging device and image processing method
US9560243B2 (en) | Image processing device, imaging device, program, and image processing method suppressing a reduction in visibility of an image to check focus when distortion is corrected
CN104641625B (en) | Image processing apparatus, camera device and image processing method
CN104205827B (en) | Image processing apparatus and method and camera head
US11024048B2 (en) | Method, image processing device, and system for generating disparity map
CN109196857B (en) | Imaging element and camera device
US9167153B2 (en) | Imaging device displaying split image generated from interpolation pixel data based on phase difference pixel
CN114693569B (en) | Double-camera video fusion method and electronic equipment
US20120002958A1 (en) | Method And Apparatus For Three Dimensional Capture
CN115004675A (en) | Electronic equipment
US20170155889A1 (en) | Image capturing device, depth information generation method and auto-calibration method thereof
US9204114B2 (en) | Imaging element and imaging apparatus
US9996932B2 (en) | Method and system for multi-lens module alignment
US20160337587A1 (en) | Image capturing device and hybrid image processing method thereof
TWI693828B (en) | Image-capturing device and method for operating the same
KR20050109190A (en) | Wide image generating apparatus and method using a dual camera
CN106067937A (en) | Lens module array, image sensing device and digital zoom image fusion method
US20190052815A1 (en) | Dual-camera image pick-up apparatus and image capturing method thereof
US9743007B2 (en) | Lens module array, image sensing device and fusing method for digital zoomed images
JP5796611B2 (en) | Image processing apparatus, image processing method, program, and imaging system
CN109214983B (en) | Image acquisition device and image splicing method thereof
KR102860387B1 (en) | Method for Stabilization at high magnification and Electronic Device thereof

Legal Events

AS: Assignment
Owner name: ACER INCORPORATED, TAIWAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CANTERO CLARES, SERGIO;REEL/FRAME:044849/0285
Effective date: 20170829

FEPP: Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP: Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP: Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED

STPP: Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP: Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF: Information on status: patent grant
Free format text: PATENTED CASE

MAFP: Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4

