CN102724398B - Image data providing method, combination method thereof, and presentation method thereof - Google Patents

Image data providing method, combination method thereof, and presentation method thereof

Info

Publication number
CN102724398B
CN102724398B (application CN201110080133.0A)
Authority
CN
China
Prior art keywords
subject
photographed
image
image data
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110080133.0A
Other languages
Chinese (zh)
Other versions
CN102724398A (en)
Inventor
过晓冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201110080133.0A
Publication of CN102724398A
Application granted
Publication of CN102724398B
Active legal status (current)
Anticipated expiration

Abstract

The invention provides an image data providing method, an image data combination method, and an image data presentation method. The image data providing method comprises the following steps: obtaining the distance between a photographing apparatus and the focal plane and the current focal length of the photographing apparatus when an image is shot; extracting position information of the photographed subject on the focal plane in the captured image; calculating, according to the obtained distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus, a corresponding relationship between the imaging size and the actual size of the photographed subject; and providing the position information of the photographed subject, the corresponding relationship, and the captured image together in association as image data of the captured image.

Description

Image data providing method, combination method, and presentation method
Technical field
The present invention relates to the provision, combination, and presentation of image data, and more particularly to methods for providing, combining, and presenting image data that carries the actual-size information of the subject on the focal plane at the time an image is shot.
Background art
At present, a camera only obtains the captured image after shooting; there is no way to extract the photographed subject from the captured image or to obtain the actual size of that subject.
Patent document CN101183206 proposes a method of obtaining the actual size of an object in a captured image: a table of the correspondence between focusing step number and object distance is first established, the actual object distance is then obtained by looking up the table after focusing, and finally the object height is calculated using the formula object distance / object height = image distance / image height. However, this method has the following defects: the subject on the focal plane is not marked in the captured image, so the position information of the subject cannot be obtained and the image data of the subject cannot be extracted automatically. Consequently, the image data of the captured image obtained with this method cannot be combined with another image.
Accordingly, a new image data providing method, image data combination method, and image data presentation method are needed.
Summary of the invention
The present invention has been made in view of the above problems. According to the present invention, the distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus at the time of shooting are obtained and used to calculate the corresponding relationship between the imaging size and the actual size of the subject on the focal plane, and the position information of that subject in the captured image is extracted, so that image data of the captured image containing the position information of the subject and the corresponding relationship can be provided. The image data of the subject on the focal plane can then easily be extracted from the obtained image data of the captured image, and the extracted image data of the subject can easily be combined with another image to obtain a new image.
According to an aspect of the present invention, there is provided an image data providing method, comprising: obtaining, when an image is shot, the distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus; extracting position information of the subject on the focal plane in the captured image; calculating, from the obtained distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus, the corresponding relationship between the imaging size and the actual size of the subject; and providing the position information of the subject and the corresponding relationship in association with the captured image as the image data of the captured image.
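As a purely illustrative sketch (not the claimed implementation), the providing step of this method can be pictured as bundling the captured image with the subject's position information and a millimetres-per-pixel factor derived from the object distance and the image distance; the function name, field names, and the sensor pixel-pitch parameter below are hypothetical.

```python
def provide_image_data(image, subject_bbox, object_distance_mm, image_distance_mm, pixel_pitch_mm):
    """Package a captured image with subject position and size-correspondence metadata.

    object_distance_mm: distance between the photographing apparatus and the focal plane.
    image_distance_mm:  current focal length, used here as the image distance.
    pixel_pitch_mm:     physical size of one sensor pixel (assumed known).
    """
    # Similar triangles: object height / object distance = image height / image distance,
    # so one image pixel corresponds to pixel_pitch * (object distance / image distance)
    # of real length on the focal plane.
    mm_per_pixel = pixel_pitch_mm * object_distance_mm / image_distance_mm
    return {
        "image": image,                            # the captured image itself
        "subject_bbox": subject_bbox,              # (x, y, w, h) of the subject in pixels
        "mm_per_pixel": mm_per_pixel,              # real length on the focal plane per pixel
        "object_distance_mm": object_distance_mm,  # optional, enables the combination method
    }
```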
With this method, image data of the captured image can be obtained that contains both the position information of the subject and the actual-size information of the subject.
Preferably, the method further includes providing the distance between the photographing apparatus and the focal plane, together with the position information of the subject, the corresponding relationship, and the captured image, in association as the image data of the captured image.
The corresponding relationship between the imaging size and the actual size of the subject may be the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, or a concrete correspondence between the imaging size and the actual size of each subject.
Preferably, there may be one or more subjects. The position information of a subject may include the outline of the subject expressed in pixels in the captured image, or the coordinate position, in the captured image, of a regular shape enclosing the subject.
Preferably, the regular shape may be a rectangle, a circle, an ellipse, or a rhombus.
According to another aspect of the present invention, there is provided an image data combination method for combining a subject of an image to be combined into a reference image, wherein the image data of the image to be combined includes the position information of its subject and the corresponding relationship between the imaging size and the actual size of that subject, and the image data of the reference image includes the corresponding relationship between the imaging size and the actual size of its subject. The method includes: obtaining, from the image data of the reference image, the corresponding relationship between the imaging size and the actual size of its subject; extracting the image data of the subject from the image data of the image to be combined, and obtaining the corresponding relationship between the imaging size and the actual size of that subject; determining a first scaling ratio for the pixels of the extracted image data of the subject, according to the corresponding relationship in the reference image and the corresponding relationship in the image to be combined; modifying the extracted image data of the subject according to the first scaling ratio so that it satisfies the corresponding relationship between the imaging size and the actual size of the subject in the reference image; setting the position at which the subject of the image to be combined is to be placed in the reference image; and, according to the set position, combining the modified image data of the subject of the image to be combined into the image data of the reference image.
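A minimal sketch of this combination flow, assuming both images carry a millimetres-per-pixel factor and a subject bounding box as in the hypothetical provide_image_data() record above; Pillow is used here purely for illustration, and combine_subject() is not part of the claims.

```python
from PIL import Image  # Pillow, used only to illustrate cropping, resizing, and pasting

def combine_subject(reference_data, to_combine_data, paste_xy) -> Image.Image:
    """Paste the subject of `to_combine_data` into the reference image at a consistent real-world scale."""
    ref_mm_per_px = reference_data["mm_per_pixel"]
    src_mm_per_px = to_combine_data["mm_per_pixel"]

    # First scaling ratio: how many reference-image pixels one source pixel should become.
    first_scale = src_mm_per_px / ref_mm_per_px

    # Extract the subject region using its stored position information.
    x, y, w, h = to_combine_data["subject_bbox"]
    subject = to_combine_data["image"].crop((x, y, x + w, y + h))

    # Resize the subject so it satisfies the reference image's size correspondence.
    subject = subject.resize((round(w * first_scale), round(h * first_scale)))

    # Place the scaled subject at the set position in a copy of the reference image.
    result = reference_data["image"].copy()
    result.paste(subject, paste_xy)
    return result
```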
Preferably, the method further includes providing the position information of the subject of the reference image and of the subject newly incorporated into the reference image, and the corresponding relationship between the imaging size and the actual size of the subject in the reference image, in association with the modified reference image as modified image data.
The image data of the reference image may further include the distance between the photographing apparatus that shot the reference image and its focal plane.
Preferably, the method further includes: obtaining, from the image data of the reference image, the distance between the photographing apparatus and its focal plane; setting the distance between the subject of the image to be combined and the photographing apparatus, and determining a projection coefficient from the set distance and the distance between the photographing apparatus of the reference image and its focal plane; and calculating a second scaling ratio from the projection coefficient and the first scaling ratio, and modifying the extracted image data of the subject according to the second scaling ratio so that it satisfies the corresponding relationship between the imaging size and the actual size of the subject in the reference image.
In this case, it is preferable that the method further includes providing the distance between the photographing apparatus and its focal plane, together with the position information of the subject of the reference image and of the subject newly incorporated into the reference image, and the corresponding relationship between the imaging size and the actual size of the subject in the reference image, in association with the modified reference image as modified image data.
According to yet another aspect of the present invention, there is provided an image data presentation method in which a subject in an image is presented according to its actual size, the image data of the image containing the correspondence information between the imaging size and the actual size of the subject. The method includes: obtaining the actual size of the display screen of a display device and the current resolution of that screen; obtaining the correspondence information between the imaging size and the actual size of the subject in the image; determining the scaling ratio of the pixels of the image from the actual size and the current resolution of the display screen and from the correspondence information between the imaging size and the actual size of the subject; and displaying the image according to the scaling ratio.
Preferably, obtaining the correspondence information between the imaging size and the actual size of the subject in the image means obtaining the correspondence between one pixel of the image and a physical length and/or height on the focal plane.
Preferably, determining the scaling ratio of the pixels of the image from the actual size and current resolution of the display screen and from the correspondence information further includes: calculating, from the actual size and current resolution of the display screen, the correspondence between one pixel of the display screen and a physical length and/or height; and determining the scaling ratio of the pixels of the image from that correspondence and from the correspondence between one pixel of the image and a physical length and/or height on the focal plane.
Alternatively, obtaining the correspondence information between the imaging size and the actual size of the subject in the image means obtaining the actual size and the imaging size of the subject in the image.
Preferably, determining the scaling ratio of the pixels of the image from the actual size and current resolution of the display screen and from the correspondence information then includes: calculating, from the actual size and current resolution of the display screen, the correspondence between one pixel of the display screen and a physical length and/or height; determining the number of pixels of the display screen needed to display the subject at its actual size; and determining the scaling ratio of the pixels of the image from that number of pixels and the imaging size of the subject.
Preferably, displaying the image according to the scaling ratio further includes: obtaining the position information of the subject from the image data of the image; extracting the image data of the subject from the image data of the image; and displaying only the extracted image data of the subject according to the scaling ratio.
With the image data providing method of the present invention, image data carrying the actual-size information of the subject in an image can be provided, so that the actual size of the subject can readily be determined and the subject can be displayed 1:1 on the display screen of a display device according to that actual size. Furthermore, multiple images, each carrying actual-size information of its subject, can easily be combined.
Brief description of the drawings
Fig. 1 shows a flowchart of the image data providing method according to the present invention.
Fig. 2 shows a schematic diagram of the triangle proportional relationship between image distance, object distance, imaging size, and actual size.
Fig. 3 shows a flowchart of the image data presentation method according to the present invention.
Fig. 4 shows a flowchart of a first embodiment of the image data presentation method shown in Fig. 3.
Fig. 5 shows a flowchart of a second embodiment of the image data presentation method shown in Fig. 3.
Fig. 6 shows a flowchart of a first embodiment of the image data combination method according to the present invention.
Fig. 7 shows a flowchart of a second embodiment of the image data combination method according to the present invention.
Detailed description of the embodiments
Hereinafter, the image data providing method, the image data combination method, and the image data presentation method according to the present invention will be described in detail with reference to the accompanying drawings.
First, the image data providing method 100 according to an embodiment of the present invention will be described in detail with reference to Fig. 1.
The image data providing method according to the embodiment of the present invention starts at step S101.
In step S110, the distance between the photographing apparatus and the focal plane (i.e., the object distance) and the current focal length of the photographing apparatus (i.e., the image distance) at the time the image is shot are obtained.
Existing photographing apparatus such as cameras and video cameras need to focus on a target when shooting an image. The image data providing method according to the embodiment of the present invention exploits this characteristic of existing photographing apparatus: it obtains and records the distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus, so that they can be used to calculate the actual size of the subject.
As an example, the photographing apparatus may calculate the distance between the photographing apparatus and the subject by emitting infrared light or ultrasonic waves toward the subject and receiving the infrared light or ultrasonic waves reflected back from the subject.
As another example, the photographing apparatus may calculate the distance between the photographing apparatus and the subject from its current focal length and the imaging size of the subject in the captured image.
As yet another example, the photographing apparatus may include multiple lenses, image the subject through each of these lenses, and then calculate the distance to the subject using the positional relationship of the lenses and the images shot through them. This multi-lens imaging technique is widely used in 3D image processing and is not elaborated here.
Although several methods of obtaining the distance between the photographing apparatus and the focal plane are given above, the present invention is not limited to them; those skilled in the art will appreciate that the distance between the photographing apparatus and the focal plane can obviously be obtained in other ways, for example by measuring it directly.
Then, in step S120, the position information of the subject on the focal plane in the captured image is extracted.
As stated above, existing photographing apparatus such as cameras and video cameras must first focus on the subject when shooting an image; during focusing, the subject is either specified manually or selected automatically by the photographing apparatus, which then focuses on it. For example, in some photographing apparatus the photographer aims at the subject and shoots the image by adjusting the focal length manually; in some photographing apparatus with a touch screen, the photographer selects the subject on the touch screen and the apparatus then focuses automatically; in other photographing apparatus, the photographer selects a scene in the viewfinder and the apparatus automatically selects the subject and focuses. In any of these cases, focusing is performed on the premise that a subject has been selected.
The image data providing method according to the embodiment of the present invention makes use of exactly this characteristic of photographing apparatus: the subject is marked during shooting. Specifically, the subject is taken as the focusing area during shooting, and the position information of that focusing area in the captured image is recorded. When the subject is selected and focused on a touch screen, the position information of the selected subject in the captured image is extracted.
In addition, the image data providing method according to the embodiment of the present invention can also be applied to image data that was shot earlier and stored in a storage medium, provided that this image data carries the distance between the photographing apparatus and the focal plane (i.e., the object distance) and the current focal length of the photographing apparatus at the time the corresponding image was shot. Specifically, in this case the object distance and the focal length need to be obtained when the image is shot and stored in association with the captured image; the subject is then marked in the stored image data, either manually or by other means (for example, color-based image analysis), and the position information of the subject is obtained accordingly. The position information of the subject may be the outline of the subject expressed in pixels, or the coordinate position, in the captured image, of a regular shape enclosing the subject. The regular shape may be set according to the circumstances and may be a rectangle, a circle, an ellipse, or a rhombus.
Considering that photographing apparatus may support multi-point focusing, it should be noted that the subject is not limited to one; there may be one or more subjects.
Next, in step S130, the corresponding relationship between the imaging size and the actual size of the subject is calculated from the obtained object distance and image distance. Preferably, the corresponding relationship between the imaging size and the actual size of the subject is one of the following: the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, or a concrete correspondence between the imaging size and the actual size of each subject.
Specifically, the corresponding relationship between the actual size and the imaging size of the subject, i.e., the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, is calculated using the formula object distance / image distance = object height / image height. Fig. 2 illustrates the triangle proportional relationship between image distance, object distance, image height, and object height. Alternatively, after the corresponding relationship between the actual size and the imaging size of the subject has been obtained, the actual size of a subject may be calculated from its concrete imaging size, and the imaging size and the actual size of that subject may then be provided together.
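A short numeric illustration of this formula (the figures below are invented for illustration): with an object distance of 2000 mm, an image distance of 50 mm, and a subject spanning 500 pixels on a sensor whose pixel pitch is 0.005 mm, the actual height follows directly.

```python
object_distance_mm = 2000.0   # distance from the photographing apparatus to the focal plane
image_distance_mm = 50.0      # current focal length, used as the image distance
pixel_pitch_mm = 0.005        # physical size of one sensor pixel (assumed)
subject_height_px = 500       # imaging size of the subject in pixels

image_height_mm = subject_height_px * pixel_pitch_mm      # 2.5 mm on the sensor
# object distance / image distance = object height / image height
object_height_mm = image_height_mm * object_distance_mm / image_distance_mm
print(object_height_mm)       # 100.0 mm, i.e. each pixel corresponds to 0.2 mm on the focal plane
```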
Next, in step S140, the position information of the subject and the corresponding relationship are provided in association with the captured image as the image data of the captured image. The image data of the captured image therefore includes not only the captured image itself but also the position information of the subject and the corresponding relationship between the actual size and the imaging size of the subject. The actual size of the subject can thus be obtained from the provided image data, which makes it possible to display the subject according to its actual size.
Preferably, in step S150, the distance between the photographing apparatus and the focal plane is also provided, together with the position information of the subject, the corresponding relationship, and the captured image, in association as the image data of the captured image. The image data of the captured image then includes not only the captured image itself but also the distance between the photographing apparatus and the focal plane, the position information of the subject, and the corresponding relationship between the actual size and the imaging size of the subject. Thus, not only can the actual size of the subject be obtained from the provided image data, but other subjects can also be combined into this image data to form combined image data.
The image data providing method 100 according to the embodiment of the present invention ends at step S199.
Although in the description above step S120 is executed between steps S110 and S130, the present invention is not limited to this. When the image data is provided at the time the image is shot as described above, steps S110 and S130 may be executed first and step S120 afterwards, or step S120 may be executed in parallel with steps S110 and S130 as shown in Fig. 1.
In addition, when the image is shot and stored together with the object distance and focal length of the photographing apparatus, steps S110 and S120 may be executed simultaneously to extract the object distance and focal length of the photographing apparatus and the position information of the subject in the captured image from the stored image data, and step S130 may then be executed.
As described above, the image data provided by the image data providing method 100 according to the embodiment of the present invention includes not only the captured image but also the position information of the subject and the corresponding relationship between the actual size and the imaging size of the subject.
Next, the image data presentation method 300 according to an embodiment of the present invention will be described with reference to Fig. 3. The image data carries the correspondence information between the imaging size and the actual size of the subject, and the image data presentation method 300 presents the subject according to its actual size.
In order to present the subject according to its actual size, the actual size of the display screen of the display device and the actual size of the subject must be determined; the scaling ratio of the pixels of the image is then determined, and the image is scaled and displayed according to this scaling ratio.
First, the image data presentation method according to the embodiment of the present invention starts at step S301.
In step S310, the actual size of the display screen of the display device and the current resolution of that display screen are obtained.
The actual size of the display screen of a display device is usually known; for example, ordinary display screens have to follow certain standards. Alternatively, when the actual size of the display screen is unknown, or when the display screen is a non-standard one, the actual size of the display screen can be measured.
In addition, a display device can often offer several resolutions; the user can select one and display images on the display device at the selected resolution.
In step S320, the corresponding relationship between the imaging size and the actual size of the subject in the image is obtained. As stated above, the image data carries the correspondence information between the imaging size and the actual size of the subject.
In step S330, the scaling ratio of the pixels of the image is determined from the actual size and the current resolution of the display screen and from the correspondence information between the imaging size and the actual size of the subject. Then, in step S340, the image is displayed according to the scaling ratio.
The image data presentation method according to the embodiment of the present invention ends at step S399.
First embodiment
A first embodiment of the image data presentation method shown in Fig. 3 will be described below with reference to Fig. 4.
First, in the image data presentation method 400, the actual size of the display screen of the display device and the current resolution of that display screen are obtained in step S410.
In step S420, the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane is obtained from the image data. Alternatively, the concrete correspondence between the imaging size and the actual size of each subject may be obtained from the image data in step S420, and this concrete correspondence may be used to derive the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane.
Then, in step S430, the correspondence between one pixel of the display screen and a physical length and/or height is calculated from the actual size and the current resolution of the display screen, and the scaling ratio of the pixels of the image is determined from these two correspondences.
Specifically, after the actual size of the display screen of the display device and the current resolution of that display screen have been obtained, the pixel pitch at the current resolution, i.e., the distance between two adjacent pixels, can be determined. The pixel pitch can be regarded as the correspondence between one pixel of the display screen and a physical length and/or height, i.e., the physical length and/or height represented by one pixel. For example, when the actual size of the display screen is (X, Y) and the current resolution is (W, H), the pixel pitch can be calculated as (X/W, Y/H), where X/W is the horizontal pixel pitch, Y/H is the vertical pixel pitch, and X/W may be equal to or different from Y/H. That is, at the current resolution of the display screen, one pixel corresponds to a physical length of X/W and a physical height of Y/H.
Consider that the correspondence between one pixel of the image and a physical length and/or height on the focal plane is: one pixel corresponds to a physical length of IX/IW and a physical height of IY/IH.
Then the scaling ratio of the pixels of the image is determined from these two correspondences. For example, suppose that at the current resolution one pixel of the display screen corresponds to a physical length and height of 0.1 mm, i.e., X/W = Y/H = 0.1 mm, while one pixel of the image corresponds to a physical length and height of 0.2 mm, i.e., IX/IW = IY/IH = 0.2 mm. The scaling ratio of the pixels of the image can therefore be determined as:
(IX/IW) / (X/W) = (IY/IH) / (Y/H) = 0.2 mm / 0.1 mm = 2    --- (1)
Finally, in step S440, the image is displayed according to the scaling ratio. For example, the pixels of the image may be interpolated to obtain four times (2 × 2) as many pixels as in the image, with one pixel of the newly obtained image data corresponding to one pixel of the display screen; or only part of the pixels of the display screen may be used for the display: only the odd-row odd-column pixels, only the odd-row even-column pixels, only the even-row odd-column pixels, or only the even-row even-column pixels.
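A compact sketch of the first embodiment's calculation, reusing the 0.1 mm screen pixel pitch and 0.2 mm-per-image-pixel figures above; the screen dimensions and the function name are illustrative assumptions.

```python
def scale_for_true_size(screen_size_mm, screen_resolution, image_mm_per_px):
    """Return the factor by which image pixels must be scaled for a 1:1 (actual size) display."""
    screen_w_mm, screen_h_mm = screen_size_mm
    res_w, res_h = screen_resolution
    # Correspondence between one screen pixel and physical length/height, i.e. the pixel pitch.
    pitch_x, pitch_y = screen_w_mm / res_w, screen_h_mm / res_h
    # Ratio of real length per image pixel to real length per screen pixel, as in formula (1).
    return image_mm_per_px[0] / pitch_x, image_mm_per_px[1] / pitch_y

# A 192 mm x 108 mm panel at 1920 x 1080 has a 0.1 mm pixel pitch, so an image pixel
# covering 0.2 mm on the focal plane must be enlarged by a factor of 2 in each direction.
print(scale_for_true_size((192.0, 108.0), (1920, 1080), (0.2, 0.2)))  # (2.0, 2.0)
```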
Alternatively, in step S440, the position information of the subject is obtained, and only the subject is displayed according to the scaling ratio, while the other parts of the image are not displayed.
Second embodiment
Next, a second embodiment of the image data presentation method shown in Fig. 3 will be described with reference to Fig. 5.
In the image data presentation method 500, the actual size of the display screen of the display device and the current resolution of that display screen are obtained in step S510.
In step S520, the actual size and the imaging size of the subject in the image are obtained. When the image data provides the concrete correspondence between the imaging size and the actual size of each subject, the actual size of the subject is obtained directly from the image data. Alternatively, when the image data provides the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, the imaging size of the subject in the captured image must additionally be obtained from the image data, and the actual size of the subject is then derived using that correspondence.
Then, in step S530, the correspondence between one pixel of the display screen and a physical length and/or height is calculated from the actual size and the current resolution of the display screen, the number of pixels of the display screen needed to display the subject at its actual size is determined, and the scaling ratio of the pixels of the image is then determined from this number of pixels and the imaging size of the subject.
As an example, the horizontal pixel pitch X/W and the vertical pixel pitch Y/H of the display screen are obtained in the same way as in the first embodiment.
Consider that the physical length of the subject is PX, its physical height is PY, and its imaging size corresponds to IX pixels horizontally and IY pixels vertically.
From the physical length PX and the physical height PY of the subject and the horizontal pixel pitch X/W and vertical pixel pitch Y/H of the display screen, the number of pixels of the display screen needed to display the subject at its actual size can be calculated, namely:
NX = PX / (X/W) = (PX × W) / X
NY = PY / (Y/H) = (PY × H) / Y
Further, the imaging size of the subject in the image, expressed in pixels as the imaging pixel number (IX, IY), is obtained. The scaling ratio of the pixels of the image is determined from the calculated number of pixels (NX, NY) needed to display the subject completely and the imaging pixel number (IX, IY) of the subject, namely:
RX=NX/IX
RY=NY/IY
For example, when the number of pixels (NX, NY) needed to display the subject is (1000, 2000) and the imaging pixel number (IX, IY) of the subject is (2000, 4000), the scaling ratio of the pixels of the image is RX = RY = 1/2.
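A short sketch of the second embodiment's arithmetic under the same assumed 0.1 mm-pitch screen: the subject's actual size fixes the number of screen pixels needed, and the imaging pixel count then gives the scaling ratio; the names and figures are illustrative.

```python
def scale_from_actual_size(screen_size_mm, screen_resolution, subject_size_mm, subject_size_px):
    """Scaling ratio so that the subject occupies its actual size on the screen."""
    screen_w_mm, screen_h_mm = screen_size_mm
    res_w, res_h = screen_resolution
    pitch_x, pitch_y = screen_w_mm / res_w, screen_h_mm / res_h  # mm per screen pixel

    # Screen pixels needed to show the subject at actual size: NX = PX / (X/W), NY = PY / (Y/H).
    nx = subject_size_mm[0] / pitch_x
    ny = subject_size_mm[1] / pitch_y

    # Ratio against the subject's imaging pixel count: RX = NX / IX, RY = NY / IY.
    return nx / subject_size_px[0], ny / subject_size_px[1]

# A 100 mm x 200 mm subject imaged over 2000 x 4000 pixels needs 1000 x 2000 pixels on a
# 0.1 mm-pitch screen, giving RX = RY = 1/2 as in the example above.
print(scale_from_actual_size((192.0, 108.0), (1920, 1080), (100.0, 200.0), (2000, 4000)))  # (0.5, 0.5)
```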
Next, in step S540, the image is displayed according to the scaling ratio. For example, the pixels of the image may be decimated to obtain a quarter (1 / (2 × 2)) of the number of pixels of the image, with one pixel of the newly obtained image data corresponding to one pixel of the display screen; or only part of the pixels of the image may be used: only the odd-row odd-column pixels, only the odd-row even-column pixels, only the even-row odd-column pixels, or only the even-row even-column pixels.
Alternatively, in step S540, the position information of the subject is obtained, and only the subject is displayed according to the scaling ratio, while the other parts of the image are not displayed.
Although intermediate results, for example the horizontal pixel pitch and vertical pixel pitch of the display screen, are given in the above description of the first and second embodiments for clarity, the present invention is not limited to this. Those skilled in the art will readily understand that the scaling ratio can be obtained directly from the actual size and the current resolution of the display screen together with either the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane or the concrete correspondence between the imaging size and the actual size of each subject carried in the image data.
Next, the image data combination method according to the present invention will be described with reference to Fig. 6 and Fig. 7.
Fig. 6 shows a first embodiment of the image data combination method according to the present invention. In this first embodiment, the image data combination method combines a subject of an image to be combined (hereinafter referred to as the second image) into a reference image (hereinafter referred to as the first image). The image data of the second image includes the position information of its subject and the corresponding relationship between the imaging size and the actual size of that subject, and the image data of the first image includes the corresponding relationship between the imaging size and the actual size of its subject.
The image data combination method 600 according to this first embodiment starts at step S601.
First, in step S610, the corresponding relationship between the imaging size and the actual size of the subject is obtained from the image data of the first image. As stated above, the corresponding relationship between the imaging size and the actual size of a subject can be one of the following: the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, or a concrete correspondence between the imaging size and the actual size of each subject.
In step S620, the image data of the subject is extracted from the image data of the second image, and the corresponding relationship between the imaging size and the actual size of that subject is obtained. As an example, the position information of the subject is obtained from the image data of the second image, and the image data of the subject is extracted accordingly.
In step S630, a scaling ratio for the extracted image data of the subject is determined from the corresponding relationship between the imaging size and the actual size of the subject in the reference image and the corresponding relationship between the imaging size and the actual size of the subject in the image to be combined. Then, in step S640, the extracted image data of the subject is modified according to the scaling ratio so that it satisfies the corresponding relationship between the imaging size and the actual size of the subject in the reference image.
Four cases are described below as examples.
First case: the corresponding relationship between the imaging size and the actual size of the subject in the reference image (the first image; the first correspondence) and the corresponding relationship between the imaging size and the actual size of the subject in the image to be combined (the second image; the second correspondence) are both the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane.
In this case, the second correspondence can be divided directly by the first correspondence to obtain a scaling ratio, and the extracted image data of the subject is adjusted according to this scaling ratio so that it satisfies the correspondence between the imaging size and the actual size of the subject in the first image. For example, when one pixel corresponds to 2 cm in the first correspondence and one pixel corresponds to 6 cm in the second correspondence, the scaling ratio is 3; that is, the extracted image data of the subject needs to be interpolated, specifically by inserting two pixels between every two pixels.
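The arithmetic of this first case with the 2 cm / 6 cm figures above, as a one-step sketch (a subject pixel covering 6 cm must become three reference-image pixels of 2 cm each):

```python
ref_cm_per_px = 2.0   # first correspondence: one reference-image pixel covers 2 cm
src_cm_per_px = 6.0   # second correspondence: one subject pixel covers 6 cm
scaling_ratio = src_cm_per_px / ref_cm_per_px
print(scaling_ratio)  # 3.0 -> enlarge the extracted subject 3x, e.g. by inserting two pixels between every two
```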
Second case: the corresponding relationship between the imaging size and the actual size of the subject in the reference image (the first image; the first correspondence) is the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, while the corresponding relationship between the imaging size and the actual size of the subject in the image to be combined (the second image; the second correspondence) is the concrete correspondence between the imaging size and the actual size of each subject.
In this case, the imaging size, i.e., the number of pixels, that the subject of the second image would have in the first image is calculated from the actual size of the subject of the second image and the correspondence between one pixel of the reference image and a physical length and/or height on the focal plane. The scaling ratio of the pixels of the subject's image can then be calculated from the imaging size of the subject in the second image and the calculated imaging size that the subject of the second image would have in the first image. Finally, the extracted image data of the subject is scaled according to this scaling ratio.
For example, if the imaging size of the subject in the second image is (1000, 2000) and its calculated imaging size in the first image is (500, 1000), the number of pixels of the extracted subject must be reduced to 1/4 of the original number. This can be done by using only part of the extracted image data of the subject: only the odd-row odd-column pixels, only the odd-row even-column pixels, only the even-row odd-column pixels, or only the even-row even-column pixels.
Conversely, if the imaging size of the subject in the second image is (500, 1000) and its calculated imaging size in the first image is (1000, 2000), the number of pixels of the extracted subject must be increased to 4 times the original number. This can be done by interpolating the pixels of the extracted subject to obtain four times as many pixels as were extracted, with one pixel of the newly obtained image data corresponding to one pixel of the first image.
Third case: the corresponding relationship between the imaging size and the actual size of the subject in the reference image (the first image; the first correspondence) and the corresponding relationship between the imaging size and the actual size of the subject in the image to be combined (the second image; the second correspondence) are both the concrete correspondence between the imaging size and the actual size of each subject.
In this case, the concrete correspondences can be converted into correspondences between one pixel of the captured image and a physical length and/or height on the focal plane, and the processing then follows the first case described above.
Alternatively, only the corresponding relationship between the imaging size and the actual size of the subject in the first image may be converted into the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane, and the processing then follows the second case.
Fourth case: the corresponding relationship between the imaging size and the actual size of the subject in the reference image (the first image; the first correspondence) is the concrete correspondence between the imaging size and the actual size of each subject, while the corresponding relationship between the imaging size and the actual size of the subject in the image to be combined (the second image; the second correspondence) is the correspondence between one pixel of the captured image and a physical length and/or height on the focal plane.
In this case, the actual size of the subject is obtained using the correspondence between the imaging size and the actual size of the subject contained in the image data of the second image.
Then, the imaging size that the subject of the second image would have in the reference image is calculated according to the corresponding relationship between the imaging size and the actual size of the subject in the first image. The processing can then follow the second case.
However, the present invention is not limited to these cases; those skilled in the art can convert between and combine these four cases as appropriate.
In step S650, the position at which the subject of the second image is to be placed in the first image is set; and in step S660, the modified image data of the subject of the second image is combined into the image data of the first image according to the set position.
Preferably, in step S670, the position information of the subject of the first image and of the subject newly incorporated into the first image, and the corresponding relationship between the imaging size and the actual size of the subject in the first image, are provided in association with the modified first image as modified image data.
The image data combination method 600 according to the embodiment of the present invention ends at step S699.
Next, a second embodiment of the image data combination method according to the present invention will be described with reference to Fig. 7.
Fig. 7 shows the second embodiment of the image data combination method according to an embodiment of the present invention. In this second embodiment, the image data combination method combines a subject of an image to be combined (the second image) into a reference image (the first image). The image data of the second image includes the position information of its subject and the corresponding relationship between the imaging size and the actual size of that subject, and the image data of the reference image includes the corresponding relationship between the imaging size and the actual size of its subject as well as the distance between the photographing apparatus that shot the reference image and its focal plane.
Because the image data of the reference image contains the distance between the photographing apparatus and the focal plane, the distance between the subject of another captured image and the photographing apparatus can be set, and the image data of the subject of that other captured image can then be combined into the image data of the reference image according to these two distances.
In the image data combination method 700 of Fig. 7, steps S710 to S730 and steps S770 to S780 are identical to steps S610 to S630 and steps S650 to S660 of Fig. 6, respectively, and their description is not repeated here.
In step S740, the distance between the photographing apparatus and its focal plane is obtained from the image data of the first image. In step S750, the distance between the subject of the second image and the photographing apparatus is set, and a projection coefficient is determined from the set distance and the distance between the photographing apparatus of the reference image and its focal plane.
Then, in step S760, a second scaling ratio is calculated from the projection coefficient and the first scaling ratio, and the extracted image data of the subject is modified according to the second scaling ratio so that it satisfies the corresponding relationship between the imaging size and the actual size of the subject in the reference image. For example, the second scaling ratio can be obtained by multiplying the projection coefficient by the first scaling ratio.
Preferably, in step S790, the distance between the photographing apparatus and its focal plane, the position information of the subject of the first image and of the subject newly incorporated into the first image, and the corresponding relationship between the imaging size and the actual size of the subject in the first image are provided in association with the modified first image as modified image data.
As an example, consider the following situation: the actual length of a first object is 1 meter and its distance from the photographing apparatus is 10 meters, while a second object with an actual length of 3 meters is located 12 meters from the photographing apparatus. When they are imaged with the photographing apparatus, there is an occlusion relationship between the first object and the second object in the captured image, and the ratio between the first object and the second object is no longer 1:3 but becomes 1.2:3. The same changed proportional relationship also arises when there is no occlusion relationship between the first object and the second object.
Further, consider the following situation: the second object is in the reference image, the distance between the photographing apparatus of the reference image and the focal plane is 12 meters, and the first object is to be placed at a position 10 meters from the photographing apparatus. The changed proportional relationship described above then arises. Therefore, in step S760, the first scaling ratio is corrected using the projection coefficient.
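A small numeric sketch of how such a projection coefficient could be applied with the 10 m / 12 m figures above; the particular formula used here (focal-plane distance divided by the set subject distance) is an illustrative assumption rather than the claimed definition.

```python
reference_focal_plane_m = 12.0   # distance stored with the reference image (camera to focal plane)
inserted_subject_m = 10.0        # distance set for the subject being combined in

# Closer objects image larger, so scale up by the ratio of the two distances (assumed form).
projection_coefficient = reference_focal_plane_m / inserted_subject_m   # 1.2

first_scaling = 1.0              # ratio obtained from the two size correspondences (steps S710 to S730)
second_scaling = projection_coefficient * first_scaling                 # step S760
print(second_scaling)            # 1.2 -> matches the 1:3 ratio becoming 1.2:3 in the example
```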
As described above, the image data providing method according to the present invention can be applied when an image is shot, or it can be applied after the image has been shot. When the image data providing method according to the present invention is applied after shooting, the distance between the photographing apparatus and the focal plane and the current focal length of the photographing apparatus at the time of shooting need to be obtained, and the obtained distance and current focal length need to be stored together with the captured image as the image data of the captured image.
As described above, with the image data presentation method according to the present invention, not only can the captured image be displayed according to the actual size of the subject, but the subject alone can also be displayed according to its actual size.
As described above, the image data combination method according to the present invention is particularly advantageous for matching clothes: for example, a jacket image can be extracted from the image data of a photographed jacket and combined with an image of photographed trousers according to the actual sizes of the jacket and the trousers, in order to preview the matching effect of the outfit. The image data combination method according to the present invention is also particularly advantageous for combining furniture arrangements.
It should be understood that the image data providing method, the image data combination method, and the image data presentation method according to the present invention can be implemented in various forms of hardware, software, firmware, special-purpose processors, or combinations thereof.
It should also be understood that the methods illustrated in the accompanying drawings are preferably implemented in software, so the actual connections between the functional blocks therein may differ depending on the manner in which the present invention is programmed. Given the description herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
Although some embodiments of the present invention have been described herein with reference to the accompanying drawings, it should be understood that the described embodiments are merely illustrative and not restrictive. Those skilled in the art will appreciate that changes in form and detail can be made to these exemplary embodiments without departing from the scope and spirit of the present invention as defined by the appended claims and their equivalents.

Claims (13)

CN201110080133.0A | 2011-03-31 | 2011-03-31 | Image data providing method, combination method thereof, and presentation method thereof | Active | CN102724398B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201110080133.0A | CN102724398B (en) | 2011-03-31 | 2011-03-31 | Image data providing method, combination method thereof, and presentation method thereof

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201110080133.0A | CN102724398B (en) | 2011-03-31 | 2011-03-31 | Image data providing method, combination method thereof, and presentation method thereof

Publications (2)

Publication Number | Publication Date
CN102724398A (en) | 2012-10-10
CN102724398B (en) | 2017-02-08

Family

ID=46950052

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201110080133.0A | Active | CN102724398B (en) | 2011-03-31 | 2011-03-31 | Image data providing method, combination method thereof, and presentation method thereof

Country Status (1)

Country | Link
CN (1) | CN102724398B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104252828B (en)* | 2013-06-29 | 2018-06-05 | 华为终端(东莞)有限公司 | Protect display methods, display device and the terminal device of eyesight
CN104735339B (en)* | 2013-12-23 | 2018-11-09 | 联想(北京)有限公司 | A kind of automatic adjusting method and electronic equipment
CN103644895B (en)* | 2013-12-26 | 2015-12-30 | 金陵科技学院 | A kind of digital camera coordinates the method for mapping of ancient architecture of measuring tool
CN104104929A (en)* | 2014-02-11 | 2014-10-15 | 中兴通讯股份有限公司 | Remote projection method, device and system
US9734553B1 (en)* | 2014-12-31 | 2017-08-15 | Ebay Inc. | Generating and displaying an actual sized interactive object
CN105141892B (en)* | 2015-08-03 | 2018-11-06 | 广州杰赛科技股份有限公司 | A kind of environment arrangement for detecting and its distance measuring method, detecting system
CN105635570B (en)* | 2015-12-24 | 2019-03-08 | Oppo广东移动通信有限公司 | Shooting preview method and system
CN115442516A (en)2019-12-252022-12-06华为技术有限公司Shooting method and terminal in long-focus scene
CN110927918A (en)* | 2019-12-27 | 2020-03-27 | 屏丽科技成都有限责任公司 | Long-stroke focusing lens rapid convergence focusing method and projector applying same
CN111526286B (en)* | 2020-04-20 | 2021-11-02 | 苏州智感电子科技有限公司 | Method and system for controlling motor motion and terminal equipment
CN114187615B (en)* | 2020-09-12 | 2025-05-13 | 陈玉冰 | A method for extracting fingerprint images
CN117354628A (en)* | 2023-10-25 | 2024-01-05 | 神力视界(深圳)文化科技有限公司 | Focus distance determination method, electronic equipment and computer storage media

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP0867690A1 (en)* | 1997-03-27 | 1998-09-30 | Nippon Telegraph And Telephone Corporation | Device and system for labeling sight images
EP1526727A1 (en)* | 2002-06-05 | 2005-04-27 | Seiko Epson Corporation | Digital camera and image processing device
CN101494735A (en)* | 2008-01-25 | 2009-07-29 | 索尼株式会社 | Imaging apparatus, imaging apparatus control method, and computer program
CN101902571A (en)* | 2009-05-27 | 2010-12-01 | 索尼公司 | Image pickup apparatus, electronic device, panoramic image recording method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP3466173B2 (en)* | 2000-07-24 | 2003-11-10 | 株式会社ソニー・コンピュータエンタテインメント | Image processing system, device, method and computer program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP0867690A1 (en)* | 1997-03-27 | 1998-09-30 | Nippon Telegraph And Telephone Corporation | Device and system for labeling sight images
EP1526727A1 (en)* | 2002-06-05 | 2005-04-27 | Seiko Epson Corporation | Digital camera and image processing device
CN1659876A (en)* | 2002-06-05 | 2005-08-24 | 精工爱普生株式会社 | Digital cameras and image processing equipment
CN101494735A (en)* | 2008-01-25 | 2009-07-29 | 索尼株式会社 | Imaging apparatus, imaging apparatus control method, and computer program
CN101902571A (en)* | 2009-05-27 | 2010-12-01 | 索尼公司 | Image pickup apparatus, electronic device, panoramic image recording method, and program

Also Published As

Publication number | Publication date
CN102724398A (en) | 2012-10-10

Similar Documents

Publication | Publication Date | Title
CN102724398B (en)Image data providing method, combination method thereof, and presentation method thereof
CN103973978B (en)It is a kind of to realize the method focused again and electronic equipment
US8410441B2 (en)Thermal imaging camera for taking thermographic images
US8760502B2 (en)Method for improving 3 dimensional effect and reducing visual fatigue and apparatus enabling the same
US20130335535A1 (en)Digital 3d camera using periodic illumination
EP3451285B1 (en)Distance measurement device for motion picture camera focus applications
CN101770145B (en) Method for estimating actual size of objects and object projector
US20100194902A1 (en)Method for high dynamic range imaging
JP2011147109A (en)Image capturing apparatus and image processing apparatus
JP5467993B2 (en) Image processing apparatus, compound-eye digital camera, and program
JP2005167517A (en)Image processor, calibration method thereof, and image processing program
JP2003187261A (en) Three-dimensional image generation device, three-dimensional image generation method, three-dimensional image processing device, three-dimensional image capturing and displaying system, three-dimensional image processing method, and storage medium
JP7196421B2 (en) Information processing device, information processing system, information processing method and program
JP2009210840A (en)Stereoscopic image display device and method, and program
US20100245544A1 (en)Imaging apparatus, imaging control method, and recording medium
US20100302234A1 (en)Method of establishing dof data of 3d image and system thereof
JP2011160421A (en)Method and apparatus for creating stereoscopic image, and program
JP5267708B2 (en) Image processing apparatus, imaging apparatus, image generation method, and program
JP2005063041A (en)Three-dimensional modeling apparatus, method, and program
KR101082545B1 (en)Mobile communication terminal had a function of transformation for a picture
JP2009258005A (en)Three-dimensional measuring device and three-dimensional measuring method
CN110087059B (en)Interactive auto-stereoscopic display method for real three-dimensional scene
JP5509986B2 (en) Image processing apparatus, image processing system, and image processing program
JP2005141655A (en)Three-dimensional modeling apparatus and three-dimensional modeling method
JP2013175821A (en)Image processing device, image processing method, and program

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
