Detailed description of the embodiments
Hereinafter, the image data providing method, the image data combination method, and the image data rendering method according to the present invention will be described in detail with reference to the accompanying drawings.
First, an image data providing method 100 according to an embodiment of the present invention will be described in detail with reference to Fig. 1.
The image data providing method according to an embodiment of the present invention starts at step S101.
In step S110, the distance between the image capture apparatus and the focal plane at the time of shooting (i.e., the object distance) and the current focal length of the capture apparatus (i.e., the image distance) are obtained.
Existing capture apparatuses such as cameras and video cameras need to focus on a target when shooting an image. The image data providing method according to an embodiment of the present invention makes use of this working characteristic of existing capture apparatuses: it obtains and records the distance between the capture apparatus and the focal plane as well as the current focal length of the capture apparatus, so that these can be used to calculate the actual size of the photographed subject.
As an example, the capture apparatus may calculate the distance between the capture apparatus and the photographed subject by emitting infrared rays or ultrasonic waves toward the subject and receiving the infrared rays or ultrasonic waves reflected back from the subject.
As another example, the capture apparatus may calculate the distance between the capture apparatus and the subject according to its current focal length and the imaging size of the subject in the captured image.
As yet another example, the capture apparatus may include multiple lenses, image the subject through each of the multiple lenses, and then calculate the distance to the subject from the positional relationship of the multiple lenses and the images they respectively capture. This multi-lens imaging technique is widely used in 3D image processing and is not elaborated here.
Although several methods of obtaining the distance between the capture apparatus and the focal plane are given above, the present invention is not limited thereto. Those skilled in the art will appreciate that this distance can obviously be obtained in other ways, for example by directly measuring the distance between the capture apparatus and the focal plane.
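Purely as an illustration of the ranging approaches described above, and not as part of the claimed method, the following minimal Python sketch estimates the subject distance by time of flight and by a two-lens (stereo) arrangement; the function names, the speed-of-sound constant, and the numeric values are assumptions introduced here for illustration only.

```python
# Illustrative only: two common ways a capture apparatus can estimate subject distance.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at 20 degrees C


def distance_from_echo(round_trip_time_s: float) -> float:
    """Time-of-flight ranging: an ultrasonic pulse travels to the subject and back."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0


def distance_from_stereo(focal_length_m: float, baseline_m: float, disparity_m: float) -> float:
    """Two-lens ranging: depth = focal length x lens baseline / disparity on the sensors."""
    return focal_length_m * baseline_m / disparity_m


if __name__ == "__main__":
    print(distance_from_echo(0.058))                 # ~9.95 m for a 58 ms echo
    print(distance_from_stereo(0.05, 0.10, 0.0005))  # 10 m for 50 mm lenses, 10 cm apart
```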
Then, in step S120, the positional information, in the captured image, of the subject located on the focal plane is extracted.
As mentioned above, existing capture apparatuses such as cameras and video cameras all need to focus on the subject as the target before shooting an image. During focusing, the subject either needs to be specified manually or is automatically selected and focused on by the capture apparatus. For example, in some capture apparatuses the photographer aims at the subject and shoots the image by manually adjusting the focal length; in some capture apparatuses with a touch screen, the photographer may select the subject by touching the screen, after which the capture apparatus focuses automatically; in still other capture apparatuses, the photographer may select a scene in the viewfinder, after which the capture apparatus automatically selects the subject and focuses on it. In any of these cases, focusing is performed on the premise that a subject has been selected.
The image data providing method according to an embodiment of the present invention makes use of exactly this characteristic of the capture apparatus to identify the subject during shooting. Specifically, during shooting the subject is taken as the focus region, and the positional information of this focus region in the captured image is recorded. In the case where the subject is selected and focused on via the touch screen, the positional information of the selected subject in the captured image is extracted.
In addition, the image data providing method according to an embodiment of the present invention can also be applied to image data that was previously captured and stored in a storage medium, provided that this image data carries information about the distance between the capture apparatus and the focal plane (i.e., the object distance) and the current focal length of the capture apparatus at the time the corresponding image was shot. Specifically, in this case the object distance and the focal length need to be obtained when the image is shot and stored in association with the captured image; the subject is then identified in the stored image data, either manually or by other means (for example, color discrimination on the image), and the positional information of the subject is obtained accordingly. The positional information of the subject may be the outline of the subject expressed in pixels, or the coordinate position of a regular shape surrounding the subject in the captured image. The regular shape may be chosen according to the specific situation and may be a rectangle, a circle, an ellipse, or a rhombus.
Considering that a capture apparatus may support multi-point focusing, it should be noted that the subject is not limited to a single one; there may be one or more subjects.
Next, in step S130, the correspondence between the imaging size of the subject and its actual size is calculated from the obtained object distance and image distance. Preferably, this correspondence is one of the following: the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, or the specific correspondence between the imaging size and the actual size of each individual subject.
Specifically, the correspondence between the actual size and the imaging size of the subject is calculated using the formula "object distance / image distance = object height / image height", which yields the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane. Fig. 2 illustrates the similar-triangle proportional relationship among the image distance, the object distance, the image height, and the object height. Alternatively, after the correspondence between the actual size and the imaging size of the subject has been obtained, the actual size of a subject may be calculated from its concrete imaging size, and both the imaging size and the actual size of this subject are provided.
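As a minimal sketch of the "object distance / image distance = object height / image height" calculation, and assuming for illustration that the sensor pixel pitch is known, the following Python example converts a per-pixel correspondence and an imaging size into an actual size; all names and numeric values are illustrative, not prescribed by the method.

```python
def actual_length_per_pixel(object_distance_m: float, image_distance_m: float,
                            sensor_pixel_pitch_m: float) -> float:
    """Actual length on the focal plane covered by one pixel of the captured image.

    From object_distance / image_distance = object_height / image_height, one sensor
    pixel of pitch p covers p * object_distance / image_distance on the focal plane.
    """
    return sensor_pixel_pitch_m * object_distance_m / image_distance_m


def actual_size_of_subject(imaging_size_px: int, object_distance_m: float,
                           image_distance_m: float, sensor_pixel_pitch_m: float) -> float:
    """Actual size of the subject derived from its imaging size in pixels."""
    return imaging_size_px * actual_length_per_pixel(
        object_distance_m, image_distance_m, sensor_pixel_pitch_m)


if __name__ == "__main__":
    # Object distance 2 m, image distance 50 mm, pixel pitch 5 um:
    # one pixel covers 0.2 mm on the focal plane, so a 1000-pixel-tall subject is 0.2 m tall.
    print(actual_length_per_pixel(2.0, 0.05, 5e-6))       # 0.0002 (m)
    print(actual_size_of_subject(1000, 2.0, 0.05, 5e-6))  # 0.2 (m)
```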
Next, in step S140, the positional information of the subject and the correspondence are provided, in association with the captured image, as the image data of the captured image. The image data of the captured image therefore includes not only the captured image itself but also the positional information of the subject and the correspondence between the actual size and the imaging size of the subject. This makes it possible to obtain the actual size of the subject from the provided image data and to display the subject according to its actual size.
Preferably, in step S150, the distance between the capture apparatus and the focal plane is also provided, together with the positional information of the subject and the correspondence, in association with the captured image as the image data of the captured image. The image data of the captured image then includes not only the captured image itself but also the distance between the capture apparatus and the focal plane, the positional information of the subject, and the correspondence between the actual size and the imaging size of the subject. As a result, not only can the actual size of the subject be obtained from the provided image data, but other subjects can also be combined into this image data to form combined image data.
The image data providing method 100 according to an embodiment of the present invention ends at step S199.
Although in the description above step S120 is performed between steps S110 and S130, the invention is not limited thereto. When the image data described above is provided at the time the image is shot, steps S110 and S130 may be performed first and step S120 afterwards, or step S120 may be performed in parallel with steps S110 and S130 as shown in Fig. 1.
In addition, when the image is shot and the captured image is stored together with the object distance and focal length of the capture apparatus, steps S110 and S120 may be performed simultaneously to extract, from the stored image data, the object distance and focal length of the capture apparatus as well as the positional information of the subject in the captured image, and step S130 is then performed.
As described above, the image data provided by the image data providing method 100 according to an embodiment of the present invention includes not only the captured image but also the positional information of the subject and the correspondence between the actual size and the imaging size of the subject.
Next, an image data rendering method 300 according to an embodiment of the present invention will be described with reference to Fig. 3. The image data carries the correspondence information between the imaging size and the actual size of the subject therein, and the image data rendering method 300 renders the subject according to its actual size.
In order to render the subject according to its actual size, the actual size of the display screen of the display device and the actual size of the subject must be determined; the scaling ratio of the pixels in the image is then determined, and the image is scaled for display according to this scaling ratio.
First, the image data rendering method according to an embodiment of the present invention starts at step S301.
In step S310, the actual size of the display screen of the display device and the current resolution of the display screen are obtained.
In general, the actual size of the display screen of a display device is known. For a conventional display screen, for example, the actual size follows a certain standard. Alternatively, when the actual size of the display screen is unknown, or when the display screen is a non-standard one, the actual size of the display screen can be measured.
In addition, a display device often offers multiple resolutions; the user may select one of them, and the display device then displays images at the selected resolution.
In step S320, the correspondence between the imaging size and the actual size of the subject in the image is obtained. As mentioned above, the image data carries this correspondence information.
In step S330, the scaling ratio of the pixels in the image is determined from the actual size and the current resolution of the display screen together with the correspondence information between the imaging size and the actual size of the subject. Then, in step S340, the image is displayed according to this scaling ratio.
The image data rendering method according to an embodiment of the present invention ends at step S399.
First embodiment
A first embodiment of the image data rendering method shown in Fig. 3 is described below with reference to Fig. 4.
First, in the image data rendering method 400, in step S410, the actual size of the display screen of the display device and the current resolution of the display screen are obtained.
In step S420, the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane is obtained from the image data. Alternatively, the specific correspondence between the imaging size and the actual size of each subject may be obtained from the image data in step S420, and this specific correspondence may be used to derive the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane.
Then, in step S430, the correspondence between one pixel of the display screen and the actual length and/or height is calculated from the actual size and the current resolution of the display screen, and the scaling ratio of the pixels in the image is determined from these two correspondences.
Specifically, once the actual size of the display screen of the display device and the current resolution of the display screen have been obtained, the pixel pitch of the display screen at the current resolution, i.e., the distance between two adjacent pixels, can be determined. This pixel pitch can be regarded as the correspondence between one pixel of the display screen and the actual length and/or height, i.e., the actual length and/or height represented by one pixel. For example, when the actual size of the display screen is (X, Y) and the current resolution is (W, H), the pixel pitch is (X/W, Y/H), where X/W is the horizontal pixel pitch, Y/H is the vertical pixel pitch, and X/W may be equal to or different from Y/H. In other words, at the current resolution of the display screen, the actual length corresponding to one pixel is X/W and the corresponding actual height is Y/H.
Consider that the correspondence between one pixel of the image and the actual length and/or height on the focal plane is such that the actual length corresponding to one pixel is IX/IW and the corresponding actual height is IY/IH.
The scaling ratio of the pixels in the image is then determined from these two correspondences. For example, if the actual length and height corresponding to one pixel of the display screen at the current resolution are 0.1 mm, i.e., X/W = Y/H = 0.1 mm, and the actual length and height corresponding to one pixel of the image are 0.2 mm, i.e., IX/IW = IY/IH = 0.2 mm, then the scaling ratio of the pixels in the image can be determined as (IX/IW)/(X/W) = (IY/IH)/(Y/H) = 0.2 mm / 0.1 mm = 2.
Finally, in step S440, the image is displayed according to the scaling ratio. For example, the pixels of the image are interpolated so as to obtain four times (2 × 2) the number of pixels of the image, and each pixel of the newly obtained image data corresponds to one pixel of the display screen; or only part of the pixels of the display screen are used for display: only odd-row odd-column pixels, only odd-row even-column pixels, only even-row odd-column pixels, or only even-row even-column pixels.
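The following Python sketch walks through the calculation of steps S410 to S440 with the 0.1 mm / 0.2 mm example above; the screen dimensions and function names are illustrative assumptions only.

```python
def display_pixel_pitch(screen_size_mm, resolution):
    """Pixel pitch (X/W, Y/H) of the display screen at its current resolution."""
    (x_mm, y_mm), (w_px, h_px) = screen_size_mm, resolution
    return x_mm / w_px, y_mm / h_px


def scaling_ratio(image_mm_per_pixel, display_mm_per_pixel):
    """Number of display pixels that one image pixel should occupy along one axis."""
    return image_mm_per_pixel / display_mm_per_pixel


if __name__ == "__main__":
    # A (hypothetical) 400 mm x 300 mm screen at 4000 x 3000 gives a 0.1 mm pixel pitch.
    pitch_x, pitch_y = display_pixel_pitch((400.0, 300.0), (4000, 3000))
    # One image pixel corresponds to 0.2 mm on the focal plane, so the ratio is 2:
    # each image pixel is shown as a 2 x 2 block (e.g. by interpolation).
    print(scaling_ratio(0.2, pitch_x), scaling_ratio(0.2, pitch_y))  # 2.0 2.0
```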
Alternatively, in step S440, the positional information of the subject is obtained, and only the subject is displayed according to the scaling ratio, while the other parts of the image are not displayed.
Second embodiment
Next, a second embodiment of the image data rendering method shown in Fig. 3 will be described with reference to Fig. 5.
In the image data rendering method 500, in step S510, the actual size of the display screen of the display device and the current resolution of the display screen are obtained.
In step S520, the actual size and the imaging size of the subject in the image are obtained. When the image data provides the specific correspondence between the imaging size and the actual size of each subject, the actual size of the subject is obtained directly from the image data. Alternatively, when the image data provides the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, the imaging size of the subject in the captured image is further obtained from the image data, and the actual size of the subject is then derived using that correspondence.
Then, in step S530, the correspondence between one pixel of the display screen and the actual length and/or height is calculated from the actual size and the current resolution of the display screen, the number of pixels needed on the display screen to show the subject at its actual size is determined, and the scaling ratio of the pixels in the image is then determined from this number of pixels and the imaging size of the subject.
As an example, the horizontal pixel pitch X/W and the vertical pixel pitch Y/H of the display screen are obtained in the same way as in the first embodiment.
Consider that the actual length of the subject is PX, its actual height is PY, and its imaging size corresponds to IX horizontal pixels and IY vertical pixels.
From the actual length PX and actual height PY of the subject and the horizontal pixel pitch X/W and vertical pixel pitch Y/H of the display screen, the number of display pixels needed to show the subject at its actual size can be calculated as:
NX = PX / (X/W) = (PX × W) / X
NY = PY / (Y/H) = (PY × H) / Y
Further, the imaging size of the subject in the image is obtained, expressed in pixels as the imaging pixel count (IX, IY). The number of pixels (NX, NY) needed to display the subject at full size and the imaging pixel count (IX, IY) of the subject are then used to determine the scaling ratio of the pixels in the image:
RX = NX / IX
RY = NY / IY
For example, if the number of pixels (NX, NY) needed for the subject is (1000, 2000) and the imaging pixel count (IX, IY) of the subject is (2000, 4000), the scaling ratio of the pixels in the image is RX = RY = 1/2.
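As a minimal sketch of the second embodiment's calculation (steps S520 to S530), using the numbers of the example above; the pixel pitch and subject dimensions are illustrative assumptions.

```python
def required_display_pixels(actual_size_mm, pixel_pitch_mm):
    """(NX, NY): display pixels needed to show the subject at its actual size."""
    (px_mm, py_mm), (pitch_x, pitch_y) = actual_size_mm, pixel_pitch_mm
    return px_mm / pitch_x, py_mm / pitch_y


def scaling_ratios(required_px, imaging_px):
    """(RX, RY) = (NX / IX, NY / IY), as in the formulas above."""
    (nx, ny), (ix, iy) = required_px, imaging_px
    return nx / ix, ny / iy


if __name__ == "__main__":
    # A 100 mm x 200 mm subject on a 0.1 mm-pitch screen needs (1000, 2000) pixels;
    # it was imaged with (2000, 4000) pixels, so the scaling ratio is RX = RY = 1/2.
    nx_ny = required_display_pixels((100.0, 200.0), (0.1, 0.1))
    print(scaling_ratios(nx_ny, (2000, 4000)))  # (0.5, 0.5)
```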
Next, in step S540, the image is displayed according to the scaling ratio. For example, the pixels of the image are decimated to obtain a quarter (1/(2 × 2)) of the number of pixels of the image, and each pixel of the newly obtained image data corresponds to one pixel of the display screen; or only part of the pixels of the image are used: only odd-row odd-column pixels, only odd-row even-column pixels, only even-row odd-column pixels, or only even-row even-column pixels.
Alternatively, in step S540, the positional information of the subject is obtained, and only the subject is displayed according to the scaling ratio, while the other parts of the image are not displayed.
Although intermediate results, such as the horizontal and vertical pixel pitches of the display screen, are given in the above description of the first and second embodiments for clarity, the invention is not limited thereto. Those skilled in the art will readily appreciate that the scaling ratio can be obtained directly from the actual size and current resolution of the display screen together with either the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, or the specific correspondence between the imaging size and the actual size of each subject contained in the image data.
Next, the image data combination method according to the present invention will be described with reference to Fig. 6 and Fig. 7.
Fig. 6 shows a first embodiment of the image data combination method according to the present invention. In this first embodiment, the image data combination method combines the subject of an image to be combined (hereinafter referred to as the second image) into a reference image (hereinafter referred to as the first image). The image data of the second image includes the positional information of its subject and the correspondence between the imaging size and the actual size of that subject, and the image data of the first image includes the correspondence between the imaging size and the actual size of its subject.
The image data combination method 600 according to this first embodiment starts at step S601.
First, in step S610, the correspondence between the imaging size and the actual size of the subject is obtained from the image data of the first image. As mentioned above, this correspondence may be one of the following: the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, or the specific correspondence between the imaging size and the actual size of each subject.
In step S620, the image data of the subject is extracted from the image data of the second image, and the correspondence between the imaging size and the actual size of that subject is obtained. As an example, the positional information of the subject is obtained from the image data of the second image, and the image data of the subject is extracted accordingly.
In step S630, a scaling ratio for the extracted image data of the subject is determined from the correspondence between the imaging size and the actual size of the subject in the reference image and the correspondence between the imaging size and the actual size of the subject in the image to be combined. Then, in step S640, the extracted image data of the subject is modified according to this scaling ratio so that it satisfies the correspondence between the imaging size and the actual size of the subject in the reference image.
Four cases are described below by way of example.
First case: the correspondence (first correspondence) between the imaging size and the actual size of the subject in the reference image (the first image) and the correspondence (second correspondence) between the imaging size and the actual size of the subject in the image to be combined (the second image) are both the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane.
In this case, the second correspondence can simply be divided by the first correspondence to obtain a scaling ratio, and the extracted image data of the subject is adjusted according to this scaling ratio so that it satisfies the correspondence between the imaging size and the actual size of the subject in the first image. For example, when one pixel corresponds to 2 cm in the first correspondence and one pixel corresponds to 6 cm in the second correspondence, the scaling ratio is 3, i.e., the extracted image data of the subject needs to be interpolated; specifically, two pixels are inserted between every two pixels.
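A one-line Python sketch of the first case, with the 2 cm / 6 cm values of the example above (the function name is an illustrative assumption):

```python
def combination_scale(ref_cm_per_pixel: float, src_cm_per_pixel: float) -> float:
    """Scale factor applied to the extracted subject pixels so that they satisfy the
    per-pixel correspondence of the reference (first) image."""
    return src_cm_per_pixel / ref_cm_per_pixel


# 1 pixel = 2 cm in the reference image, 1 pixel = 6 cm in the image to be combined:
# the extracted subject must be enlarged 3x, e.g. by inserting two pixels between
# every two existing pixels.
print(combination_scale(2.0, 6.0))  # 3.0
```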
Second case: the correspondence (first correspondence) between the imaging size and the actual size of the subject in the reference image (the first image) is the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, while the correspondence (second correspondence) between the imaging size and the actual size of the subject in the image to be combined (the second image) is the specific correspondence between the imaging size and the actual size of each subject.
In this case, the imaging size, i.e., the number of pixels, that the subject of the second image should have in the first image is calculated from the actual size of the subject in the second image and the correspondence between one pixel of the reference image and the actual length and/or height on the focal plane. The scaling ratio of the pixels of the subject image can then be calculated from the imaging size of the subject in the second image and the calculated imaging size of that subject in the first image. Finally, the extracted image data of the subject is scaled according to this scaling ratio.
For example, if the imaging size of the subject in the second image is (1000, 2000) and the calculated imaging size of that subject in the first image is (500, 1000), the pixel count of the extracted subject needs to be reduced to 1/4 of the original. This can be done by using only part of the extracted image data of the subject: only odd-row odd-column pixels, only odd-row even-column pixels, only even-row odd-column pixels, or only even-row even-column pixels.
Conversely, if the imaging size of the subject in the second image is (500, 1000) and the calculated imaging size of that subject in the first image is (1000, 2000), the pixel count of the extracted subject needs to be increased to four times the original. This can be done by interpolating the pixels of the extracted subject to obtain four times the number of pixels of the extracted subject, each pixel of the newly obtained image data corresponding to one pixel of the first image.
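A minimal Python sketch of the second case, under the illustrative assumption of a 0.2 cm-per-pixel correspondence in the reference image; the names and values are not prescribed by the method.

```python
def imaging_size_in_reference(actual_size_cm, ref_cm_per_pixel):
    """Pixels the subject should occupy in the reference image, from its actual size."""
    return tuple(size_cm / ref_cm_per_pixel for size_cm in actual_size_cm)


def resample_ratios(target_px, source_px):
    """Per-axis ratio between the target pixel count and the extracted pixel count."""
    return tuple(t / s for t, s in zip(target_px, source_px))


if __name__ == "__main__":
    # A 100 cm x 200 cm subject at 0.2 cm per pixel should occupy (500, 1000) pixels
    # in the reference image; it was extracted with (1000, 2000) pixels, so only one
    # pixel in four is kept (e.g. odd rows and odd columns only).
    target = imaging_size_in_reference((100.0, 200.0), 0.2)
    print(resample_ratios(target, (1000, 2000)))  # (0.5, 0.5)
```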
Third case: the correspondence (first correspondence) between the imaging size and the actual size of the subject in the reference image (the first image) and the correspondence (second correspondence) between the imaging size and the actual size of the subject in the image to be combined (the second image) are both the specific correspondence between the imaging size and the actual size of each subject.
In this case, each specific correspondence can be converted into a correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, after which the processing of the first case above is applied.
Alternatively, in this case, only the correspondence between the imaging size and the actual size of the subject in the first image may be converted into the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane, after which the processing of the second case is applied.
Fourth case: the correspondence (first correspondence) between the imaging size and the actual size of the subject in the reference image (the first image) is the specific correspondence between the imaging size and the actual size of each subject, while the correspondence (second correspondence) between the imaging size and the actual size of the subject in the image to be combined (the second image) is the correspondence between one pixel of the captured image and the actual length and/or height on the focal plane.
In this case, the actual size of the subject is obtained using the correspondence between the imaging size and the actual size of the subject contained in the image data of the second image.
Then, the imaging size that the subject of the second image should have in the reference image is calculated according to the correspondence between the imaging size and the actual size of the subject in the first image. The processing then continues as in the second case.
The invention is, however, not limited to these cases; those skilled in the art can make appropriate conversions and combinations among the four cases.
In step S650, the position at which the subject of the second image is to be added into the first image is set; and in step S660, the image data of the subject of the modified second image is combined into the image data of the first image according to the set position.
Preferably, in step S670, the positional information of the subject of the first image and of the subject newly combined into the first image, together with the correspondence between the imaging size and the actual size of the subject in the first image, are provided in association with the modified first image as modified image data.
The image data combination method 600 according to an embodiment of the present invention ends at step S699.
Next, a second embodiment of the image data combination method according to the present invention will be described with reference to Fig. 7.
Fig. 7 shows the second embodiment of the image data combination method according to an embodiment of the present invention. In this second embodiment, the image data combination method combines the subject of an image to be combined (the second image) into a reference image (the first image). The image data of the second image includes the positional information of its subject and the correspondence between the imaging size and the actual size of that subject, and the image data of the reference image includes the correspondence between the imaging size and the actual size of its subject as well as information about the distance between the capture apparatus that shot the reference image and its focal plane.
Because the image data of the reference image contains the distance between the capture apparatus and the focal plane, a distance between the capture apparatus and a subject in another captured image can be set, and the image data of the subject of that other image can then be combined into the image data of the reference image in accordance with these two distances.
In the image data combination method 700 of Fig. 7, steps S710 to S730 and steps S770 to S780 are the same as steps S610 to S630 and steps S650 to S660 of Fig. 6, and are not repeated here.
In step S740, the distance between the capture apparatus and its focal plane is obtained from the image data of the first image. In step S750, the distance between the subject of the second image and the capture apparatus is set, and a projection coefficient is determined from the set distance and the distance, obtained in step S740, between the capture apparatus and its focal plane.
Then, in step S760, a second scaling ratio is calculated using the projection coefficient and the first scaling ratio, and the extracted image data of the subject is modified according to the second scaling ratio so that it satisfies the correspondence between the imaging size and the actual size of the subject in the reference image. For example, the projection coefficient may be multiplied by the first scaling ratio to obtain the second scaling ratio.
Preferably, in step S790, the distance between the capture apparatus and its focal plane, the positional information of the subject of the first image and of the subject newly combined into the first image, and the correspondence between the imaging size and the actual size of the subject in the first image are provided in association with the modified first image as modified image data.
As an example, consider the following situation: the actual length of a first object is 1 meter and its distance from the capture apparatus is 10 meters, while a second object, whose actual length is 3 meters, is located 12 meters from the capture apparatus. When they are imaged by the capture apparatus, an occlusion relationship exists between the first object and the second object in the captured image, and the ratio of the first object to the second object is no longer 1:3 but becomes 1.2:3. The same changed proportional relationship also arises when there is no occlusion relationship between the first object and the second object.
Further, consider the following situation: the second object is in the reference image, the distance between the capture apparatus of the reference image and the focal plane is 12 meters, and the first object is to be placed at a position 10 meters from the capture apparatus. In this situation, the changed proportional relationship described above will occur. Therefore, in step S760, the first scaling ratio is corrected using the projection coefficient.
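As a minimal Python sketch of how the projection coefficient of steps S750 to S760 might be applied, using the 12 m / 10 m example above; the coefficient formula, the assumed first scaling ratio, and the multiplication rule follow the "for example" wording of step S760 and the numbers of the example, and are illustrative assumptions only.

```python
def projection_coefficient(reference_focal_distance_m: float,
                           placed_subject_distance_m: float) -> float:
    """Factor by which the inserted subject appears larger (or smaller) because it is
    placed nearer to (or farther from) the capture apparatus than the focal plane of
    the reference image."""
    return reference_focal_distance_m / placed_subject_distance_m


if __name__ == "__main__":
    coeff = projection_coefficient(12.0, 10.0)   # 1.2, as in the 1.2 : 3 example above
    first_scaling = 0.5                          # assumed value obtained as in steps S710-S730
    second_scaling = coeff * first_scaling       # e.g. multiply, as suggested in step S760
    print(coeff, second_scaling)                 # 1.2 0.6
```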
As described above, the image data providing method according to the present invention can be applied while an image is being shot, or after the image has been shot. When the image data providing method according to the present invention is applied after the image has been shot, the distance between the capture apparatus and the focal plane and the current focal length of the capture apparatus need to be obtained at the time of shooting, and the obtained distance and focal length are stored together with the captured image as the image data of the captured image.
As described above, with the image data rendering method according to the present invention, not only can the captured image be displayed according to the actual size of the photographed object, but the photographed object alone can also be displayed according to its actual size.
As described above, the image data combination method according to the present invention is particularly advantageous for matching and combining clothing, for example, extracting a jacket image from the image data of a photographed jacket and combining it, according to the actual sizes of the jacket and the trousers, with an image in which trousers were photographed, in order to preview the matching effect of the clothing. The image data combination method according to the present invention is also particularly advantageous for combining furniture arrangements.
It should be understood that the image data providing method, the image data combination method, and the image data rendering method according to the present invention can be implemented in various forms of hardware, software, firmware, special-purpose processors, or combinations thereof.
It should also be understood that the methods illustrated in the accompanying drawings are preferably implemented in software, so that the actual connections between the functional blocks therein may differ depending on the manner in which the present invention is programmed. Given the description herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
Although some embodiments of the present invention have been described herein with reference to the accompanying drawings, it should be understood that the described embodiments are merely illustrative and not restrictive. Those skilled in the art will appreciate that various changes in form and detail may be made to these exemplary embodiments without departing from the scope and spirit of the present invention as defined by the appended claims and their equivalents.