Summary of the Invention
In view of the above, it is necessary to provide a device for fusing virtual reality with a real scene, so that the real scene can be incorporated during a virtual reality process, achieving an effect of fusing the virtual with the real, promoting human-computer interaction, and improving the user experience.
A device for fusing virtual reality with a real scene, comprising:
an obtaining module, configured to obtain image information inside a virtual reality device and generate a virtual reality scene;
an acquisition module, configured to obtain real target scene information acquired by a 3D camera;
a fusion module, configured to generate a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the obtaining module is specifically configured to:
read, analyze, and recognize the image inside the virtual reality device, and generate different virtual reality scenes using the recognition result.
In one of the embodiments, the obtaining module includes:
a reading unit, configured to read the image inside the virtual reality device;
an analysis unit, configured to perform data analysis on the read image to obtain the feature points of the image;
a comparison unit, configured to compare the obtained image feature points with the images in a database to obtain a recognition result;
a generation unit, configured to generate different virtual reality scenes using the recognition result.
In one of the embodiments, the acquisition module includes:
a tracking unit, configured to track changes in the line of sight of the human eye;
an adjustment unit, configured to adjust the direction of the 3D camera according to the change in the line of sight of the human eye, so that the direction of the 3D camera is consistent with the direction of the line of sight after the change;
a collection unit, configured to obtain the real target scene information collected by the 3D camera in real time according to the adjusted direction.
In one of the embodiments, the fusion module includes:
an initial velocity assignment unit, configured to assign an initial velocity vector to each pixel in the image to form an image motion field;
a dynamic analysis unit, configured to perform dynamic analysis on the image according to the velocity vector characteristics of each pixel;
a judging unit, configured to judge whether there is a moving object in the image: if there is no moving object in the image, the optical flow vectors vary continuously over the entire image region; if there is a moving object in the image, there is relative motion between the real target scene and the image background, and the velocity vector formed by the moving object necessarily differs from the velocity vector of the neighborhood background, so that the moving object and its position can be detected;
an image position obtaining unit, configured to obtain the new positions of the image feature points;
a calculation unit, configured to calculate the translation, rotation, and scaling vectors of the object in three-dimensional space according to the obtained new positions and the original positions of the image feature points, based on the physical parameters of the 3D camera;
a fusion unit, configured to apply the obtained translation, rotation, and scaling vectors to the virtual reality scene, thereby completing the fusion of the virtual reality scene with the real target scene.
The above embodiment provides a device for fusing virtual reality with a real scene, including: an obtaining module, configured to obtain image information inside the virtual reality device and generate a virtual reality scene; an acquisition module, configured to obtain real target scene information acquired by a 3D camera; and a fusion module, configured to generate a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. In this way, the real scene can be incorporated during the virtual reality process, achieving an effect of fusing the virtual with the real, promoting human-computer interaction, and improving the user experience.
Detailed Description of the Embodiments
In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein merely illustrate the present invention and are not intended to limit it.
Unless the context clearly indicates otherwise, the elements and components in the description of the present invention may be present in single form or in multiple form, and the present invention is not limited in this respect. Although the steps in the present invention are arranged with labels, the labels are not intended to limit the order of the steps; unless the order of the steps is expressly stated, or the execution of a certain step requires other steps as a basis, the relative order of the steps is adjustable. It is to be understood that the term "and/or" used herein relates to and covers any and all possible combinations of one or more of the associated listed items.
It should be noted that the real scene information includes the ambient environment information captured in real time by the 3D camera. For example, the left and right cameras respectively capture real scene image sequences in real time according to the directions of the lines of sight of the user's left and right eyes. At a certain moment t, one image can be obtained from the image sequence provided by the left camera as the left image, and one image can be obtained from the image sequence provided by the right camera as the right image, where the left image simulates the content seen by the user's left eye, and the right image simulates the content seen by the user's right eye. The virtual reality scene information includes the image information of a virtual reality model, for example, the left view and the right view of a virtual reality scene model.
In the embodiments of the present invention, an augmented reality scene refers to a scene in which the real scene information is presented using augmented reality technology, and a virtual reality scene refers to a scene in which the virtual reality scene information is presented using virtual reality technology.
In the embodiments of the present invention, the virtual reality device may be a smart wearable device, and the smart wearable device may include a head-mounted smart device possessing AR and VR functions, for example, smart glasses or a helmet.
In one embodiment, as shown in Fig. 1, a device for fusing virtual reality with a real scene includes:
an obtaining module 10, configured to obtain image information inside the virtual reality device and generate a virtual reality scene;
an acquisition module 20, configured to obtain real target scene information acquired by a 3D camera;
a fusion module 30, configured to generate a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene.
In one of the embodiments, the obtaining module is specifically configured to:
read, analyze, and recognize the image inside the virtual reality device, and generate different virtual reality scenes using the recognition result.
In one of the embodiments, as shown in Fig. 2, the obtaining module 10 includes:
a reading unit 101, configured to read the image inside the virtual reality device;
an analysis unit 102, configured to perform data analysis on the read image to obtain the feature points of the image;
a comparison unit 103, configured to compare the obtained image feature points with the images in a database to obtain a recognition result;
a generation unit 104, configured to generate different virtual reality scenes using the recognition result.
Specifically, after the system is started and initialized, the system reads, through the reading unit, a specified image accessed in the virtual reality device. The image files accessed in the virtual reality device are photos taken by the user or pictures obtained through other channels; these photos and pictures are stored in an image database in the virtual reality device as a source from which various images can subsequently be selected as needed.
The analysis unit may first unify the resolution of the image files by compressing their resolution to a lower value, for example 320*240. After the resolution adjustment, the image files need to undergo format conversion: the color format of the image is converted to grayscale. For the converted image, the points where the two-dimensional image brightness changes sharply, or the points of maximum curvature on the image edge curves, are analyzed as the corner points of the image, and the analyzed corner features are used as the image feature points.
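The grayscale conversion and corner-based feature extraction described above can be illustrated with a minimal NumPy sketch. This is not the patent's implementation; the Harris-style response used here is one standard way to score "points where brightness changes sharply in two directions", and the function names, thresholds, and the synthetic test image are all illustrative assumptions.

```python
import numpy as np

def to_grayscale(rgb):
    # Convert an H x W x 3 color image to grayscale (standard luma weights).
    return rgb @ np.array([0.299, 0.587, 0.114])

def harris_corners(gray, k=0.04, threshold=0.1):
    # Image gradients via central finite differences (axis 0 = rows = y).
    Iy, Ix = np.gradient(gray.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box sum of a structure-tensor entry.
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    # Harris response: det(M) - k * trace(M)^2; large only at corner points.
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
    # Keep points whose response exceeds a fraction of the maximum response.
    return np.argwhere(R > threshold * R.max())

# Synthetic image with one bright square; detected points cluster at its corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
corners = harris_corners(img)
```

The returned array lists (row, column) feature-point positions, which would then be handed to the comparison unit for matching against the database images.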
The comparison unit may use local random binary features, computing separately the description information of the feature points obtained above and the feature description information of the images in the database. The correspondence between the two images is judged from the description information of each corner point; the outliers that are incorrectly matched between the two pictures are removed, and the inliers that are correctly matched are retained. When the number of correctly matched feature points exceeds a set threshold, the recognition is judged successful and the next step is entered; if the recognition is unsuccessful, the picture is processed again in a loop until the recognition succeeds.
The generation unit takes the target number identified from the result of the comparison unit, retrieves the corresponding virtual content from the database according to the number, and generates the virtual reality scene.
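The matching-and-threshold step of the comparison unit can be sketched as follows, under stated assumptions: binary descriptors are modelled as rows of 0/1 bits, matching is brute-force nearest neighbor under Hamming distance, and `max_dist`, `min_inliers`, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=10):
    # Brute-force match two sets of binary descriptors (rows of 0/1 bits).
    # A pair matches when its Hamming distance is below max_dist; everything
    # else is discarded as an outlier.
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != da, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

def recognized(matches, min_inliers=3):
    # Recognition succeeds when enough correctly matched points remain.
    return len(matches) >= min_inliers

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(5, 64))   # descriptors of one database image
query = db.copy()
query[:, :3] ^= 1                       # flip 3 bits per descriptor: mild noise
m = hamming_match(query, db)
```

When `recognized(m)` is true, the generation unit would use the matched image's number to retrieve the corresponding virtual content; otherwise the loop described above repeats on a new picture.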
In one of the embodiments, as shown in Fig. 3, the acquisition module 20 includes:
a tracking unit 201, configured to track changes in the line of sight of the human eye;
an adjustment unit 202, configured to adjust the direction of the 3D camera according to the change in the line of sight of the human eye, so that the direction of the 3D camera is consistent with the direction of the line of sight after the change;
a collection unit 203, configured to obtain the real target scene information collected by the 3D camera in real time according to the adjusted direction.
In the embodiments of the present invention, the acquisition module specifically includes a tracking unit, an adjustment unit, and a collection unit. The tracking unit tracks changes in the line of sight of the human eye; the adjustment unit adjusts the direction of the two cameras of the 3D camera according to the change in the line of sight, so that the direction of the two cameras is consistent with the direction of the line of sight after the change; and the collection unit obtains the real scene information collected by the two cameras in real time according to the adjusted direction. In order for the two cameras to simulate the human eyes in capturing real scene information, the cameras need to collect the real scene information according to the direction of the human line of sight. In order to obtain the change in the line of sight of the human eye, an eye-tracking module can be installed inside the VR helmet to track the change in the line of sight. In order to allow the two cameras to better simulate the scene seen by the two eyes, the processor of the smart wearable device, such as the VR helmet, needs to adjust the viewing angles of the left and right cameras respectively according to the line-of-sight change parameters of the two eyes. The pictures acquired in real time by the two cameras and presented to the left and right eyes respectively can then reproduce the viewing effect of the human eye. Specifically, eye-tracking techniques in the prior art may be used, for example, tracking according to changes in the features of the eyeball and its periphery, tracking according to changes in the iris angle, or actively projecting beams such as infrared onto the iris and extracting features to determine the change in the line of sight. Of course, the embodiments of the present invention are not limited thereto; under the technical concept of the present invention, those skilled in the art may use any feasible technique to track the change in the line of sight of the human eye and then adjust the collection directions of the left and right cameras simulating the human eyes, so as to collect real scene information in real time.
In one of the embodiments, the fusion module includes:
an initial velocity assignment unit, configured to assign an initial velocity vector to each pixel in the image to form an image motion field;
a dynamic analysis unit, configured to perform dynamic analysis on the image according to the velocity vector characteristics of each pixel;
a judging unit, configured to judge whether there is a moving object in the image: if there is no moving object in the image, the optical flow vectors vary continuously over the entire image region; if there is a moving object in the image, there is relative motion between the real target scene and the image background, and the velocity vector formed by the moving object necessarily differs from the velocity vector of the neighborhood background, so that the moving object and its position can be detected;
an image position obtaining unit, configured to obtain the new positions of the image feature points;
a calculation unit, configured to calculate the translation, rotation, and scaling vectors of the object in three-dimensional space according to the obtained new positions and the original positions of the image feature points, based on the physical parameters of the 3D camera;
a fusion unit, configured to apply the obtained translation, rotation, and scaling vectors to the virtual reality scene, thereby completing the fusion of the virtual reality scene with the real target scene.
Specifically, the initial velocity assignment unit assigns an initial velocity vector to each pixel in the image so that the pixels form a scene image motion field. At a particular moment of operation, the points on the image correspond one-to-one with the points on the three-dimensional object, and this correspondence can be obtained through the projection relation. The dynamic analysis unit reads the vector characteristics of each pixel and performs dynamic analysis on the image. The judging unit judges whether there is an object in motion in the image: if there is no moving object in the image, the optical flow vectors vary continuously over the entire image region; if there is a moving object in the image, there is relative motion between the target and the image background, and the velocity vector formed by the moving object necessarily differs from the velocity vector of the neighborhood background, so that the moving object and its position can be detected. The image position obtaining unit then obtains the new positions of the scene image feature points.
After the preparations of converting the still image into virtual content and of tracking the dynamic real scene are completed, the recognized virtual content is placed at the spatial positions of the feature points tracked in the camera device space, so that the virtual content is fused with the real scene. The calculation unit calculates the translation, rotation, and scaling vectors of the object in three-dimensional image space according to the new positions and the original positions of the scene image feature points, based on the physical parameters of the camera; the fusion unit applies the calculated translation, rotation, and scaling vectors of the object in three-dimensional space to the virtual content, thereby achieving the complete fusion of the virtual content with the real scene.
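The moving-object test performed by the judging unit can be sketched in NumPy. This is a simplified illustration under stated assumptions: the optical flow field is given as an H x W x 2 array of per-pixel velocity vectors, the background motion is estimated as the per-component median, and the deviation threshold is an illustrative value rather than anything specified in the patent.

```python
import numpy as np

def detect_moving_object(flow):
    # flow: H x W x 2 field of per-pixel velocity vectors (optical flow).
    # Background pixels share a common motion; a pixel whose velocity
    # deviates from the background velocity belongs to a moving object.
    background = np.median(flow.reshape(-1, 2), axis=0)
    deviation = np.linalg.norm(flow - background, axis=2)
    mask = deviation > 0.5            # illustrative threshold
    if not mask.any():
        return None                   # flow varies continuously: no moving object
    ys, xs = np.nonzero(mask)
    # Report the moving object's position as a bounding box.
    return (ys.min(), xs.min(), ys.max(), xs.max())

# Synthetic flow: the whole scene drifts right by 1 px, while a small
# patch (the moving object) moves down by 3 px instead.
flow = np.tile([1.0, 0.0], (40, 40, 1))
flow[10:15, 20:26] = [0.0, 3.0]
box = detect_moving_object(flow)
```

The detected region is exactly where the object's velocity vector differs from the neighborhood background, matching the criterion in the description above; the feature points inside it would then be passed to the calculation unit.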
In the present embodiment, a single picture can be used as the input source, and recognizing that picture triggers the virtual content; meanwhile, by using the scene feature tracking technique, the virtual content is placed in the user's real environment, so that the effect of augmented reality is achieved, the limitation that virtual content can only be triggered by a prepared feature image is removed, and the development of the industry is promoted.
In another embodiment of the present invention, the fusion module may specifically include:
a first superposition unit, configured to superpose the left image shot by the left camera with the left view of the virtual scene to synthesize the left image of the fused scene;
a second superposition unit, configured to superpose the right image shot by the right camera with the right view of the virtual scene to synthesize the right image of the fused scene;
a fusion unit, configured to generate the fused scene according to the left image and the right image of the fused scene.
Specifically, the virtual scene information is superposed with the real scene information; for example, when the virtual model information is superposed onto the real scene, the two left and right cameras are required to provide real-time image sequences of the real scene. At a certain moment t, one image can be obtained from the image sequence provided by the left camera as the left image, and one image can be obtained from the image sequence provided by the right camera as the right image. The left image simulates the content seen by the left eye, and the right image simulates the content seen by the right eye. The left and right cameras provide real-time image sequences, and these image sequences can be obtained by a variety of methods: one method is to acquire images using the SDK (Software Development Kit) provided by the camera manufacturer; another common method is to read images from the camera using an open-source tool such as OpenCV. In order to obtain the hierarchical relationship of the real scene, the parallax can be calculated, and the hierarchical relationship of the scene is represented by the hierarchical relationship of the parallax. The parallax between the left and right images can be calculated with any parallax calculation method, such as BM, graph cut, or ADCensus. Once the parallax is available, the hierarchical information of the scene is known; the hierarchical information of the scene is also referred to as the depth-of-field information of the scene, and the depth-of-field information can be used to guide the merging of the virtual model with the real scene, allowing the virtual model to be placed into the real scene more reasonably. The specific method is that the minimum parallax of the virtual model in the left and right images must be greater than the maximum parallax of the region covered by the virtual model in the left and right images, and the parallax information needs to be median-smoothed before use. The virtual model is separately added into the left image and the right image: if the minimum parallax of the virtual model in the left and right images is d, d needs to be greater than the maximum parallax of the region covered by the virtual model. The left view corresponding to the virtual model is added into the left image, and the right view corresponding to the virtual model is added into the right image, whereby the fused scene can be generated.
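The parallax-based placement rule described above (median-smooth the disparity map, then require the virtual model's minimum disparity d to exceed the maximum disparity of the covered region) can be sketched as follows. The synthetic disparity map, region format, and function names are illustrative assumptions; the rule itself follows the description.

```python
import numpy as np

def median_smooth(disparity, k=3):
    # Simple k x k median filter applied to the disparity map before use,
    # as the description requires, to suppress outlier disparities.
    h, w = disparity.shape
    p = np.pad(disparity, k // 2, mode="edge")
    out = np.empty_like(disparity)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(p[y:y + k, x:x + k])
    return out

def can_place(disparity, region, d_model):
    # Occlusion test: the virtual model (minimum disparity d_model) may be
    # placed only if d_model exceeds the maximum disparity of the real-scene
    # region it covers (nearer objects have larger disparity).
    y0, x0, y1, x1 = region
    covered = disparity[y0:y1, x0:x1]
    return d_model > covered.max()

# Synthetic disparity map: background disparity 5, plus one outlier pixel
# that the median filter should remove before the placement test.
disp = np.full((20, 20), 5.0)
disp[4, 4] = 20.0
smooth = median_smooth(disp)
ok = can_place(smooth, (0, 0, 10, 10), d_model=8.0)
```

Without the smoothing step, the single spurious disparity of 20 would wrongly veto the placement; after smoothing, a model with minimum disparity 8 sits in front of the background at disparity 5 and can be composited into both the left and the right image.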
In one of the embodiments of the present invention, a presentation module synthesizes the left image superposed with the left view of the virtual model and the right image superposed with the right view of the virtual model, and sends them together into a display, where they are displayed in the left half and the right half of the display respectively, so that the fused scene can be presented. In this way, the user watches with the left and right eyes respectively and can experience the good fusion of the real scene with the virtual model.
In the embodiments of the present invention, in addition to fusing the real scene information with the virtual scene information to generate the fused scene, an augmented reality scene can also be generated according to the real scene information collected by the two cameras of the 3D camera, or a virtual reality scene can be generated according to the virtual reality scene information. Generating the augmented reality scene or the virtual reality scene, that is, the AR function or the VR function, can be achieved by those skilled in the art in combination with the embodiments of the present invention, and is not repeated here.
The above embodiments provide a device for fusing virtual reality with a real scene, including: an obtaining module, configured to obtain image information inside the virtual reality device and generate a virtual reality scene; an acquisition module, configured to obtain real target scene information acquired by a 3D camera; and a fusion module, configured to generate a fused scene inside the virtual reality device according to the real target scene information and the virtual reality scene. In this way, the real scene can be incorporated during the virtual reality process, achieving an effect of fusing the virtual with the real, promoting human-computer interaction, and improving the user experience.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the claims of the present invention. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present invention, and these all belong to the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention shall be determined by the appended claims.