Technical Field
The present invention relates to the technical field of virtual reality, and in particular to a virtual reality-based street view implementation method and device.
Background Art
As a brand-new way of delivering map services, street view has attracted high expectations and widespread attention since its launch. Street view is produced by capturing 360-degree photographs of both sides of a street with street view vehicles, processing the photographs, and uploading them to a website for visitors to browse. This contrasts sharply with 2D flat maps and makes an otherwise plain map more vivid, readable, and entertaining. Users feel as if they were on the scene and can explore all kinds of outdoor scenery without leaving home. Street view has created a new way of reading maps and opened up a mode of real-scene map experience. In particular, on arriving in an unfamiliar city, a user can quickly get a sense of the local streets by looking up the street view map. However, existing street view services also have drawbacks: for example, interaction between the user and the street view is weak, and the user's sense of immersion is far from sufficient. How to enhance the interactivity and immersion between users and street view is therefore a problem to be solved urgently.
Summary of the Invention
The main purpose of the present invention is to provide a virtual reality-based street view implementation method and device, aiming to enhance the interactivity and immersion between users and the street view.
To achieve the above object, the present invention provides a virtual reality-based street view implementation method, which includes the following steps:
acquiring the current location of a head-mounted display device;
acquiring, according to the current location of the head-mounted display device, a three-dimensional surrounding street view of the current location of the head-mounted display device; and
constructing a three-dimensional virtual human corresponding to the head-mounted display device, and integrating the three-dimensional virtual human into the three-dimensional surrounding street view.
Preferably, the step of acquiring, according to the current location of the head-mounted display device, the three-dimensional surrounding street view of the current location of the head-mounted display device includes:
pre-collecting stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras;
saving the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in a configured street view database; and
retrieving the corresponding stereoscopic panoramic image pair from the street view database if it is determined that the current location of the head-mounted display device is within a set range.
Preferably, the step of acquiring, according to the current location of the head-mounted display device, the three-dimensional surrounding street view of the current location of the head-mounted display device includes:
enabling a building-group display mode and displaying the building groups of the three-dimensional surrounding street view if it is recognized that the distance from the current location of the head-mounted display device exceeds a set distance threshold.
Preferably, the step of acquiring, according to the current location of the head-mounted display device, the three-dimensional surrounding street view of the current location of the head-mounted display device includes:
enabling a shop display mode and displaying the shops of the three-dimensional surrounding street view if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold.
Preferably, the step of integrating the three-dimensional virtual human into the three-dimensional surrounding street view includes:
capturing the motion trajectory of the head-mounted display device using motion capture technology; and
establishing, for the three-dimensional virtual human, a virtual movement direction mapped to the motion trajectory.
In addition, to achieve the above object, the present invention further provides a virtual reality-based street view implementation device applied to a head-mounted display device, the device including:
a current location acquisition module, configured to acquire the current location of the head-mounted display device;
a street view acquisition module, configured to acquire, according to the current location of the head-mounted display device, a three-dimensional surrounding street view of the current location of the head-mounted display device; and
an integration module, configured to construct a three-dimensional virtual human corresponding to the head-mounted display device and integrate the three-dimensional virtual human into the three-dimensional surrounding street view.
Preferably, the street view acquisition module includes:
a collection unit, configured to pre-collect stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras;
a storage unit, configured to save the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in a configured street view database; and
a retrieval unit, configured to retrieve the corresponding stereoscopic panoramic image pair from the street view database if the current location of the head-mounted display device is determined to be within a set range.
Preferably, the street view acquisition module is further configured to enable a building-group display mode and display the building groups of the three-dimensional surrounding street view if it is recognized that the distance from the current location of the head-mounted display device exceeds a set distance threshold.
Preferably, the street view acquisition module is further configured to enable a shop display mode and display the shops of the three-dimensional surrounding street view if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold.
Preferably, the integration module includes:
a capture unit, configured to capture the motion trajectory of the head-mounted display device using motion capture technology; and
an establishing unit, configured to establish, for the three-dimensional virtual human, a virtual movement direction mapped to the motion trajectory.
The virtual reality-based street view implementation method and device proposed by the present invention acquire the current location of a head-mounted display device; acquire, according to the current location of the head-mounted display device, a three-dimensional surrounding street view of the current location; and construct a three-dimensional virtual human corresponding to the head-mounted display device and integrate the three-dimensional virtual human into the three-dimensional surrounding street view. The present invention automatically identifies the three-dimensional surrounding street view and integrates the constructed virtual human into it, providing users with a more immersive and interactive experience.
Description of the Drawings
FIG. 1 is a first schematic structural diagram of the virtual reality-based street view implementation method of the present invention;
FIG. 2 is a second schematic structural diagram of the virtual reality-based street view implementation method of the present invention;
FIG. 3 is a schematic flowchart of a first embodiment of the virtual reality-based street view implementation method of the present invention;
FIG. 4 is a detailed schematic flowchart of the step, in FIG. 3, of acquiring the three-dimensional surrounding street view of the current location of the head-mounted display device;
FIG. 5 is a schematic flowchart of a second embodiment of the virtual reality-based street view implementation method of the present invention;
FIG. 6 is a schematic flowchart of a third embodiment of the virtual reality-based street view implementation method of the present invention;
FIG. 7 is a detailed schematic flowchart of the step, in FIG. 3, of integrating the three-dimensional virtual human into the three-dimensional surrounding street view;
FIG. 8 is a schematic diagram of the functional modules of a first embodiment of the virtual reality-based street view implementation device of the present invention;
FIG. 9 is a schematic diagram of the functional modules of the street view acquisition module in FIG. 8;
FIG. 10 is a schematic diagram of the functional modules of the integration module in FIG. 8.
The realization of the object, the functional features, and the advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
Referring to FIG. 1 and FIG. 2, the present invention includes a virtual reality helmet 11, a helmet positioning device 21, a handle positioning device 22, a virtual reality handle 41, and a processor 39, where the helmet positioning device 21 can observe the position of the virtual reality helmet 11 and locate it, and the handle positioning device 22 can observe the position of the virtual reality handle 41 and locate it.
As shown in FIG. 3, a first embodiment of the present invention proposes a virtual reality-based street view implementation method, which includes the following steps.
Step S100: acquire the current location of the head-mounted display device.
The head-mounted display device performs the positioning operation according to a preset time threshold. The time threshold may be set to 1 second or to another value, and may be adjusted or changed at any time according to the user's settings. The positioning operation is completed through the cooperation of GPS (Global Positioning System) and RFID (Radio Frequency Identification): when outdoors, the latitude, longitude, and coordinates of the current location are obtained through GPS; when indoors, RFID is used instead to obtain the latitude, longitude, and coordinates of the current location.
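By way of illustration only (not part of the disclosed embodiment), the periodic GPS/RFID selection described above could look like the following Python sketch; the is_outdoors, gps_fix, and rfid_fix callables are hypothetical stand-ins for the device's actual positioning interfaces.

```python
import time

POLL_INTERVAL_S = 1.0  # the 1-second time threshold of this embodiment; user-adjustable

def get_current_position(is_outdoors, gps_fix, rfid_fix):
    """Return (latitude, longitude) for the HMD: GPS outdoors, RFID indoors."""
    return gps_fix() if is_outdoors() else rfid_fix()

def positioning_loop(is_outdoors, gps_fix, rfid_fix, on_update):
    """Re-run the positioning operation once per time threshold."""
    while True:
        on_update(get_current_position(is_outdoors, gps_fix, rfid_fix))
        time.sleep(POLL_INTERVAL_S)
```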
Step S200: acquire, according to the current location of the head-mounted display device, the three-dimensional surrounding street view of the current location of the head-mounted display device.
The head-mounted display device uses dual panoramic cameras to pre-collect stereoscopic panoramic image pairs of the surrounding street view at two different viewpoints and under different viewing angles, and saves the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in a configured street view database. When the current location of the head-mounted display device is acquired, the corresponding stereoscopic panoramic image pair is retrieved from the street view database.
Step S300: construct a three-dimensional virtual human corresponding to the head-mounted display device, and integrate the three-dimensional virtual human into the three-dimensional surrounding street view.
The head-mounted display device constructs, in the virtual reality scene, a three-dimensional virtual human corresponding to the head-mounted display device, and integrates the three-dimensional virtual human into the three-dimensional surrounding street view within the stereoscopic panoramic image pair. The correspondence between the head-mounted display device and the three-dimensional virtual human includes a correspondence of actions, that is, the head-mounted display device and the three-dimensional virtual human form an action mapping relationship: when the head-mounted display device moves to the left, the three-dimensional virtual human moves to the left accordingly.
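A minimal sketch, under the assumption that HMD and avatar positions are plain (x, y, z) tuples, of how such an action mapping could propagate the HMD's displacement to the virtual human; the function name and scale parameter are illustrative, not part of the disclosure.

```python
def map_hmd_motion_to_avatar(avatar_pos, hmd_prev, hmd_now, scale=1.0):
    """Apply the HMD's displacement to the 3D virtual human, so that a leftward
    move of the HMD produces a corresponding leftward move of the avatar."""
    return tuple(a + scale * (n - p)
                 for a, n, p in zip(avatar_pos, hmd_now, hmd_prev))
```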
The virtual reality-based street view implementation method proposed in this embodiment acquires the current location of the head-mounted display device; acquires, according to that location, the three-dimensional surrounding street view of the current location; and constructs a three-dimensional virtual human corresponding to the head-mounted display device and integrates it into the three-dimensional surrounding street view. This embodiment automatically identifies the three-dimensional surrounding street view and integrates the constructed virtual human into it, providing users with a more immersive and interactive experience.
As shown in FIG. 4, FIG. 4 is a detailed schematic flowchart of step S200 in FIG. 3. On the basis of the first embodiment, step S200 includes:
Step S210: pre-collect stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras.
The head-mounted display device uses dual panoramic cameras to pre-collect stereoscopic panoramic image pairs of the surrounding street view at two different viewpoints and under different viewing angles. In this embodiment, the panoramic cameras are preferably fisheye cameras, and the viewing angle includes height, heading, pitch angle, and the like.
Step S220: save the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in the configured street view database.
The head-mounted display device forms a one-to-one mapping between the collected stereoscopic panoramic image pairs and the viewing angles, compiles them into a viewing-angle mapping table, and saves the stereoscopic panoramic image pairs and the viewing-angle mapping table in advance in the configured street view database.
Step S230: if it is determined that the current location of the head-mounted display device is within the set range, retrieve the corresponding stereoscopic panoramic image pair from the street view database.
If the head-mounted display device determines that its current location is within the set range, it detects its current viewing angle and, according to the viewing-angle mapping table in the street view database, retrieves the stereoscopic panoramic image pair corresponding to the current viewing angle of the head-mounted display device.
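As an illustration of the viewing-angle mapping table and the range-based retrieval, the toy in-memory store below sketches one possible layout; the record fields, the 50 m default range, and the heading-matching rule are assumptions made for the sketch rather than details of the disclosed database.

```python
import math

class StreetViewDatabase:
    """Toy in-memory stand-in for the street view database: each record holds a
    stereo panoramic image pair plus its capture location and viewing angle."""

    def __init__(self):
        self.records = []

    def add(self, lat, lon, heading, pitch, left_image, right_image):
        self.records.append({"lat": lat, "lon": lon, "heading": heading,
                             "pitch": pitch, "pair": (left_image, right_image)})

    def lookup(self, lat, lon, heading, radius_m=50.0):
        """Return the image pair captured within the set range of (lat, lon),
        preferring the record whose heading best matches the HMD's heading."""
        def ground_distance(r):
            # rough small-area approximation: about 111 km per degree
            return math.hypot(r["lat"] - lat, r["lon"] - lon) * 111_000
        nearby = [r for r in self.records if ground_distance(r) <= radius_m]
        if not nearby:
            return None
        best = min(nearby,
                   key=lambda r: abs((r["heading"] - heading + 180) % 360 - 180))
        return best["pair"]
```

In use, the device would call lookup with the position from step S100 and the currently detected viewing angle, and feed the returned pair to the stereo renderer.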
The virtual reality-based street view implementation method proposed in this embodiment pre-collects stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras; saves the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in the configured street view database; and, if the current location of the head-mounted display device is within the set range, retrieves the corresponding stereoscopic panoramic image pair from the street view database. This embodiment automatically brings up a three-dimensional panoramic view of the surrounding street view, providing users with a more immersive and interactive experience.
As shown in FIG. 5, FIG. 5 is a schematic flowchart of a second embodiment of the virtual reality-based street view implementation method of the present invention. On the basis of the first embodiment, step S200 includes:
Step S240: if it is recognized that the distance from the current location of the head-mounted display device exceeds the set distance threshold, enable the building-group display mode to display the building groups of the three-dimensional surrounding street view.
If the head-mounted display device recognizes that the distance from its current location exceeds the set distance threshold, that is, the head-mounted display device is relatively far from the surrounding street view, it enables the building-group display mode and displays the building groups of the three-dimensional surrounding street view, so that the user can take in the panorama of the three-dimensional surrounding street view by viewing a bird's-eye view of the building groups.
In the virtual reality-based street view implementation method proposed in this embodiment, if it is recognized that the distance from the current location of the head-mounted display device exceeds the set distance threshold, the building-group display mode is enabled and the building groups of the three-dimensional surrounding street view are displayed, so that the user quickly grasps the panorama of the surrounding street view and enjoys a more immersive and interactive experience.
As shown in FIG. 6, FIG. 6 is a schematic flowchart of a third embodiment of the virtual reality-based street view implementation method of the present invention. On the basis of the first embodiment, step S200 includes:
Step S250: if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold, enable the shop display mode to display the shops of the three-dimensional surrounding street view.
If the head-mounted display device recognizes that the distance from its current location is less than or equal to the set distance threshold, that is, the head-mounted display device is relatively close to the surrounding street view, it enables the shop display mode and displays the shops of the three-dimensional surrounding street view, so that the user quickly obtains detailed information about the surrounding physical shops.
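The second and third embodiments amount to a single threshold test on the HMD's distance to the surrounding street view; a minimal sketch follows, with a 200 m threshold assumed purely for illustration.

```python
BUILDING_MODE = "building_group"
SHOP_MODE = "shop"

def select_display_mode(distance_to_street_m, threshold_m=200.0):
    """Far from the street view: bird's-eye building-group mode.
    At or within the threshold: shop display mode."""
    return BUILDING_MODE if distance_to_street_m > threshold_m else SHOP_MODE
```

The renderer would then switch the displayed content whenever the returned mode changes between consecutive positioning updates.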
In the virtual reality-based street view implementation method proposed in this embodiment, if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold, the shop display mode is enabled and the shops of the three-dimensional surrounding street view are displayed, so that the user quickly obtains detailed information about the surrounding physical shops and enjoys a more immersive and interactive experience.
As shown in FIG. 7, FIG. 7 is a detailed schematic flowchart of step S300 in FIG. 3. On the basis of the first embodiment, step S300 includes:
Step S310: capture the motion trajectory of the head-mounted display device using motion capture technology.
The head-mounted display device uses motion capture technology to capture the motion trajectory. First, the moving head-mounted display device is extracted by combining the background subtraction method with the inter-frame difference method. The inter-frame difference method extracts the moving region in the stereoscopic panoramic image pair from the difference between adjacent frames of the image sequence: several frames are first registered in the same coordinate system, and two images of the same background taken at different moments are then differenced, so that the background portion whose gray level does not change is subtracted away. Because the head-mounted display device occupies different positions in two adjacent frames and differs from the background in gray level, subtracting the two frames makes the head-mounted display device stand out, which roughly determines its position in the stereoscopic panoramic image pair. The background subtraction method detects the target object by subtracting a reference background model from the image sequence. It can provide relatively complete feature data for extracting the head-mounted display device, but it is overly sensitive to dynamic scene changes caused by illumination and external conditions, requires a background image update mechanism under uncontrolled conditions, and is not suitable when the dual panoramic cameras are moving or the background gray level changes greatly. The head-mounted display device then tracks the target object with the SIFT (Scale-Invariant Feature Transform) algorithm. The main idea is to build a target library: the head-mounted display device is extracted from the first frame of the stereoscopic panoramic image pair, SIFT features are computed, and the feature data are stored in the target database, where each entry includes the target label, the centroid coordinates, the target coordinate block, and the SIFT information. The feature information of each target further includes the feature point coordinates, the feature vectors, and the retention priority corresponding to each feature vector. The target library then serves as an intermediary for matching against the SIFT feature information of the target in the second frame, so that the correlation between the two frames is found and the position and trajectory of the target object are determined; the matching relationship between the targets in the library and the targets in the second frame is then used to update and prune the target library according to a specific strategy. Subsequent frames continue to be processed with the target library as the intermediary. The SIFT algorithm consists of a matching process and an updating process: the matching process uses the matching probability of two targets' features to find the same target in two consecutive frames and associate them, and the updating process supplements and updates the target library on the basis of the matching so that the target library information stays similar to the targets of the most recent frames, which guarantees the correctness of the recognition.
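A rough OpenCV-based sketch of the two stages described above: frame and background differencing to roughly localize the moving target, followed by SIFT feature matching to track it. The thresholds, the Lowe ratio test, and the bounding-box step are illustrative choices, and the target-library update strategy of the embodiment is not reproduced here.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def moving_region(prev_gray, curr_gray, background_gray, diff_thresh=25):
    """Roughly localize the moving target by OR-ing the binarized frame
    difference (curr vs. prev) with the background difference (curr vs. ref)."""
    frame_diff = cv2.absdiff(curr_gray, prev_gray)
    bg_diff = cv2.absdiff(curr_gray, background_gray)
    mask = cv2.bitwise_or(
        cv2.threshold(frame_diff, diff_thresh, 255, cv2.THRESH_BINARY)[1],
        cv2.threshold(bg_diff, diff_thresh, 255, cv2.THRESH_BINARY)[1])
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)

def track_with_sift(template_gray, frame_gray, ratio=0.75):
    """Match SIFT features of the stored target template against a new frame
    and return the matched keypoint locations in that frame."""
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return []
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return [kp2[m.trainIdx].pt for m in good]
```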
Step S320: establish, for the three-dimensional virtual human, a virtual movement direction mapped to the motion trajectory.
According to the captured motion trajectory, the head-mounted display device establishes, for the three-dimensional virtual human, a virtual movement direction mapped to that trajectory. When the head-mounted display device moves, the three-dimensional virtual human correspondingly roams through the three-dimensional surrounding street view, and the user can browse and try out various goods in the shops of the three-dimensional surrounding street view; after selecting a desired item, the user can scan the QR code on the item to add it to a virtual shopping cart and further complete online payment.
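A minimal sketch of deriving the avatar's virtual movement direction from the captured trajectory, assuming the trajectory is a list of (x, y, z) HMD positions; the heading convention (degrees about the vertical axis, measured from the z direction) is an assumption of the sketch.

```python
import math

def heading_from_trajectory(trajectory):
    """Return the avatar's virtual movement direction, in degrees, derived
    from the last two captured HMD positions; None if not yet determinable."""
    if len(trajectory) < 2:
        return None
    (x0, _, z0), (x1, _, z1) = trajectory[-2], trajectory[-1]
    return math.degrees(math.atan2(x1 - x0, z1 - z0)) % 360.0
```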
The virtual reality-based street view implementation method proposed in this embodiment captures the motion trajectory of the head-mounted display device using motion capture technology and establishes, for the three-dimensional virtual human, a virtual movement direction mapped to that trajectory, so that the user can quickly purchase desired goods and enjoys a more immersive and interactive experience.
As shown in FIG. 8, FIG. 8 is a schematic diagram of the functional modules of a first embodiment of the virtual reality-based street view implementation device of the present invention. In the first embodiment, the virtual reality-based street view implementation device includes:
a current location acquisition module 10, configured to acquire the current location of the head-mounted display device, the current location acquisition module 10 being integrated in the virtual reality helmet 11;
a street view acquisition module 20, configured to acquire, according to the current location of the head-mounted display device, a three-dimensional surrounding street view of the current location of the head-mounted display device; and
an integration module 30, configured to construct a three-dimensional virtual human corresponding to the head-mounted display device and integrate the three-dimensional virtual human into the three-dimensional surrounding street view, the integration module 30 including the virtual reality helmet 11.
The current location acquisition module 10 of the head-mounted display device performs the positioning operation according to a preset time threshold. The time threshold may be set to 1 second or to another value, and may be adjusted or changed at any time according to the user's settings. The positioning operation is completed through the cooperation of GPS and RFID: when outdoors, the latitude, longitude, and coordinates of the current location are obtained through GPS; when indoors, RFID is used instead to obtain the latitude, longitude, and coordinates of the current location.
The street view acquisition module 20 of the head-mounted display device uses dual panoramic cameras to pre-collect stereoscopic panoramic image pairs of the surrounding street view at two different viewpoints and under different viewing angles, and saves the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in the configured street view database. When the current location of the head-mounted display device is acquired, the corresponding stereoscopic panoramic image pair is retrieved from the street view database.
The integration module 30 of the head-mounted display device constructs, in the virtual reality scene, a three-dimensional virtual human corresponding to the head-mounted display device, and integrates the three-dimensional virtual human into the three-dimensional surrounding street view within the stereoscopic panoramic image pair. The correspondence between the head-mounted display device and the three-dimensional virtual human includes a correspondence of actions, that is, they form an action mapping relationship: when the head-mounted display device moves to the left, the three-dimensional virtual human moves to the left accordingly.
The head-mounted display device proposed in this embodiment acquires the current location of the head-mounted display device; acquires, according to that location, the three-dimensional surrounding street view of the current location; and constructs a three-dimensional virtual human corresponding to the head-mounted display device and integrates it into the three-dimensional surrounding street view. This embodiment automatically identifies the three-dimensional surrounding street view and integrates the constructed virtual human into it, providing users with a more immersive and interactive experience.
As shown in FIG. 9, FIG. 9 is a schematic diagram of the functional modules of the street view acquisition module in FIG. 8. On the basis of the first embodiment, the street view acquisition module 20 includes:
a collection unit 21, configured to pre-collect stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras;
a storage unit 22, configured to save the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in the configured street view database; and
a retrieval unit 23, configured to retrieve the corresponding stereoscopic panoramic image pair from the street view database if the current location of the head-mounted display device is determined to be within the set range.
The collection unit 21 of the head-mounted display device uses dual panoramic cameras to pre-collect stereoscopic panoramic image pairs of the surrounding street view at two different viewpoints and under different viewing angles. In this embodiment, the panoramic cameras are preferably fisheye cameras, and the viewing angle includes height, heading, pitch angle, and the like.
The storage unit 22 of the head-mounted display device forms a one-to-one mapping between the collected stereoscopic panoramic image pairs and the viewing angles, compiles them into a viewing-angle mapping table, and saves the stereoscopic panoramic image pairs and the viewing-angle mapping table in advance in the configured street view database.
If the retrieval unit 23 of the head-mounted display device determines that the current location of the head-mounted display device is within the set range, it detects the current viewing angle of the head-mounted display device and, according to the viewing-angle mapping table in the street view database, retrieves the stereoscopic panoramic image pair corresponding to the current viewing angle.
The head-mounted display device proposed in this embodiment pre-collects stereoscopic panoramic image pairs of the surrounding street view from different viewing angles with dual panoramic cameras; saves the stereoscopic panoramic image pairs and the corresponding viewing angles in advance in the configured street view database; and, if the current location of the head-mounted display device is within the set range, retrieves the corresponding stereoscopic panoramic image pair from the street view database. This embodiment automatically brings up a three-dimensional panoramic view of the surrounding street view, providing users with a more immersive and interactive experience.
Referring further to FIG. 8, the street view acquisition module 20 is further configured to enable the building-group display mode and display the building groups of the three-dimensional surrounding street view if it is recognized that the distance from the current location of the head-mounted display device exceeds the set distance threshold.
If the street view acquisition module 20 of the head-mounted display device recognizes that the distance from the current location exceeds the set distance threshold, that is, the head-mounted display device is relatively far from the surrounding street view, it enables the building-group display mode and displays the building groups of the three-dimensional surrounding street view, so that the user can take in the panorama of the three-dimensional surrounding street view by viewing a bird's-eye view of the building groups.
In the head-mounted display device proposed in this embodiment, if it is recognized that the distance from the current location of the head-mounted display device exceeds the set distance threshold, the building-group display mode is enabled and the building groups of the three-dimensional surrounding street view are displayed, so that the user quickly grasps the panorama of the surrounding street view and enjoys a more immersive and interactive experience.
Referring further to FIG. 8, the street view acquisition module 20 is further configured to enable the shop display mode and display the shops of the three-dimensional surrounding street view if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold.
If the street view acquisition module 20 of the head-mounted display device recognizes that the distance from the current location is less than or equal to the set distance threshold, that is, the head-mounted display device is relatively close to the surrounding street view, it enables the shop display mode and displays the shops of the three-dimensional surrounding street view, so that the user quickly obtains detailed information about the surrounding physical shops.
In the head-mounted display device proposed in this embodiment, if it is recognized that the distance from the head-mounted display device is less than or equal to the set distance threshold, the shop display mode is enabled and the shops of the three-dimensional surrounding street view are displayed, so that the user quickly obtains detailed information about the surrounding physical shops and enjoys a more immersive and interactive experience.
As shown in FIG. 10, FIG. 10 is a schematic diagram of the functional modules of the integration module in FIG. 8. On the basis of the first embodiment, the integration module includes:
a capture unit 31, configured to capture the motion trajectory of the head-mounted display device using motion capture technology, the capture unit 31 including the helmet positioning device 21 and the handle positioning device 22; and
an establishing unit 32, configured to establish, for the three-dimensional virtual human, a virtual movement direction mapped to the motion trajectory.
The capture unit 31 of the head-mounted display device uses motion capture technology to capture the motion trajectory. First, the moving head-mounted display device is extracted by combining the background subtraction method with the inter-frame difference method. The inter-frame difference method extracts the moving region in the stereoscopic panoramic image pair from the difference between adjacent frames of the image sequence: several frames are first registered in the same coordinate system, and two images of the same background taken at different moments are then differenced, so that the background portion whose gray level does not change is subtracted away. Because the head-mounted display device occupies different positions in two adjacent frames and differs from the background in gray level, subtracting the two frames makes the head-mounted display device stand out, which roughly determines its position in the stereoscopic panoramic image pair. The background subtraction method detects the target object by subtracting a reference background model from the image sequence. It can provide relatively complete feature data for extracting the head-mounted display device, but it is overly sensitive to dynamic scene changes caused by illumination and external conditions, requires a background image update mechanism under uncontrolled conditions, and is not suitable when the dual panoramic cameras are moving or the background gray level changes greatly. The head-mounted display device then tracks the target object with the SIFT algorithm. The main idea is to build a target library: the head-mounted display device is extracted from the first frame of the stereoscopic panoramic image pair, SIFT features are computed, and the feature data are stored in the target database, where each entry includes the target label, the centroid coordinates, the target coordinate block, and the SIFT information. The feature information of each target further includes the feature point coordinates, the feature vectors, and the retention priority corresponding to each feature vector. The target library then serves as an intermediary for matching against the SIFT feature information of the target in the second frame, so that the correlation between the two frames is found and the position and trajectory of the target object are determined; the matching relationship between the targets in the library and the targets in the second frame is then used to update and prune the target library according to a specific strategy. Subsequent frames continue to be processed with the target library as the intermediary. The SIFT algorithm consists of a matching process and an updating process: the matching process uses the matching probability of two targets' features to find the same target in two consecutive frames and associate them, and the updating process supplements and updates the target library on the basis of the matching so that the target library information stays similar to the targets of the most recent frames, which guarantees the correctness of the recognition.
According to the captured motion trajectory, the establishing unit 32 of the head-mounted display device establishes, for the three-dimensional virtual human, a virtual movement direction mapped to that trajectory. When the head-mounted display device moves, the three-dimensional virtual human correspondingly roams through the three-dimensional surrounding street view, and the user can browse and try out various goods in the shops of the three-dimensional surrounding street view; after selecting a desired item, the user can scan the QR code on the item to add it to a virtual shopping cart and further complete online payment.
The head-mounted display device proposed in this embodiment captures the motion trajectory of the head-mounted display device using motion capture technology and establishes, for the three-dimensional virtual human, a virtual movement direction mapped to that trajectory, so that the user can quickly purchase desired goods and enjoys a more immersive and interactive experience.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.